ChatGPT maker Sam Altman says superintelligent AI should be regulated like nuclear energy

Altman compares the governance of superintelligence to historical examples like nuclear energy and synthetic biology, which required special treatment and coordination due to their potential risks

Sam Altman says AI should be governed like nuclear energy

OpenAI CEO Sam Altman co-authored an article with two other experts offering a detailed plan for 'the governance of superintelligence' — future AI systems far more capable than today's tools such as ChatGPT and Google Bard. Altman writes that within the next ten years, AI systems may exceed expert-level skill in most domains and carry out as much productive activity as today's largest corporations. Superintelligence carries the potential for both great benefits and serious risks: it could create a dramatically more prosperous future, but the dangers that come with it need to be managed.

Altman compares the governance of superintelligence to historical examples like nuclear energy and synthetic biology, which required special treatment and coordination due to their potential risks. 

He suggests three important ideas for successfully navigating the development of superintelligence:

  • Coordination: The leading AI development efforts need to coordinate to ensure safety and smooth integration with society. This could involve governments setting up a joint project, or the major players collectively agreeing to limit the rate of growth in AI capability.
  • International authority: Above a certain capability threshold, AI projects should be subject to an international authority, similar to the International Atomic Energy Agency (IAEA) for nuclear energy. This authority would inspect systems, require audits, enforce safety standards, and place restrictions on deployment and levels of security.
  • Safety research: Technical research is needed to make superintelligence safe. This is an ongoing area of study for OpenAI and others.

Altman clarifies that regulation should not stifle the development of AI models below a certain capability threshold. Companies and open-source projects should have the freedom to develop such models without burdensome regulation.

However, governance of the most powerful AI systems should involve strong public oversight. Decisions about their deployment and limitations should be democratically made by people worldwide. The exact mechanism for public input is yet to be designed, but OpenAI plans to experiment with its development.

Also read: 'I'm nervous about it': OpenAI chief Sam Altman concerned about AI being used to compromise elections

Altman and his co-authors conclude the OpenAI blog by explaining why OpenAI is building this technology despite the risks involved. They believe it will lead to a much better world, solving problems and improving societies. Moreover, stopping the development of superintelligence would be extremely difficult, and the potential benefits are too significant to forgo. It is therefore crucial, they argue, to approach its development with great care.

The OpenAI blog was co-authored by Sam Altman, Greg Brockman and Ilya Sutskever.

Also read: 'How many hours does Sam Altman sit in traffic?' ChatGPT-maker responds to criticism for his push to end remote work

For unparalleled coverage of India's businesses and economy, subscribe to Business Today Magazine

Published on: May 23, 2023, 8:03 AM IST