What OpenAI Thinks About AI Governance and Regulation [Short Post]
Highlights from "Governance of superintelligence"
On 22nd May, OpenAI published a blog post titled “Governance of superintelligence”. I feel this post is important for the public, as it comes from the same organisation that built GPT-4. It is safe to assume OpenAI has a key role in driving AI development.
In this post, I quickly highlight the main points from the OpenAI blog that you should know.
“Now is a good time to start thinking about the governance of superintelligence—future AI systems dramatically more capable than even AGI.”
— OpenAI, in the blog
The blog, written by CEO Sam Altman, President Greg Brockman and Chief Scientist Ilya Sutskever, discusses how to regulate and govern Artificial Intelligence (AI).
As we all know, AI has been getting more powerful over the past few months. Many AI-powered apps (ChatGPT, Bard, DALL·E, etc.) have been built that push the frontiers of AI development.
The progress in AI is fast-paced. To ensure safety and avoid any existential crisis, proper regulation and governance of AI systems are necessary.
This is true, and essential, in both the short and long term.
Here are some points covered in OpenAI's blog that you should know -
How to Go About Governance
OpenAI proposes two ways to ensure the safe development of AI:
The first way is to coordinate the development of superintelligent AI. This coordination can be achieved through safeguards, restrictions or policies.
Appropriate coordination among the various players driving AI development must exist. Ensuring the safety and smooth integration of AI systems with society is crucial.
Governments, for example, can help by setting up projects that enforce this coordination.
One way, as highlighted in the blog, could be to set policies that limit the growth in AI capability to a certain rate per year.
Individual companies, meanwhile, should follow ethical standards and guidelines while developing AI. It must be ensured that they act responsibly while using AI systems.
The second way is to create an institution with international authority to inspect AI systems under development. Like any international regulatory body, it must have the authority to:
Audit the AI systems being developed,
Ensure that leading corporations follow safety standards,
Place restrictions on the rate of advancement, and
(importantly) Enforce that such development happens under high levels of security.
The Role of the Public
OpenAI further mentions in its blog that the public has an important role in regulating and governing AI. AI development should be done in a “democratic” manner - where people have the power to decide what AI capabilities should be acceptable.
A mechanism is thus needed that allows people to decide and set rules for AI and its development. This ensures that while AI gets better day by day, it does not compromise safety.
OpenAI says it is planning to build such a mechanism in the future. “We don't yet know how to design such a mechanism, but we plan to experiment with its development,” OpenAI mentions in the post.
💭 What are your views on this topic? Let me know in the comments