GPT-3 is not "High Risk", according to the EU AI Act
OpenAI reportedly lobbied the EU to act in its favour.
OpenAI CEO Sam Altman has been on a worldwide tour, meeting developers and world leaders to convince them of the need for AI regulation and safety.
On the flip side, an exclusive article from TIME revealed that OpenAI allegedly lobbied on the EU’s AI Act to shape the rules in its favour.
“A year spent in artificial intelligence is enough to make one believe in God.”
These are the words of Alan Perlis, the renowned computer scientist who received the first ACM Turing Award for his pioneering work on programming languages and compiler construction.
It's been seven months since ChatGPT was released. Five years since the first GPT model, GPT-1, was released. And seventy-four years since neural networks were first proposed, laying the groundwork for today's advanced artificial intelligence.
But it was the release of ChatGPT, back in 2022, that led to a huge surge in generative AI. A surge that was visible to the public, AI experts or not. The progress was rapid and fast-paced, and there was no looking back. I have been watching the AI trend since November last year, and the progress in AI, even in that short span of six months, has been massive.
However, the other crucial element of the AI ecosystem - AI ethics - has not been able to keep up with this fast-paced development.
OpenAI is one organisation that has acted as a compass steering AI development. It has had the most impact on the AI front, both in doing innovative (and disruptive) research and in releasing AI-powered products such as ChatGPT and DALL-E to the vast public.
People are now more aware of the power of AI - thanks to products like ChatGPT - because they can now interact with these systems and see for themselves.
But OpenAI has long struggled with building regulated, safe AI.
It has tried different ways to prevent hallucinations, misinformation and bias. But those problems still creep into its latest GPT model, GPT-4, as well.
While it is hard to fine-tune a model to give accurate results every time, it is still relatively easy to put restrictions and guardrails on AI, as the sketch below illustrates.
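As a minimal illustration (not OpenAI's actual safety stack - the patterns, the `generate` callback and the refusal messages are all hypothetical stand-ins), a restriction layer can be as simple as a policy check wrapped around whatever function produces the model's output:

```python
import re

# Hypothetical policy rules for illustration only; real deployments use far
# more sophisticated classifiers, not a handful of regexes.
DISALLOWED_PATTERNS = [
    r"\bhow to build a (bomb|weapon)\b",   # illustrative safety rule
    r"\bsocial security number\b",          # illustrative privacy rule
]

def is_allowed(text: str) -> bool:
    """Return False if the text matches any disallowed pattern."""
    return not any(re.search(p, text, re.IGNORECASE) for p in DISALLOWED_PATTERNS)

def guarded_generate(prompt: str, generate) -> str:
    """Wrap an arbitrary `generate(prompt) -> str` function with policy checks
    on both the incoming prompt and the outgoing response."""
    if not is_allowed(prompt):
        return "Sorry, this request is not allowed."
    output = generate(prompt)
    return output if is_allowed(output) else "Sorry, this response was filtered."
```

The point is not that such filters are sufficient, only that bolting restrictions onto a model is far easier than making the model itself accurate.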
OpenAI has its own set of principles and ethics for AI, but it is only recently that the AI community has started courting universal regulatory bodies. Many AI experts, including OpenAI itself, want to create a universal governing body.
According to the TIME article, the organisation put forward its own points during the drafting of the EU AI Act, which is presently the most comprehensive AI law ever created.
But in the TIME article linked above, we find a revelation: OpenAI has been lobbying to exert its influence over the EU drafts.
This was done quietly and unannounced, away from the public eye.
Here is the OpenAI white paper for your own reference.
Here are a few major takeaways from the overall act, as highlighted by the MIT Technology Review:
Ban on emotion-recognition AI.
Ban on real-time biometrics and predictive policing in public spaces.
Ban on social scoring.
New restrictions for generative AI.
New restrictions for recommendation algorithms on social media.
But the real question..
..why is this a lobbying effort by OpenAI?
The OpenAI white paper was submitted to the EU in September 2022, yet this was done relatively unannounced to the public. This is strange for an 'open' non-profit organisation. Because the company is taking part in regulating the giant algorithmic monster, the public should be aware of such regulation right from the drafting phase itself.
“We believe people around the world should democratically decide on the bounds and defaults for AI systems.” - OpenAI wrote this in a blog post a month ago, and what’s interesting to me is the fact that the white paper behind the drafts we are discussing was never released, or even announced, to the public.
Billy Perrigo, the author of the TIME article, highlighted that OpenAI lobbied the EU to draft the act in its favour. And it succeeded.
The art of this lobbying lies in careful wording.
In its final version, the AI Act passed by EU lawmakers did not include phrases implying that AI systems like ChatGPT or Midjourney are inherently high risk. Instead, the law focuses on "foundation models": AI systems trained on large sets of data. Examples of such models are GPT-4, LLaMA, and so on, which power most generative AI apps.
These models, according to the act, have to follow a few rules, such as not generating illegal content, disclosing whether copyrighted material was used in training, and conducting risk assessments.
In one part of the white paper, OpenAI opposed a proposed amendment to the AI Act that would have labelled generative AI systems like ChatGPT and DALL-E as "high risk" if they produced text or images that could “falsely appear to a person to be human-generated and authentic.”
OpenAI said in the white paper that this rule could mean its models would “inadvertently” be considered high risk - in other words, that they might be classified as high risk ‘by mistake’. OpenAI recommended scrapping the amendment.
The company further argued that it would be sufficient to rely on another part of the same act, which mandates that AI providers and developers label AI-generated content so that it is clear to users that they are interacting with an AI system.
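For concreteness, here is a minimal sketch of what such a disclosure label could look like in practice. This is an assumption for illustration only: the `LabelledContent` type and `label_output` function are hypothetical, and the act does not prescribe any particular format.

```python
from dataclasses import dataclass

# Hypothetical sketch of attaching a machine-readable disclosure to generated
# content, in the spirit of the act's transparency requirement. The field
# names are illustrative, not part of any official specification.
@dataclass
class LabelledContent:
    text: str
    ai_generated: bool
    model_name: str

def label_output(text: str, model_name: str) -> LabelledContent:
    """Wrap model output with an explicit AI-generated disclosure."""
    return LabelledContent(text=text, ai_generated=True, model_name=model_name)

labelled = label_output("The sky is blue.", model_name="gpt-4")
print(f"[AI-generated by {labelled.model_name}] {labelled.text}")
```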
This sounds a bit hypocritical to me. OpenAI, a non-profit organisation, is doing its best to help draft regulations for AI systems while at the same time protecting its own products from those regulations.
OpenAI has shown this hypocritical streak on many occasions in the past. For instance, remember when it became a 'closed' AI company by forming a highly valued ($10B+) partnership with Microsoft for the exclusive use of GPT-4.
When an organisation makes an impact on people, it is revolutionary. But when it is the only player in the market, it is a monopoly.
Regulation and governance are the last resort for making AI safe. People have tried many strategies and mechanisms to fine-tune AI and make it less inaccurate. But we are still very far from even an ‘almost perfect’ AI.
Thus, we need universal governance and regulation. And not just any superficial regulation, but an effective one that puts proper restrictions in place and enforces safeguards - without favouring any 'open' or closed organisation - and that can actually make AI safer for us to use.
The EU regulation is a good start, but it is far from perfect. As time goes by, fair and proper universal governance will become necessary.
Lastly, if a year spent in artificial intelligence is enough to make one believe in God, as Perlis said, then we must make sure that this god is not a devil in disguise - before it's too late.