USA - A United States for AI
The White House's Fact Sheet addressing AI safety for US citizens
As soon as I hit publish on the post titled The EU AI Act: A Promising Step or a Risky Gamble for the Future of AI? on Creative Block - where I discussed the EU AI Act - I wondered what the US was doing about AI safety.
The USA is not just staring at the uncertainty. It can't afford to. It is taking a stance on the safety of AI.
And since I have already covered the EU's regulatory stance on AI, here is my coverage of what the US is doing -
I.
With great power… comes greater accountability
The European Union was the first bloc to draft an AI act.
However, look around and you will notice that the most powerful AI companies - the likes of OpenAI, Meta, Microsoft, and Anthropic - are all US-based. They hold the most influence in the AI scene. OpenAI, the company that developed ChatGPT, single-handedly spread the fire of Generative AI to the world - raising the stakes and expectations of AI globally.
In that sense, if we want to make safer AI for the world, a good starting point would be to regulate the AI companies based in the USA - a country which is home to the global leaders of AI Tech and development.
And the good news is, regulatory action is underway.
On May 4th, 2023, the Biden-Harris administration announced the US's AI bill, which highlights the rights and protections it will give citizens against the hazards of AI.
According to this bill, AI tech giants - mostly situated in the US - agreed to put their models through independent security and vulnerability testing before they go live to the public.
According to the statement published by the White House, the largest and most influential AI companies - including Anthropic, Inflection, Meta, OpenAI and Microsoft (the 'AI MOM') - pledged to set up independent institutions to assess their models and make them - well, nearly safe, if not fully safe.
The agreement also included adding safeguards, such as testing the outputs of the AI models, detecting and correcting bias or misinformation, handling sensitive tasks with care, and so on.
Even watermarking AI-generated content was one of the approaches mentioned in the bill.
The White House said in a statement:
“The Biden-Harris Administration is developing an executive order and will pursue bipartisan legislation to help America lead the way in responsible innovation.”
Moreover, the statement also noted that the Administration has “secured voluntary commitments from these companies to help move toward safe, secure, and transparent development of AI technology.”
The safe, secure, and transparent development of AI is easier said than done. But nonetheless, a foundational AI bill is a good start.
Here is the detailed official Blueprint for an AI Bill of Rights from the White House, for your reference -
Taken straight from the Blueprint, here are some of the key points of the AI act that you need to know -
i. Ensuring [the AI] Products are Safe Before Introducing Them to the Public
Companies have to check their AI systems for safety and security before they are released to the public.
Or in other words, taming the wild AI and making it socially safer for users.
For this, the companies would have to have testing carried out by independent experts - ones not directly affiliated with the company. This is to ensure that the development of AI happens in a safe and transparent manner.
Moreover, the companies must also commit to sharing what they learn about AI safety and security with other companies, governments, and other groups. They would have to release this information and tell others how to make AI safer, what kinds of attacks they face, and how to work together on technical issues.
ii. Building Systems that Put Security First
Security and privacy are the primary focus of AI safety and ethics. They are the crucial building blocks of a safe - and more importantly, responsible - AI.
For this, the companies would have to invest in protecting their AI systems from hackers and bad actors.
The significance of model weights - dubbed the 'brain' of generative AI - must also be emphasised. Companies must release their models (and AI apps) only when intended, and only after the security risks have been considered.
Also, if you face any issues with an AI model - say, you want to report harm or misuse at some point - there should be channels in place where you can approach the company and file your grievances.
For this reason, the companies also commit to setting up such channels for their AI systems.
AI is far from perfect, and there is always a margin for error.
iii. Earning the Public’s Trust
Probably the most important point: getting AI to earn the public's trust.
AI is virtually of no use if it is not used to solve the problems of humanity. And in order to help humans, these machines with artificial life must earn the trust of living humans.
According to the AI bill, there are four ways to earn the public's trust:
First, companies must develop a mechanism to differentiate between AI-generated and non-AI-generated text. One such way, as highlighted in the blueprint of the AI bill, is watermarking (a toy sketch of the idea follows this list).
This is necessary to preserve the credibility of the original work and prevent fraud.
Second, companies must report their systems' capabilities, risks, possible misuse, areas of application, and so on. The public must be informed about the risks associated with an AI system, as well as about new features and fixes for past issues.
Third, companies must prioritise research on the risk and safety implications of AI. They should continue to make their systems better and safer for the general public.
While AI cannot be fully perfect, the risks can be mitigated. This could be done by placing restrictions, implementing tough regulations, and so on.
Fourth, companies must develop and deploy their tools to solve society’s greatest challenges.
Taken straight from the blueprint:
From cancer prevention to … climate change to so much in between, AI—if properly managed—can contribute to prosperity, equality, and security of all.
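The blueprint does not spell out how such watermarking would work technically. Purely as a toy illustration of the statistical idea behind text watermarking (loosely inspired by the 'green list' approach discussed in academic research, not by anything the White House or the companies have actually published), here is a minimal Python sketch; the parity rule and the function names are my own assumptions:

```python
import hashlib

def is_green(prev_token: str, token: str) -> bool:
    """Toy rule: hash the (previous, current) token pair and check parity.
    A real scheme would seed a pseudorandom 'green list' of vocabulary items
    from the previous token; this parity check only mimics that idea."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0  # roughly half of all pairs count as 'green'

def green_ratio(text: str) -> float:
    """Share of token pairs that land on the 'green list'. Ordinary text scores
    around 0.5; text generated by a model that quietly favours green tokens
    would score noticeably higher, which is the signal a detector looks for."""
    tokens = text.split()
    if len(tokens) < 2:
        return 0.0
    hits = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)

if __name__ == "__main__":
    sample = "the quick brown fox jumps over the lazy dog"
    print(f"green ratio: {green_ratio(sample):.2f}")  # near 0.5 for unwatermarked text
```

The only point of the sketch is that watermark detection boils down to a statistical test: a generator that subtly biases its word choices leaves a trace that a detector can measure later, without needing access to the original prompt or model.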
II.
The Five Protections of the US AI Bill of Rights
This came as the echoes of the new AI Bill of Rights were heard throughout the White House. The Biden administration outlines that the new AI act would provide the following five protections to citizens:
Safe and Effective Systems: protecting users from harmful AI systems (or algorithmic weapons of mass misinformation)
Algorithmic Discrimination Protection: ensuring that users are not discriminated against (based on nationality, ethnicity, belief, etc.) by these AI systems
Data Privacy: knowing how your data is being used, and being protected from malpractice when your data is used to train an AI
Notice and Explanation: users need to be informed about the tools they are using, how their data are being used, what the potential drawbacks are, and so on.
Human Alternatives, Consideration, and Fallback: ensuring that you have the power to opt out of a service and approach a person or agency who can take your complaints and feedback.
The underlying principles are pretty good. However, for all these protections to work, great implementation is needed. If not, better and tougher regulations will be needed.
Needless to say, the AI bill unveiled by the Biden Administration received mixed responses.
Critics say that the AI plan lacks strict measures, while others worry it could set back further innovation in AI.
The Wall Street Journal noted that this regulation could stifle AI innovation (which was the same response towards the new EU AI bill).
Russell Wald, director of policy for the Stanford Institute for Human-Centered AI, says:
“It is disheartening to see the lack of coherent federal policy to tackle … challenges posed by AI.”
Wald noted that the bill lacked initiatives such as federally coordinated monitoring and review actions to mitigate the risks brought by these models.
In contrast, Shaundra Watson - policy director for AI at the tech lobby BSA, which counts Microsoft and IBM among its members - says this:
“It will be important to ensure that these principles are applied in a manner that increases protections and reliability in practice.”
According to Watson (and I share the same opinion), the laws are good in principle, but their effectiveness will depend on how well they are implemented.
On the flip side, Marc Rotenberg, the head of the nonprofit Center for AI and Digital Policy, holds a more optimistic view:
“This is clearly a starting point. That doesn't end the discussion over how the US implements human-centric and trustworthy AI ... But it is a very good starting point.”
Implementing AI safety laws federally, or regulating AI in general, is quite a new thing. It remains a challenge for any nation, as there are countless ways errors could creep in. It is tough to predict unanticipated errors or bugs.
We know AI will most likely mess up at some point. But we still largely do not know how it will mess up.
However, quoting straight from Rotenberg: this is a start (nonetheless).
The USA houses the most powerful AI companies in the world, and yet it has no strict laws or safety mechanisms governing AI so far.
III.
The Future of AI Regulations
The pace of development in the AI scene is fast. However, efforts to implement safeguards are lagging. To put it better, the rate of improvement in AI safety is falling far behind the curve of AI development in general.
Seven major companies that build powerful AI software - including Anthropic, Inflection, Meta, OpenAI and Microsoft (what I call, the 'AI MOM') - have already signed on, pledging their commitment to making safe AI.
The US houses major AI companies not only in generative AI (OpenAI, Inflection, etc.) but also in other subfields and niches of AI - the likes of self-driving cars, automation, and so on.
We are back again, aren’t we? What is the future of AI now?
If we look at the funding of AI 'startups' alone, we can get a sense of the economics of these companies.
AI startups are burning money to keep up with the hype. For example, a France-based AI startup managed to raise $113 million in funding. The catch: the company had no prototype, nor any product, when this funding was raised. So why did investors pour in so much money? Because of the hype and the attractive personalities working at this startup - which was founded by former Meta AI researchers Timothée Lacroix and Guillaume Lample.
A few months ago, I wrote an essay on AI hype - on how companies exploit the hype and earn continued profits -
In these fast-changing times, whenever we ask what the future of AI is, the weight of the answer seems to shift from one moment to the next.
At today's pace of breakthroughs, the AI innovation of the future won't take long to become the AI innovation of the present.
Just like the Doomsday Clock, every new discovery in AI ticks us closer to a potential AI catastrophe.
If AI continues to develop at a faster pace than AI safety, the AI catastrophe clock would look much like the Doomsday Clock does at present: 90 seconds to midnight.
AI regulation's greatest goal is to rewind the clock of AI catastrophe, such that it is nowhere near 90 seconds to AI doom.
I feel that the EU's AI bill was more comprehensive, in spite of having certain loopholes.