ChatGPT-maker OpenAI cites 'existential threat' in call for AI regulation
ChatGPT maker OpenAI has called for the regulation of 'superintelligent' AIs, asserting that an equivalent of the International Atomic Energy Agency would be required to safeguard humanity from the risks posed by fast-developing artificial intelligence.
In a note published on the company's website, co-founders Greg Brockman, Ilya Sutskever and chief executive Sam Altman called for an international regulator to begin working on ways to “inspect systems, require audits, test for compliance with safety standards, and place restrictions on degrees of deployment and levels of security” in order to reduce the “existential risk” such systems could pose.
“It’s conceivable that within the next 10 years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations,” the note read.
“In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future, but we have to manage risk to get there. Given the possibility of existential risk, we can’t just be reactive.”
OpenAI's leaders called for 'some degree of coordination' among organisations working on artificial intelligence research to ensure that the development of AI models fits smoothly into society while also prioritising safety.
The US-based Center for AI Safety (CAIS), which works to 'reduce societal-scale risks from artificial intelligence', describes eight categories of 'catastrophic' and 'existential' risks that AI development could pose.
According to the creators of the viral chatbot ChatGPT, those risks mean “people around the world should democratically decide on the bounds and defaults for AI systems”, though they admit that “we don’t yet know how to design such a mechanism”.
“We believe it’s going to lead to a much better world than what we can imagine today (we are already seeing early examples of this in areas like education, creative work, and personal productivity),” the note read.
“Because the upsides are so tremendous, the cost to build it decreases each year, the number of actors building it is rapidly increasing, and it’s inherently part of the technological path we are on. Stopping it would require something like a global surveillance regime, and even that isn’t guaranteed to work. So we have to get it right.”
OpenAI CEO testifies before US Congress, calls regulation of artificial intelligence ‘critical’
Sam Altman, chief executive of OpenAI, the startup that created ChatGPT, addressed a panel of United States lawmakers on Tuesday (May 16), saying that regulation of the “increasingly powerful models” of artificial intelligence is “critical” to mitigating the risks the technology poses. Altman also described the use of AI to interfere with election integrity as a “significant area of concern”.
“OpenAI was founded on the belief that artificial intelligence has the potential to improve nearly every aspect of our lives, but also that it creates serious risks,” said Altman while addressing a Senate judiciary subcommittee hearing.
This comes as companies large and small have raced to bring increasingly sophisticated AI models to market, raising concerns among critics and industry experts who have warned that the technology can exacerbate societal harms, including misinformation and prejudice.
While expressing these concerns, Altman also spoke about AI's benefits to society, saying that in time, generative AI developed by OpenAI will “address some of humanity’s biggest challenges, like climate change and curing cancer.”
However, given the risks of the technology, “we think that regulatory intervention by governments will be critical to mitigating the risks of increasingly powerful models,” said the CEO of OpenAI.