An open letter signed by more than 1,100 artificial intelligence experts, CEOs, and researchers, including SpaceX CEO Elon Musk, is calling for a six-month moratorium on “AI experiments” that would push the technology beyond the power of OpenAI’s recently released GPT-4 large language model.
It’s a notable expression of concern from a who’s who of some of the most clued-in minds working in AI today, including researchers from Alphabet’s DeepMind and Yoshua Bengio, one of the so-called “godfathers of AI.”
“AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs,” reads the letter, issued by the nonprofit Future of Life Institute, adding that “advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.”
“Unfortunately, this level of planning and management is not happening,” the letter continues, “even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.”
The letter questions whether we should allow AI to flood the internet with “propaganda and untruth” and take jobs away from humans.
It also references OpenAI CEO Sam Altman’s recent comments about artificial general intelligence, in which he argued that the company would use AGI to “benefit all of humanity,” a sentiment experts immediately slammed.
The six-month pause the experts call for should be used to “develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,” they write.
“AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal,” the letter reads.
That kind of concern is echoed by governments around the world, which are struggling to get ahead of the problem and regulate AI in a meaningful way. Last year, US President Joe Biden’s administration released a blueprint for an AI Bill of Rights, which would let citizens opt out of decisions made by AI algorithms, but experts criticized the proposal as toothless.
And it’s not just governments. Musk, who helped found OpenAI in 2015 before leaving over ideological differences three years later, has repeatedly voiced concerns about overly powerful AI.
“AI stresses me out,” the billionaire told Tesla investors earlier this month, clarifying later that he’s a “little worried” about it.
“We need some kind of, like, regulatory authority or something overseeing AI development,” Musk added at the time. “Make sure it’s operating in the public interest. It’s quite dangerous technology. I fear I may have done some things to accelerate it.”
Given that OpenAI transformed from a nonprofit into a for-profit company after Musk left, it’s not a stretch to see the potential dangers of a profit-driven model of AI development.
Whether the company is acting in good faith or hunting multibillion-dollar deals with the likes of Microsoft to maximize profits remains as murky as ever.
And the danger of a runaway AI that does more harm than good is more pressing than one might think. The current crop of AI models, such as GPT-4, still has a worrying tendency to hallucinate facts and potentially mislead users, an aspect of the technology that clearly has experts spooked.