State Department Report Warns of AI Apocalypse, Suggests Limiting Compute Power Allowed for Training

A report commissioned by the US State Department is warning that rapidly evolving AI could pose a “catastrophic” risk to national security and even all of humanity.

The document, titled “An Action Plan to Increase the Safety and Security of Advanced AI” and first reported on by TIME, advises that the US government must move “quickly and decisively,” with measures including potentially limiting the compute power allocated to training these AIs, or else risk an “extinction-level threat to the human species.”

“The rise of advanced AI and AGI [artificial general intelligence] has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons,” the report reads.

While we have yet to reach the stage at which AI models can compete with humans on an intellectual level, a threshold commonly known as AGI, many have argued that it’s only a matter of time, and that we should get ahead of the problem by having the government intervene before it’s too late.

It’s only the latest instance of experts warning of AI tech posing an “existential risk” to humanity, a group that includes Meta’s chief AI scientist and so-called “godfather” of the tech Yann LeCun, Google DeepMind CEO Demis Hassabis, and ex-Google CEO Eric Schmidt.

A recent survey also found that over half of the AI researchers surveyed say there’s a five percent chance that humans will be driven to extinction, among other “extremely bad outcomes.”

The 247-page report, commissioned by the State Department in late 2022, draws on interviews with more than 200 experts, including employees at companies like OpenAI, Meta, and Google DeepMind, as well as government officials.

To stop AI from leading to our demise as a species, the authors recommend that a new US agency set an upper limit on how much computing power can be used to train any given AI model. AI companies would also have to seek the government’s permission to train any new model above a certain compute threshold.
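To make the idea of a compute cap concrete, here is a minimal back-of-envelope sketch. It assumes the common approximation from the scaling-laws literature that training compute is roughly six floating-point operations per model parameter per training token; the 10^26-operation threshold mirrors the reporting trigger in the 2023 White House executive order on AI, and the example model size is hypothetical. None of these specific figures come from the Gladstone report itself.

```python
# Illustrative sketch of how a compute-based licensing threshold could be
# checked. The 6 * N * D approximation comes from the scaling-laws
# literature; the 1e26 figure mirrors the 2023 US executive order's
# reporting trigger. Neither number is taken from the Gladstone report.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Estimate total training compute: ~6 FLOPs per parameter per token."""
    return 6.0 * n_params * n_tokens

THRESHOLD_FLOPS = 1e26  # hypothetical licensing cutoff (assumption)

# Hypothetical example: a 70-billion-parameter model trained on
# 15 trillion tokens.
estimate = training_flops(n_params=70e9, n_tokens=15e12)
print(f"Estimated training compute: {estimate:.2e} FLOPs")
if estimate > THRESHOLD_FLOPS:
    print("Above threshold: training would require government permission.")
else:
    print("Below threshold: no permission needed under this scheme.")
```

Under this arithmetic, even a very large present-day training run lands in the rough vicinity of 10^24 to 10^25 operations, which is why proposed caps of this kind tend to sit just above the current frontier.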

Interestingly, the report also advises making it a criminal offense to open-source powerful AI models, that is, to publish their weights or otherwise reveal their inner workings.

These recommendations are meant to address the risk of having an AI lab “lose control” of its AI systems, which could have “potentially devastating consequences to global security.”

“AI is already an economically transformative technology,” Jeremie Harris, one of the report’s authors and the CEO of Gladstone AI, told CNN. “It could allow us to cure diseases, make scientific discoveries, and overcome challenges we once thought were insurmountable.”

“But it could also bring serious risks, including catastrophic risks, that we need to be aware of,” he added. “And a growing body of evidence — including empirical research and analysis published in the world’s top AI conferences — suggests that above a certain threshold of capability, AIs could potentially become uncontrollable.”

Harris argued in a video posted on Gladstone AI’s website that current safety and security measures are woefully “inadequate relative to the national security risks that AI may introduce fairly soon.”

It’s far from the first time we’ve heard industry leaders warn about the potential dangers of AI, despite tens of billions of dollars being poured into the development of the tech.

But whether governments will heed these warnings remains to be seen. The news comes the same week the European Union passed the world’s first major act to regulate AI, possibly setting the tone for future AI regulations in other parts of the world.

It’s an alarming report that’s bound to raise eyebrows, especially given the current state of AI regulation in the US. Are the authors’ concerns warranted, or are these overblown claims, with recommendations that amount to government overreach and would stifle innovation?

After all, as the report notes on its first page, it doesn’t “reflect the views of the United States Department of State or the United States Government.”

“I think that this recommendation is extremely unlikely to be adopted by the United States government,” Greg Allen, director of the Wadhwani Center for AI and Advanced Technologies at the Center for Strategic and International Studies, told TIME.

More on AI extinction: Scientists Say This Is the Probability AI Will Drive Humans Extinct
