OpenAI CEO Signs Letter Warning AI Could Cause Human “Extinction”

Massive cognitive dissonance incoming.

AI-pocalypse Now

Sam Altman, the CEO of OpenAI, has signed onto a one-sentence open letter warning about the “extinction” dangers posed by artificial intelligence. Yes, we’re LOLing — after all, he’s probably doing more than anyone else alive to bring advanced AI into existence.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the letter released by the Center for AI Safety reads.

Altman is joined by Harvard legal scholar Laurence Tribe, Google DeepMind CEO Demis Hassabis, and Ilya Sutskever, his fellow OpenAI cofounder and the firm's chief scientist. As of now, though, Altman is probably the biggest-deal signatory on the list, which is kind of ironic considering that his company has done more to push AI into the real world than pretty much anyone else.

To be fair, it's definitely a good thing that the CEO of the most recognizable AI firm in the world is concerned about its risks. Still, there's more than a hint of irony in someone like Altman (who, lest we fail to mention, is a doomsday prepper) warning of a potential AI apocalypse out of one side of his mouth while he, like many of the other signatories, is actively working to build human-level or even superhuman AI.

Precedented

The history of Altman’s complicated stance on AI risk goes deeper than this letter, too.

OpenAI was, as Futurism and many other outlets will regularly remind you, cofounded by Elon Musk, Altman, Sutskever, and a few others back in 2015 as a research lab with the pretty explicit goal of countering “bad” AI and promoting responsible use of the technology.

In early 2018, however, Musk left OpenAI's board over apparent disagreements about its direction. The following year, the firm that had been founded as a nonprofit announced that it was restructuring around a for-profit arm, the "capped-profit" OpenAI LP, moving forward.

Since ChatGPT changed the game at the end of 2022, Altman has repeatedly suggested that the future of AI freaks him out, even as he brags about how much better it's going to make the world. If you're confused by the dissonance there, you're not alone.

While these disparate statements and actions do sow confusion about the character and goals of the man leading the AI company that seems to be positioning itself to take over the world, Altman is, at the end of the day, looking out for his bottom line. If saying the right things about AI's risks grows his and OpenAI's profile, so be it.

More on OpenAI: OpenAI Annoyed by Lobbyists Seizing on ChatGPT
