Concerned about the United States’ simmering culture war? According to OpenAI CEO Sam Altman, you can go ahead and ignore it, actually — and instead focus on building artificial general intelligence (AGI), which would be AI that exceeds human capabilities, perhaps by a very wide margin.
“Here is an alternative path for society: ignore the culture war. Ignore the attention war,” Altman tweeted on Sunday, encouraging readers instead to “make safe AGI. Make fusion. Make people smarter and healthier. Make 20 other things of that magnitude.”
“Start radical growth, inclusivity, and optimism,” Altman continued, rounding out the optimistic proposition with a particularly Star Trek idea: “Expand throughout the universe.”
Though it’s a little vague, Altman’s musing certainly seems to imply that successfully creating AGI would play a pivotal role in solving pretty much all of humanity’s problems, from cracking the fusion code and solving the clean energy crisis to curing disease to “20 other things of that magnitude,” whatever those 20 other things may be. (Altman had tweeted earlier in the day that “AI is the tech the world has always wanted,” which seems to speak to such an outlook as well.)
And if that is what Altman’s implying? That’s some seriously next-level AI optimism — indeed, this description of the future could arguably be called an AI utopia — especially when you consider that Altman and his OpenAI staffers pretty openly admit that AGI could also destroy the world as we know it.
To that end, the OpenAI CEO often offers polarizing takes on whether AI may ultimately end the world or save it, telling The New York Times as recently as March that he believes AI will either destroy the world or make a ton of money.
Others in the CEO’s circle seem to have taken note of Altman’s oft-conflicting outlooks on AI’s potential impact.
“In a single conversation,” Kelly Sims, a board adviser to OpenAI and a partner at Thiel Capital, told the NYT in March, “[Altman] is both sides of the debate club.”
And while optimism is generally a good thing, Altman’s advice to his followers seems a bit oversimplified. Humanity’s problems don’t just hinge on whether we’re paying attention to talk of the “woke mind virus,” and considering that inflammatory language hurts real people in the real world, not everyone has the luxury of ignoring the brewing “culture war” that Altman’s speaking to.
And on the AGI side, it’s true that such a system could, in theory, give humans a helping hand in curing some of our ills. But AGI remains entirely theoretical. Many experts doubt that such a system can ever be built at all, and even with today’s far narrower AIs, we haven’t figured out how to make them safe and unbiased. Ensuring that a vastly more advanced AGI is benevolent would be a tall, and perhaps impossible, task.
In any case, we’re looking forward to seeing which side of the AI optimism bed Altman wakes up on tomorrow.
More on the AI friendliness scale: Ex-OpenAI Safety Researcher Says There’s a 20% Chance of AI Apocalypse