“The world isn’t ready, and we aren’t ready.”
Getting Warner
After former and current OpenAI employees released an open letter claiming they’re being silenced from raising safety concerns, one of the letter’s signatories made an even more terrifying prediction: that the odds AI will either destroy or catastrophically harm humankind are greater than a coin flip.
In an interview with The New York Times, former OpenAI governance researcher Daniel Kokotajlo accused the company of ignoring the monumental risks posed by artificial general intelligence (AGI) because its decision-makers are so enthralled with its possibilities.
“OpenAI is really excited about building AGI,” Kokotajlo said, “and they are recklessly racing to be the first there.”
Kokotajlo’s spiciest claim to the newspaper, though, was that the chance AI will wreck humanity is around 70 percent — odds no one would accept for any major life decision, but that OpenAI and its ilk are barreling ahead with anyway.
MF Doom
The term “p(doom),” which is AI-speak for the probability that AI will usher in doom for humankind, is the subject of constant controversy in the machine learning world.
The 31-year-old Kokotajlo told the NYT that after he joined OpenAI in 2022 and was asked to forecast the technology’s progress, he became convinced not only that the industry would achieve AGI by the year 2027, but that there was a high probability it would catastrophically harm or even destroy humanity.
As noted in the open letter, Kokotajlo and his comrades — who include former and current employees at Google DeepMind and Anthropic, as well as Geoffrey Hinton, the so-called “Godfather of AI” who left Google last year over similar concerns — are asserting their “right to warn” the public about the risks posed by AI.
Kokotajlo became so convinced that AI posed massive risks to humanity that he eventually urged OpenAI CEO Sam Altman personally that the company needed to “pivot to safety” and spend more time implementing guardrails to rein in the technology rather than continue making it smarter.
Altman, per the former employee’s recounting, seemed to agree with him at the time, but over time that agreement came to feel like lip service.
Fed up, Kokotajlo quit the firm in April, telling his team in an email that he had “lost confidence that OpenAI will behave responsibly” as it continues trying to build near-human-level AI.
“The world isn’t ready, and we aren’t ready,” he wrote in his email, which was shared with the NYT. “And I’m concerned we are rushing forward regardless and rationalizing our actions.”
Between the big-name exits and these sorts of terrifying predictions, the latest news out of OpenAI has been grim — and it’s hard to see it getting any sunnier moving forward.
“We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk,” the company said in a statement after the publication of this piece. “We agree that rigorous debate is crucial given the significance of this technology and we’ll continue to engage with governments, civil society and other communities around the world.”
“This is also why we have avenues for employees to express their concerns including an anonymous integrity hotline and a Safety and Security Committee led by members of our board and safety leaders from the company,” the statement continued.
More on OpenAI: Sam Altman Replaces OpenAI’s Fired Safety Team With Himself and His Cronies