A former OpenAI worker says he quit the company after realizing that it was putting safety on the back burner to pursue profit.
During a recent episode of tech YouTuber Alex Kantrowitz’s podcast, former Superalignment team member William Saunders used an apt analogy to explain why he left.
“I really didn’t want to end up working for the Titanic of AI, and so that’s why I resigned,” he said. “During my three years at OpenAI, I would sometimes ask myself a question. Was the path that OpenAI was on more like the Apollo program or more like the Titanic?”
Saunders argued that the Titanic may have been called “unsinkable, but at the same time there weren’t enough lifeboats for everyone and so when disaster struck, a lot of people died.”
The former employee accused OpenAI of “prioritizing getting out newer, shinier products,” just like White Star Line, the now-defunct British shipping company that operated the doomed Titanic.
His comments highlight growing concerns over companies like OpenAI developing AI systems capable of surpassing human abilities, an idea dubbed artificial general intelligence (AGI). AGI remains entirely theoretical, but it is a source of great interest to executives like OpenAI’s Sam Altman.
We’ve already seen several other former employees come forward to accuse leadership of turning a blind eye to these concerns and stifling oversight.
OpenAI’s commitment to safety also looks shaky in light of its ever-shifting corporate structure. Earlier this year, OpenAI CEO Sam Altman announced that he was dissolving the safety-oriented Superalignment team, which Saunders was once a part of, and installing himself at the helm instead. At the time, the company said it was creating a new “safety and security committee.”
A co-leader of the now-dissolved team, former chief scientist Ilya Sutskever, announced last month that he was starting a new company called Safe Superintelligence Inc., which he says will have a “straight shot, with one focus, one goal, and one product.”
Critics have long pointed out that despite Altman claiming from the very start that he wanted to realize AGI safely, the company has played fast and loose with the rules, prioritizing the release of shiny new chatbots, the pursuit of major rounds of funding, and billion-dollar partnerships with tech giants like Microsoft.
To Saunders, these concerns were reason enough to quit. He hoped OpenAI would instead operate more like NASA’s Apollo program.
“Even when big problems happened, like Apollo 13, they had enough sort of like redundancy, and were able to adapt to the situation in order to bring everyone back safely,” he told Kantrowitz.
“It is not possible to develop AGI or any new technology with zero risk,” he added in an email to Business Insider after the publication pointed out that the Apollo program saw its own fair share of safety oversights. “What I would like to see is the company taking all possible reasonable steps to prevent these risks.”
Of course, it remains unclear whether any of these concerns will prove justified. After all, we’re still far from an actual AI that can outwit a human, and experts have pointed out that the tech may not end up going anywhere.
Nonetheless, Saunders’ comments paint a worrying picture of how OpenAI is being run and of the changes Altman has made over the last couple of years.
More on OpenAI: Expert Warns That AI Industry Due for Huge Collapse