Everyone’s still scrambling to find a plausible explanation as to why OpenAI CEO Sam Altman was suddenly fired from his position last Friday, a decision that has led to absolute carnage at the company and beyond.
Beyond some vague language accusing him of not being “consistently candid in his communications with the board, hindering its ability to exercise its responsibilities,” the company’s nonprofit board has stayed frustratingly quiet as to why it sacked Altman.
And at the time of writing, the company’s future is still up in the air, with the vast majority of employees ready to quit unless Altman is reinstated.
While we await more clarity on that front, it’s worth looking back at the possible reasoning behind Altman’s ousting. One particularly provocative possibility concerns Altman’s role in the company’s efforts to realize a beneficial artificial general intelligence (AGI) — its stated number one goal since it was founded in 2015 — and there’s been plenty of feverish speculation about how that may have led to his dismissal.
Was the board cutting Altman out to rescue humanity from impending doom, in other words? It sounds very sci-fi, but then again so does the whole company.
Making matters even hazier is that there’s still no single agreed-upon definition of AGI, a term that roughly refers to the point at which an AI can perform intellectual tasks on par with humans.
OpenAI’s own definition states that AGI is a “system that outperforms humans at most economically valuable work,” but that doesn’t quite capture the way some of OpenAI’s own leaders are talking about the notion. Just last week, days before he was ousted, Altman himself described AGI as a “magic intelligence in the sky” during an interview with the Financial Times, invoking a borderline God-like entity — language echoed by the company’s chief scientist Ilya Sutskever, who was instrumental in kicking Altman out of OpenAI.
But how close is OpenAI really to achieving that goal? Some have speculated that OpenAI’s board rushed to dump the former CEO because he was acting recklessly and didn’t sufficiently consider the risks of developing an AGI.
That line of reasoning could suggest the company may be closer than it’s letting on, especially considering the apparent urgency surrounding Altman’s firing (even major investor Microsoft was blindsided).
However, determining with any certainty when an AI algorithm has actually become better than a human at a given task is far trickier than it sounds.
Some researchers have offered up possible frameworks to gauge if any given algorithm has achieved levels of AGI performance, but other experts argue it’s a transition that won’t simply happen overnight.
Earlier this year, Microsoft researchers claimed that OpenAI’s GPT-4 is showing “sparks” of an AGI, comments that were quickly criticized by their peers.
After Altman published a blog post about the topic in February, fleshing out his company’s goal of creating an AGI that “benefits all of humanity,” experts were left unimpressed.
“The term AGI is so loaded, it’s misleading to toss it around as though it’s a real thing with real meaning,” Bentley University mathematics professor Noah Giansiracusa argued in a tweet at the time. “It’s not a scientific concept, it’s a sci-fi marketing ploy.”
“Your system isn’t AGI, it isn’t a step towards AGI, and yet you’re dropping that in as if the reader is just supposed to nod along,” added University of Washington linguistics professor Emily Bender.
In short, we still don’t know how close OpenAI is to realizing its goal, and given what we’ve seen so far — heck, GPT-4 can’t even reliably tell truth from fiction — it’s likely going to take a lot more research to get there.
Under Altman’s leadership, OpenAI’s own core priorities have notably shifted. Last month, Semafor reported that the firm changed its purported “core values” on its website to focus them on AGI, swapping values on a job openings page from “Audacious,” “Thoughtful,” “Unpretentious,” and “Impact-driven” to “AGI focus” — the first on the list — “Intense and scrappy,” “Scale,” and “Team spirit.”
The timing of Altman’s firing could also offer clues. Early last week, the company proudly announced a new, more efficient version of its large language model called GPT-4 Turbo, as well as tools that allow users to create their own chatbots using its tech.
It’s possible that Altman’s moves to capitalize on the company’s financial successes, most notably ChatGPT, instilled fear among OpenAI’s board. The announcements last week triggered a frenzy, with OpenAI forced to temporarily pause new sign-ups to its paid ChatGPT Plus service due to “overwhelming demand.”
For now, we’re reading tea leaves left by a deeply weird group of people.
Considering that Sutskever, who also sits on the board, almost immediately regretted his central role in the move to oust Altman, it’s likely the situation is far more complex than the company is letting on.
“I deeply regret my participation in the board’s actions,” he tweeted. “I never intended to harm OpenAI.”
“Why did you take such a drastic action?” replied SpaceX CEO Elon Musk, who cofounded OpenAI alongside Altman, left the company in 2018 over core disagreements, and has since rung alarm bells over the tech.
“If OpenAI is doing something potentially dangerous to humanity, the world needs to know,” he added.
More on OpenAI: OpenAI Employees Say Firm’s Chief Scientist Has Been Making Strange Spiritual Claims