OpenAI’s Chaos Linked to Super Powerful New AI It Secretly Built

Mystery lingers over why OpenAI CEO Sam Altman was ousted from the company and then reinstated less than a week later, a dramatic reversal by the company’s non-profit board that spurred an astronomical amount of speculation.

Nonetheless, theories have floated to the top of the rumor mill. One of the most intriguing: that OpenAI might have been quietly working on a highly advanced AI that could have thrown the board into panic mode and sparked the dustup.

After all, OpenAI has long made it its primary mission to realize an artificial general intelligence (AGI) — loosely defined as an algorithm that can complete complex tasks as well as or even better than humans — to “benefit all of humanity,” in the words of Altman himself.

Whether the company is actually getting closer to achieving this goal remains highly debatable. OpenAI has also historically been secretive about its research, making it even more difficult to read the tea leaves of recent weeks.

But an interesting new twist to the story suggests OpenAI may have been on the verge of a major leap forward, and that it may indeed have been related to the shakeup.

Last week, Reuters and The Information reported that some OpenAI leaders may have gotten spooked by a powerful new AI the company was working on called Q*, pronounced “Q star.” This new system was apparently seen by some as a significant step towards the company’s goal of establishing AGI, and is reportedly capable of solving grade school math problems.

According to Reuters, Mira Murati, OpenAI’s chief technology officer, who briefly served as interim CEO following Altman’s dismissal, acknowledged the existence of this new model in an internal message to staffers.

Reuters’ sources claim Q* was one of several factors that led to Altman’s firing, with researchers reportedly raising concerns about commercializing a system that wasn’t yet fully understood.

While grade school math may not sound like a groundbreaking achievement, researchers have long seen such an ability as a significant benchmark. Instead of simply predicting the next word in a sentence, as the company’s GPT systems do, an AI algorithm that can solve math problems would need to “plan” several steps ahead.

Think of it as a Sherlock Holmes-like entity that can string together clues to reach a conclusion.
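To make that distinction concrete, here’s a minimal toy sketch, purely hypothetical and unrelated to how Q* actually works, contrasting a greedy one-step choice with a planner that searches several moves ahead. In a small numbers puzzle, always taking the locally best step can miss a target that a short lookahead finds easily:

```python
# Toy puzzle: reach `target` from `start` using only +3 and *2.
# (Illustrative invention; Q*'s actual design is unknown.)
from collections import deque

OPS = {"+3": lambda x: x + 3, "*2": lambda x: x * 2}

def greedy(start, target, max_steps=10):
    """Always take the single op that gets closest to the target right now,
    analogous to next-token prediction with no lookahead."""
    x, path = start, []
    for _ in range(max_steps):
        if x == target:
            return path
        name, fn = min(OPS.items(), key=lambda kv: abs(kv[1](x) - target))
        x, path = fn(x), path + [name]
    return path if x == target else None

def plan(start, target, max_steps=10):
    """Breadth-first search over op sequences: 'thinks' several steps ahead."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        x, path = queue.popleft()
        if x == target:
            return path
        if len(path) < max_steps:
            for name, fn in OPS.items():
                y = fn(x)
                if y not in seen:
                    seen.add(y)
                    queue.append((y, path + [name]))
    return None

print("greedy:", greedy(2, 14))  # None: locally best moves overshoot 14
print("plan:  ", plan(2, 14))    # ['*2', '+3', '*2']: 2 -> 4 -> 7 -> 14
```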

“One of the main challenges to improve LLM reliability is to replace Auto-Regressive token prediction with planning,” explained Yann LeCun, one of the so-called “godfathers of AI” and Meta’s chief AI scientist, in a tweet. “Pretty much every top lab (FAIR, DeepMind, OpenAI etc) is working on that and some have already published ideas and results.”

“It is likely that Q* is OpenAI attempts at planning,” he added.

“If it has the ability to logically reason and reason about abstract concepts, which right now is what it really struggles with, that’s a pretty tremendous leap,” Charles Higgins, a cofounder of the AI-training startup Tromero, told Business Insider.

“Maths is about symbolically reasoning — saying, for example, ‘If X is bigger than Y and Y is bigger than Z, then X is bigger than Z,'” he added. “Language models traditionally really struggle at that because they don’t logically reason, they just have what are effectively intuitions.”
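A short, hypothetical sketch of the kind of symbolic step Higgins describes: encode “bigger than” facts as data and apply transitivity as an explicit rule, so the conclusion follows deterministically rather than from statistical intuition. The function name and representation here are illustrative inventions, not anyone’s actual system:

```python
def transitive_closure(facts):
    """facts: set of (a, b) pairs meaning 'a is bigger than b'.
    Repeatedly apply the rule: a > b and b > c  =>  a > c."""
    closed = set(facts)
    changed = True
    while changed:
        changed = False
        for a, b in list(closed):
            for c, d in list(closed):
                if b == c and (a, d) not in closed:
                    closed.add((a, d))
                    changed = True
    return closed

facts = {("X", "Y"), ("Y", "Z")}  # X > Y, Y > Z
print(("X", "Z") in transitive_closure(facts))  # True: X > Z is derived, not guessed
```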

“In the case of math, we know existing AIs have been shown to be capable of undergraduate-level math but to struggle with anything more advanced,” Andrew Rogoyski, a director at the Surrey Institute for People-Centered AI, told BI. “However, if an AI can solve new, unseen problems, not just regurgitate or reshape existing knowledge, then this would be a big deal, even if the math is relatively simple.”

But is Q* really a breakthrough that could pose an actual existential threat? Experts aren’t convinced.

“I don’t think it immediately gets us to AGI or scary situations,” Katie Collins, a PhD researcher at the University of Cambridge, who specializes in math and AI, told MIT Technology Review.

“Solving elementary-school math problems is very, very different from pushing the boundaries of mathematics at the level of something a Fields medalist can do,” she added, referring to the Fields Medal, one of the most prestigious international prizes in mathematics.

“I think it’s symbolically very important,” Sophia Kalanovska, a fellow Tromero cofounder and PhD candidate, told BI. “On a practical level, I don’t think it’s going to end the world.”

In short, OpenAI’s algorithm — if it indeed exists and its results can withstand scrutiny — could represent meaningful progress in the company’s efforts to realize AGI, but with many caveats.

Was it the only factor behind Altman’s ousting? At this point, there’s plenty of reason to believe there was more going on behind the scenes, including internal disagreements over the future of the company.

The extraordinary rise of AI has often been spurred on by seemingly outlandish claims, plenty of fearmongering, and a considerable amount of hype. The latest excitement surrounding OpenAI’s rumored follow-up to its GPT-4 model is likely no different.

One of Reuters’ sources claimed that even though the model could only solve grade school-level math problems, researchers were nonetheless optimistic about its future success.

But Gary Marcus, an AI expert and deep-learning critic, called out the discourse around the system as “wild extrapolation” in a recent post on his Substack.

“If I had a nickel for every extrapolation like that — today, it works for grade school students! next year, it will take over the world! — I’d be Musk-level rich,” he wrote, referring to OpenAI cofounder Elon Musk, who left OpenAI’s board over his own disagreements in 2018.

More on OpenAI: This Apparently Wasn’t the First Time Sam Altman Was Dramatically Fired
