OpenAI Reportedly Hitting Law of Diminishing Returns as It Pours Computing Resources Into AI

This could be a major problem.

Age of Wonder

Reports are emerging that OpenAI is hitting a wall as it continues to pour more computing power into its much-hyped large language models (LLMs), like those powering ChatGPT, in a bid for more intelligent outputs.

AI models need loads of training data and computing power to operate at scale. But in an interview with Reuters, recently departed OpenAI cofounder Ilya Sutskever claimed that the firm's recent attempts to scale up its models suggest those efforts have plateaued.

“The 2010s were the age of scaling, now we’re back in the age of wonder and discovery once again,” Sutskever, a staunch believer in the forthcoming arrival of so-called artificial general intelligence (AGI) or human-level AI, told Reuters. “Everyone is looking for the next thing.”

While it’s unclear what exactly that “next thing” may be, Sutskever’s admission — which comes, notably, just under a year after he moved to oust OpenAI CEO Sam Altman and was subsequently sidelined until his eventual departure — seems to dovetail with other recent claims and conclusions: that AI companies, and OpenAI specifically, are butting up against the law of diminishing returns.

Bounded Leaps

Over the weekend, The Information reported that with each new flagship model, OpenAI is seeing a slowdown in the sort of "leaps" users have come to expect in the wake of its game-changing ChatGPT release in November 2022.

This slowdown seems to challenge the core belief behind the argument for AI scaling: that as long as there's ever more data and computing power to feed the models, they will continue to improve, or "scale," at a consistent rate. That's a big "if," given that firms have already run out of fresh training data and are consuming electricity at unprecedented rates.

Responding to this latest news from The Information, data scientist Yam Peleg teased on X that another cutting-edge AI firm had “reached an unexpected HUGE wall of diminishing returns trying to brute-force better results by training longer & using more and more data.”

While Peleg’s commentary could just be gossip, researchers have been warning for years that LLMs would eventually hit this wall. Given the insatiable demand for powerful AI chips — and the fact that firms are now training their models on AI-generated data — it doesn’t take a machine learning expert to wonder whether the low-hanging fruit is running out.

“I think it is safe to assume that all major players have reached the limits of training longer and collecting more data already,” Peleg continued. “It is all about data quality now.. which takes time.”

More on AI crises: Sam Altman Says the Main Thing He’s Excited About Next Year Is Achieving AGI
