OpenAI’s Unreleased AGI Paper Could Complicate Microsoft Negotiations

A small clause inside OpenAI’s contract with Microsoft, once considered a distant hypothetical, has now become a flashpoint in one of the biggest partnerships in tech.

The clause states that if OpenAI’s board ever declares it has developed artificial general intelligence (AGI), it would limit Microsoft’s contracted access to the startup’s future technologies. Microsoft, which has invested more than $13 billion in OpenAI, is now reportedly pushing for the removal of the clause and is considering walking away from the deal entirely, according to the Financial Times.

Late last year, tensions around AGI’s suddenly pivotal role in the Microsoft deal spilled into a debate within OpenAI over an internal research paper, according to multiple sources familiar with the matter. Titled “Five Levels of General AI Capabilities,” the paper outlines a framework for classifying progressive stages of AI technology. By making specific assertions about future AI capabilities, sources claim, the paper could have complicated OpenAI’s ability to declare that it had achieved AGI, a potential point of leverage in negotiations.

“We’re focused on developing empirical methods to evaluate AGI progress—work that is reproducible, measurable, and useful to the broader field,” OpenAI spokesperson Lindsay McCallum said in a written comment to WIRED. “The ‘Five Levels’ was an early attempt at classifying stages and terminology to describe general AI capabilities. This was not a scientific research paper.” Microsoft declined to comment.

In a blog post describing its corporate structure, OpenAI notes that AGI “is excluded from IP licenses and other commercial terms with Microsoft.” OpenAI defines AGI as “a highly autonomous system that outperforms humans at most economically valuable work.”

The two companies have been renegotiating their agreement as OpenAI prepares a corporate restructuring. While Microsoft wants continued access to OpenAI’s models even if the startup declares AGI before the partnership ends in 2030, one person familiar with the partnership discussions tells WIRED that Microsoft doesn’t believe OpenAI will reach AGI by that deadline. But another source close to the matter describes the clause as OpenAI’s ultimate leverage. Both sources have been granted anonymity to speak freely about private discussions.

According to the Wall Street Journal, OpenAI has even considered whether to invoke the clause based on an AI coding agent. The talks have grown so fraught that OpenAI has reportedly discussed whether it should publicly accuse Microsoft of anticompetitive behavior, per the Journal.

A source familiar with the discussions, granted anonymity to speak freely about the negotiations, says OpenAI is fairly close to achieving AGI; Altman has said he expects to see it during Donald Trump’s current term.

That same source suggests there are two relevant definitions: First, OpenAI’s board can unilaterally decide the company has reached AGI as defined in its charter, which would immediately cut Microsoft off from accessing the technology or revenue derived from AGI; Microsoft would still have rights to everything before that milestone. Second, the contract includes a concept of sufficient AGI, added in 2023, which defines AGI as a system capable of generating a certain level of profit. If OpenAI asserts it has reached that benchmark, Microsoft must approve the determination. The contract also bars Microsoft from pursuing AGI on its own or through third parties using OpenAI’s IP.

Bloomberg previously reported on the existence of the “Five Levels” and that OpenAI was planning to share the scale with its outside investors, though at the time it was considered a “work in progress.” OpenAI CEO Sam Altman and chief research officer Mark Chen have spoken about the five levels of AI capabilities in various interviews since. A version of the paper dated September 2024 and viewed by WIRED details a five-step scale for measuring how advanced AI systems are, citing other research that claims many of OpenAI’s models at that point were at Level 1, defined as “An AI that can understand and use language fluently and can do a wide range of tasks for users, at least as well as a beginner could and sometimes better.”

It notes that some models at the time were approaching Level 2, which the authors define as “An AI that can do more advanced tasks at the request of a user, including tasks that might take an hour for a trained expert to do.” The paper deliberately avoids giving a single definition of AGI, arguing the term is too vague and binary, and instead opts for using a spectrum of capabilities to describe increasingly general and capable AI systems.

The paper doesn’t predict when OpenAI’s systems will reach each of the five levels, but it does predict how each step up in capabilities could change different facets of society, including education, jobs, science, and politics, warning about new risks as AI tools become more powerful and independent. In a podcast with Y Combinator president and CEO Garry Tan in November, Altman said that the company’s o1 model could be defined as Level 2, and he expects they’ll reach Level 3 “faster than people expect.”

Last July, a coauthor of the paper gave a presentation of the research at an internal event where teams highlighted their most important projects for research-wide awareness, according to multiple sources. The research was well received by other staffers, one source added.

Sources also say the paper appeared to be in its final stages: late last year, the company had hired a copy editor to finalize the work and was generating visuals for a blog post announcing it. OpenAI’s partnership with Microsoft was cited internally as one reason to hold off on publishing the paper, according to multiple sources who spoke to WIRED on the condition of anonymity because they were not permitted to speak to the press. Another source says that discussions with Microsoft were often “mentioned as a blocker for putting the paper out.”

McCallum said in a comment to WIRED that “it’s not accurate to suggest we held off from sharing these ideas to protect the Microsoft partnership.” Another source familiar with the matter said that the paper wasn’t released because it didn’t meet technical standards.

“I think mostly the question of what AGI is doesn’t matter,” Altman said at a conference in early June. “It is a term that people define differently; the same person often will define it differently.”

Update 6/27/2025 6:15pm ET: WIRED has clarified Bloomberg’s previous reporting on the Five Levels.
