Highlights from Musk v. Altman.

This is an excerpt of Sources by Alex Heath, a newsletter about AI and the tech industry, syndicated just for The Verge subscribers once a week.
Elon Musk first sued OpenAI in February 2024. Despite OpenAI’s repeated attempts to get the case dismissed, it is now headed to a jury trial on April 27th in Northern California federal court.
Musk’s main allegation is that OpenAI and its leaders abandoned the company’s original nonprofit mission, which he helped fund. OpenAI, for its part, has treated Musk’s claims as sour grapes. U.S. District Judge Yvonne Gonzalez Rogers recently decided that the case warranted going to trial, saying in court that “part of this is about whether a jury believes the people who will testify and whether they are credible.”
Last week, thousands of pages of evidence from the case were unsealed, including partial 2025 depositions of most of the key players: Sam Altman, Ilya Sutskever, Greg Brockman, Mira Murati, and Satya Nadella, along with ex-board members Helen Toner and Tasha McCauley, both of whom played key roles in the 2023 firing of Altman.
Bits and pieces of this evidence have started trickling out in recent days, such as the news that Sutskever owned a whopping $4 billion in vested OpenAI shares when Altman was briefly fired two years ago. Altogether, the unsealed evidence offers a fascinating look not only at OpenAI’s early days but also at the circumstances surrounding Altman’s firing and Microsoft’s complex relationship with OpenAI.
I’ve been covering OpenAI in depth for a while, and I closely reported on the whirlwind few days when Altman was fired and then rehired in late 2023. It’s through that lens that I’ve pulled the highlights below from the evidence in Musk v. Altman:
Sutskever had early concerns about treating open-source AI as a “side show.”
In 2022, OpenAI’s leaders seemed quite concerned about the prominence of open-source lab Stability AI, and Sutskever voiced his worry over text with Murati and others:
Sutskever: My trepidation around open source is that we’re treating it as a side show, eg def not going far enough to really hurt stability
Murati: We’re missing the opportunity to set standards with this massive growing group of devs, people are hungry to build things and we should lean in and bring our tech to as many people as possible, long term maximize our chance of maintaining lead, reducing competition
But if we do everything to get this in a couple of weeks at any cost out bc we heard stability is open sourcing similar model, that’s not in line at all with my motivations
OpenAI leaders were divided over early investor Reid Hoffman’s decision to start a rival AI lab, Inflection.
They were also already considering prohibiting investors from backing competing labs. From an October 2022 exchange:
Sutskever: I guess I just felt betrayed by him founding a direct competitor while simultaneously telling me that “I could not possibly imagine you’d find it objectionable”
Altman: here’s how id summarize my thoughts on this:
pros: he supported us in a moment where no one else would and it was pretty existential–i think openai would have been pretty fucked if he hasn’t stepped up then. also, he was instrumental to getting the first MSFT deal done, and has generally been quite helpful with MSFT related stuff he is generally a good board member.
cons: he is very motivated by `collecting’ status. although i personally think he cares much more about openai than inflection, he was blinded enough by the startup of being able to call himself the cofounder of a company he made an uncareful decision.
also, at this point, i think at this point openai has the leverage to ask for a soft promise for new investors not to invest in competitors, but only a select few companies ever get to do that)
Brockman: oh also an aside, after taking to @Sam Altman, I’m planning to meet Patrick Collison tmrw and demo dv3. Will ask if he’d be interested in participating in the tender under the condition of not investing in AGI/big model competitors
Brockman wrote in his diary that he wanted to be a billionaire.
From his deposition:
Q: Why did you write “Financially, what will take me to 1 billion?”
A: I think if we were going to do a for-profit entity, that I started to think about what would be motivating financial reward in that case as a secondary consideration.
Q: What was the primary consideration?
A: Primary consideration was would we be able to pursue and achieve the mission.
Q: How important was the secondary consideration to you?
A: The second consideration definitely mattered.
Q: At this point, did you aspire to be a billionaire?
A: My primary motivation was to the mission.
Q: Was your secondary motivation to be a billionaire?
A: I believe that as a — one thing I was definitely motivated by was the idea — I definitely had as a motivation that, yeah, potentially getting to $1 billion.
Q: So we know you achieved that goal at some point. Do you know precisely what day that happened?
A: I do not know what day precisely that happened.
Q: When was the first day you realized you had surpassed that goal?
A: I do not know what day I would say my, at least on paper, net worth would’ve exceeded 1 billion.
Nadella was worried about Microsoft’s position in AI when he started looking at OpenAI.
From his deposition:
Q: Did you feel that your progress was moving more slowly than you had liked?
A: I mean, always as a CEO of a company, I feel my job is to sort of be dissatisfied with the rate of progress at all times. And so “yes” would be the answer, which is both in the absolute sense, which is, can we build products that are more capable in any particular domain, and also, you know, vis-à-vis competition.
There were others achieving things that we looked at and said, “Hey, that’s great, and so how can we make sure we are competitive with it.”
Nadella almost wrote a book about AI called An Inflection Point.
According to an exhibit filed in the case, it was co-written with Marco Iansiti and was in development in 2023. From the first chapter:
On Wednesday, August 24, 2022, with the Pacific Northwest summer showing all of its beauty, Bill Gates hosted a dinner at his home on Lake Washington, just a few miles from the Microsoft campus. No longer a Microsoft board member or even Microsoft’s largest shareholder, Bill remained the iconic co-founder and trusted advisor of the company’s senior technical leaders. Satya suggested the gathering, which included Chief Technology Officer, Kevin Scott, and a handful of top researchers. Food and drinks would be served, but the main entrée was a hush-hush demo by OpenAI founder Sam Altman of a forthcoming release of ChatGPT powered by GPT-4, an AI built on Large Language Models (LLMs). Bill had long encouraged researchers to develop a truly accomplished AI assistant but had voiced his skepticism about this particular approach.
Microsoft beat out Amazon when it initially started working with OpenAI.
Musk was opposed to working with Jeff Bezos and wrote the following in an early email to Altman: “I think Jeff is a bit of a tool and Satya is not, so I slightly prefer Microsoft, but I hate their marketing dept.” Altman responded that Amazon had “started really dicking us around.”
The upside on Microsoft’s initial $1 billion investment in OpenAI was capped at $500 billion.
From a filing written by Musk’s lawyers:
In November 2018, after dinner with Altman, Scott told Nadella that OpenAI’s new corporate structure offered both “a commercial vehicle for monetizing Open AI IP” and investment returns “capped at $500B.” Altman claimed the nonprofit would eventually benefit because — though OpenAI had yet to make a single dollar in returns — “[i]f [OpenAI] ever [does] get to $500B in returns, the balance over that goes directly to the 501(c)3.”
Microsoft’s board initially approved a capital investment of $2 billion. But ultimately, it decided to limit its initial investment to $1 billion in the hopes that a smaller investment would “press OpenAI to commercialize,” in direct contravention of the nonprofit’s stated founding principles. In exchange for its investment, Microsoft received a convertible limited partnership interest and rights to OpenAI’s profits, with returns “capped” at 2000% of its $1 billion investment.
Microsoft’s CFO noted in an internal email that the “cap is actually larger than 90% of public companies,” and the limit on Microsoft’s profits is not “terribly constraining nor terribly altruistic.” It was, in fact, “a good investment.” At Microsoft’s request, OpenAI agreed to keep any mention of Microsoft’s promised 2000% return on its investment out of its public announcement.
The second update to Microsoft’s partnership with OpenAI, in 2021, included another $2 billion investment that went unreported at the time and came with a lower upside.
From a filing written by Musk’s lawyers:
In March 2021, Microsoft quietly invested another $2 billion in OpenAI. Neither OpenAI nor Microsoft publicly announced the investment, which was subject to a lower 6x return multiple.
In place of its 2019 license to a single OpenAI model, Microsoft secured rights to commercialize any OpenAI model developed during the term of the agreement (except AGI). Facilitating its commercial use of OpenAI’s IP, Microsoft was permitted to embed up to ten of its employees on-site at OpenAI.
Anticipating increased product commercialization, Microsoft and OpenAI agreed to share any resulting revenue.
Just three months later, in June 2021, Microsoft released GitHub Copilot — its first product incorporating OpenAI’s technology.
Microsoft’s next $10 billion investment in OpenAI came with pressure from Nadella to go after the enterprise market and with more strings attached.
From a filing written by Musk’s lawyers:
Prodding OpenAI to accelerate its own product development, Microsoft told Altman that OpenAI needed to generate $100 million in revenues to secure the next $10 billion commitment from Microsoft. To meet that goal, OpenAI expanded the team responsible for taking products to market and tried to expand its “enterprise business.”
In the summer of 2022, OpenAI began negotiating with Microsoft a new $10 billion investment. That November, OpenAI released ChatGPT. It was an instant hit. Nadella urged Altman to release a paid version and persistently checked on the progress of its commercialization.
Over the next several months, OpenAI secured Microsoft’s $10 billion investment, and the parties again amended the JDCA. OpenAI also changed its corporate structure.
The 2023 agreement “cap[ped]” Microsoft’s return on this investment at 600%, or $60 billion to start, but increased Microsoft’s profit “cap” by 20% per year. Microsoft would receive 49% of OpenAI’s profits, while the OpenAI nonprofit entity would recover just 2% of OpenAI’s profits — at least until all outside investors were paid out their investment returns, valued in total at $261 billion.
Underscoring the profit-focused aim of the partnership, the 2023 JDCA was specifically structured to “remove the impediments in commercialization.”
Microsoft negotiated expanded IP rights to include all OpenAI IP developed before or during the term of the agreement (excluding AGI), and the right to embed up to 20 employees at OpenAI.
Finally, Microsoft and OpenAI established an 80%-20% revenue share.
OpenAI considered adding AI safety experts Dan Hendrycks, Paul Christiano, Jacob Steinhardt, and Ajeya Cotra to the board before Altman was fired.
Altman apparently wanted board members with more “commercial” experience. From Toner’s deposition, in reference to internal discussions about expanding the board before it fired Altman in late 2023:
Q: Was it your impression that Mr. Altman was dragging his feet in these discussions?
A: Yes. I think that’s a fair description.
Q: Did Mr. Altman’s actions result in the board being deadlocked over any proposal to add an additional AI safety board member?
A: I’d say he contributed to us significantly being deadlocked, yes.
Q: Did Mr. Altman propose different candidates to the board?
A: Yes.
Q: Were Mr. Altman’s alternative candidates also AI safety experts or did they have different backgrounds?
A: To the best of my recollection, he generally proposed candidates with more of a commercial startup background.
Altman and Brockman proposed kicking Adam D’Angelo off the board before Altman was fired.
From Toner’s deposition:
Adam runs a company called Quora, which has a product called Poe, which uses large language models, including those of OpenAI and some of its competitors.
The way I perceived it was after GPT-4 was demoed to the board in summer 2022, Adam began taking his responsibilities as a board member more seriously, because the technology seemed to be advancing, and he became a more engaged board member.
In the lead-up to — between that time in summer 2022 and April 2023, we had had several conversations as a board about what kinds of conflict of interest were acceptable or unacceptable on the board, because many potential board members, and current board members, had various involvements with various AI companies.
So we had fairly detailed discussions about what was an unacceptable conflict of interest and had decided that being closely involved with a company that was training its own large language models, you know, highly advanced frontier language models that would compete with OpenAI’s, was the bar for excessive conflict of interest.
So it was surprising to me when Sam emailed the board in April 2023 saying that Adam’s conflict of interest had grown too large and seemed like he needed to step off the board, and did we agree. Because Adam’s company produced a product that used others’ LLMs, they didn’t — they weren’t training their own. So clearly it didn’t meet the conflict of interest criteria we had all discussed.
When I said as much via email, Greg Brockman chimed in with a different reason to remove Adam, namely that his position as both a customer and a board member was creating communication difficulties internally. I forget who exactly said what on the email chain, but other board members raised questions about that or wanted to know more about that.
Ultimately, I spoke to Sam on the phone, and we sort of — at my urging, we agreed that, surely, the step before just removing Adam from the board, if the problem was how he was communicating inside the company, surely, the next step would be to discuss that with him and see if we can improve the situation. Sam said he would do that, he would have a conversation with Adam, to try and improve how he was communicating inside the company. And then the situation seemed to go away.
I later found out that Sam had never had that conversation with Adam, or that he had talked with him but had never actually tried to solve that problem, but, instead, had just said the only thing that he, Sam, didn’t like about Adam’s product Poe was that it used Anthropic models, because Anthropic was a competitor.
So, all in all, the situation seemed to me like there wasn’t actually a clear, concrete reason to ask Adam to move off the board, but that Sam and Greg were sort of searching for an excuse because he had been providing more active governance of the company.
Altman didn’t initially tell OpenAI’s board that he was personally running a company VC fund.
From Toner’s deposition:
Adam D’Angelo was at a dinner with some other founders, investors, startup people, who were asking him about the structure of the startup fund and potential conflicts of interest between the startup fund and OpenAI’s investors more generally.
And after that conversation, Adam emailed the board, including Sam, perhaps a couple of other OpenAI executives, to understand the structure better.
And in the resulting back-and-forth, we learned that Sam was the, as I understand it, the owner of the fund. So the initial conversation was around whether it was fair for OpenAI’s investors that OpenAI was sort of contributing to this other fund and was also contributing sort of engineering expertise and time to portfolio companies in the startup fund in ways that may not — where the benefit may not accrue back to OpenAI investors.
After we learned that Sam had a financial stake in the fund, we also had concerns about the fact that he had not disclosed that, given that his position on the board was one of a supposedly independent board director, meaning one with no financial interest in OpenAI.
Altman proposed making a donation to then-Congressman Will Hurd while he was in talks to join the OpenAI board.
From Toner’s deposition:
Sam also suggested that he wanted to make a large, I believe, several-hundred-thousand-dollar campaign contribution to Will, while still expecting him to come back onto the board.
He did not go ahead with this donation because Tasha, Adam, and I all said it seemed very inappropriate. But to me, the fact that he was considering that, the fact that he might have discussed it with Will in advance, the fact it was an option, was just a sign of total disregard for the board’s independence or ability to provide meaningful oversight of the company and the CEO.
Q: And that several-hundred-thousand-dollar campaign contribution, was it — did Mr. Altman discuss that that was going to come from him personally?
A: Yes, to the best of my recollection.
There were concerns about Altman’s closeness with the current OpenAI board chairman, Bret Taylor.
From McCauley’s deposition:
I had more context on Bret Taylor than I did on Larry [Summers], and I had concerns about his ability to be — yeah, to make disinterested decisions in a way that was, wasn’t partial to Sam. I mean, you know, we had — he had been proposed by Sam for the board previously when we were there and when we were going through the process of expanding the board. And by the best of my recollection, you know, Sam had — had made recommendations on a number of different people. He was favorable to Bret Taylor.
If I recall correctly, Adam had — I think I recall correctly that Adam had interviewed Bret in the process of considering other candidates, and that one of the — prior to all of this — sorry — like, in the process that we were running over this — you know, in the months prior, when we were trying to expand the board; and at that time, that — one of the takeaways from that conversation was that — I think — I’m going to try to recall this exactly as possible, but it was I think Bret may have expressed concern that — concern around the — the conflicts. I think that he had said he had known Sam for a very long time and had a lot of connections to Sam and whatnot.
There were at least six key issues that led the board to fire Altman.
From McCauley’s deposition:
Q: Was one of those incidents Mr. Altman’s foot-dragging over adding an AI safety expert to the board?
A: That — that was — you know, I think the fact that that process was unable to result in adding independent members and an AI safety member to the board, it exacerbated our concerns, yes.
Q: And was another one of those incidents that — Mr. Altman’s representation that the three enhancements to GPT-4 had all been approved by the safety board?
A: Yes, that was a factor.
Q: Was another one of those incidents Mr. Altman’s failure to disclose that a GPT-4 test was released in India without joint safety board review?
A: Yes.
Q: Was another incident Mr. Altman’s failure to inform the board prior to ChatGPT’s release?
A: Yes.
Q: And was another incident Mr. Altman’s misrepresentation about you allegedly saying Ms. Toner should obviously leave the board?
A: Yes.
Q: And was another incident Mr. Altman’s misrepresentation that the legal department told him GPT-4 Turbo did not need safety board review?
A: Yes, that — that we saw screenshots to that effect.
Sutskever had $4 billion worth of vested equity in OpenAI as of November 2023.
A text exchange between Altman, Nadella, and OpenAI COO Brad Lightcap revealed the stake. As Microsoft was in discussions to hire Altman and most of the OpenAI team, Lightcap wrote that paying employees for their equity would cost $25 billion or $29 billion, depending on whether Sutskever’s vested shares were included. While it’s impossible to know for sure without more evidence, the exchange suggests that Sutskever was OpenAI’s largest individual shareholder at the time. It’s unclear if he has sold any shares since.
Sutskever thought “OpenAI would be destroyed” if Altman wasn’t rehired.
From his deposition:
Q: And why did you withdraw your support for Sam Altman being fired?
A: Because I thought that OpenAI would be destroyed.
Altman told Musk, “It really fucking hurts when you publicly attack OpenAI.”
From a February 2023 email exchange:
Altman: i remember seeing you in a tv interview a long time ago (maybe 60 minutes? where you being attacked by some guys, and you said they were heroes of yours and it was really tough.
well, you’re my hero and that’s what it feels like when you attack openai. totally get we have some screwed some stuff up, but we have worked incredibly hard to do the right thing, and i think we have ensured that neither google nor anyone else is on a path to have unilateral control over AGI, which i believe we both think is critical.
i am tremendously thankful for everything you’ve done to help —i dont think openai would have happened without you-and it really fucking hurts when you publicly attack openai
Musk: I hear you and it is certainly not my intention to be hurtful, for which I apologize, but the fate of civilization is at stake.
Altman: i agree with that, and i would really love to hear the things you think we should be doing differently/better.
it’s also not clear to me how the attacks on twitter help the fate of civilization, but that’s less important to me that getting to the right substance.
also, i checked with our team on recruiting from tesla. we really are doing very little relative to the size of the company, but i will make sure we don’t hurt tesla, i obviously think it’s a super important company.
“OpenAI has not yet done business with Helion but intends to if the technology works.”
Altman is personally the largest investor in Helion, which is building fusion power technology. From his deposition:
Q: While you were at Y Combinator, did you personally invest in any of the companies that Y Combinator sponsored?
A: I did.
Q: Which ones?
A: I couldn’t give you a list off the top of my head.
Q: Have any of those companies done business with OpenAI?
A: Yes.
Q: Which ones?
A: Our conflicts committee keeps track of all this and could tell you. I couldn’t do a list off the top of my head that would be exhaustive. Reddit is one. OpenAI has not yet done business with Helion but intends to if the technology works.
Altman thinks “things need to go right” for OpenAI to be worth $500 billion.
From his deposition:
Q: And do you personally agree that the company is worth at least 500 billion currently?
A: That was the willing buyer-willing seller market price, so I won’t argue with it.
Q: Apart from your faith in the willing buyers and willing sellers, do you agree, being the one who runs the company, that the company is worth at least $500 billion today?
A: If I were an outside market investor, I would — I think I would absolutely love to buy OpenAI shares at a 300-billion-dollar valuation, somewhat higher. At 500, I would start to say, “Could be, like, you know, things need to go right, but could be.”
Other standout quotes:
“No, I was not surprised, because I was used to the board not being very informed about things.”
– Toner responding to a question during her deposition about whether she was surprised by the original release of ChatGPT.
“I think there’s a real possibility that five or 10 years from now, people look back and think of the main role OpenAI played during the late 2010s/early 2020s as being the org that set off great excitement about and investment in AGI (and then lost its lead to other orgs).”
– Toner in a message relayed by Brockman to other OpenAI leaders.
“Because of this pattern of lying… as was being reported to me, people in the company were copying that behavior, and there was kind of a culture of lying and a culture of, you know, yeah, deceit. And I think for us, as a board… this was just extremely concerning.”
– McCauley in her deposition.
“I thought it would be one of the coolest things that humanity could ever build. I was a sci-fi nerd. I read a lot of books. I watched a lot of sci-fi TV and movies. And, you know, I thought it would be one of the most helpful things to help humanity prosper.”
– Altman, during his deposition, on why he wanted to join OpenAI.
“I mean, he played a lot of video games.”
– Altman’s response to questioning during his deposition about Musk’s involvement in the early days of OpenAI.
“I estimate that I spend and have spent all the way through about a quarter of my time recruiting for OpenAI.”
– Altman during his deposition.
“I think it’s hard to find people as successful as Elon Musk.”
– Sutskever during his deposition.
“It doesn’t matter who wins if everyone dies.”
– Brockman in an early exchange with OpenAI colleagues.