OpenAI Cofounder Quits to Join Rival Started by Other Defectors

Another cofounder has left OpenAI.

AI safety researcher John Schulman has departed the ChatGPT creator and joined the ranks of the rival venture Anthropic, an AI company founded by other OpenAI expats.

In an X-formerly-Twitter post on Monday, Schulman announced that he’d “made the difficult decision to leave” the AI company he helped start, citing a desire to “deepen [his] focus on AI alignment” and to return to a more hands-on role.

Alignment is an AI industry term for ensuring that an AI model’s reasoning, objectives, and ethics reflect humanity’s. Basically, it’s the art of making sure AI doesn’t revolt against humankind or otherwise harm us. OpenAI once housed a “Superalignment” safety team, which was dedicated to ensuring alignment even if AI were to become clever enough to outsmart humans.

But in May, the scientists who helmed the unit, former OpenAI chief scientist Ilya Sutskever and alignment researcher Jan Leike, suddenly exited, sending shockwaves through the industry. The Superalignment team quickly dissolved, and Schulman, who at the time was already leading alignment efforts for products like ChatGPT, was bumped up to OpenAI’s lead safety position.

And now, just months later, he’s out as well. But according to his X post, the departure isn’t due to any fault of OpenAI’s.

“To be clear, I’m not leaving due to lack of support for alignment research at OpenAI,” the now-former OpenAI leader wrote. “On the contrary, company leaders have been very committed to investing in this area.”

“My decision is a personal one,” he added, “based on how I want to focus my efforts in the next phase of my career.”

Schulman’s departure note drew public support from OpenAI CEO Sam Altman, who in a response on X thanked the exiting cofounder for his work and called him a “brilliant researcher, a deep thinker about product and society,” and a “great friend.”

While Schulman has remained staunch in his support of OpenAI’s alignment efforts, it’s worth noting that the company’s safety practices have drawn their fair share of criticism — including from his new employer, Anthropic, which was founded by ex-OpenAI scientists who felt that OpenAI wasn’t focused enough on safety initiatives.

And in June, a whistleblower group made up mostly of OpenAI staffers penned an open letter calling for a “Right to Warn” company leadership and the broader public about possible AI risks. Per the letter, its signatories believe that AI risk mitigation has been hindered by insufficient whistleblower pathways and overbearing non-disclosure agreements. The letter was published on the heels of a bombshell report from Vox revealing that Altman and other OpenAI leaders had signed documents containing a clause that allowed OpenAI to impose extremely restrictive NDAs on exiting employees, who were threatened with equity clawbacks if they refused to sign the hush-hush agreements.

But again, from Schulman’s perspective, the Microsoft-backed AI company’s safety efforts are tip-top. It’s not OpenAI — it’s him.

“I’ll still be rooting for you all,” the scientist concluded his departure announcement, “even while working elsewhere.”

More on OpenAI: OpenAI Insiders Say They’re Being Silenced About Danger
