OpenAI’s new safety team is led by board members, including CEO Sam Altman

OpenAI has created a new Safety and Security Committee less than two weeks after the company dissolved the team tasked with protecting humanity from AI’s existential threats. This latest iteration of the group responsible for OpenAI’s safety guardrails will include two board members and CEO Sam Altman, raising questions about whether the move is little more than self-policing theatre amid a breakneck race for profit and dominance alongside partner Microsoft.

The Safety and Security Committee, formed by OpenAI’s board, will be led by board members Bret Taylor (Chair), Nicole Seligman, Adam D’Angelo and Sam Altman (CEO). The new team follows co-founder Ilya Sutskever’s and Jan Leike’s high-profile resignations, which raised more than a few eyebrows. Their former “Superalignment Team” was only created last July.

Following his resignation, Leike wrote in an X (Twitter) thread on May 17 that, although he believed in the company’s core mission, he left because the two sides (product and safety) “reached a breaking point.” Leike added that he was “concerned we aren’t on a trajectory” to adequately address safety-related issues as AI grows more intelligent. He posted that the Superalignment team had recently been “sailing against the wind” within the company and that “safety culture and processes have taken a backseat to shiny products.”

A cynical take would be that a company focused primarily on “shiny products” — while trying to fend off the PR blow of high-profile safety departures — might create a new safety team led by the same people speeding toward those shiny products.

Headshot of former OpenAI head of alignment Jan Leike. He smiles against a grayish-brown background.

Former OpenAI head of alignment Jan Leike (Jan Leike / X)

The safety departures earlier this month weren’t the only concerning news from the company recently. It also launched (and quickly pulled) a new voice model that sounded remarkably like two-time Oscar nominee Scarlett Johansson. The Jojo Rabbit actor then revealed that OpenAI CEO Sam Altman had pursued her consent to use her voice to train an AI model, but that she had refused.

In a statement to Engadget, Johansson’s team said she was shocked that OpenAI would cast a voice talent that “sounded so eerily similar” to her after pursuing her authorization. The statement added that Johansson’s “closest friends and news outlets could not tell the difference.”

OpenAI also backtracked on the nondisparagement agreements it had required from departing executives, changing its tune to say it wouldn’t enforce them. Before that, the company had forced exiting employees to choose between the ability to speak out against the company and keeping the vested equity they had earned.

The Safety and Security Committee plans to “evaluate and further develop” the company’s processes and safeguards over the next 90 days. After that, the group will share its recommendations with the entire board. After the whole leadership team reviews its conclusions, it will “publicly share an update on adopted recommendations in a manner that is consistent with safety and security.”

In its blog post announcing the new Safety and Security Committee, OpenAI confirmed that the company is currently training its next model, which will succeed GPT-4. “While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment,” the company wrote.
