“This is on me and one of the few times I’ve been genuinely embarrassed running OpenAI.”
Damage Plan
OpenAI has had a tough week.
Its reveal of the upcoming GPT-4o — that’s an “o” for “omni,” not a “40” as in the number — was impressive on a technical level, but CEO Sam Altman made an unforced error by comparing the talking AI to the dark 2013 romance film “Her,” kicking off a wave of negative headlines. Then things got even more tumultuous when a pair of high-powered researchers left the company in departures that looked like something between getting pushed out and quitting in disgust.
The optics were especially appalling because both had been working on OpenAI’s team dedicated to controlling any future superintelligent AI the company might create, with one of the two harshly criticizing OpenAI’s purported lack of commitment in that domain. And then things got even more embarrassing for the company when Vox reported that workers leaving OpenAI had to sign a draconian nondisclosure agreement in order to retain their vested equity, stipulating among other things that they could never criticize the company in the future.
It looks like the wall-to-wall criticism of all that has gotten under the skin of OpenAI’s leadership, because the company’s head is now in full damage control mode.
Sam I Am
Altman, as usual, is taking center stage in the company’s response — though so far he has focused strategically on the equity side of the equation rather than the explosive claim that OpenAI is silencing former employees who might have ethical concerns about its work.
“We have never clawed back anyone’s vested equity, nor will we do that if people do not sign a separation agreement (or don’t agree to a non-disparagement agreement). vested equity is vested equity, full stop,” he wrote on X-formerly-Twitter. “There was a provision about potential equity cancellation in our previous exit docs; although we never clawed anything back, it should never have been something we had in any documents or communication.”
“This is on me and one of the few times I’ve been genuinely embarrassed running OpenAI; I did not know this was happening and I should have,” he continued. “The team was already in the process of fixing the standard exit paperwork over the past month or so. if any former employee who signed one of those old agreements is worried about it, they can contact me and we’ll fix that too. Very sorry about this.”
Meanwhile, OpenAI president Greg Brockman published his own lengthy response to the situation — signed off by him and Altman — that managed to say very little in about 500 words.
“We know we can’t imagine every possible future scenario,” read the statement. “So we need to have a very tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and harmony of safety and capabilities. We will keep doing safety research targeting different timescales. We are also continuing to collaborate with governments and many stakeholders on safety.”
It’s worth pointing out that neither of these statements gets quite as far as the non-financial takeaway of Vox’s reporting: are Altman and Brockman saying that former employees can now sound off about the company’s approach to hot-button issues? Or are they just trying to defuse some of the outrage before the company goes back to keeping its ex-workers quiet once things die down?
We’ll definitely be watching to see which it is.
More on OpenAI: OpenAI Secretly Trained GPT-4 With More Than a Million Hours of Transcribed YouTube Videos