OpenAI Hiring Detective to Find Who’s Leaking Its Precious Info

OpenAI is looking to hire an “insider risk investigator” to “fortify our organization against internal security threats.”

According to the company’s job listing, first spotted by MSPowerUser, the gumshoe is supposed to help the company safeguard its assets by “analyzing anomalous activities, promoting a secure culture, and interacting with various departments to mitigate risks.” Per the Wayback Machine, the job listing has been up since mid-January.

“You’ll play a crucial role in safeguarding OpenAI’s assets by analyzing anomalous activities, promoting a secure culture, and interacting with various departments to mitigate risks,” the listing reads. “Your expertise will be instrumental in protecting OpenAI against internal risks, thereby contributing to the broader societal benefits of artificial intelligence.”

Basically, it seems like OpenAI is sick of all the high-profile leaks surrounding its controversial tech, which have included details of important business decisions, internal infighting, and good old-fashioned customer data leaks.

Case in point: CEO Sam Altman’s high-profile sacking and rehiring last year, a turbulent few days rife with revelations from insider sources and bizarre snapshots of OpenAI’s highly unusual company culture.

Messy board drama came to light, for instance, and it emerged that Microsoft, OpenAI’s biggest investor, was blindsided by the decision to oust Altman. And that’s without getting into detailed accounts of OpenAI chief scientist and former board member Ilya Sutskever burning effigies and leading ritualistic chants at the company.

Perhaps most embarrassing of all, accounts of an experimental and secretive AI project codenamed “Q*” leaked amid the drama, with Reuters and The Information reporting in November that the project may have spooked some OpenAI leaders, contributing to Altman’s dismissal.

Despite its name, OpenAI is a for-profit business that has historically attempted to keep the details of how its key products work under tight wraps.

But given the sheer level of insight we’ve gotten into the inner workings of the company, it’s clear that OpenAI is a leaky workplace, and it sounds like the insider risk investigator it’s hiring will be tasked with cracking down on that culture.

The only problem? Historically, some of the most eyebrow-raising claims and revelations about OpenAI haven’t come from anonymous leakers but from the company’s own leadership.

Altman himself is a nonstop fount of wild claims, from predicting that human-tier AI is coming soon to publicly musing that he’s fascinated by the Terminator.

The aforementioned Sutskever is no exception. Remember back in 2022, before the release of ChatGPT, when he made headlines by suggesting that some neural networks might already be “slightly conscious”?

In other words, OpenAI’s new in-house detective won’t just need to worry about the company’s rank and file. They might need to have some stern conversations with its leadership as well.

More on OpenAI: Man’s Tinder AI Makes Date With Woman, Forgets to Tell Him and Stands Her Up
