“Safe superintelligence should have the property that it will not harm humanity at a large scale.”
Keep It Vague
After leaving OpenAI under a dark cloud, founding member and former chief scientist Ilya Sutskever is starting his own firm to bring about “safe” artificial superintelligence.
In a post on X-formerly-Twitter, the man who orchestrated OpenAI CEO Sam Altman’s temporary ouster — and who was left in limbo for six months over it before his ultimate departure last month — said that he’s “starting a new company” that he calls Safe Superintelligence Inc, or SSI for short.
“We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product,” Sutskever continued in a subsequent tweet. “We will do it through revolutionary breakthroughs produced by a small cracked team.”
Questions abound. Did Sutskever mean a “crack team”? Or is his new team “cracked” in some way? Regardless, in an interview with Bloomberg about the new venture, Sutskever elaborated somewhat but kept things familiarly vague.
“At the most basic level, safe superintelligence should have the property that it will not harm humanity at a large scale,” he told the outlet. “After this, we can say we would like it to be a force for good. We would like to be operating on top of some key values. Some of the values we were thinking about are maybe the values that have been so successful in the past few hundred years that underpin liberal democracies, like liberty, democracy, freedom.”
So, you know, nothing too difficult.
AI Guys
Though not stated explicitly, that comment harkens back somewhat to the headline-grabbing Altman sacking that Sutskever led last fall.
While it remains unclear exactly why Sutskever and some of his fellow former OpenAI board members turned against Altman in last November’s “turkey-shoot clusterf*ck,” there was some speculation that it had to do with safety concerns about a secretive high-level AI project called Q* — pronounced “queue-star” — that Altman and company have refused to speak about. With the emphasis on “safety” making its way into the new venture’s very name, it’s easy to see a link between the two.
In that same Bloomberg interview, Sutskever was vague not only about his specific reasons for founding the new firm but also about how it plans to make money — though according to one of his cofounders, former Apple AI lead Daniel Gross, money is no issue.
“Out of all the problems we face,” Gross told the outlet, “raising capital is not going to be one of them.”
While SSI certainly isn’t the only OpenAI competitor pursuing higher-level AI, its founders’ resumes lend it a certain cachet — and its route to incorporation has been, it seems, paved with some lofty intentions.