And he predicted this years before ChatGPT dropped, too.
Big Changes
Back when OpenAI was only a household name in the San Francisco Bay Area, cofounder and chief scientist Ilya Sutskever warned that the technology his company was building was going to change the world — and not necessarily in ways that benefit humans.
“AI is a great thing, because AI will solve all the problems that we have today,” Sutskever told documentarian Tonje Hessen Schei in a new mini-documentary released by The Guardian. “It will solve employment, it will solve disease, it will solve poverty. But it will also create new problems.”
Recorded between 2016 and 2019, Schei’s short film about one of the AI industry’s most inscrutable figures explores his frame of mind while OpenAI was building the technology that laid the groundwork for the now-viral ChatGPT. It’s clear that even before the firm changed the world, the people constructing its AI knew that what they were making would be revolutionary — and were already grappling with its implications.
AGI Daze
In the film, Sutskever seemed awfully sanguine about the prospect of artificial general intelligence, or AGI, being attained fairly soon. Though definitions differ, the OpenAI cofounder described AGI as a “computer system that can do any job, or any task that a human does, but only better.”
Though he doesn’t mention it by name, the machine learning expert, who claimed in early 2022 that some large neural networks may be “slightly conscious,” espoused the tenets of AI alignment: the effort to ensure that present and future AIs “be aligned with our goals,” whatever those may be.
Sutskever went on to compare the relationship between humans and unfettered, misaligned future AGIs to the one between humans and animals, whom we treat badly not because we hate them but because it’s convenient to do so.
“I think a good analogy would be the way humans treat animals,” he said. “It’s not that we hate animals. I think humans love animals and have a lot of affection for them. But when the time comes to build a highway between two cities, we’re not asking the animals for permission. We just do it because it’s important for us.”
“I think by default, that’s the kind of relationship that’s going to be between us and AGIs,” Sutskever continued, “which are truly autonomous and operating on their own behalf.”
That’s a pretty scary thought, but one the OpenAI chief scientist didn’t seem to bat an eye at — except to say that it will be “extremely important” that humans “program [AGIs] correctly.”
“If this is not done,” he concluded, “then the nature of evolution, of natural selection, favors those systems that prioritize their own survival.”
Not bleak at all!
More on AGI: Google AI Chief Says There’s a 50% Chance We’ll Hit AGI in Just 5 Years