“We have a shred of a chance that humanity survives.”
Terminator Vision
The notoriously pessimistic AI researcher Eliezer Yudkowsky is back with a new prediction about the future of humankind.
“If you put me to a wall,” he told The Guardian in a fascinating new interview, “and forced me to put probabilities on things, I have a sense that our current remaining timeline looks more like five years than 50 years. Could be two years, could be 10.”
If you’re wondering what “remaining timeline” means in this context, The Guardian’s Tom Lamont interpreted it as the “machine-wrought end of all things,” a “Terminator-like apocalypse,” or a “Matrix hellscape.”
“The difficulty is, people do not realise,” Yudkowsky, the founder of the Machine Intelligence Research Institute in California, told the newspaper. “We have a shred of a chance that humanity survives.”
Bomb Squad
The entire Guardian piece is worth a read. Lamont spoke to many prominent figures in the space, ranging from Brian Merchant to Molly Crabapple, with the throughline being skepticism of the assumption that just because a new technology comes along, we have to adopt it, even when it isn’t good for people.
These days, much of that critique focuses on AI. Why, critics contend, should we treat the tech as inevitable even if it seems poised to eliminate or destabilize large numbers of jobs?
Or, in Yudkowsky’s case, if the tech likely presents an existential threat. His remarks were the most provocative in the piece, which probably isn’t surprising given his history. AI watchers may remember last year, for instance, when he called for bombing data centers to halt the rise of AI.
He’s rethought that particular claim, he told The Guardian — but only slightly: he stands behind the idea of bombing data centers, he said, but no longer thinks that nuclear weapons should be used to target them.
“I would pick more careful phrasing now,” he told the newspaper.