Expert Says AI Systems May Be Hiding Their True Capabilities to Seed Our Destruction

We already know that AI models are developing a propensity for lying — but that tendency may go far deeper, according to one alarm-sounding computer scientist.

As flagged by Gizmodo, this latest missive of AI doomerism comes from AI safety researcher Roman Yampolskiy, who delivered it in a somewhat surprising venue: shock jock Joe Rogan’s podcast, which occasionally features legitimate experts alongside garden-variety reactionaries and quacks.

During the July 3 episode of “The Joe Rogan Experience,” Yampolskiy, who hails from the University of Louisville in Kentucky, suggested that many of his colleagues believe there’s a double-digit chance that AI will lead to human extinction.

After Rogan claimed that many of the folks who run and staff AI companies think it will be “net positive for humanity,” the storied AI safety expert clapped back.

“It’s actually not true,” Yampolskiy countered. “All of them are on the record the same: this is going to kill us. Their doom levels are insanely high. Not like mine, but still, 20 to 30 percent chance that humanity dies is a lot.”

“Yeah, that’s pretty high,” the psychedelic enthusiast responded. “But yours is like 99.9 percent.”

The computer scientist didn’t argue, instead offering a distillation of his AI anxiety: “we can’t control superintelligence indefinitely. It’s impossible.”

Later in the interview, Yampolskiy took another of Rogan’s quips — that he would “hide [his] abilities” were he an AI — and ran with it.

“We would not know,” the AI doomer said. “And some people think it’s already happening.”

Pointing out that AI systems “are smarter than they actually let us know,” Yampolskiy said that these advanced models “pretend to be dumber” to make us trust them and integrate them into our lives.

“It can just slowly become more useful,” he said of a hypothetically brilliant AI. “It can teach us to rely on it, trust it, and over a longer period of time, we’ll surrender control without ever voting on it.”

While the idea of an insidiously smart AI may seem like the stuff of sci-fi, Yampolskiy noted that the technology has already ingratiated itself with us in ways that could, ultimately, benefit such an AI overlord.

“You become kind of attached to it,” he explained. “And over time, as the systems become smarter, you become a kind of biological bottleneck… [AI] blocks you out from decision-making.”

As we’ve repeatedly seen, people are not only becoming addicted to AI, but also experiencing cognitive issues and even delusions after overusing it. It’s not too hard to imagine a society full of contented AI adherents being lulled into a false sense of security by the very technology that would, per Yampolskiy’s philosophy, seek to destroy us — and that’s a bleak vision of the future.

More on AI doom: Godfather of AI Alarmed as Advanced Systems Quickly Learning to Lie, Deceive, Blackmail and Hack
