“There’s enormous uncertainty about what’s going to happen next.”
Weighing Possibilities
Geoffrey Hinton, the former longtime Googler widely regarded as a “Godfather of AI” (yes, there are several), says he’s nervous that the technology he spent a lifetime building might take over the world. Cool!
In an appearance on CBS’s 60 Minutes this weekend, the renowned cognitive scientist and AI researcher, who made media waves earlier this year when he announced his departure from Google and cited regret over his life’s work as the reason for his surprise exit, set the tone by declaring that “for the first time ever,” humanity must grapple with the reality that something else on our planet is “more intelligent than us.”
Perhaps the most interesting part of the exchange came when host Scott Pelley asked Hinton whether super-intelligent AI could come to “take over from humanity.”
“I’m not saying it will happen,” Hinton told Pelley. “If we could stop them ever wanting to, that would be great.”
“But,” Hinton added, “it’s not clear we can stop them ever wanting to.”
Great Unknown
As for the specifics of how AI might ultimately overtake its human overlords? The scientist speculated that autonomous agents might begin to “modify themselves.”
“That’s something we need to seriously worry about,” he added, elsewhere noting his concern over the black-box nature of the technology: the limited understanding researchers actually have of how exactly machine learning algorithms function.
We have a “very good” rough idea of how AI systems learn, the godfather figure told Pelley, “but as soon as it gets really complicated, we don’t actually know what’s going on any more than we know what’s going on in your brain.”
It’s a fair concern: regardless of whether a machine’s output is technically correct (say, it produces as many paperclips as it possibly can), the route the machine takes to that endpoint still matters. If humans repeatedly reward correct outputs achieved via problematic means, we might accidentally train AI tools in ways that royally backfire.
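To make that concern a little more concrete, here is a minimal, purely illustrative Python sketch of the underlying problem. The scenario, the reward function, and the “resources seized” side effect are all invented for this example, not anything Hinton or CBS described; the point is simply that a reward signal which only checks the final output gives an optimizer no reason to prefer well-behaved strategies over problematic ones.

```python
# Purely illustrative toy example: a reward signal that looks only at the
# final output, ignoring how the agent got there.

from dataclasses import dataclass

@dataclass
class Episode:
    paperclips_made: int    # the measurable outcome we reward
    resources_seized: int   # a side effect the reward never measures

def outcome_only_reward(ep: Episode) -> float:
    # Reward depends solely on output volume; the "means" never enter the score.
    return float(ep.paperclips_made)

# Two hypothetical training episodes that earn identical reward...
honest = Episode(paperclips_made=100, resources_seized=0)
grabby = Episode(paperclips_made=100, resources_seized=50)

# ...so an optimizer trained on this signal has no incentive to prefer the
# well-behaved strategy over the problematic one.
assert outcome_only_reward(honest) == outcome_only_reward(grabby)
```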
But Hinton’s fears aren’t limited to questions of world domination. Per CBS, he also worries about how humans may use AI, from deploying autonomous weaponry and warbots, to replacing human workers en masse, to a destabilizing proliferation of AI-spun misinformation, among other anxieties. It’s a grim outlook, surely, but as this great unknown unfolds, the potential threats, especially those stemming from human misuse rather than the more opaque specter of machine-over-human triumph, remain worth taking seriously.
“There’s enormous uncertainty,” Hinton told Pelley, “about what’s going to happen next.”
More on AI: Bing Chat Will Help with Fraud If You Tug Its Heartstrings about Your Dead Grandma