
Anthropic’s chief scientist Jared Kaplan is making some grave predictions about humanity’s future with AI.
The choice is ours, in his framing. For now, our fates remain mostly in our hands, unless we decide to pass the proverbial baton to the machines, that is.
Such a point is fast approaching, he says in a new interview with The Guardian. By 2030, and perhaps as soon as 2027, Kaplan predicts, humanity will have to decide whether to take the “ultimate risk” of letting AI models train themselves. The ensuing “intelligence explosion” could elevate the tech to new heights, birthing a so-called artificial general intelligence (AGI) that equals or surpasses human intellect and delivers all sorts of scientific and medical advancements. Or it could allow AI’s power to snowball beyond our control, leaving us at the mercy of its whims.
“It sounds like a kind of scary process,” he told the newspaper. “You don’t know where you end up.”
Kaplan is one of many prominent figures in AI warning about the field’s potentially disastrous consequences. Geoffrey Hinton, one of the three so-called godfathers of AI, famously declared that he regretted his life’s work, and has frequently warned about how AI could upend or even destroy society. OpenAI CEO Sam Altman predicts that AI will wipe out entire categories of labor. Kaplan’s boss, Anthropic CEO Dario Amodei, recently warned that AI could eliminate half of all entry-level white-collar jobs, and accused his competitors of “sugarcoating” just how badly AI will disrupt society.
It sounds like Kaplan agrees with his boss’s jobs assessment. AI will be able to do “most white-collar work” in two to three years, he said in the interview. And while he’s optimistic we’ll be able to keep AIs aligned with human interests, he’s also worried about the prospect of allowing powerful AI to train other AIs, an “extremely high-stakes decision” we’ll have to make in the near future.
“That’s the thing that we view as maybe the biggest decision or scariest thing to do… once no one’s involved in the process, you don’t really know,” he told The Guardian. “One is do you lose control over it? Do you even know what the AIs are doing?”
To an extent, larger AI models are already used to train smaller AI models in a process called distillation, which allows the smaller AI to essentially catch up with its larger teacher. Kaplan, however, is worried about what’s termed recursive self-improvement, in which the AIs learn without human intervention and make substantial leaps in their capabilities.
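For the technically curious, here is a minimal sketch of what the distillation described above looks like in practice. It assumes PyTorch purely for illustration; the article names no framework, models, or code, so the function name, temperature value, and toy tensors below are all hypothetical:

```python
# A toy sketch of knowledge distillation (an assumption; not any lab's
# actual training code). The student is nudged toward the teacher's
# softened output distribution via a KL-divergence loss.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soften both distributions with a temperature, then measure how far
    the student's distribution is from the teacher's."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_student, soft_targets, reduction="batchmean") * temperature**2

# Toy usage: a frozen "teacher" and a trainable "student" over the same vocabulary.
vocab_size = 10
teacher_logits = torch.randn(4, vocab_size)                       # fixed teacher outputs
student_logits = torch.randn(4, vocab_size, requires_grad=True)   # student being trained
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()  # the student updates to better mimic its larger teacher
print(loss.item())
```

The key point is that a human-built teacher still supplies the training signal here. Recursive self-improvement, by contrast, would mean models generating and acting on their own training signal with no human in the loop.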
Whether we allow that to happen comes down to some heavy philosophical questions about the tech.
“The main question there is: are the AIs good for humanity?” Kaplan said. “Are they helpful? Are they going to be harmless? Do they understand people? Are they going to allow people to continue to have agency over their lives and over the world?”
While AI’s dangers are real, Kaplan’s warnings warrant some careful unpacking. For one, they uphold the premise that AI is already some of the most consequential and important tech ever made, regardless of whether existing AI systems represent the powerful autonomous machines of so many cautionary sci-fi tales, or are at least a meaningful stepping stone to getting there. The adage holds that there’s no such thing as bad publicity, and doomsaying, especially in the AI industry, is its own form of hype. Visions of apocalypse distract from AI’s more mundane consequences, like its staggering environmental toll, its flouting of copyright laws, and its addictive, delusion-inducing cognitive effects.
Moreover, many AI experts, including some of the field’s foundational figures like Yann LeCun, don’t believe that the LLM architecture underpinning AI chatbots is capable of blossoming into the all-powerful, intelligent systems that figures like Kaplan are so worried about. It’s not even clear whether AI is actually increasing productivity in the workplace, with some research suggesting the opposite, alongside several notable cases of bosses replacing their workers with AI agents, only to rehire them once the tools failed.
Kaplan conceded it’s possible that AI’s capabilities could stagnate. “Maybe the best AI ever is the AI that we have right now,” he mused. “But we really don’t think that’s the case. We think it’s going to keep getting better.”
More on AI: Google CEO Says We’re All Going to Have to Suffer Through It as AI Puts Society Through the Woodchipper