Anthropic has consistently painted itself as the ultra-responsible good guy on the frontier of AI development. The group was founded by defectors from OpenAI, and its CEO Dario Amodei recently said that the goal was to put “positive pressure on this industry to always do the right thing for our users.”
Far be it from us to doubt his intentions, but fragments of reality drifted into the picture in a newly published interview with Amodei by Time, which just named Anthropic one of its 100 Most Influential Companies. The entire thing is worth a read, but the standout detail is probably that in Amodei’s office hangs — and no, we are not joking — a framed meme of a “giant robot ransacking a burning city.”
“Underneath, the image’s tongue-in-cheek title: ‘Deep learning is hitting a wall,’” Time elaborates. “That’s a refrain you often hear from AI skeptics, who claim that rapid progress in artificial intelligence will soon taper off. But in the image, an arrow points to the robot, labeling it ‘deep learning.’ Another points to the devastated city: ‘wall.’”
On a literal level, we’re hopefully all in agreement that giant robots ransacking cities would be a bad thing. But thematically, the framed meme resonates deeply with Amodei’s thoughts in the Time interview, where he tries his best to maintain Anthropic’s “good guy” image while making space for financial pressures, the dangers of the tech, and the fact that while Anthropic isn’t operating on the epic scale of OpenAI quite yet, it’s already one of the better-funded players in the space.
Take this response to a question about Anthropic’s company culture — a thoughtful answer, for sure, but one worded very carefully to avoid nailing down any specific obligations the company has to the safety and wellbeing of the public who will have to live with whatever AI tech it creates.
“In terms of the safety side of things, there’s a little bit of a delta between the public perception and what we have in mind. I think of us less as an AI safety company, and more think of us as a company that’s focused on public benefit,” Amodei said. “We’re not a company that believes a certain set of things about the dangers that AI systems are going to have. That’s an empirical question. I more want Anthropic to be a company where everyone is thinking about the public purpose, rather than a one-issue company that’s focused on AI safety or the misalignment of AI systems. Internally I think we’ve succeeded at that, where we have people with a bunch of different perspectives, but what they share is a real commitment to the public purpose.”
The exec took even more of a dodge on a question about what’ll happen if the unpredictable Donald Trump wins the presidential election in November, saying only that “whoever the next president is, we’re going to work with them to do the best we can.”
He wasn’t entirely evasive, though. Perhaps Amodei’s most revealing answer came when he more or less spelled out the reality: even AI researchers who want to move slowly and methodically are operating in a world where their competitors are pushing ahead as fast as possible.
“I’d prefer to live in an ideal world,” Amodei said. “Unfortunately in the world we actually live in, there’s a lot of economic pressure, there’s not only competition between companies, there’s competition between nations. But if we can really demonstrate that the risks are real, which I hope to be able to do, then there may be moments when we can really get the world to stop and consider for at least a little bit. And do this right in a cooperative way. But I’m not naive — those moments are rare, and halting a very powerful economic train is something that can only be done for a short period of time in extraordinary circumstances.”
And as far as heading off the threat of that metaphorical giant robot goes, there was even one fascinating moment in which Amodei suggested he might actually welcome it if AI research imminently hit the wall the meme is riffing on.
“Every time we train a new model, I look at it and I’m always wondering — I’m never sure in relief or concern — [if] at some point we’ll see, oh man, the model doesn’t get any better,” he said. “I think if [the effects of scaling] did stop, in some ways that would be good for the world. It would restrain everyone at the same time. But it’s not something we get to choose — and of course the models bring many exciting benefits. Mostly, it’s a fact of nature. We don’t get to choose, we just get to find out which world we live in, and then deal with it as best we can.”
More on Anthropic: Anthropic CEO Says That by Next Year, AI Models Could Be Able to “Replicate and Survive in the Wild”