Anthropic’s Mike Krieger wants to build AI products that are worth the hype

Anthropic’s new chief product officer on the promise and limits of chatbots like Claude and what’s next for generative AI.

A photo illustration of Anthropic chief product officer Mike Krieger

Photo illustration by The Verge / Photo: Anthropic

Today, I’m talking with Mike Krieger, the new chief product officer at Anthropic, one of the hottest AI companies in the industry.

Anthropic was started in 2021 by former OpenAI executives and researchers who set out to build a more safety-minded AI company — a real theme among ex-OpenAI employees lately. Anthropic’s main product right now is Claude, the name of both its industry-leading AI model and a chatbot that competes with ChatGPT. 

Anthropic has billions in funding from some of the biggest names in tech, primarily Amazon. At the same time, Anthropic has an intense safety culture that’s distinct among the big AI firms of today. The company is notable for employing some people who legitimately worry AI might destroy mankind, and I wanted to know all about how that tension plays out in product design.

On top of that, Mike has a pretty fascinating résumé: longtime tech fans likely know Mike as the cofounder of Instagram, a company he started with Kevin Systrom before selling it to Facebook — now, Meta — for $1 billion back in 2012. That was an eye-popping amount back then, and the deal turned Mike into founder royalty basically overnight.

He left Meta in 2018, and a few years later, he started to dabble in AI — but not quite the type of AI we now talk about all the time on Decoder. Instead, Mike and Kevin launched Artifact, an AI-powered news reader that did some very interesting things with recommendation algorithms and aggregation. Ultimately, it didn’t take off like they hoped. Mike and Kevin shut it down earlier this year and sold the underlying tech to Yahoo.

I was a big fan of Artifact, so I wanted to know more about the decision to shut it down as well as the decision to sell it to Yahoo. Then I wanted to know why Mike decided to join Anthropic and work in AI, an industry with a lot of investment but very few consumer products to justify it. What’s this all for? What products does Mike see in the future that make all the AI turmoil worth it, and how is he thinking about building them?

I’ve always enjoyed talking product with Mike, and this conversation was no different, even if I’m still not sure anyone’s really described what the future of this space looks like.

Okay, Anthropic chief product officer Mike Krieger. Here we go.

This transcript has been lightly edited for length and clarity. 

Mike Krieger, you are the new chief product officer at Anthropic. Welcome to Decoder.

Thank you so much. It’s great to be here. It’s great to see you.

I’m excited to talk to you about products. The last time I talked to you, I was trying to convince you to come to the Code Conference. I didn’t actually get to interview you at Code, but I was trying to convince you to come. I said, “I just want to talk about products with someone as opposed to regulation,” and you’re like, “Yes, here’s my product.”

To warn the audience: we’re definitely going to talk a little bit about AI regulation. It’s going to happen. It seems like it’s part of the puzzle, but you’re building the actual products, and I have a lot of questions about what those products could be, what the products are now, and where they’re going. 

I want to start at the beginning of your Anthropic story, which is also the end of your Artifact story. So people know, you started at Instagram, and you were at Meta for a while. Then you left Meta and you and [Instagram cofounder] Kevin Systrom started Artifact, which was a really fun news reader and had some really interesting ideas about how to surface the web and have comments and all that, and then you decided to shut it down. I think of the show as a show for builders, and we don’t often talk about shutting things down. Walk me through that decision, because it’s as important as starting things up sometimes.

It really is, and the feedback we’ve gotten post-shutdown for Artifact was some mixture of sadness but also kudos for calling it. I think that there’s value in having a moment where you say, “We’ve seen enough here.” It’s a product I still love and miss, and in fact, I will run into people and I’ll expect them to say, “I love Instagram” or “I love Anthropic.” They’re always like, “Artifact… I really miss Artifact.” So clearly, it resonated with a too-small but very passionate group of folks. We’d been working on the full run of it for about three years, and the product had been out for a year. We were looking at the metrics, looking at growth, looking at what we had done, and we had a moment where we said, “Are there ideas or product directions that will feel dumb if we don’t try them before we call it?”

We had a list of those, and that was kind of mid-last year. We basically took the rest of the year to work through those and said, “Yeah, those move the needle a little bit,” but it wasn’t enough to convince us that this was really on track to be something that we were collectively going to spend a lot of time on over the coming years. That was the right moment to say, “All right, let’s pause. Let’s step back. Is this the right time to shut it down?” The answer was yes.

Actually, if you haven’t seen it, Yahoo basically bought it, took all the code, and redid Yahoo News as Artifact, or the other way around. It’s very funny. You’ll have a little bit of a Bizarro World moment the first time you see it. You’re like, “This is almost exactly like Artifact: a little bit more purple, some different sources.” 

It was definitely the right decision, and you know it’s a good decision when you step back and the thing you regret is that it didn’t work out, not that you had to make that decision or that you made that exact decision at the time that you did.

There are two things about Artifact I want to ask about, and I definitely want to ask about what it’s like to sell something to Yahoo in 2024, which is unusual. The first is that Artifact was very much designed to surface webpages. It was predicated on a very rich web, and if there’s one thing I’m worried about in the age of AI, it’s that the web is getting less rich.

More and more things are moving to closed platforms. More and more creators want to start something new, but they end up on YouTube or TikTok or… I don’t know if there are dedicated Threads creators yet, but they’re coming. It seemed like that product was chasing a dream that might be under pressure from AI specifically, but also just the rise of creator platforms more broadly. Was that a real problem, or is that just something I saw from the outside?

I would agree with the assessment but maybe see different root causes. I think what we saw was that some sites were able to balance a mix of subscription, tasteful ads, and good content. I would put The Verge at the top of that list. I’m not just saying that since I’m talking to you. Legitimately, every time we linked to a Verge story from Artifact, somebody clicked through. It was like, “This is a good experience. It feels like things are in balance.” At the extremes, though, like local news, a lot of those websites for economic reasons have become sort of like, you arrive and there’s a sign-in with Google and a pop-up to sign up to the newsletter before you’ve even consumed any content. That’s probably a longer-run economic question of supporting local news, probably more so than AI. At least that trend seems like it’s been happening for quite a while.

The creator piece is also really interesting. If you look at where things that are breaking news or at least emerging stories are happening, it’s often an X post that went viral. What we would often get on Artifact is the summary roundup of the reactions to the thing that happened yesterday, which, if you’re relying on that, you’re a little bit out of the loop already.

When I look at where things are happening and where the conversation is happening, at least for the cultural core piece of that conversation, it’s often not happening anymore on media properties. It is starting somewhere else and then getting aggregated elsewhere, and I think that just has an implication on a site or a product like Artifact and how well you’re ever going to feel like this is breaking news. Over time, we moved to be more interest-based and less breaking news, which, funny enough, Instagram at its heart was also very interest-based. But can you have a product that is just that? I think that was the struggle.

You said media properties. Some media properties have apps. Some are expressed only as newsletters. But I think what I’m asking about is the web. This is just me doing therapy about the web. What I’m worried about is the web. The creators aren’t on the web. We’re not making websites, and Artifact was predicated on there being a rich web. Search products in general are sort of predicated on there being a rich and searchable web that will deliver good answers. 

To some extent, AI products require there to be a new web because that’s where we’re training all our models. Did you see that — that this promise of the web is under pressure? If all the news is breaking on a closed platform you can’t search or index, like TikTok or X, then actually building products on the web might be getting more constrained and might not be a good idea anymore.

Even citing newsletters is a great example. Sometimes there’s an equivalent Substack site for some of the best stuff that I read, and some of the newsletters exist purely in email. We even set up an email account that just ingested newsletters to try to surface them or at least surface links from them, and the design experience is not there. The thing I’ve noticed on the open web in general and as a longtime fan of the web — somebody who was very online as a preteen back in Brazil, before being online was a thing — is that, in a lot of ways, the incentives have been set up around, “Well, a recipe won’t rank highly if it’s just a recipe. Let’s tell the story about the life that happened leading up to that recipe.”

Those trends have been happening for a while and are already leading to a place where the end consumer might be a user, but it is being intermediated through a search engine and optimized for that findability or optimized for what’s going to get shared a bunch or get the most attention. Newsletters and podcasts are two ways that have probably most successfully broken through that, and I think that’s been an interesting direction.

But in general, I feel like there’s been a decadelong risk for the open web in terms of the intermediation happening between someone trying to tell a story and someone else receiving that story. All the roadblocks along the way just make that more and more painful. It’s no surprise then that, “Hey, I can actually just open my email and get the content,” feels better in some ways, although it’s also not great in a bunch of other ways. That’s how I’ve watched it, and I would say it’s not in a healthy place where it is now.

The way that we talk about that thesis on Decoder most often is that people build media products for the distribution. Podcasts famously have open distribution; it’s just an RSS feed. Well, it’s like an RSS feed but there’s Spotify’s ad server in the middle. I’m sorry to everybody who gets whatever ads that we put in here. But at its core, it’s still an RSS product. 

Newsletters are still, at their core, an IMAP product, an open-mail protocol product. The web is search distribution, so we’ve optimized it to that one thing. And the reason I’m asking this, and I’m going to come back to this theme a few times, is that it felt like Artifact was trying to build a new kind of distribution, but the product it was trying to distribute was webpages, which were already overtly optimized for something else.

I think that’s a really interesting assessment. It’s funny watching the Yahoo version of it because they’ve done the content deals to get the more slimmed-down pages, and though they have fewer content sources, the experience of tapping on each individual story, I think, is a lot better because those have been formatted for a distribution that is linked to some paid acquisition, which is different from what we were doing, which was like, “Here’s the open web. We’ll give you warts and all and link directly to you.” But I think your assessment feels right.

Okay, so that’s one. I want to come back to that theme. I really wanted to start with Artifact in that way because it feels like you had an experience in one version of the internet that is maybe under pressure. The other thing I wanted to ask about Artifact is that you and Kevin, your cofounder, both once told me that you had big ideas, like scale ideas, for Artifact. You wouldn’t tell me what it was at the time. It’s over now. What was it?

There were two things that I remained sad that we didn’t get to see through. One was the idea of good recommender systems underlying multiple product verticals. So news stories being one of them, but I had the belief that if the system understands you well through how you’re interacting with news stories, how you’re interacting with content, then is there another vertical that could be interesting? Is it around shopping? Is it around local discovery? Is it around people discovery? All these different places. I’ll separate maybe machine learning and AI, and I realize that’s a shifting definition throughout the years, but let’s call it, for the purposes of our conversation, recommender systems or machine learning systems — for all their promise, my day-to-day is actually not filled with too many good instances of that product.

The big company idea was, can we bring Instagram-type product thinking to recommender systems and combine those two things in a way that creates new experiences that aren’t beholden to your existing friend and follow graph? With news being an interesting place to start, you highlight some good problems about the content, but the appealing part was that we were not trying to solve the two-sided marketplace all at once. It turns out, half that marketplace was already search-pilled and had its own problems, but at least there was the other side as well. The other piece, even within news, is really thinking about how you eventually open this up so creators can actually be writing content and understanding distribution natively on the platform. I think Substack is pursuing this from a very different direction. It feels like every platform eventually wants to get to this as well.

When you watch the closest analogs in China, like Toutiao, they started very much with crawling the web and having these eventual publisher deals, and now it is, I would guess, 80 to 90 percent first-party content. There are economic reasons why that’s nice and some people make their living writing articles about local news stories on Toutiao, including a sister or close family member of one of our engineers. But the other side of it is that content can be so much more optimized for what you’re doing. 

Actually, at Code, I met an entrepreneur who was creating a novel media experience — something like, if Stories met news met mobile, what would that look like for a news story? I think for something like that to succeed, it also needs distribution that has that as the native distribution type. So the two ideas where I’m like, “one day somebody [will do this]” are recommendation systems for everything and a primarily recommendation-based, first-party content-writing platform.

All right, last Artifact question. You shut it down and then there was a wave of interest, and then publicly, one of you said, “Oh, there’s a wave of interest, we might flip it,” and then it was Yahoo. Tell me about that process.

There were a few things that we wanted to align. We’d worked in that space for long enough that whatever we did, we sort of wanted to tie a bow around it and move on to whatever was next. That was one piece. The other piece was that I wanted to see the ideas live on in some way. There were a lot of conversations around, “Well, what would it become?” The Yahoo one was really interesting, and I would admit to being pretty unaware of what they were doing, beyond the fact that I was still using Yahoo Finance in my fantasy football league. And they were like, “We want to take it, and we think in two months, we can relaunch it as Yahoo News.”

I was thinking, “That sounds pretty crazy. That’s a very short timeline in a code base you’re not familiar with.” They had access to us and we were helping them out almost full time, but that’s still a lot. But they actually pretty much pulled it off. I think it was 10 weeks instead of eight weeks. But I think there is a newfound energy in there to be like, “All right, what are the properties we want to build back up again?” I fully admit coming in with a bit of a bias. Like, I don’t know what’s left at Yahoo or what’s going to happen here. And then the tech teams bit into it with an open mouth. They went all in and they got it shipped. I’ll routinely text Justin [Bisignano], who was our Android lead and is at Anthropic now. I’ll find little details in Yahoo News, and I’m like, “Oh, they kept that.”

I spent a lot of time with this 3D spinning animation when you got to a new reading level — it’s this beautiful reflection specular highlighting thing. They kept it, but now it goes, “Yahoo,” when you do it. And I was like, “That’s pretty on brand.” It was a really fascinating experience, but it gets to live on, and it will probably have a very different future than what we were envisioning. I think some of the core ideas are there around like, “Hey, what would it mean to actually try to create a personalized news system that was really decoupled from any kind of existing follow graph or what you were seeing already on something like Facebook?”

Were they the best bidder? Was the decision that Yahoo will deploy this to the most people at scale? Was it, “They’re offering us the most money”? How did you choose?

It was an optimization function, and I would say the three variables were: the deal was attractive or attractive enough; our personal commitments post-transition were pretty light, which I liked; and they had reach. Yahoo News I think has a hundred million monthly users still. So it was reach, minimal commitment but enough that we felt like it could be successful, and then they were in the right space at least on the bid size.

It sounds like the dream. “You can just have this. I’m going to walk away. It’s a bunch of money.” Makes sense. I was just wondering if that was it or whether it wasn’t as much money but that they had the biggest platform, because Yahoo is deceptively huge.

Yeah, it’s deceptively still huge and under new leadership, with a lot of excitement there. It was not a huge exit, and I would not call it a super successful outcome, but the fact that that chapter closed in a nice way, and that we could move on without wondering if we should have done something different, meant that I slept much better at night in Q1 of this year.

So that’s that chapter. The next chapter is when you show up as the chief product officer at Anthropic. What was that conversation like? Because in terms of big commitments and hairy problems — are we going to destroy the web? — it’s all right there, and maybe it’s a lot more work. How’d you make the decision to go to Anthropic?

The top-level decision was what to do next. And I admit to having a bit of an identity crisis at the beginning of the year. I was like, “I only really know how to start companies.” And actually, more specifically, I probably only know how to start companies with Kevin. We make a very good cofounder pair. 

I was looking at it like what are the aspects of that that I like? I like knowing the team from day one. I like having a lot of autonomy. I like having partners that I really trust. I like working on big problems with a lot of open space. At the same time, I said, “I do not want to start another company right now. I just went through the wringer on that for three years. It had an okay outcome, but it wasn’t the outcome we wanted.” I sat there saying, “I want to work on interesting problems at scale at a company that I started, but I don’t want to start a company.”

I kind of swirled a bit, and I was like, “What do I do next?” I definitely knew I did not want to just invest. Not that investing is a “just” thing, but it’s different. I’m a builder at heart, as you all know. I thought, “This is going to be really hard. Maybe I need to take some time and then start a company.” And then I got introduced to the Anthropic folks via the head of design, who’s somebody I actually built my very first iPhone app with in college. I’ve known him for a long time. His name is Joel [Lewenstein].

I started talking to the team and realized the research team here is incredible, but the product efforts were so nascent. I wasn’t going to kid myself that I was coming in as a cofounder. The company has been around for a couple of years. There were already company values and a way things were working. They called themselves ants. Maybe I would have advocated for a different employee nickname, but it’s fine. That ship has sailed. But I felt like there was a lot of product greenfield here and a lot of things to be done and built.

It was the closest combination I could have imagined to 1) the team I would’ve wanted to have built had I been starting a company; 2) enough to do — so much to do that I wake up every day both excited and daunted by how much there is to do; and 3) already momentum and scale so I could feel like I was going to hit the ground running on something that had a bit of tailwind. That was the combination.

So the first one was the big decision: what do I do next? And then the second one was like, “All right, is Anthropic the right place for it?” It was the sort of thing where every single conversation I had with them, I’d be like, “I think this could be it.” I wasn’t thinking about joining a company that was already running like crazy, but I wanted to be closer to the core AI tech. I wanted to be working on interesting problems. I wanted to be building, but I wanted it to feel as close-ish to a cofounder kind of situation as I could.

Daniela [Amodei], who is the president here, maybe she was trying to sell me, but she said, “You feel like the eighth cofounder that we never had, and that was our product cofounder,” which is amazing that they had seven cofounders and none of them were the product cofounder. But whatever it was, it sold me, and I was like, “All right, I’m going to jump back in.”

I’m excited for the inevitable Beatles documentaries about how you’re the fifth Beatle, and then we can argue about that forever.

The Pete Best event? I hope not. I’m at least the Ringo that comes in later.

In 2024, with our audience as young as it is, that might be a deep cut, but I encourage everybody to go search for Pete Best and how much of an argument that is.

Let me ask you two big-picture questions about working in AI generally. You started at Instagram, you’re deep with creatives, you built a platform of creatives, and you obviously care about design. Within that community, AI is a moral dilemma. People are upset about it. I’m sure they’ll be upset that I even talked to you. 

We had the CEO of Adobe on to talk about Firefly, and that led to some of the most upset emails we’ve ever gotten. How did you evaluate that? “I’m going to go work in this technology that is built on training against all this stuff on the internet, and people have really hot emotions about that.” There’s a lot to it. There are copyright lawsuits. How did you think about that?

I have some of these conversations. One of my good friends is a musician down in LA. He comes up to the Bay whenever he’s on tour, and we’ll have one-hour conversations over pupusas about AI in music and how these things connect and where these things go. He always has interesting insights on what parts of the creative process or which pieces of creative output are most affected right now, and then you can play that out and see how that’s going to change. I think that question is a big part of why I ended up at Anthropic, if I was going to be in AI.

Obviously the written word is really important, and there’s so much that happens in text. I definitely do not mean to make this sound like text is less creative than other things. But I think the fact that we’ve chosen to really focus on text and image understanding and keep it to text out — and text out that is supposed to be something that is tailored to you rather than reproducing something that’s already out there — reduces some of that space significantly where you’re not also trying to produce Hollywood-type videos or high-fidelity images or sounds and music. 

Some of that is a research focus. Some of that is a product focus. The space of thorny questions is still there but also a bit more limited in those domains, or it’s outside of those domains and more purely on text and code and those kinds of expressions. So that was a strong contributor to me wanting to be here versus other spots.

There’s so much controversy about where the training data comes from. Where does Anthropic’s training data for Claude come from? Is it scraped from the web like everybody else?

[It comes from] scraping the web. We respect robots.txt. We have a few other data sources that we license and work with folks on separately. Let’s say the majority of it is web crawl, done in a respectful way.

Were you respecting robots.txt before everyone realized that you had to start respecting robots.txt?

We were respecting robots.txt beforehand. And then, in the cases where it wasn’t getting picked up correctly for whatever reason, we’ve since corrected that as well.
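(For readers curious what “respecting robots.txt” means mechanically: a crawler fetches a site’s robots.txt file and checks each URL against its rules before requesting it. Here’s a minimal sketch using Python’s standard-library parser; the rules and the “ExampleBot” user agent are illustrative, not Anthropic’s actual crawler.)

```python
from urllib import robotparser

# A polite crawler parses the site's robots.txt rules...
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

# ...and checks every URL against them before fetching.
print(rp.can_fetch("ExampleBot", "https://example.com/articles/1"))  # True
print(rp.can_fetch("ExampleBot", "https://example.com/private/x"))   # False
```

In practice, a crawler would load the live file with `rp.set_url(...)` and `rp.read()` rather than parsing a hardcoded list, and would also throttle request rates.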

What about YouTube? Instagram? Are you scraping those sites?

No. When I think about the players in this space, there are times when I’m like, “Oh, it must be nice to be inside Meta.” I don’t actually know if they train on Instagram content or if they talk about that, but there’s a lot of good stuff in there. And same with YouTube. I mean, a close friend of mine is at YouTube. That’s the repository of collective knowledge of how to fix any dishwasher in the world, and people ask that kind of stuff. So we’ll see over time what those end up looking like.

You don’t have a spare key to the Meta data center or the Instagram server?

[Laughs] I know, I dropped it on the way out.

When you think about that general dynamic, there are a lot of creatives out there who perceive AI to be a risk to their jobs or perceive that there’s been a big theft. I’ll just ask about the lawsuit against Anthropic. It’s a bunch of authors who say that Claude has illegally trained against their books. Do you think there’s a product answer to this? This is going to lead into my second question, but I’ll just ask broadly, do you think you can make a product so good that people overcome these objections?

Because that is kind of the vague argument I hear from the industry. Right now, we’re seeing a bunch of chatbots, and you can make a chatbot fire off a bunch of copyrighted information, but there’s going to come a turn when that goes away because the product will be so good and so useful that people will think it has been worth it. I don’t see that yet. I think a lot of the heart of the copyright lawsuits, beyond just the legal piece of it, is that the tools are not so useful that anyone can see that the trade is worth it. Do you think there’s going to be a product where it is obvious that the trade is worth it?

I think it’s very use case dependent. The question we drove our Instagram team insane with was, “Well, what problem are you solving?” A general text bot interface that can answer any question is a technology and the beginnings of a product, but it’s not a precise problem that you are solving. Grounding yourself in that maybe helps you get to that answer. For example, I use Claude all the time for code assistance. That is solving a direct problem, which is, I’m trying to ramp up on product management, get our products underway, and also work on a bunch of different things. To the extent that I have any time to be in pure build mode, I want to be really efficient. That is a very directly connected problem and a total game-changer for me as a builder, and it lets me focus on different pieces as well.

I was talking to somebody right before this call. They are now using Claude to soften up or otherwise change their long missives on Slack before they send them. This kind of editor solves their immediate problem. Maybe they need to tone it down and chill out a little bit before sending a Slack message. Again, that grounds it in use because that’s what I’m trying to really focus on. If you try to boil the ocean, I think you end up really adjacent to these kinds of ethical questions that you raise. If you’re an “anything box,” then everything is potentially either under threat or problematic. I think there’s real value in saying, “All right, what are the things we want to be known to be good for?”

I’d argue today that the product actually does serve some of those well enough that I’m happy it exists and I think folks are in general. And then, over time, if you look at things like writing assistance more broadly for novel-length writing, I think the jury’s still out. My wife was doing kind of a prototype version of that. I’ve talked to other folks. Our models are quite good, but they’re not great at keeping track of characters over book-length pieces or reproducing particular things. I would ground that in “what can we be good at now?” and then let’s, as we move into new use cases, navigate those carefully in terms of who is actually using it and make sure we’re providing value to the right folks in that exchange.

Let me ground that question in a more specific example, both in order to ask you a more specific question and also to calm the people who are already drafting me angry emails.

TikTok exists. TikTok is maybe the purest garden of innovative copyright infringement that the world has ever created. I’ve watched entire movies on TikTok, and it’s just because people have found ways to bypass their content filters. I do not perceive the same outrage at TikTok for copyright infringement as I do with AI. Maybe someone is really mad. I have watched entire 1980s episodes of This Old House on TikTok accounts that are labeled, “Best of This Old House.” I don’t think Bob Vila is getting royalties for that, but it seems to be fine because TikTok, as a whole, has so much utility, and people perceive even the utility of watching old 1980s episodes of This Old House.

There’s something about that dynamic between “this platform is going to be loaded full of other people’s work” and “we’re going to get value from it” that seems to be rooted in the fact that, mostly, I’m looking at the actual work. I’m not looking at some 15th derivative of This Old House as expressed by an AI chatbot. I’m actually just looking at a 1980s version of This Old House. Do you think that AI chatbots can ever get to a place where it feels like that? Where I’m actually looking at the work or I’m providing my attention or time or money to the actual person who made the underlying work, as opposed to, “We trained it on the open internet and now we’re charging you $20, and 15 steps back, that person gets nothing.”

To ground it in the TikTok example as well, I think there’s also an aspect where if you imagine the future of TikTok, most people probably say, “Well, maybe they’ll add more features and I’ll use it even more.” I don’t know what the average time spent on it is. It definitely eclipses what we ever had on Instagram. 

That’s terrifying. That’s the end of the economy.

Exactly. “Build AGI, create universal prosperity so we can spend time on TikTok” would not be my preferred future outcome, but I guess you could construct that if you wanted to. I think the future feels, I would argue, a bit more knowable in the TikTok use case. In the AI use case, it’s a bit more like, “Well, where does this accelerate to? Where does this eventually complement me, and where does it supersede me?” I would posit that a lot of the AI-related anxiety can be tied to the fact that this technology was radically different three or four years ago.

Three or four years ago, TikTok existed, and it was already on that trajectory. Even if it weren’t there, you could have imagined it from where YouTube and Instagram were. If they had an interesting baby with Vine, it might’ve created TikTok. It is partially because the platform is so entertaining; I think that’s a piece. That connection to real people is an interesting one, and I’d love to spend more time on that because I think that’s an interesting piece of the AI ecosystem. Then the last piece is just the knowability of where it goes. Those are probably the three [elements] that ground it more. 

When Anthropic started, it was probably the original “we’re all quitting OpenAI to build a safer AI” company. Now there are a lot of them. My friend Casey [Newton] makes a joke that every week someone quits to start yet another safer AI company. Is that expressed in the company? Obviously Instagram had big moderation policies. You thought about them a lot. Instagram is not perfect as a platform or a company, but moderation is certainly at its core. Is safety at the core of Anthropic in the same way — that there are things you will not do?

Yes, deeply. And I saw it in week two. So I’m a ship-oriented person. Even with Instagram’s early days, it was like, “Let’s not get bogged down in building 50 features. Let’s build two things well and get it out as soon as possible.” Some of those decisions to ship a week earlier and not have every feature were actually existential to the company. I feel that in my bones. So week two, I was here. Our research team put out a paper on interpretability of our models, and buried in the paper was this idea that they found a feature inside one of the models that if amplified would make Claude believe it was the Golden Gate Bridge. Not just kind of believe it, like, as if it were prompted, “Hey, you’re the Golden Gate Bridge.” [It would believe it] deeply — in the way that my five-year-old will make everything about turtles, Claude made everything about the Golden Gate Bridge.

“How are you today?” “I’m feeling great. I’m feeling International Orange and I’m feeling in the foggy clouds of San Francisco.” Somebody in our Slack was like, “Hey, should we build and release Golden Gate Claude?” It was almost an offhand comment. A few of us were like, “Absolutely yes.” I think it was for two reasons. One, this was actually quite fun, but two, we thought it was valuable to get people to have some firsthand contact with a model that has had some of its parameters tuned. From that Slack message to having Golden Gate Claude out on the website was basically 24 hours. In that time, we had to do some product engineering, some model work, but we also ran through a whole battery of safety evals.

That was an interesting piece where you can move quickly, and not every time can you do a 24-hour safety evaluation. There are lengthier ones for new models. This one was a derivative, so it was easier, but the fact that that wasn’t even a question, like, “Wait, should we run safety evals?” Absolutely. That’s what we do before we launch models, and we make sure that it’s both safe from the things that we know about and also model out what some novel harms are. The bridge is unfortunately associated with suicides. Let’s make sure that the model doesn’t guide people in that direction, and if it does, let’s put in the right safeguards. Golden Gate Claude is a trivial example because it was like an Easter egg we shipped for basically two days and then wound down. But [safety] was very much at its core there.

Even as we prepare model launches, I have urgency: “Let’s get it out. I want to see people use it.” Then you actually do the timeline, and you’re like, “Well, from the point where the model is ready to the point where it’s released, there are things that we are going to want to do to make sure that we’re in line with our responsible scaling policy.” I appreciate that about the product and the research teams here that it’s not seen as, “Oh, that’s standing in our way.” It’s like, “Yeah, that’s why this company exists.” I don’t know if I should share this, but I’ll share it anyway. At our second all-hands meeting since I was here, somebody who joined very early here stood up and said, “If we succeeded at our mission but the company failed, I would see this as a good outcome.” 

I don’t think you would hear that elsewhere. You definitely would not hear that at Instagram. If we succeeded in helping people see the world in a more beautiful, visual way, but the company failed, I’d be super bummed. I think a lot of people here would be very bummed, too, but that ethos is quite unique.

This brings me to the Decoder questions. Anthropic is what’s called a public benefit corporation. There’s a trust underlying it. You are the first head of product. You’ve described the product and research teams as being different, then there’s a safety culture. How does that all work? How is Anthropic structured?

I would say, broadly, we have our research teams. We have the team that sits most closely between research and product, which is a team thinking about inference and model delivery and everything that it takes to actually serve these models because that ends up being the most complex part in a lot of cases. And then we have product. If you sliced off the product team, it would look similar to product teams at most tech companies, with a couple of tweaks. One is that we have a labs team, and the purpose of that team is to basically stick them in as early in the research process as possible with designers and engineers to start prototyping at the source, rather than waiting until the research is done. I can go into why I think that’s a good idea. That’s a team that got spun up right after I joined.

Then the other team we have is our research PM teams, because ultimately we’re delivering the models using these different services, and the models have capabilities — what they can see well in terms of multimodal, what types of text they understand, and even what languages they need to be good at. Having end-user feedback tied all the way back to research ends up being very important, and it prevents it from ever becoming an ivory tower, like, “We built this model, but is it actually useful?” We say we’re good at code. Are we really? Are startups that are using it for code giving us feedback like, “Oh, it’s good at these Python use cases, but it’s not good at this autonomous thing”? Great. That’s feedback that’s going to channel back in. So those are the two distinct pieces. Within product — a click down, because I know you get really interested on Decoder about team structures — we have Apps, which is Claude AI and Claude for Work; we have Developers, which is the API; and then we have our kooky labs team.

That’s the product side. The research side, is that the side that works on the actual models?

Yeah, that’s the side that works on the actual models — everything from researching model architectures to figuring out how these models scale — plus a strong red-teaming and safety alignment team as well. That’s another component that sits deeply in research, and I think some of the best researchers end up gravitating toward it, as they see it’s the most important thing they could work on.

How big is Anthropic? How many people?

We’re north of 700, at last count.

And what’s the split between that research function and the product function?

Product is just north of 100, and the rest is everything else — we have sales as well, but mostly research: the fine-tuning part of research, inference, and then the safety and scaling pieces. I described this within a month of joining as those crabs that have one super big claw: we’re really big on research, and product is still a very small claw. The other metaphor I’ve been using is that we’re a teenager whose limbs have grown at different rates, and some are still catching up.

The crazier bet is that I would love for us to not have to double the product team. I’d love for us instead to find ways of using Claude to make us more effective at everything we do on product so that we don’t have to double. Every team struggles with this, so it’s not a novel observation. But I look back at Instagram, and when I left, we had 500 engineers. Were we more productive than at 250? Almost certainly not. More productive going from 125 to 250? Marginally.

I had this really depressing interview once. I was trying to hire a VP of engineering, and I was like, “How do you think about developer efficiency and team growth?” He said, “Well, if every single person I hire is at least net contributing something that’s succeeding, even if it’s like a 1 to 1 ratio…” I thought that was depressing. It creates all this other swirl around team culture, dilution, etc. That’s something I’m personally passionate about. I was like, “How do we take what we know about how these models work and actually make it so the team can stay smaller and more tight-knit?”

Tony Fadell, who did the iPod, he’s been on Decoder before, but when we were starting The Verge, he was basically like, “You’re going to go from 15 or 20 people to 50 or 100 and then nothing will ever be the same.” I’ve thought about that every day since because we’re always right in the middle of that range. And I’m like, when is the tipping point? 

Where does moderation live in the structure? You mentioned safety on the model side, but you’re out in the market building products. You’ve got what sounds like a very horny Golden Gate Bridge people can talk to — sorry, every conversation has one joke about how horny the AI models are.

[Laughs] That is not what that is.

Where does moderation live? At Instagram, there’s the big centralized Meta trust and safety function. At YouTube, it’s in the product org under Neal Mohan. Where does it live for you?

I would broadly put it in three places. One is in the actual model training and fine-tuning, where part of what we do on the reinforcement learning side is saying we’ve defined a constitution for how we think Claude should be in the world. That gets baked into the model itself early. Before you hit the system prompt, before people are interacting with it, that’s getting encoded into how it should behave. Where should it be willing to answer and chime in, and where should it not be? That’s very linked to the responsible scaling piece. Then next is in the actual system prompt. In the spirit of transparency, we just started publishing our system prompts. People would always figure out clever ways to try to reverse them anyway, and we were like, “That’s going to happen. Why don’t we just actually treat it like a changelog?” 

As of this last week, you can go online and see what we’ve changed. That’s another place where we give the model additional guidance around how it should act. Of course, ideally, it gets baked in earlier. People can always find ways to try to get around it, but we’re fairly good at preventing jailbreaks. And then the last piece is where our trust and safety team sits — the team closest to that last mile. At Instagram, we called it at one point trust and safety, at another point well-being, but it’s that same kind of last-mile remediation. I would bucket that work into two pieces. One is, what are people doing with Claude and publishing out to the world? Artifacts was the first product we had with any social component at all: you could create an Artifact, hit share, and actually put it on the web. Moderation is a very common problem in shared content.

I lived shared content for almost 10 years at Instagram, and here, it was like, “Wait, do people have usernames? How do they get reported?” We ended up delaying that launch by a week and a half to make sure we had the right trust and safety pieces around moderation, reporting, cues around taking it down, limited distribution, figuring out what it means for the people on teams plans versus individuals, etc. I got very excited, like, “Let’s ship this. Sharing Artifacts.” Then, a week later, “Okay, now we can ship it.” We had to actually sort those things out.

So that’s on the content moderation side. And then, on the response side, we also have additional pieces that either prevent the model from reproducing copyrighted content in its completions or catch other harms that go against how we think the model should behave — things that should ideally have been caught earlier but, if they aren’t, get caught at that last mile. Our head of trust and safety calls it the Swiss cheese method: no one layer will catch everything, but ideally, enough layers stacked will catch a lot of it before it reaches the end.

I’m very worried about AI-generated fakery across the internet. This morning, I was looking at a Denver Post article about a fake news story about a murder — people were calling The Denver Post to find out why it hadn’t reported on it, which is, in its own way, the correct outcome. They heard a fake story; they called a trusted source.

At the same time, that The Denver Post had to go run down this fake murder true-crime story because an AI generated it and put it on YouTube seems very dangerous to me. There’s the death of the photograph, which we talk about all the time. Are we going to believe what we see anymore? Where do you sit on that? Anthropic is obviously very safety-minded, but we are still generating content that can go haywire in all kinds of ways.

I would maybe split internal to Anthropic and what I’ve seen out in the world. The Grok image generation stuff that came out two weeks ago was fascinating because, at launch, it felt like it was almost a total free-for-all. It’s like, do you want to see Kamala [Harris] with a machine gun? It was crazy stuff. I go back and forth on believing that having examples like that in the wild is actually helpful — almost inoculating — against taking for granted what is or isn’t a photograph or a video. I don’t think we’re far from that. And maybe it’s calling The Denver Post or a trusted source, or maybe it’s creating some hierarchy of trust that we can go after. There are no easy answers there, but that’s, not to sound grandiose, a society-wide thing that we’re going to reckon with on the image and video pieces as well.

On text, I think what changes with AI is the mass production. One thing that we look at is any type of coordinated effort. We looked at this at Instagram as well. At the individual level, it might be hard to catch the one person commenting in a Facebook group trying to start some stuff, because that’s probably indistinguishable from a human. But what we really looked for were networks of coordinated activity. We’ve been doing the same on the Anthropic side — looking for coordinated activity, which is going to happen more often on the API side than on Claude AI, because there are just more effective, efficient ways of doing things at scale there.

But when we see spikes in activity, that’s when we can go in and say, “All right, what does this end up looking like? Let’s go learn more about this particular API customer. Do we need to have a conversation with them? What are they actually doing? What is the use case?” I think it’s important to be clear as a company what you consider bugs versus features. It would be an awful outcome if Anthropic models were being used for any kind of coordination of fake news and election interference-type things. We’ve got the trust and safety teams actively working on that, and to the extent that we find anything, that’ll be a combo — additional model parameters plus trust and safety — to shut it down.

With apologies to my friends at Hard Fork, Casey [Newton] and Kevin [Roose], they ask everybody what their p(doom) is. I’m going to ask you that, but that question is rooted in AGI — what are the chances we think that it will become self-aware and kill us all? Let me ask you a variation of that first, which is, what if all of this just hastens our own information apocalypse and we end up just taking ourselves out? Do we need the AGI to kill us, or are we headed toward an information apocalypse first?

I think the information piece… Just take textual, primarily textual, social media. I think some of that happens on Instagram as well, but it’s easier to disseminate when it’s just a piece of text. That has already been a journey, I would say, in the last 10 years. But I think it comes and goes. I think we go through waves of like, “Oh man. How are we ever going to get to the truth?” And then good truth tellers emerge and I think people flock to them. Some of them are traditional sources of authority and some are just people that have become trusted. We can get into a separate conversation on verification and validation of identity. But I think that’s an interesting one as well.

I’m an optimistic person at heart, if you can’t tell. That’s my belief on the information chaos or proliferation piece — that we have the ability to learn, adapt, and then grow to ensure the right mechanisms are in place. I remain optimistic that we’ll continue to figure it out on that front. The AI component, I think, increases the volume, and the thing you would have to believe is that it could also increase some of the parsing. There was a William Gibson novel that came out a few years ago that had this concept that, in the future, perhaps you’ll have a social media editor of your own. That gets deployed as a sort of gating function between all the stuff that’s out there and what you end up consuming.

There’s some appeal in that to me, which is, if there’s a massive amount of data to consume, most of it is not going to be useful to you. I’ve even tried to scale back my own information diet to the extent that there are things that are interesting. I’d love the idea of, “Go read this thing in depth. This is worthwhile for you.”

Let me bring this all the way back around. We started talking about recommendation algorithms, and now we’re talking about classifiers and having filters on social media to help you see stuff. You’re on one side of it now. Claude just makes the things and you try not to make bad things. 

The other companies, Google and Meta, are on both sides of the equation. We’re racing forward with Gemini, we’re racing forward with Llama, and then we have to make the filtering systems on the other side to keep the bad stuff out. It feels like those companies are at decided cross purposes with themselves.

I think an interesting question is, and I don’t know what Adam Mosseri would say, what percentage of Instagram content could, would, and should be AI-generated, or at least AI-assisted in a few ways? 

But now, from your seat at Anthropic knowing how the other side works, is there anything you’re doing to make the filtering easier? Is there anything you’re doing to make it more semantic or more understandable? What are you looking at to make it so that the systems that sort the content have an easier job of understanding what’s real and what’s fake?

That’s on the research side, and now outside of my area of expertise. There’s active work on what the techniques are that could make it more detectable. Is it watermarking? Is it probability? I think that’s an open question but also a very active area of research. I think the other piece is… well, actually, I would break it down to three. There’s what we can do on detection and watermarking, etc. On the model piece, we also need it to be able to express some uncertainty a little bit better: “I actually don’t know about this. I’m not willing to speculate, or I’m not actually willing to help you filter these things down because I’m not sure. I can’t tell which of these things are true.” That’s also an open area of research and a very interesting one.

And then the last one is, if you’re Meta, if you’re Google, maybe the bull case is that if you’re primarily surfacing content generated by models that you yourself are building, there is probably a better closed loop that you can have there. I don’t know if that’s going to play out or whether people will always just flock to whatever the most interesting image generation model is, create with it, go publish it, and blow that up. I’m not sure. That jury is still out, but I would point to built-in tools: at Instagram, 90-plus percent of photos that were filtered were filtered inside the app because it’s most convenient. In that way, a closed ecosystem could be one route to at least having some verifiability of generated content.

Instagram filters are an interesting comparison here. Instagram started as photo sharing. It was Silicon Valley nerds, and then it became Instagram. It is a dominant part of our culture, and the filters had real effects on people’s self-image — negative effects particularly on teenage girls and how they feel about themselves. There are some studies that say teenage boys are starting to have self-image and body issues at higher rates because of what they perceive on Instagram. That’s bad, and it weighs against the general good of Instagram, which is that many more people get to express themselves and we build different kinds of communities. How are you thinking about those risks with Anthropic’s products? Because you lived it.

I was working with a coach and would always push him like, “Well, I want to start another company that has as much impact as Instagram.” He’s like, “Well, first of all, there’s no cosmic ledger where you’ll know exactly what impact you had, and second of all, how do you even weigh the positive against the negative?” I think the right way to approach these questions is with humility and then understanding as things develop. But, to me, it was: I’m excited and overall very optimistic about AI and its potential. If I’m going to be actively working on it, I want it to be somewhere where the drawbacks, the risks, and the mitigations were as important and as foundational to the founding story — to bring it back to why I joined. That’s how I balanced it for myself. You need to have that internal run loop of, “Great. Is this the right thing to launch? Should we launch this? Should we change it? Should we add some constraints? Should we explain its limitations?”

I think it’s essential that we grapple with those questions, or else you’ll end up saying, “Well, this is clearly just a force for good. Let’s blow it up and go all the way out.” I feel like that misses something, having seen it at Instagram. You can build a commenting system, but you also need to build the bullying filter that we built.

This is the second Decoder question. How do you make decisions? What’s your framework?

I’ll go meta for a quick second, which is that the culture here at Anthropic is extremely thoughtful and very document writing-oriented. If a decision needs to be made, there’s usually a document behind it. There are pros and cons to that. It means that as I joined and was wondering why we chose to do something, people would say, “Oh yeah, there’s a doc for that.” There’s literally a doc for everything, which helped my ramp-up. Sometimes I’d be like, “Why have we still not built this?” People would say, “Oh, somebody wrote a doc about that two months ago.” And I’m like, “Well, did we do anything about it?” My whole decision-making piece is that I want us to get to truth faster. None of us individually knows what’s right, and getting the truth could be derisking the technical side by building a technical prototype.

If it’s on the product side, let’s get it into somebody’s hands. Figma mock-ups are great, but how’s it going to move on the screen? Minimizing time to iteration and time to hypothesis testing is my fundamental decision-making philosophy. I’ve tried to install more of that here on the product side. Again, it’s a thoughtful, very deliberate culture. I don’t want to lose that, but I do want there to be more of these hypothesis-testing and validation components. I think people feel that when they’re like, “Oh, we had been debating this for a while, but we actually built it, and it turns out neither of us was right, and actually, there’s a third direction that’s more correct.” At Instagram, we ran the gamut of strategy frameworks. The one that consistently resonated the most with me is Playing to Win.

I go back to that often, and I’ve instilled some of that here as we start thinking about what our winning aspiration is. What are we going after? And then, more specifically, and we touched on this in our conversation today, where will we play? We’re not the biggest team in size. We’re not the biggest chat UI by usage. We’re not the biggest AI model by usage, either. We’ve got a lot of interesting players in this space. We have to be thoughtful about where we play and where we invest. Then, this morning, I had a meeting where the first 30 minutes were people being in pain due to a strategy. The cliche is strategy should be painful, and people forget the second part of that, which is that you’ll feel pain when the strategy creates some tradeoffs.

What was the tradeoff, and what was the pain?

Without getting too much into the technical details about the next generation of models and what particular optimizations we’re making, the tradeoff was that it will make one thing really good and another thing just okay or pretty good. The thing that’s really good is a big bet, and it’s going to be really exciting. Everybody’s like, “Yeah.” And then they’re like, “But…” And then they’re like, “Yeah.” I’m actually having us write a little mini document that we can all sign, where it’s like, “We are making this tradeoff. This is the implication. This is how we’ll know we’re right or wrong, and here’s how we’re going to revisit this decision.” I want us all to at least sign it in Google Docs and be like, this is our joint commitment, or else you end up with the next week of, “But…” It’s [a commitment to] revisit, so it’s not even “disagree and commit.”

It’s like, “Feel the pain. Understand it. Don’t go blindly into it forever.” I am a big believer in that when it comes to hard decisions, even decisions that could feel like two-way doors. The problem with two-way doors is it’s tempting to keep walking back and forth between them, so you have to walk through the door and say, “The earliest I would be willing to go back the other way is two months from now or with this particular piece of information.” Hopefully that quiets the internal critic of, “Well, it’s a two-way door. I’m always going to want to go back there.”

This brings me to a question that I’ve been dying to ask. You’re talking about next-generation models. You’re new to Anthropic. You’re building products on top of these models. I am not convinced that LLMs as a technology can do all the things people are saying they will do. But my personal p(doom) is that I don’t know how you get from here to there. I don’t know how you get from LLM to AGI. I see it being good at language. I don’t see it being good at thinking. Do you think LLMs can do all the things people want them to do?

I think, with the current generation, yes in some areas and no in others. Maybe what makes me an interesting product person here is that I really believe in our researchers, but my default belief is everything takes longer in life and in general and in research and in engineering than we think it will. I do this mental exercise with the team, which is, if our research team got Rip Van Winkled and all fell asleep for five years, I still think we’d have five years of product roadmap. We’d be terrible at our jobs if we can’t think of all the things that even our current models could do in terms of improving work, accelerating coding, making things easier, coordinating work, and even intermediating disputes between people, which I think is a funny LLM use case that we’ve seen play out internally around like, “These two people have this belief. Help us ask each other the right questions to get to that place.”

It’s a good sounding board as well. There’s a lot in there that is embedded in the current models. I would agree with you that the big open question, to me, is basically longer-horizon tasks. What is the horizon of independence that you can and are willing to give the model? The metaphor I’ve been using is, right now, LLM chat is very much a back-and-forth, because you have to correct and iterate: “No, that’s not quite what I meant. I meant this.” A good litmus test for me is, when can I email Claude and generally expect that an hour later it won’t just give me the answer it would’ve given me in the chat — that would be a failure — but will have done more interesting things: gone and found things out, iterated on them, even self-critiqued, and then responded.

I don’t think we’re that far from that for some domains. We’re far in some others, especially those that involve longer-range planning or thinking or research. But I use that as my capabilities piece. It’s less about parameter size or a particular eval. To me, again, it comes back to “what problem are you solving?” Right now, I joke with our team that Claude is a very intelligent amnesiac. Every time you start a new conversation, it’s like, “Wait, who are you again? What am I here for? What did we work on before?” Instead, it’s like, “All right, can we carry continuity? Can we have it plan and execute on longer horizons, and can you start trusting it to take on more things?” There are things I do every day where I spend an hour on stuff I really wish I didn’t have to do — it’s not a particularly leveraged use of my time — but I don’t think Claude could quite do it right now without a lot of scaffolding.

Here’s maybe a more succinct way to put a bow on it. Right now, the scaffolding needed to get it to execute more complex tasks doesn’t always feel worth the tradeoff, because you probably could have done it yourself. There’s an xkcd comic on time spent automating something versus time you actually save by doing so. That tradeoff sits at a different point on the AI curve, and I think that’s the bet: can we shorten that time to value so that you can trust it to do more of those things that nobody really gets excited about — coalescing all the planning documents my product teams are working on into one document, writing the meta-narrative, and circulating it to these three people? Like, man, I don’t want to do that today. I have to do it today, but I don’t want to do it today.

Well, let me ask you in a more numeric way. I’m looking at some numbers here. Anthropic has taken more than $7 billion of funding over the last year. You’re one of the few people in the world who’s ever built a product that has delivered a return on $7 billion worth of funding at scale. You can probably imagine some products that might return on that investment. Can the LLMs you have today build those products?

I think that’s an interesting way of asking that because the way I think about it is that the LLMs today deliver value, but they also help our ability to go build a thing that delivers that value. 

Let me ask you a threshold question. What are those products that can deliver that much value?

To me, right now, Claude is an assistant. A helpful sidekick is the phrase I heard internally at some point. At what point is it a coworker? Because the amount of joint work that can happen, even in a growing economy, with that kind of assistance is, I think, very, very large. I think a lot about this. We have Claude for Work. Claude for Work right now is almost a tool for thought. You can put in documents, you can sync things and have conversations, and people find value. Somebody built a small fission reactor or something that was on Twitter, not using Claude, but Claude was their tool for thought. The question is at what point it becomes an entity that you actually trust to execute autonomous work within the company. It sounds like a fanciful idea, but I actually think the delivery of that product is way less sexy than people think.

It’s about permission management, it’s about identity, it’s about coordination, it’s about the remediation of issues. It’s all the stuff that you actually do in training a good person to be good at their job. That, to me, even within a particular discipline — some coding tasks, some tasks that involve coalescing information or researching — is like getting the incremental person on your team. Even if they’re not net-plus-one productive but net 0.25, maybe there are a few of them, and they’re coordinated. I get very excited about the economic potential for that and for growing the economy.

And that’s all what, $20 a month? The enterprise subscription product.

I think the price point for that is much higher if you’re delivering that kind of value. But I was debating with somebody about what Snowflake, Databricks, Datadog, and others have shown: usage-based billing is the new hotness. First we had subscription billing; now we have usage-based billing. The thing I would like to get us to (it’s hard to quantify today, although maybe we’ll get there) is real value-based billing: what did you actually accomplish with this? People will ping us because a common complaint I hear is that they hit our rate limits, and they’re like, “I want more Claude.”

I saw somebody who was like, “Well, I have two Claudes. I have two different browser windows.” I’m like, “God, we have to do a better job here.” But the reason they’re willing to do that is that they write in and they say, “Look, I’m working on a brief for a client. They’re paying me X amount of money. I would happily pay another $100 to finish the thing so I can deliver it on time and move on to the next one.”

That, to me, is an early sign of where we fit, where we can provide value that goes even beyond a $20 subscription. This is early product thinking, but these are the things I get excited about. When I think about deployed Claudes, being able to see what value you are delivering, and really aligning on that over time, creates a strong alignment of incentives around delivering that product. I think that’s an area we can get to over time.

I’m going to bring this all the way back around. We started by talking about distribution and whether things can get so tailored to their distribution that they don’t work in other contexts. I look around and see Google distributing Gemini on its phones. I look at Apple distributing Apple Intelligence on its phones. They’ve talked about maybe having some model interchangeability in there; right now it’s OpenAI, but maybe Gemini or Claude will be there. That feels like the big distribution. They’re just going to take it, and these are the experiences people will have unless they pay money to someone else.

In the history of computing, the free thing that comes with your operating system tends to be very successful. How are you thinking about that problem? Because I don’t think OpenAI is getting any money to be in Apple Intelligence. I think Apple just thinks some people will convert for $20 and they’re Apple and that’s going to be as good as it gets. How are you thinking about this problem? How are you thinking about widening that distribution, not optimizing for other people’s ideas?

I love this question. I get asked this all the time, even internally: should we be pushing harder into an on-device experience? I agree it’s going to be hard to supersede the built-in model provider. Even if our model might be better at a particular use case, there’s a utility thing. I get more excited about whether we can be better at being close to your work. Work products have a much better history of competing with the built-in default. Plenty of people do their work in Pages, I hear. But there’s still real value in a Google Docs, or even a Notion and others that can go deep on a particular take on that productivity piece. It’s why I lean us heavier into helping people get things done.

Some of that will be mobile, but maybe as a companion, delivering value that is almost independent of needing to be exactly integrated into the desktop. As an independent company trying to be that first call, that Siri, I’ve heard the pitch from startups even before I joined here. “We’re going to do that. We’re going to be so much better, and the new Action Button means that you can bring it up and then press a button.” I’m like, no. The default really matters there. Instagram never tried to replace the camera; we just tried to make a really good thing out of what you could do once you decided you wanted to do something novel with that photo. And then, sure, people took photos in there, but by the end, it was like 85 percent library, 15 percent camera. There’s real value to the thing that just requires the one click.

Every WWDC that would come around, pre-Instagram, I loved watching those announcements. I was like, “What are they going to announce?” And then you get to the point where you realize they’re going to be really good at some things. Google’s going to be great at some things. Apple’s going to be great at some things. You have to find the places where you can differentiate, whether in a cross-platform way, in a depth-of-experience way, or in a novel take on how work gets done, or be willing to do the kind of work that some companies are less excited to do because at the beginning it might not seem super scalable, like tailoring things.

Are there $7 billion worth of scalable consumer products that don’t rely on being built into your phone? I mean in AI specifically: AI products that can capture that much market without being built into the operating system on a phone.

I have to believe yes. I mean, I open up the App Store and ChatGPT is regularly second. I don’t know what their numbers look like in terms of that business, but I think it’s pretty healthy right now. But long term, I optimistically believe yes. Let’s conflate mobile and consumer for a second, which is not a super fair conflation, but I’m going to go with it. So much of our lives still happens there that whether it’s within LLMs plus recommendations, or LLMs plus shopping, or LLMs plus dating, I have to believe that at least a heavy AI component can be in a $7 billion-plus business, but not one where you are trying to effectively be Siri plus plus. I think that’s a hard place to be.

I feel like I need to disclose this: like every other media company, Vox Media has taken the money from OpenAI. I have nothing to do with this deal. I’m just letting people know. But OpenAI’s answer to this appears to be search. If you can claw off some percentage of Google, you’ve got a pretty good business. Satya Nadella told me as much about Bing when they launched the ChatGPT-powered Bing: any half a percent of Google is a huge boost to Bing. Would you build a search product like that? We’ve talked about recommendations a lot. The line between recommendations and search is right there.

It’s not on my mind for any kind of near-term thing. I’m very curious to see it. I haven’t gotten access to it, probably for good reason, although I know Kevin Weil pretty well. I should just call him and be like, “Yo, put me on the beta.” I haven’t gotten to play with it. But that space of the Perplexitys and SearchGPT ties back to the very beginning of our conversation, which is search engines in the world of summarization and citations but probably fewer clicks. How does that all tie together and connect? It’s less core, I would say, to what we’re trying to do.

It sounds like right now the focus is on work. You described a lot of work products that you’re thinking about, maybe not so much on consumers. I would say the danger in the enterprise is that it’s bad if your enterprise software is hallucinating. Just broadly, it seems risky. It seems like those folks might be more inclined to sue if you send some business haywire because the software is hallucinating. Is this something you can solve? I’ve had a lot of people tell me that LLMs are always hallucinating, that we’re just controlling the hallucinations, and that I should stop asking people if they can stop hallucinating because the question doesn’t make any sense. Is that how you’re thinking about it? Can you control it so that you can build reliable enterprise products?

I think we have a really good shot there. The two places this came up most recently were, one, our current LLMs will oftentimes try to do math. Sometimes they actually are, especially given the architecture, impressively good at math. But not always, especially when it comes to higher-order things, or even things like counting letters and words. I think you could eventually get there. One tweak we’ve made recently is just helping Claude, at least on Claude AI, recognize when it’s in that situation and explain its shortcomings. Is it perfect? No, but it’s significantly improved that particular thing. This came directly from an enterprise customer that said, “Hey, I was trying to do some CSV parsing. I’d rather you give me the Python to go analyze the CSV than try to do it yourself, because I don’t trust that you’re going to do it right yourself.”

On the data analysis and code interpretation front, I think it’s a combination of having the tools available and then really emphasizing the times when it might not make sense to use them. LLMs are very smart. Sorry, humans. I still use calculators all the time. In fact, over time I feel like I get worse at mental math and rely on them even more. I think there’s a lot of value in giving the model tools and teaching it to use them, which is a lot of what the research team focuses on.
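As a toy illustration of that calculator point (my own sketch, not Anthropic’s actual tooling): rather than having a model eyeball a column of numbers, you hand it a deterministic function to run. The names here are hypothetical.

```python
import csv
import io
from statistics import mean

def column_average(csv_text: str, column: str) -> float:
    """Average a numeric CSV column deterministically -- the kind of
    calculation better delegated to code than estimated by a model."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return mean(float(row[column]) for row in reader)

# Example: averaging a "price" column from a small inline CSV.
data = "item,price\nwidget,2.50\ngadget,4.00\ngizmo,3.50\n"
print(column_average(data, "price"))  # prints 3.3333333333333335
```

The model’s job shifts from doing the arithmetic to deciding when to call a tool like this, which is exactly the tradeoff described above.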

The joke I make about the CSV version is: yeah, I can eyeball a column of numbers and give you my average, but it’s probably not going to be perfectly right, so I’d rather use the average function. So that’s the data front. On the citations front, an app that has done this well recently is Dr. Becky’s (she’s a parenting guru with a new app out). I like playing with chat apps, so I really try to push them. I pushed this one hard to hallucinate or talk about something it wasn’t familiar with. I have to go talk to the makers, actually ping them on Twitter, because they did a great job. If it’s not super confident that the information is in its retrieval window, it will just refuse to answer. It won’t confabulate; it won’t go there.

I think that is an answer as well: the combination of model intelligence plus data, plus the right prompting and retrieval, so that it doesn’t answer unless there actually is something grounded in the context window. All of that helps tremendously on the hallucination front. Does it cure it? Probably not, but I would say that all of us make mistakes. Hopefully they’re predictably shaped mistakes, so you can be like, “Oh, danger zone. It’s talking outside of its area there.” There’s even the idea of having some almost syntax highlighting for, “This is grounded in my context. This is from my model knowledge. This is out of distribution.” Maybe there’s something there.

This all just adds up to my feeling that prompt engineering and then teaching a model to behave itself feels nondeterministic in a way. The future of computing is this misbehaving toddler, and we have to contain it and then we’ll be able to talk to our computers like real people and they’ll be able to talk to us like real people. That seems wild to me. I read the system prompts, and I’m like, this is how we’re going to do it? Apple’s system prompt is, “Do not hallucinate.”

I love that.

It’s like, “This is how we’re doing it?” Does that feel right to you? Does that feel like a stable foundation for the future of computing?

It’s a huge adjustment. I’m an engineer at heart. I like determinism in general. We had an insane issue at Instagram that we eventually tracked down to using non-ECC RAM: literal cosmic rays were flipping bits in RAM. When you get to that stuff, you’re like, “I want to be able to rely on my hardware.”

There was actually a moment, maybe about four weeks into this role, where I was like, “Okay, I can see the perils and potentials.” We were building a system in collaboration with a customer, and we talked about tool use, what the model has access to. We had made two tools available to the model in this case. One was a to-do list app that it could write to. And one was a reminder, a sort of short-term or timer-type thing. The to-do list system was down, and it’s like, “Oh man, I tried to use the to-do list. I couldn’t do it. You know what I’m going to do? I’m going to set a timer for when you meant to be reminded about this task.” And it set an absurd timer. It was a 48-hour timer. You’d never do that on your phone. It would be ridiculous. 

But it, to me, showed that nondeterminism also leads to creativity. That creativity in the face of uncertainty is ultimately how I think we are going to be able to solve these higher-order, more interesting problems. That was a moment when I was like, “It’s nondeterministic, but I love it. It’s nondeterministic, but I can put it in these odd situations and it will do its best to recover or act in the face of uncertainty.”

Whereas with any other sort of heuristic-based system, if I had written that logic, I probably would never have thought of that particular workaround. But it did, and it did it in a pretty creative way. I can’t say it sits totally easily with me, because I still like determinism and predictability in systems, and we seek predictability where we can find it. But I’ve also seen the value of how, within that constraint, with the right tools and the right infrastructure around it, it can be more robust to the inherent messiness of the real world.

You’re building out the product infrastructure. You’re obviously thinking a lot about the big products and how you might build them. What should people be looking for from Anthropic? What’s the major point of product emphasis?

On the Claude side, between the time we talk and the show airs, we’re launching Claude for Enterprise, so this is our push into going deeper. On the surface, it’s a bunch of unexciting acronyms like SSO and SCIM and data management and audit logs. But the importance of that is that you start getting to push into really deep use cases, and we’re building data integrations that make that useful as well, so there’s that whole component. We didn’t talk as much about the API side, although I think of that as an equally important product as anything else that we’re working on. On that side, the big push is how we get lots of data into the models. The models are ultimately smart, but I think they’re not that useful without good data that’s tied to the use case.

How do we get a lot of data in there and make that really quick? We launched explicit prompt caching last week, which basically lets you take a very large data store, put it in the context window, and retrieve it 10 times faster than before. Look for those kinds of ways in which the models can be brought closer to people’s actual interesting data. Again, this always ties back to Artifact: how can you get personalized, useful answers in the moment, at speed and at low cost? I think a lot about how good product design pushes extremes in some direction. This is the “lots of data” extreme combined with the latency extreme: see what happens when you combine those two axes. And that’s the thing we’ll continue pushing for the rest of the year.
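For readers curious what prompt caching looks like in practice, here is a minimal sketch of a request payload in the shape of Anthropic’s prompt-caching beta as publicly documented around this time. Treat the exact field names, the model string, and the placeholder document as assumptions and check the current API docs before relying on them.

```python
# Sketch of a prompt-cached request. The large, reusable context block is
# marked with cache_control so subsequent requests can reuse the cached
# prefix instead of reprocessing it, which is where the latency win comes
# from. Field names follow the prompt-caching beta; treat them as assumptions.

LARGE_REFERENCE_DOC = "..."  # e.g. a manual, codebase, or knowledge base

def build_request(question: str) -> dict:
    return {
        "model": "claude-3-5-sonnet-20240620",
        "max_tokens": 1024,
        "system": [
            {"type": "text",
             "text": "You answer questions about the attached document."},
            {
                "type": "text",
                "text": LARGE_REFERENCE_DOC,
                # Marks this block as cacheable across requests.
                "cache_control": {"type": "ephemeral"},
            },
        ],
        "messages": [{"role": "user", "content": question}],
    }

req = build_request("Summarize section 3.")
# With the real SDK you would pass this to client.messages.create(**req)
# under the prompt-caching beta header.
```

The design point is that the expensive part of the prompt (the big document) stays fixed across calls, while only the short user question varies.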

Well, Mike, this has been great. I could talk to you forever about this stuff. Thank you so much for joining Decoder.

It was great to be here.

Decoder with Nilay Patel /

A podcast from The Verge about big ideas and other problems.

