Anthropic C.E.O.: Massive A.I. Spending Could Haunt Some Companies
Dario Amodei, C.E.O. and Co-Founder of Anthropic, speaks with Andrew Ross Sorkin, DealBook, NY Times Video
Timeline
0:00 – Intro
3:27 – AI bubble? (tech vs economics)
4:50 – Scaling laws / capability gains
5:37 – Revenue growth (enterprise)
7:03 – “Cone of uncertainty” / data-center lag
8:44 – Over/under-buy compute risks
12:35 – “Circular deals” / vendor financing
15:14 – Chip depreciation / obsolescence
17:01 – Model competition / enterprise focus
19:16 – Moats / switching costs (API)
21:08 – AGI trajectory (scaling)
22:58 – Chips to China / national security
25:17 – Surveillance / democratic constraints
27:06 – Regulation debate / “capture” claim
31:13 – Jobs / policy responses
35:31 – Closing
Summary
Andrew Ross Sorkin interviews Dario Amodei, the CEO of Anthropic (maker of Claude), to get at a question investors keep circling: if AI is as important as everyone says, why does the spending around it still feel like a classic boom cycle? Amodei’s core move is to split the topic into two different debates that often get muddled together. On the one hand, he says, the technology itself is on a fairly predictable trajectory. On the other hand, the economics—how companies pay for it, when revenues arrive, and who overextends—can still go very wrong even if the technology works exactly as promised.
On the technology side, Amodei is unusually confident. He says he and his colleagues were among the first to document “scaling laws,” which is basically the observation that if you keep feeding these systems more computing power and more data—and occasionally add small improvements—performance rises across a wide range of tasks. The key point for a non-technical reader is that this improvement isn’t limited to cute chat features. He argues models are getting better at the kinds of work that drive real economic value: coding, scientific research, biomedical tasks, law, finance, and other knowledge-heavy activities. That’s why he’s not surprised that AI has become central to business and markets so quickly. In his telling, the capability trend has been visible for years, and the recent surge is the moment that trend started showing up in revenue and mainstream adoption.
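For readers who want the intuition behind "scaling laws" in more concrete terms, here is a minimal sketch, with entirely made-up constants (not Anthropic's measurements), of the kind of power-law curve those results describe: each additional order of magnitude of training compute buys a roughly constant improvement.

```python
# Illustrative only: invented constants, not any lab's actual fit.
# Scaling-law papers typically report test loss as a power law in compute:
#   loss(C) ~= irreducible + a * C**(-alpha)
# so every 10x of compute yields a roughly constant step of improvement.

def predicted_loss(compute_flops: float,
                   a: float = 2.6,           # hypothetical fit coefficient
                   alpha: float = 0.05,      # hypothetical power-law exponent
                   irreducible: float = 1.7  # hypothetical loss floor
                   ) -> float:
    """Toy power-law curve in the shape scaling-law results describe."""
    return irreducible + a * compute_flops ** -alpha

for exponent in range(20, 27):  # 1e20 .. 1e26 FLOPs of training compute
    c = 10.0 ** exponent
    print(f"compute = 1e{exponent} FLOPs -> predicted loss ~= {predicted_loss(c):.3f}")
```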
Where he becomes more cautious is the money cycle that sits on top of that technology. Sorkin presses him directly: with so much capital pouring into chips and data centers—sums that are extraordinary even by tech standards—are we watching a bubble form? Amodei doesn’t say “yes, it’s a bubble” in a simple way, but he also doesn’t dismiss the risk. Instead, he describes a structural problem: AI companies have to make giant commitments before they can know exactly how big the market will be when those commitments come due.
He explains this using what he calls a “cone of uncertainty.” Imagine you’re running an AI company and your revenue has been exploding. You can try to forecast next year and the year after, but the range of plausible outcomes is still very wide—maybe demand is strong but not insane, maybe it’s even stronger than you expect, or maybe customers pause spending for a while. That would be normal uncertainty in any fast-changing industry. The problem is that AI requires long-lead-time infrastructure. Data centers and compute capacity don’t appear instantly; they can take a year or two to build and contract for. So executives have to decide now how much compute they’ll need in 2027, before they can see what 2027 demand actually looks like.
That creates a two-sided trap. If you underbuy compute, you can’t serve all the customers who want your product, and those customers will go to competitors. If you overbuy compute, you’re stuck paying for enormous capacity you may not fully use, which can crush margins and—at the extreme—threaten solvency. This is the investing risk he keeps coming back to: the technology can be real, demand can be real, and yet the financial structure can still produce a painful shakeout if too many players guess wrong at the same time.
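To make the trap concrete, here is a toy Monte Carlo sketch of the decision Amodei describes, using entirely hypothetical prices, costs, and demand figures: capacity must be committed before demand is known, underbuying forfeits revenue to competitors, and overbuying means paying for idle capacity.

```python
# Toy model of the compute-commitment trap; every number here is invented.
# Capacity is committed now; demand two years out is uncertain.
import random

random.seed(0)
PRICE_PER_UNIT = 3.0    # hypothetical revenue per unit of compute actually used
COST_PER_UNIT = 1.0     # hypothetical cost per unit of compute committed
EXPECTED_DEMAND = 100.0

def expected_profit(capacity: float, trials: int = 100_000) -> float:
    """Average profit when demand is lognormally distributed around the forecast."""
    total = 0.0
    for _ in range(trials):
        demand = EXPECTED_DEMAND * random.lognormvariate(0.0, 0.5)
        served = min(demand, capacity)  # underbuy: unmet demand goes to rivals
        total += PRICE_PER_UNIT * served - COST_PER_UNIT * capacity  # overbuy: idle capacity still costs
    return total / trials

for capacity in (60, 100, 140, 200, 300):
    print(f"committed capacity {capacity:>3} -> expected profit ~= {expected_profit(capacity):8.1f}")
```

Run with these assumptions, profit rises and then falls as committed capacity grows, which is the point: guessing far wrong in either direction is expensive, and the guess has to be made before the uncertainty resolves.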
To support the idea that real demand exists, Amodei cites Anthropic’s revenue growth as evidence that businesses are already paying for these tools at meaningful scale, especially on the enterprise side. But he’s careful to say that extrapolating early hypergrowth forever is naïve. The point isn’t “we’ll grow 10x forever,” but rather: the value is showing up in a way that makes continued investment rational—until it isn’t, and the timing matters.
That’s also the context for the “circular deals” discussion. Sorkin raises a concern investors have voiced: chipmakers and cloud giants invest in AI labs, and the labs then spend heavily on the chipmakers’ hardware and the cloud providers’ infrastructure—so it can look like everyone is financing everyone else’s growth. Amodei argues that in many cases this is less sinister than it sounds and closer to vendor financing: if building a large compute cluster costs tens of billions of dollars, a smaller AI company can’t pay for it all upfront, so a bigger partner funds part of it (often because the bigger partner has a direct incentive—selling chips, selling cloud services, or securing an important customer). Done in moderation, he says, it’s a bridge that helps align spending with future revenue. But he also acknowledges the danger: if companies stack these commitments so aggressively that they require implausible future revenues to justify them, the system becomes fragile.
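A back-of-the-envelope sketch, with invented figures, shows why stacked vendor-financed commitments can become fragile: the size of the buildout implies a level of future revenue needed to justify it, and stacking several such deals multiplies that requirement.

```python
# Back-of-the-envelope sketch with invented numbers: how a vendor-financed compute
# commitment translates into the future revenue needed to make it pencil out.

cluster_cost_bn = 30.0        # hypothetical cost of a large compute buildout, $bn
vendor_financed_share = 0.6   # hypothetical share fronted by a chip/cloud partner
amortization_years = 4        # hypothetical period over which the spend must pay off
required_gross_margin = 0.5   # hypothetical margin needed to cover other costs

upfront_cash_needed = cluster_cost_bn * (1 - vendor_financed_share)
revenue_needed_per_year = cluster_cost_bn / amortization_years / required_gross_margin

print(f"Cash the lab must raise upfront: ${upfront_cash_needed:.1f}bn")
print(f"Revenue needed per year to justify the cluster: ${revenue_needed_per_year:.1f}bn")
# Stacking several such commitments multiplies the revenue requirement; if the
# implied total exceeds any plausible market size, the structure becomes fragile.
```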
Sorkin then digs into a related point that matters a lot for the “AI capex trade”: how long do these chips stay economically valuable? Amodei says the chips don’t just “stop working.” The real issue is that new chips arrive that are faster and cheaper, and competitive pressure can make older chips less valuable sooner than people expect. In other words, even if hardware lasts physically, the economic depreciation can be fast because your rivals upgrade and you can’t afford to fall behind.
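The gap between physical and economic depreciation is easy to see with a small worked example, again with an assumed rate rather than any real figure: if new hardware improves price-performance by roughly 40 percent a year, an older chip that still runs perfectly loses most of its economic value within a few years.

```python
# Hypothetical rate: if new chips improve price-performance ~40% per year,
# an older chip's economic value falls fast even though it keeps running.

annual_price_perf_gain = 0.40  # assumed yearly improvement in new hardware
for age_years in range(0, 6):
    # What the old chip's output is worth relative to year 0, given that the
    # same output can be bought ever more cheaply on newer hardware.
    relative_value = 1.0 / (1.0 + annual_price_perf_gain) ** age_years
    print(f"year {age_years}: old chip worth ~{relative_value:.0%} of its original economic value")
```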
When the conversation shifts to competition, Amodei tries to differentiate Anthropic’s position. He says some rivals are fighting intense consumer battles—where distribution, habit, and mindshare can become winner-take-most—and that can create “code red” urgency. Anthropic, he claims, is more focused on enterprise customers, where what matters most is reliability, controlled performance, and fitting into business workflows. Here he makes an important, somewhat counterintuitive point for readers: even selling “just the model” can become sticky. Businesses integrate a model into products and processes, train employees around it, build prompts and tooling around its quirks, and depend on its output style. Once that happens, switching isn’t frictionless—even if the underlying product looks similar on paper. That’s his argument for why there can be durable business value here, not just a race where users constantly hop to whoever has the newest demo.
On the big “AGI” question—whether we’ll need some entirely new invention beyond today’s approach—Amodei stays consistent with his earlier bullishness. He doesn’t like treating AGI as a single finish line, and he describes progress as a continuing climb: models get better at more tasks each generation, and there may not be one dramatic “switch flip” moment where everything changes. His claim is that scaling plus occasional small improvements can continue pushing capability forward.
The interview then turns to national security and policy, where Amodei has been outspoken. He argues the United States should not sell the most advanced chips to China because advanced AI could become something like a “country of geniuses in a data center”—a strategic capability that affects intelligence, defense, research, and economic power. He frames this less as ordinary competition and more as a race with authoritarian states that could use such systems for surveillance and coercion. At the same time, he acknowledges that democracies can also abuse AI, and he offers a guiding principle: use AI aggressively for legitimate security needs, but avoid using it in ways that make democratic societies resemble authoritarian ones.
Regulation comes up because Amodei has been accused (by critics like David Sacks) of fear-mongering and trying to shape regulation to favor incumbents. His defense is that he’s been raising these issues since before Anthropic existed, and he argues the regulations he supports carve out exemptions for smaller players. He opposes the idea of putting regulation on ice for a decade, framing it as reckless given the pace and scale of what’s being built.
Finally, the conversation lands on jobs, which is where Amodei’s “warning” posture is most explicit. He says he raises the risk of job displacement not to be a doomer, but because acknowledging the risk is how you avoid walking into it blindly. He describes three layers of response. First, companies will inevitably automate some workflows end-to-end—meaning fewer people needed for certain tasks—but they can also use AI to create new products and new value, where humans become more productive rather than eliminated. Second, he expects government will have to play a role—retraining won’t solve everything, but policy will matter if productivity rises quickly and gains concentrate. Third, he suggests that over the long run society may have to rethink how central work is to life, echoing Keynes’s old idea that technology could reduce the need for long working hours if the benefits are broadly shared.
If you want the clean “market lens” takeaway: Amodei is saying the tech trend looks real and compounding, but the investment cycle is vulnerable to the classic problem of capital intensity plus uncertainty plus long lead times. That combination can produce overbuilding and a shakeout even in a world where AI is genuinely transformative.