The editorial argues the pattern is unmistakable: Google's investment dollars flow to Anthropic, and a substantial portion flows right back as TPU revenue. Google is effectively subsidizing its own cloud hardware sales and calling it a venture investment, with the recent TPU capacity deal serving as direct evidence.
Bloomberg's reporting highlights that Google's $10B upfront commitment (with $30B more to follow) deepens a relationship in which the two companies are simultaneously partners and rivals, and implicitly acknowledges how tightly the investment is coupled to Anthropic's cloud spending.
The editorial surfaces a more charitable framing from Hacker News discussion: rather than a cynical circle, Google is acting like a vendor-financier — fronting capital to lock in a huge cloud customer while retaining equity upside if Anthropic succeeds. This reframes the deal as rational industrial strategy rather than accounting gimmickry.
The editorial notes Amazon has made similar tied investments in Anthropic linked to AWS usage commitments, but argues the $40B potential total dwarfs Amazon's reported $8B and represents a new tier of infrastructure-scale capital. The sheer magnitude makes Anthropic the most heavily funded AI startup and raises questions about whether these arrangements distort how cloud revenue is reported.
Bloomberg frames the deal around the paradox that Google is investing billions in a company whose flagship product Claude competes directly with Google's own Gemini models. This positions the investment as reflecting the AI arms race's strange bedfellows dynamic, where competitive threats are less important than securing infrastructure dominance.
Google announced plans to invest up to $40 billion in Anthropic PBC, with $10 billion committed upfront and another $30 billion potentially to follow. The deal deepens a relationship that is simultaneously a partnership and a rivalry — Google is funding a company whose flagship product, Claude, competes directly with Google's own Gemini models.
The timing matters. Just weeks earlier, Anthropic signed a deal to purchase "multiple gigawatts of next-generation TPU capacity" from Google and Broadcom. This isn't the first such arrangement; previous rounds of Google investment in Anthropic have been closely paired with Anthropic's cloud compute commitments. The pattern is unmistakable: Google's investment dollars flow to Anthropic, and a substantial portion flows right back as TPU revenue.
The deal values Anthropic at a level that reflects the current frenzy around frontier AI labs, though Bloomberg's reporting leaves the exact valuation unstated. What is clear is that this puts Google's total commitment to Anthropic in a league that dwarfs typical venture rounds — this is infrastructure-scale capital.
### The circular economics are the real story
Some observers have labeled these arrangements "circular deals," and the shoe fits. Google invests billions in Anthropic. Anthropic turns around and spends billions on Google Cloud TPU capacity. Google books that as cloud revenue. One Hacker News commenter put it more charitably: think of it less as a circle and more as Google financing a massive TPU customer while taking equity upside. Either way, Google is effectively subsidizing its own cloud hardware sales and calling it a venture investment.
This isn't unique to Google — Amazon has played the same game with its own multi-billion-dollar Anthropic investments tied to AWS usage commitments. But the scale here is new. A potential $40B total investment dwarfs Amazon's reported $8B and makes Anthropic the most heavily funded AI startup in history by a wide margin.
### Anthropic was capacity-constrained, and it showed
The community discussion around this deal surfaced a detail that practitioners will find more immediately relevant than the dollar figures. Multiple commenters noted that Anthropic appeared to be hitting serious capacity constraints in recent weeks. The subtext of both the Amazon and Google deals, signed in rapid succession, is that Anthropic needed compute badly enough to accept terms that several observers characterized as "somewhat adverse."
This tracks with widespread user reports of degraded Claude model quality over the same period — slower responses, more refusals, outputs that felt like a smaller model was handling the load. If the capacity thesis is correct, the TPU deal should relieve those constraints, and the timing correlates with reports that model quality has recently bounced back.
### Google is hedging against its own AI team
Perhaps the most telling data point came from a self-identified Google Chrome engineer on Hacker News: "I work at Google for Chrome, I can assure you nobody in our team is using Gemini over Claude." This is one engineer's team, not a company-wide survey, but it echoes a pattern that's been visible for months. Google employees, particularly on developer tooling and engineering teams, have been gravitating toward Claude despite having free, integrated access to Gemini.
This creates a strategic calculus that is almost paradoxical. Google is simultaneously:
1. Spending billions developing Gemini in-house
2. Investing $40B in Gemini's most capable competitor
3. Selling that competitor the compute infrastructure to stay competitive
4. Watching its own engineers prefer the competitor's product
The rational explanation is that Google has decided the "who wins the model race" question matters less than "who captures the compute revenue." Whether the winning model is Gemini or Claude, Google wants it running on TPUs.
### Capacity improvements should be tangible
If you've been experiencing degraded Claude performance — rate limits, quality drops, longer latencies — the multi-gigawatt TPU deal should translate into real capacity improvements over the coming months. Don't expect overnight changes; standing up new TPU pods takes time. But the trajectory is toward more available compute, which means more consistent model quality and higher rate limits for API customers.
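In the meantime, the standard client-side mitigation for rate limits is exponential backoff with jitter. A minimal sketch, where `RateLimitError` is a hypothetical stand-in for whatever exception your client library raises on an HTTP 429:

```python
import random
import time


class RateLimitError(Exception):
    """Stand-in for the exception a client library raises on HTTP 429."""


def call_with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry a zero-argument model call, sleeping ~1s, 2s, 4s, ...
    (plus jitter) between attempts so retries don't arrive in lockstep."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error to the caller
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, base_delay))
```

The jitter term matters more than it looks: without it, a fleet of clients that got throttled together will all retry together, re-triggering the same limit.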
### The commodification question is still open
One commenter raised the point that all progress points toward commodification of foundation models, citing Google's own leaked "We have no moat" memo from 2023. If that thesis holds, Anthropic's $40B valuation is wildly overinflated. Anthropic's counter-bet is that their B2B positioning, safety research brand, and enterprise trust will create durable differentiation even as raw model capabilities converge. For teams making vendor choices today, the practical advice hasn't changed: build abstractions that let you swap model providers, because the landscape is moving too fast to marry a single vendor.
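One way to keep that option open is a thin provider-agnostic interface. Everything below (the `ChatModel` protocol, the backend class names) is illustrative, not any vendor's actual SDK:

```python
from typing import Protocol


class ChatModel(Protocol):
    """Provider-agnostic interface: the only surface app code touches."""
    def complete(self, prompt: str) -> str: ...


class ClaudeBackend:
    """Stub; a real implementation would wrap Anthropic's client."""
    def complete(self, prompt: str) -> str:
        return f"[claude] {prompt}"


class GeminiBackend:
    """Stub; a real implementation would wrap Google's client."""
    def complete(self, prompt: str) -> str:
        return f"[gemini] {prompt}"


_BACKENDS = {"claude": ClaudeBackend, "gemini": GeminiBackend}


def get_model(name: str) -> ChatModel:
    """Look up a backend by name, so swapping vendors is a config change."""
    return _BACKENDS[name]()
```

App code calls `get_model(settings.provider).complete(...)`; only the thin backend classes know vendor-specific details like prompt formats or token limits, so a provider switch never touches business logic.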
### Watch the lock-in dynamics
The circular investment structure creates subtle lock-in pressures. Anthropic's infrastructure is now deeply tied to both AWS (Amazon investment) and Google Cloud (this deal). If you're running Anthropic models via their API, your traffic is flowing through infrastructure controlled by companies that are both investors in and competitors to your model provider. For most teams, this is an abstraction layer you'll never notice. For teams with data sovereignty requirements or competitive concerns (say, you're building something that competes with Google or Amazon products), it's worth understanding the infrastructure dependency chain.
The AI investment arms race has reached a scale where the numbers are almost meaningless to parse in traditional venture terms. A $40B investment isn't a bet on a startup; it's an infrastructure financing arrangement dressed in venture capital clothing. The real question isn't whether Anthropic is worth $40B — it's whether Google's TPU business is worth subsidizing to this degree to keep a major customer locked into the platform. For Google, the math probably works regardless of whether Anthropic or Gemini wins the model race. For everyone else building on these models, the message is simpler: more compute is coming, capacity constraints should ease, and the best strategy remains staying portable.
https://archive.ph/u274V