The editorial argues this isn't a conventional venture investment — Google is essentially funding Anthropic to buy more Google products (TPU capacity from Google and Broadcom). The money flows in a circle: investment dollars go to Anthropic, which then spends them on Google Cloud infrastructure, making it a self-reinforcing capital loop.
The editorial highlights that Google gets a strategic hedge against Gemini failing to compete, citing a Google Chrome engineer who said on Hacker News that 'nobody in our team is using Gemini over Claude.' By investing in Anthropic, Google gains influence over a potential competitor without triggering antitrust scrutiny that a full acquisition would invite.
The editorial notes that Anthropic signed 'somewhat adverse contracts' with both Amazon and Google in quick succession, suggesting the company was approaching serious capacity constraints. Anthropic needs massive compute to stay competitive at the frontier, and this deal — along with its multi-gigawatt TPU purchase — signals desperation for infrastructure at any cost.
Bloomberg's reporting frames the $40 billion figure as the largest single investment in an AI company to date, dwarfing Amazon's ~$8B in Anthropic and Microsoft's $13B in OpenAI. The conditional structure — $10B firm with $30B contingent on milestones — underscores both the ambition and uncertainty at this scale.
The editorial emphasizes that AI investment numbers have become so large they're abstract, noting that $40B exceeds most countries' entire defense budgets. This signals a new phase in the AI arms race where the capital requirements to remain competitive at the frontier are extraordinary.
Google is investing $10 billion in Anthropic PBC, with an additional $30 billion contingent on future milestones — a deal that would total $40 billion and represent the largest single investment in an AI company to date. The news, reported by Bloomberg on April 24, extends a relationship that has grown steadily more entangled over the past two years.
This isn't a conventional venture investment — it's a hardware-compute-capital loop where the money flows in a circle. Just weeks ago, Anthropic signed a deal to purchase "multiple gigawatts of next-generation TPU capacity" from Google and Broadcom. Previous rounds already tied Anthropic to Google Cloud infrastructure. Now Google is essentially funding Anthropic to buy more Google products, while Anthropic gets the compute it desperately needs to stay competitive at the frontier.
The deal also comes shortly after Anthropic signed what community observers describe as "somewhat adverse contracts" with both Amazon and Google in quick succession — suggesting the company was approaching serious capacity constraints.
The AI investment landscape has entered a phase where the numbers are so large they've become abstract. Amazon has previously committed roughly $8 billion to Anthropic. Microsoft has invested $13 billion in OpenAI. But $40 billion — even with $30 billion of it conditional — is a different order of magnitude. At $40B, Google would be investing more in Anthropic than most countries spend on their entire defense budgets.
The strategic logic is layered. Google gets several things simultaneously: a hedge against Gemini underperforming (and the internal signals aren't great — as one Google Chrome engineer put it bluntly on Hacker News, "nobody in our team is using Gemini over Claude"), a massive customer for its TPU hardware business, and influence over a potential competitor without triggering the antitrust scrutiny that a full acquisition would invite.
Anthropic gets what it needs most: compute. The company's recent behavior — signing deals with both Amazon and Google in rapid succession, the sudden improvement in model quality that community members have noticed — suggests it was hitting a wall. As one HN commenter observed, "the subtext of the last few weeks is that Anthropic was becoming severely capacity constrained. They seem to have had to sign two somewhat adverse contracts with Amazon and Google in short succession. Suddenly model quality is back up again."
But the "circular deal" criticism is hard to dismiss entirely. Google invests in Anthropic. Anthropic buys TPUs from Google. Google books the TPU revenue. The cash goes around. The more charitable interpretation: this is a joint venture structured as an investment, where Google provides silicon and Anthropic provides research talent, and the corporate structure lets both maintain independence. The less charitable interpretation: it's financial engineering that inflates both companies' metrics.
There's a deeper tension beneath the headline numbers. The AI industry's own insiders have been warning that foundation models are commoditizing. Google's own leaked memo — "We Have No Moat, and Neither Does OpenAI" — named this explicitly. If frontier models converge in capability, as many practitioners believe they will, then pouring $40 billion into one model maker looks less like a bet on technology and more like a bet on distribution and ecosystem lock-in.
This is where Anthropic's B2B business model becomes relevant. Unlike OpenAI, which has pursued consumer products aggressively, Anthropic has focused on enterprise API access and developer tools. Anthropic's real asset isn't just Claude — it's the developer relationships and enterprise contracts that would survive even if model quality parity arrives. The investment may be less about funding research breakthroughs and more about ensuring that when enterprises standardize their AI stack, they standardize on something that runs on Google infrastructure.
The market seems to agree. As one observer noted, the market is "full Wile E. Coyote on frontier model makers" — running on momentum without looking down. The question isn't whether these companies can build impressive models. It's whether the economics of building them can ever produce returns proportional to the capital consumed.
If you're building on Claude's API today, the practical implications are mostly positive. Anthropic isn't running out of money anytime soon. Capacity constraints that may have affected API availability and model quality appear to be easing. The Google relationship means TPU access, which means training and inference capacity.
But the entanglement cuts both ways. If you're running Claude workloads on AWS (via Bedrock), you should be aware that Anthropic now has deep financial relationships with both AWS and Google Cloud — and these relationships come with infrastructure commitments that may eventually influence pricing, availability, or feature parity across clouds. The days of Anthropic as a purely neutral API provider are over, if they ever existed.
For teams evaluating AI providers: this is another data point suggesting you should architect for model portability. The corporate relationships between model providers and cloud platforms are becoming so complex that today's preferred integration path could become tomorrow's awkward dependency. Abstract your AI calls behind an interface. Test against multiple providers. The switching costs you avoid now will matter when the next round of deals reshuffles the deck.
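One minimal way to sketch that abstraction (illustrative only — `ChatProvider`, `EchoProvider`, and `summarize` are hypothetical names, not any vendor's SDK; a real adapter would wrap the Anthropic, Bedrock, or Vertex client behind the same interface):

```python
from dataclasses import dataclass
from typing import Protocol


class ChatProvider(Protocol):
    """The minimal surface your application depends on."""
    def complete(self, prompt: str) -> str: ...


@dataclass
class EchoProvider:
    """Stand-in provider for local testing. Swapping in a real
    vendor means writing one adapter with this same `complete`
    method, not touching application call sites."""
    prefix: str = "echo: "

    def complete(self, prompt: str) -> str:
        return self.prefix + prompt


def summarize(provider: ChatProvider, text: str) -> str:
    # Application code sees only ChatProvider, so the switching
    # cost of a provider change is confined to one adapter class.
    return provider.complete(f"Summarize: {text}")


print(summarize(EchoProvider(), "quarterly report"))
```

Because `ChatProvider` is a structural `Protocol`, adapters don't even need to inherit from it — any class with a matching `complete` method satisfies the interface, which keeps vendor SDKs out of your core code entirely.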
The $40 billion figure — even with its contingencies — marks the moment AI investment crossed from "large technology bets" into "infrastructure-scale capital deployment" territory. Google is treating Anthropic less like a startup investment and more like a component of its cloud hardware business. Anthropic is treating Google less like an investor and more like a utility provider. The result is something that doesn't map neatly onto traditional categories: not quite an acquisition, not quite a partnership, not quite a customer relationship, but all three simultaneously. For developers, the signal is clear: the AI layer of your stack is now backed by the same depth of capital that built the cloud platforms themselves. Whether that capital will produce proportional value — or whether we're watching the most expensive game of musical chairs in technology history — remains the $40 billion question.
Context: a few weeks ago, Anthropic signed a deal to buy "multiple gigawatts of next-generation TPU capacity" from Google and Broadcom [1]. There have been several previous deals, too. Some people call this sort of thing a "circular deal", but perhaps a better way to think of it i…
I think the subtext of the last few weeks is that Anthropic was becoming severely capacity constrained (or approaching that). They seem to have had to sign two somewhat adverse contracts with Amazon and Google in short succession. Suddenly model quality is back up again.
It feels like the market is full Wile E. Coyote on frontier model makers, and I like Anthropic's B2B business model. But all progress points to a commodification of foundation models -- Google first named it as "we have no moat, neither does anyone else." So there must be some secondary pl…
It feels like Anthropic is everybody's insurance policy against someone else winning the AI race. So you have Amazon, Google, Microsoft -- basically every major tech company -- pushing their own tech hard but simultaneously ensuring they have a survival-level stake in Anthropic if they can't bui…
https://archive.ph/u274V