Cloudflare Is Eating Itself: Why They're Rebuilding on Workers

3 min read 1 source explainer
├── "Genuine infrastructure dogfooding creates structural quality advantages that marketing-level dogfooding cannot"
│  ├── Cloudflare (Cloudflare Blog) → read

Cloudflare argues that rebuilding its own critical services — firewall rules, DNS resolution, AI gateway — on the same Workers runtime customers use creates a convergence where internal bugs are caught before they affect external users. The thesis is that if Cloudflare engineers can't build production services on Workers, external developers shouldn't be expected to either.

│  └── top10.dev editorial (top10.dev) → read below

The editorial highlights that this level of dogfooding is rare among platform companies — AWS doesn't run EC2's control plane on Lambda, Google doesn't serve Search from Cloud Run. When the vendor's own firewall runs on the same isolate runtime as customer workloads, it creates alignment incentives that are nearly impossible to fake.

├── "V8 isolates are viable as a general-purpose server runtime, not just an edge scripting tool"
│  └── Cloudflare (Cloudflare Blog) → read

By migrating stateful, latency-sensitive internal services onto Workers, Cloudflare is making an architectural bet that V8 isolates can handle workloads traditionally reserved for bespoke C/C++ and Go implementations. This goes far beyond the original 2017 Workers pitch of lightweight edge compute and positions isolates as a full server runtime.
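The programming model this bet rests on is deliberately minimal: a module exporting a `fetch` handler that runs in a V8 isolate per request, with no process to boot. A sketch of that shape — the route and rule logic here are illustrative, not from Cloudflare's post:

```javascript
// Minimal Cloudflare Workers-style handler object. In a real Worker this
// would be the module's default export (`export default worker`); the claim
// is that firewall rules or DNS logic can live behind this same interface.
const worker = {
  async fetch(request) {
    const url = new URL(request.url);
    if (url.pathname === "/health") {
      return new Response("ok", { status: 200 });
    }
    // Hypothetical rule check standing in for real request-processing logic.
    const blocked = request.headers.get("x-debug") === "1";
    return blocked
      ? new Response("forbidden", { status: 403 })
      : new Response(`hello from ${url.pathname}`, { status: 200 });
  },
};
```

Because the whole unit of deployment is an object with one async method, the runtime can spin up an isolate in microseconds rather than booting a container — which is what makes it plausible for latency-sensitive internal services.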

└── "Platform companies claiming to use their own tools have historically been disingenuous, making this announcement noteworthy"
  └── top10.dev editorial (top10.dev) → read below

The editorial notes that the strong HN response (1295 points) reflects practitioner skepticism built up over years of watching platform companies say "we use our own tools" while quietly running critical paths on entirely different infrastructure. Cloudflare's detailed technical disclosure of migrating real production services is what distinguishes this from typical marketing claims.

What happened

Cloudflare published "Building for the Future," a detailed look at how the company is restructuring its internal architecture around its own Workers platform. The post — which hit 1295 points on Hacker News — lays out a multi-year effort to migrate core Cloudflare services from bespoke C/C++ and Go implementations onto the same V8-isolate-based Workers runtime that external developers use.

This isn't a marketing exercise. Cloudflare is rebuilding its own products on the same platform it sells, making the internal and external developer experience converge. The move affects everything from their firewall rules engine to DNS resolution logic to the newer AI gateway products. The thesis: if Cloudflare's own engineers can't build production services on Workers, external developers shouldn't be expected to either.

The 1295-point HN discussion suggests this resonated with practitioners who've watched platform companies say "we use our own tools" while quietly running critical paths on entirely different infrastructure.

Why it matters

The "dogfooding at infrastructure scale" pattern is rare. AWS doesn't run EC2's control plane on Lambda. Google doesn't serve Search from Cloud Run. When a platform company genuinely moves its own critical path onto the developer-facing platform, it creates alignment incentives that are nearly impossible to fake.

When your firewall engine runs on the same isolate runtime as customer workloads, bugs in that runtime get caught internally before they hit customers. This is the structural advantage — not philosophical commitment to dogfooding, but a literal reduction in the surface area of divergence between what the vendor tests and what you deploy on.

The architectural bet is also a statement about V8 isolates as a general-purpose server runtime. Cloudflare has been pushing this thesis since Workers launched in 2017, but migrating their own stateful, latency-sensitive services onto it is a different order of confidence. It suggests they've solved — or believe they've solved — the cold start, memory isolation, and state management problems that have kept many teams on containers.
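Cloudflare's public answer to the state management problem is Durable Objects: a class whose single live instance owns its storage, so stateful logic serializes through one isolate instead of racing against an external database. A simplified sketch of that pattern — the `Counter` example is illustrative, and the `state.storage` interface is reduced to the `get`/`put` calls shown:

```javascript
// Sketch of the Durable Objects pattern: one addressable object instance
// owns its state, and all requests to it are processed in turn, so the
// read-modify-write below cannot race with another request to the same object.
class Counter {
  constructor(state) {
    this.state = state; // runtime-provided; exposes transactional `storage`
  }
  async fetch(request) {
    let value = (await this.state.storage.get("count")) ?? 0;
    value += 1;
    await this.state.storage.put("count", value);
    return new Response(String(value));
  }
}
```

The design choice worth noting: instead of making isolates share mutable state, the platform routes every request for a given key to the same object, trading horizontal fan-out for correctness on that key.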

The composability angle matters too. When every Cloudflare product is a Worker under the hood, they can expose internal APIs as building blocks. Your Workers code can call the same primitives that Cloudflare's own WAF uses. This is the platform play: not just "run your code at the edge" but "compose our entire product surface programmatically."
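Workers service bindings already express this composability pattern: one Worker invokes another through an `env` binding, over the same `fetch` interface, without touching the public internet. A sketch with a hypothetical `WAF_CHECK` binding — the binding name and verdict endpoint are invented for illustration, not Cloudflare's actual internal API:

```javascript
// Gateway Worker that delegates a policy decision to a bound service.
// `env.WAF_CHECK` is a hypothetical service binding name; bindings expose
// the bound Worker through the same request/response fetch interface.
const gateway = {
  async fetch(request, env) {
    // Ask the bound service for a verdict on this request's URL.
    const verdict = await env.WAF_CHECK.fetch(
      new Request(
        `https://waf.internal/evaluate?url=${encodeURIComponent(request.url)}`
      )
    );
    if (verdict.status !== 200) {
      return new Response("blocked", { status: 403 });
    }
    return new Response("allowed", { status: 200 });
  },
};
```

If Cloudflare's own products sit behind bindings like this, "compose our entire product surface programmatically" is a one-line call away from customer code.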

What this means for your stack

If you're building on Cloudflare Workers today: This is strong signal that the platform has legs. Companies don't migrate their own revenue-critical services onto infrastructure they plan to deprecate. The convergence also means performance improvements Cloudflare makes for internal services will directly benefit external Workers — a rising-tide dynamic that container-based platforms rarely achieve.

If you're evaluating edge compute platforms: The dogfooding story changes the competitive calculus. Deno Deploy, Fastly Compute, and Vercel Edge Functions don't have this structural advantage — their parent companies don't run their own core products on the same runtime. That said, convergence is also lock-in: the deeper Cloudflare products integrate with Workers internals, the harder it becomes to replicate that behavior elsewhere.

If you're an architect deciding between Workers and containers: The question isn't "can V8 isolates handle my workload" anymore — it's "can they handle Cloudflare's workload." If Cloudflare trusts isolates for DNS resolution and firewall rule evaluation at millions of requests per second, the performance ceiling argument against Workers is effectively dead. The remaining questions are about ecosystem (npm compatibility, native modules) and operational tooling (observability, debugging).

Looking ahead

Cloudflare's self-hosting bet is a multi-year migration, not a switch they can flip overnight. Expect gradual announcements as individual products move onto Workers, each one unlocking new composability primitives for external developers. The strategic question for the industry: does this force AWS and Google to do the same with Lambda and Cloud Functions, or do they continue running control planes on bespoke infrastructure while selling a different product to customers? The answer will shape whether "serverless" remains a deployment model or becomes the actual substrate of cloud infrastructure.

Hacker News 1295 pts 932 comments

Building for the Future

→ read on Hacker News
