Aphyr: AI Slop Is Creating a Parallel Economy of Verification Jobs

4 min read 1 source clear_take
├── "AI slop doesn't eliminate jobs — it creates a parallel economy of verification, curation, and cleanup"
│  └── Kyle Kingsbury (Aphyr) (aphyr.com) → read

Kingsbury argues that flooding every information channel with synthetic content of uncertain accuracy doesn't destroy jobs but reshapes them into verification roles. He traces this pattern across AI-generated code reviews requiring human re-review, synthetic customer support escalating to real humans, and AI blog posts that still need fact-checking. His thesis reframes 'AI will take your job' into 'AI will transform your job into checking AI's work.'

├── "AI's most dangerous failure mode is producing output that looks correct but isn't — plausible-looking wrongness is worse than obvious wrongness"
│  └── Kyle Kingsbury (Aphyr) (aphyr.com) → read

Drawing on his Jepsen experience exposing correctness bugs in databases that vendors claimed were consistent, Kingsbury applies the same forensic skepticism to AI output. He argues the core problem isn't that AI produces bad work — it's that AI produces bad work that passes surface-level inspection, hallucinating plausible API endpoints, fake citations, and tests that pass for the wrong reasons. This makes verification harder and more expensive than if the output were obviously wrong.

└── "The gap between AI capability demos and production reliability is at an all-time high"
  └── top10.dev editorial (top10.dev) → read below

The editorial synthesis contextualizes Kingsbury's argument within the broader moment: AI demos are more impressive than ever while production reliability lags further behind. The editorial notes that engineering organizations are already experiencing the job-shapeshifting Kingsbury describes, with roles transforming rather than disappearing as teams absorb the overhead of validating AI-generated artifacts.

What happened

Kyle Kingsbury — better known as Aphyr, the engineer behind the Jepsen distributed systems testing suite — published a characteristically sharp essay titled *"The Future of Everything Is Lies, I Guess: New Jobs."* The post landed on Hacker News with a score north of 190, striking a nerve with developers who've spent the last two years watching AI-generated content metastasize across every surface of their professional lives.

The thesis is deceptively simple: when you flood every information channel with synthetic content of uncertain provenance and accuracy, you don't eliminate jobs — you create a parallel economy of verification, curation, and cleanup. Kingsbury traces this pattern across multiple domains — from AI-generated code reviews that require human re-review, to synthetic customer support interactions that escalate to (surprise) a real human, to SEO-optimized AI blog posts that someone still has to fact-check before they ship.

This isn't Kingsbury's first rodeo with uncomfortable truths about systems that claim to work but don't. His Jepsen project famously exposed correctness bugs in databases that vendors swore were consistent — CockroachDB, MongoDB, Elasticsearch, and others all got the treatment. That same forensic skepticism is now pointed at the AI-augmented labor market.

Why it matters

The essay arrives at a moment when the gap between AI capability demos and AI production reliability has never been wider. OpenAI's GPT-4o can write a plausible RFC. It can also hallucinate an API endpoint that doesn't exist, cite a paper that was never published, or generate test cases that pass for the wrong reasons. The failure mode isn't that AI output is bad — it's that it's bad in ways that look good, which is significantly worse.
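The "tests that pass for the wrong reasons" failure mode can be sketched concretely. This is a hypothetical illustration, not from Kingsbury's post: the function and test below are invented, and the test is the kind of plausible-looking artifact that sails through surface review.

```python
# Hypothetical sketch of the failure mode described above: an
# AI-generated test that passes for the wrong reason. Both the
# function and the test are invented for illustration.

def parse_price(s: str) -> float:
    # Buggy implementation: silently drops the cents.
    return float(s.strip("$").split(".")[0])

def test_parse_price():
    # Looks like a real assertion, but it compares the function's
    # output to itself, so it passes no matter what parse_price does.
    assert parse_price("$19.99") == parse_price("$19.99")

test_parse_price()  # passes; the bug (19.0, not 19.99) goes unnoticed
```

A reviewer skimming the diff sees a test named after the function with a plausible assertion. Only verification against intent, not against the test suite's green checkmark, catches it.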

Kingsbury's central observation reframes the entire "AI will take your job" discourse. The jobs aren't disappearing — they're shapeshifting. Consider what's actually happening in engineering organizations right now:

- Code review load is increasing, not decreasing, because AI-generated PRs require more scrutiny, not less. A human-written PR carries implicit context about why certain choices were made. An AI-generated PR carries no such signal; every line needs verification against intent.
- Content teams are hiring "AI editors": people whose sole job is to take LLM output and make it accurate, on-brand, and not embarrassing. This role didn't exist three years ago. Now it's on LinkedIn with five-figure job postings.
- Security teams are dealing with AI-generated phishing that's grammatically perfect and contextually plausible, requiring new detection layers that amount to... more human analysts reviewing flagged content.

The pattern Kingsbury identifies has a name in economics: the Jevons paradox. When you make something dramatically cheaper to produce, you don't produce less of it — you produce so much more that total resource consumption (in this case, human attention for verification) actually increases. AI didn't reduce the need for human judgment; it manufactured an ocean of content that requires more human judgment than ever.
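The Jevons dynamic can be put in back-of-envelope numbers. All figures below are invented for illustration; the point is the shape of the arithmetic, not the magnitudes.

```python
# Illustrative Jevons-paradox arithmetic: cheaper generation,
# more total verification. Every number here is an assumption.

# Before: human-written drafts.
drafts_before = 100           # items produced per month
review_min_before = 30        # minutes of human review each
total_before = drafts_before * review_min_before   # 3000 min

# After: generation gets ~10x cheaper, so volume explodes.
drafts_after = 1000
review_min_after = 20         # even if per-item review gets cheaper...
total_after = drafts_after * review_min_after      # 20000 min

# ...total human attention spent on verification still grows ~6.7x.
growth = total_after / total_before
```

Per-item efficiency improved, yet the aggregate demand for human judgment went up, which is exactly the inversion Kingsbury describes.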

The Hacker News discussion surfaced a telling split. Some commenters pushed back, arguing that verification tools will themselves be automated — AI checking AI. Kingsbury's implicit counter: that's just adding another layer of output that someone eventually has to trust. Turtles all the way down. At the bottom of every stack, there's a person who has to decide whether to ship.

What this means for your stack

If you're a senior engineer or engineering leader, the practical implications are concrete:

Budget for verification, not just generation. If your team is adopting Copilot, Cursor, or any AI coding assistant, the ROI model needs to include increased review time. The data from Google's internal studies and Microsoft's own research consistently shows that AI-assisted developers produce more code, but the net productivity gain is smaller than the raw output increase suggests — because review and debugging absorb the difference. Plan for a 1.5x review multiplier on AI-assisted PRs, and you'll be closer to reality than the vendor pitch deck suggests.
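The review-multiplier point is easy to model. A minimal sketch, with all hours and multipliers assumed for illustration rather than drawn from the studies mentioned above:

```python
# Back-of-envelope ROI sketch for the 1.5x review multiplier.
# All inputs are assumptions for illustration only.

authoring_speedup = 2.0    # vendor-claimed AI authoring speedup
baseline_author_h = 4.0    # hours to write a typical PR by hand
baseline_review_h = 1.0    # hours to review a human-written PR
review_multiplier = 1.5    # extra scrutiny for AI-assisted PRs

manual_total = baseline_author_h + baseline_review_h           # 5.0 h
ai_total = (baseline_author_h / authoring_speedup
            + baseline_review_h * review_multiplier)           # 3.5 h

net_gain = manual_total / ai_total   # ~1.43x, not the 2x headline
```

Under these assumptions the net productivity gain is roughly 1.4x, well short of the 2x authoring speedup, because review absorbs the difference: precisely the gap between vendor pitch deck and reality.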

"Human in the loop" is a cost center, not a feature. Every architecture diagram that includes a "human review" box is implicitly budgeting for a permanent headcount line. Treat it accordingly. If your ML pipeline has a human labeling step, that's not a temporary training phase — for many applications, that's the steady state.

Hire for judgment, not just output. The new premium skill isn't writing code or writing prose — it's evaluating whether code or prose is correct, complete, and safe to ship. The engineers who'll be most valuable in the next five years aren't the fastest producers; they're the ones who can reliably distinguish good AI output from plausible-looking garbage. This is, ironically, exactly the skill set that Kingsbury has built his career on — testing systems that claim to work correctly.

Looking ahead

Kingsbury's essay works because it names a dynamic that's already playing out but hasn't been clearly articulated in the industry conversation. The AI hype cycle is relentlessly focused on capability — what models can do. The operating reality is increasingly about reliability — what models can be trusted to do without supervision. The gap between those two creates jobs. Lots of them. Just not the ones anyone was predicting. The future of everything might indeed be lies, but the future of employment is people who can spot them — and that's a growth industry with no ceiling in sight.

Hacker News 223 pts 148 comments

The Future of Everything Is Lies, I Guess: New Jobs

→ read on Hacker News
ej88 · Hacker News

I am personally of the opinion that ML will end up being 'normal technology', albeit incredibly transformative. I think you can combine 'Incanters' and 'Process Engineers' into one - 'Users'. Jobs that encompass a role that requires accountability will be direc

mrdependable · Hacker News

I think the reason AI isn't going to replace CEOs, or anyone in the C suite, is pretty obvious. They see themselves as the company. Everyone else is a resource. AI is here to replace resources, just like investing in a brand new lawn mower. For them, replacing an executive with AI is like sayin

simonw · Hacker News

Loved that section about "meat shields". LLMs cannot be held accountable. Someone needs to be involved in decision making, with real stakes if those decisions are bad.

siliconc0w · Hacker News

The problem with AI is that it isn't like any previous technology. There may be temporary jobs to fill in the gaps but they won't be careers. The AI will do the process engineering and self optimization. The prompt witchcraft is a good example because today its totally unnecessary and does

ai_critic · Hacker News

I think that this is an interesting attempt at taxonomy, but it's a bit on the magical thinking end (and I say this as somebody that does a good amount of what's described as the incanter role). It's a combination of the author's previous witchy aesthetic (see his excellent "
