Kyle Kingsbury — better known as Aphyr, the engineer behind the Jepsen distributed-systems testing project that has embarrassed roughly every database vendor on earth — published a long, sharp post titled *The Future of Everything Is Lies, I Guess: New Jobs*. It is, on its surface, a personal note: he's looking for work. Underneath, it is a field report from inside a hiring market where almost every signal a hiring manager used to rely on has been corrupted by generative AI in the span of about eighteen months.
The post landed on the Hacker News front page with 223 points and several hundred comments within hours, almost entirely from engineers and hiring managers nodding along. Kingsbury describes resumes that pattern-match perfectly to job descriptions because they were generated against the job description; cover letters in a uniform LLM cadence; take-home assignments returned with code the candidate cannot read aloud; and live coding sessions where the candidate's eyes flick to a second monitor every time a question is asked. He calls it "the lying market" — not because candidates are unusually dishonest, but because the cheapest path through every gate is now to let a model lie on your behalf.
Kingsbury isn't writing as a doomer. He's writing as someone who has spent two decades building the kind of deeply technical reputation that was supposed to be immune to this — and who is nonetheless watching his own job search get filtered through the same broken funnels as everyone else's.
The interesting thing about Aphyr's piece is not the complaint. Engineers have been complaining about LeetCode hazing and recruiter spam for a decade. The interesting thing is the structural claim: every traditional hiring signal is a low-bandwidth artifact (a PDF, a paragraph, a 90-minute screen-share), and every low-bandwidth artifact is now trivially forgeable by a $20/month API key. Resumes were always a lossy compression of a career. LLMs just made the compression artifact indistinguishable from the real thing.
The comments thread underneath the post is where the substantiation lives. One hiring manager describes interviewing 40 candidates for a senior backend role and finding that 31 of them produced syntactically perfect Python solutions to a take-home — and 28 of those 31 could not explain, in a follow-up call, what a generator expression in their own submission actually did. Another describes catching a candidate mid-interview when their answer to "why did you choose a B-tree here?" arrived three seconds after a barely-audible keyboard click. A third points out the second-order problem: the senior engineers running these interviews are themselves burning 10-15 hours a week filtering, and the good candidates are quietly opting out of pipelines that feel insulting.
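The generator-expression detail is a telling failure mode: it is a one-liner any working Python engineer should be able to read aloud. As an illustration only (the post does not include the actual submissions), this is roughly the kind of construct those candidates could not explain:

```python
# Illustrative sketch -- not from any actual take-home submission.
# A generator expression builds a lazy iterator: no values are computed
# until something consumes it.
squares = (n * n for n in range(5))  # no work happens on this line

# Consuming the generator drives the computation, one value at a time.
total = sum(squares)  # 0 + 1 + 4 + 9 + 16
print(total)          # 30

# Unlike a list comprehension, a generator is exhausted after one pass.
print(sum(squares))   # 0 -- the iterator is already consumed
```

Explaining the laziness and the single-pass behavior takes about two sentences, which is exactly why it works as a follow-up question: it is trivial for someone who wrote the code and nearly impossible for someone who pasted it.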
Compare this to how the industry talked about Copilot in 2022. The framing then was "autocomplete on steroids" — a productivity multiplier for people who already knew what they were doing. Three years later, the framing in Aphyr's post and its comments is closer to *epistemic infrastructure collapse*. Not because the models got smarter (though they did), but because they got cheap enough and good enough that using them became the default move for anyone with mild incentive to perform competence they don't have.
This is not symmetric across the market. FAANG-tier companies with armies of recruiters and standardized onsite loops are mostly fine; small teams hiring their fifth or fifteenth engineer are getting destroyed. The asymmetry matters because those small teams are where most of the interesting work, and most of Aphyr's likely audience, actually lives.
There's a quieter point Kingsbury makes that's worth pulling out: the AI-generated noise isn't just wasting time, it's actively training hiring managers to be more suspicious of *real* signal. A candidate who writes a thoughtful, idiosyncratic cover letter now reads as suspicious because LLMs are getting better at thoughtful and idiosyncratic. A clean take-home is a red flag. A messy one is also a red flag. The noise floor has risen to the point where there is no longer a clean signal to find.
If you are hiring in 2026, the practical implication is that any interview format that does not involve the candidate building or debugging real code in front of you, in real time, with you asking questions about what they just typed, is approaching zero information value. The take-home is dead. The async coding challenge is dead. The "tell me about a time you..." behavioral round was always theater and is now theater plus prompt injection. What survives is synchronous pairing — ideally on the candidate's own recent open-source work or a small, novel problem they cannot have pre-solved.
This has cost implications. A two-hour pairing interview with two engineers is roughly 4-6 engineer-hours per candidate, versus maybe 30 minutes to grade a take-home. If your top-of-funnel is 200 candidates, you cannot pair with all of them. So the filter has to move earlier — to referrals, to public work (GitHub, blog posts, conference talks, OSS commits with real history), to the kind of warm introductions that hiring teams spent the last decade trying to professionalize away in the name of fairness. The uncomfortable conclusion is that the LLM era is pushing hiring back toward exactly the network-driven, taste-based model that the structured-interview movement was designed to fix.
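The funnel arithmetic above is worth making explicit. A back-of-envelope sketch using the figures quoted in the text (4-6 engineer-hours per pairing interview, roughly 30 minutes to grade a take-home, a 200-candidate top-of-funnel):

```python
# Back-of-envelope cost of replacing take-homes with synchronous pairing,
# using the figures quoted in the text above.
CANDIDATES = 200
TAKE_HOME_HOURS = 0.5    # ~30 minutes to grade one take-home
PAIRING_HOURS_LOW = 4    # 2-hour session with two engineers
PAIRING_HOURS_HIGH = 6

take_home_total = CANDIDATES * TAKE_HOME_HOURS          # 100 engineer-hours
pairing_low = CANDIDATES * PAIRING_HOURS_LOW            # 800 engineer-hours
pairing_high = CANDIDATES * PAIRING_HOURS_HIGH          # 1200 engineer-hours

print(f"take-homes: {take_home_total:g}h")
print(f"pairing:    {pairing_low}-{pairing_high}h")
```

Pairing with the whole funnel costs 8-12x the take-home pipeline, which is the quantitative reason the filter has to move earlier: referrals and public work are not a stylistic preference, they are the only way the hours pencil out.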
If you are *being* hired, the inverse is true. A public body of work — a blog with technical depth, a GitHub history with meaningful commits, a conference talk, a side project that solves a real problem — has gone from "nice to have" to roughly the only thing that distinguishes you from the synthetic majority. Aphyr himself is the proof of concept: he is looking for work with a fifteen-year public trail of Jepsen reports that no LLM can fake retroactively.
The optimistic read is that this is a transient — that hiring tooling will adapt, proctoring will improve, and the market will find a new equilibrium in eighteen months. The pessimistic read, which Aphyr leans toward, is that the equilibrium is *worse* hiring outcomes for everyone except the candidates with the largest pre-existing public footprint, and that we are watching the professional equivalent of the spam-vs-email arms race play out in compressed time. Either way, if you've been putting off building a public technical presence because it felt like vanity, the calculus just changed.