Simon Willison wrote up the Zig project's rationale, framing it as a practical cost-shifting problem: AI-generated code transfers the burden of quality assurance from the contributor to the maintainer. For a small team like Zig's, absorbing that review cost is unsustainable.
The article was submitted to Hacker News, where it garnered 319 points, indicating strong community resonance with the argument that a small compiler team cannot afford to serve as a quality filter for machine-generated output when every review hour is zero-sum against development time.
The Zig project's policy intentionally extends beyond code to documentation, issue reports, and commit messages. The rationale is that LLM-generated text in any form introduces the same fundamental problem: output that appears plausible but hasn't been verified through genuine understanding of the codebase and its invariants.
The editorial argues that a 10x increase in pull requests is meaningless if each requires 3x the review effort and half miss subtle invariants. The core issue is that AI-generated contributions look superficially competent but shift the real intellectual work — understanding edge cases, verifying correctness, reasoning about the codebase — onto reviewers who are already resource-constrained.
The Zig programming language project, led by creator Andrew Kelley, has drawn renewed attention for its firm stance against AI-generated contributions. The project's contributing guidelines explicitly prohibit the use of LLM-generated code, documentation, and commit messages in pull requests. Simon Willison, a prominent voice in the AI-and-software intersection, wrote up the rationale behind Zig's position, sparking a 300+ point discussion on Hacker News.
This isn't a new policy — Zig has maintained this position for some time — but the detailed public explanation of *why* has crystallized it into one of the clearest anti-AI-contribution stances in open source. The ban isn't ideological posturing; it's a resource allocation decision by a project that can't afford to subsidize the gap between "AI-generated" and "production-ready."
The policy covers not just code but also documentation contributions, issue reports, and commit messages. The scope is intentionally broad: if an LLM produced it, the Zig project doesn't want it in their review queue.
The core argument is deceptively simple and devastatingly practical. AI-generated code shifts the cost of quality from the contributor to the reviewer. When a human writes code, they've already done the work of understanding the codebase, reasoning about edge cases, and verifying their changes make sense. When an LLM generates a pull request, much of that verification burden transfers to the maintainer who has to review it.
For a project like Zig — with a small core team maintaining a compiler, standard library, and self-hosted bootstrap — every hour of review time is zero-sum against actual development. Andrew Kelley has been transparent about this calculus: the project doesn't have the bandwidth to serve as a quality filter for machine-generated output. A 10x increase in PRs means nothing if each one requires 3x the review effort and half of them miss subtle invariants an LLM can't reason about.
There's also the copyright question, which remains genuinely unresolved. AI-generated code exists in a legal gray zone. The U.S. Copyright Office has indicated that purely AI-generated works aren't copyrightable, but the line between "AI-generated" and "AI-assisted" is blurry. For a project that cares about the legal clarity of its codebase — and Zig, with its focus on replacing C in systems software, absolutely should — accepting code of uncertain provenance is a real risk.
Critics of the policy make two counterarguments worth taking seriously. First, the enforcement problem: how do you actually detect AI-generated code? The answer is you often can't, and the policy relies substantially on contributor honesty. Zig's maintainers acknowledge this but argue that a clear policy creates a social norm, even if it can't be perfectly enforced. The goal isn't a perfect filter — it's a signal that tells contributors "we expect you to have done the thinking, not just the typing."
Second, there's the productivity argument. Developers increasingly use LLMs as sophisticated autocomplete — generating boilerplate, suggesting API patterns, translating between languages. Drawing a bright line between "I wrote this" and "an AI wrote this" is increasingly artificial. Some contributors argue that banning AI assistance is like banning Stack Overflow or IDE autocomplete — a distinction without a meaningful difference.
Zig sits at one end of an emerging spectrum. At the other end, projects like Rust and many in the JavaScript ecosystem have no formal policy, implicitly accepting AI-assisted contributions as long as they pass review. In the middle, some projects require disclosure — a "this PR used AI assistance" checkbox — without outright banning the practice.
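As a concrete sketch of that middle position, disclosure can be as lightweight as a checkbox in a pull request template (a hypothetical example, not taken from any particular project):

```markdown
<!-- .github/PULL_REQUEST_TEMPLATE.md -->
## AI assistance disclosure

- [ ] No AI tools were used in this contribution
- [ ] AI tools were used for autocomplete/boilerplate only
- [ ] Substantial portions were AI-generated (details below)

If AI tools were used, briefly describe which ones and for what:
```

The checkbox costs honest contributors a few seconds while telling reviewers where extra scrutiny is warranted.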
The Linux kernel's approach has been characteristically pragmatic: contributors must certify they have the right to submit the code under the Developer Certificate of Origin, and maintainers review on quality regardless of how the code was produced. The implicit message is "we don't care how you wrote it, we care that it's correct and you're legally responsible for it."
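Concretely, that certification is the Developer Certificate of Origin sign-off: a `Signed-off-by:` trailer that `git commit -s` appends using the committer's configured identity (the subject line, name, and email here are made up for illustration):

```
$ git commit -s -m "example: fix refcount leak in teardown path"

# Resulting commit message:
example: fix refcount leak in teardown path

Signed-off-by: Jane Developer <jane@example.com>
```

The trailer makes the submitter legally answerable for the change while saying nothing about how it was written, which is exactly the kernel's point.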
Zig's position is notable because it's one of the few projects that explicitly argues the process of writing code matters, not just the output. Kelley's view is that a contributor who understands the codebase well enough to make a meaningful change doesn't need an LLM to write it, and a contributor who needs an LLM to write it probably doesn't understand the codebase well enough to make a meaningful change. It's a tautology with teeth.
This philosophy aligns with Zig's broader design ethos. The language itself is famously opinionated about explicitness — no hidden control flow, no operator overloading, no implicit allocations. A language that refuses to hide complexity from the programmer is, philosophically at least, consistent in refusing to hide the origin of its contributions.
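To see the parallel, consider that Zig has no hidden allocations: anything that allocates must be handed an allocator explicitly, and every fallible call is marked with `try`. A minimal sketch (standard-library details shift between Zig versions, so treat this as illustrative):

```zig
const std = @import("std");

pub fn main() !void {
    // The allocator is chosen and passed explicitly; nothing
    // allocates behind the programmer's back.
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    // `try` makes the failure path visible at the call site:
    // no hidden control flow, no silent exceptions.
    var list = std.ArrayList(u8).init(allocator);
    defer list.deinit();
    try list.appendSlice("explicit over implicit");

    std.debug.print("{s}\n", .{list.items});
}
```

Every cost is visible at the call site; the contribution policy asks the same visibility of contributors.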
If you maintain an open-source project, Zig's policy is worth studying even if you don't adopt it. The key insight isn't about AI specifically — it's about the asymmetry between contribution cost and review cost. Any change that makes it cheaper to *submit* code without making it cheaper to *review* code creates a maintainer tax. AI just makes this dynamic dramatically worse.
For contributors to projects with anti-AI policies, the practical implication is clear: use AI tools for your own learning and exploration, but write your contributions yourself. If you can't articulate why every line of your PR exists, you're not ready to submit it — regardless of who or what produced it.
For the broader ecosystem, Zig's stance is a leading indicator of a governance question every project with >10 contributors will need to answer in the next 12 months: what is our policy on AI-generated contributions? The spectrum runs from "banned" to "required disclosure" to "don't care, pass review." Each position has tradeoffs, and "we haven't thought about it" is becoming an increasingly untenable default.
Zig's policy will face mounting pressure as AI coding tools become more capable and more deeply integrated into developer workflows. The distinction between "AI-generated" and "AI-assisted" will continue to blur. But the underlying principle — that maintainers have a right to set the terms of engagement for their own projects, including terms that prioritize reviewer bandwidth over contributor convenience — is unlikely to weaken. If anything, as AI-generated PRs flood more projects, expect more maintainers to look at Zig's approach and think: they had a point.
Apparently, the noise around the AI policy came from Bun's developers saying that the policy blocks upstreaming their performance PR. But the real reason seems to be that the PR's code itself isn't in great shape and introduces unhealthy complexity: https://ziggit.dev/t/bu
It seems that the Zig people are following the path of ZeroMQ [1]: "To enforce collective ownership of the project, which increases economic incentive to Contributors and reduces the risk of hijack by hostile entities." A healthy contributor community is more important than mere code performance.
My issue with AI-generated OSS contributions is: if an AI improves developer productivity so much, why would maintainers of an OSS project want unknown contributors to sit in between the maintainer and the LLM? They'd be typing these queries into Claude Code themselves. To quote my colleague: >
I think it's the least hostile thing they can say, and I respect their decision for their own project. That said, it still feels like they are unnecessarily hobbling their project. LLMs are tools, and they can help you think, research, and code. You can overuse them, yes, but you should embrace them.
From https://kristoff.it/blog/contributor-poker-and-ai/: "Unfortunately the reality of LLM-based contributions has been mostly negative for us, from an increase in background noise due to worthless drive-by PRs full of hallucinations (that wouldn't even compile, let alone …