The Zig project argues that AI-generated contributions impose outsized costs on maintainers: it takes seconds to generate a PR but 30-60 minutes to review code that almost works but has subtle edge cases. Contributors who don't deeply understand their submissions can't respond to review feedback or defend architectural decisions, making the review process structurally wasteful.
Willison highlighted the Zig policy specifically because of its unusual technical precision in explaining the rationale. By amplifying the policy, he signals agreement that the maintainer-burden argument deserves serious attention from the broader open-source community.
The policy was submitted to Hacker News, where it received 622 points and 405 comments, indicating strong community resonance with the maintainer-burden framing. The high engagement suggests this argument struck a nerve with developers who have experienced similar review asymmetries.
The editorial contextualizes the AI ban within Zig's track record of principled engineering decisions — from bootstrapping the compiler off LLVM to opinionated build system stances. The argument is that saying no to seemingly free contributions is characteristic of a project that consistently prioritizes long-term maintainability over expedience.
Beyond the review burden, the Zig policy emphasizes that contributors who use AI can't meaningfully respond to review feedback or defend architectural decisions. The expectation is that contributors understand every line because they wrote it — this is framed not as anti-technology but as a quality bar for meaningful participation in the project.
The Zig programming language project — one of the most closely watched systems languages in active development — has published a detailed rationale for their firm policy against AI-generated contributions. The policy, highlighted by Simon Willison on April 30, 2026, doesn't just say "no AI" — it explains, with unusual technical precision, *why* AI-generated pull requests are structurally harmful to open-source maintainership.
The Zig project, led by Andrew Kelley, has been building a reputation for principled engineering decisions — from their choice to bootstrap the compiler off LLVM, to their opinionated stance on build systems. This anti-AI policy is consistent with Zig's broader philosophy: optimize for long-term project health, even when it means saying no to seemingly free contributions.
The policy applies broadly: contributors must not submit code that was substantially generated by large language models, AI coding assistants, or similar tools. This covers GitHub Copilot, ChatGPT, Claude, and any other LLM-based code generation. The expectation is that contributors understand every line they submit because they wrote it.
The Zig team's rationale cuts deeper than the usual "AI code is bad" dismissal. Their argument is fundamentally economic: every AI-generated PR that looks plausible but contains subtle issues imposes an outsized review burden on maintainers who are already time-constrained. The asymmetry is brutal — it takes 30 seconds for someone to generate a PR with an AI tool, but it can take a maintainer 30-60 minutes to properly review code that *almost* works but has edge cases the AI didn't understand.
This is the maintainer equivalent of a denial-of-service attack. Not malicious, but structurally identical in effect. When a contributor doesn't deeply understand the code they're submitting, they can't respond meaningfully to review feedback, they can't defend architectural decisions, and they can't maintain the code after it lands. The Zig team has apparently seen enough of these drive-by PRs to codify their stance.
The argument also touches on something more subtle: AI-generated contributions erode the signal that a PR represents. In a healthy open-source project, a contribution is a signal that someone cared enough to understand the codebase, identify a real problem, and craft a solution. When AI lowers the cost of producing a PR to near-zero, that signal collapses — and maintainers lose the ability to distinguish genuine contributions from noise.
Simon Willison, who has been one of the most thoughtful voices on AI's intersection with software development, highlighted this policy — and the framing matters. Willison is not anti-AI. He's built significant tooling around LLMs. But he recognizes that the Zig team is articulating a legitimate structural concern that most projects haven't thought through carefully.
Not everyone agrees with Zig's position, and the strongest counterargument deserves a fair hearing. Many experienced developers use AI as a sophisticated autocomplete — a tool that accelerates implementation of ideas they already understand. Banning AI-generated code wholesale makes no distinction between a contributor who used Copilot to save keystrokes on boilerplate they could write from memory, and someone who prompted "fix this Zig compiler bug" and copy-pasted the output.
The enforcement problem is real too. How do you actually detect AI-generated code? Style analysis is unreliable. Asking contributors to self-report creates an honor system. Some developers argue that the policy is effectively unenforceable and will primarily deter honest contributors who self-disclose, while those who don't mention their AI usage sail through.
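To make that unreliability concrete, here is a deliberately naive sketch of the kind of stylistic heuristic a project might reach for. Every signal and threshold in it is an assumption invented for illustration, and each one also fires on perfectly legitimate human code, which is the point:

```python
# A deliberately naive "AI-generated code" scorer. The signals and
# thresholds below are illustrative assumptions, not a real method.

def ai_style_score(source: str) -> float:
    lines = source.splitlines()
    if not lines:
        return 0.0
    score = 0.0

    # Heuristic 1: LLMs tend to over-comment boilerplate.
    # (So do diligent humans following a style guide.)
    comment_lines = sum(1 for l in lines if l.lstrip().startswith("//"))
    if comment_lines / len(lines) > 0.3:
        score += 0.4

    # Heuristic 2: stock phrases that occasionally survive copy-paste.
    # (Trivially stripped by anyone trying to evade detection.)
    stock_phrases = ("as an ai", "here is the updated", "note that this")
    if any(p in source.lower() for p in stock_phrases):
        score += 0.4

    # Heuristic 3: suspiciously uniform line lengths.
    # (Also produced by autoformatters such as zig fmt.)
    lengths = [len(l) for l in lines if l.strip()]
    if lengths:
        mean = sum(lengths) / len(lengths)
        variance = sum((n - mean) ** 2 for n in lengths) / len(lengths)
        if variance < 60:
            score += 0.2

    return min(score, 1.0)
```

An autoformatted, lightly commented AI patch sails through this at 0.0, while a diligent human who comments heavily scores 0.4. The heuristics measure style, not provenance, and self-reporting remains an honor system either way.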
There's also the question of where to draw the line. If a developer reads an AI-generated explanation of some Zig compiler internals, then writes their own code informed by that understanding, is that an AI-generated contribution? What about using AI to find a bug's location, then manually writing the fix? The boundary between "AI-assisted thinking" and "AI-generated code" is genuinely blurry.
Projects like the Linux kernel have taken a more nuanced approach: the Developer Certificate of Origin requires contributors to certify the provenance of their contributions and to stand behind what they submit, without outright banning the tools used to produce them. This puts the emphasis on accountability rather than tooling.
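Concretely, the DCO certification is a one-line trailer on every commit, which `git commit -s` appends automatically (the name and email below are illustrative):

```
compiler: fix off-by-one in tokenizer

Signed-off-by: Jane Doe <jane@example.com>
```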
If you maintain an open-source project, Zig's policy forces you to think about your own stance. The status quo — ignoring the question and dealing with AI-generated PRs ad hoc — is increasingly untenable. The volume of low-quality AI-generated contributions is rising across the ecosystem, and projects that don't have a clear policy end up spending maintainer time on meta-discussions about individual PRs.
You don't have to adopt Zig's exact position. But you should have *a* position. Some practical options: require contributors to affirm they can maintain their code, add a contribution guide section on AI tooling expectations, or implement a "defend your PR" step where contributors must explain their approach in their own words. The worst option is silence — it invites the drive-by PRs that burn out your maintainers.
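As one sketch of the "affirm you can maintain it" option, here is a minimal CI gate that fails a pull request unless an affirmation checkbox in the PR template is ticked. The template wording, the `PR_BODY` environment variable, and the wiring to your CI system are all assumptions made for illustration:

```python
# Sketch of a CI step that enforces a PR-template affirmation.
# Assumes a (hypothetical) PULL_REQUEST_TEMPLATE.md containing:
#   - [ ] I wrote this code myself, understand every line, and can
#         maintain it after it lands.
# and that the CI job exposes the PR description in the PR_BODY
# environment variable (how depends on your CI system).

import os
import sys

AFFIRMATION = "i wrote this code myself"

def check_affirmation(pr_body: str) -> bool:
    for line in pr_body.lower().splitlines():
        # "- [x]" is how a checked markdown checkbox is serialized.
        if line.strip().startswith("- [x]") and AFFIRMATION in line:
            return True
    return False

if __name__ == "__main__":
    if not check_affirmation(os.environ.get("PR_BODY", "")):
        print("PR template affirmation is unchecked; see CONTRIBUTING.md.")
        sys.exit(1)
```

Note that this detects nothing about how the code was produced. Like the DCO, it moves the question from tooling to accountability: the contributor has to put their name on a claim that review feedback will then test.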
For individual developers contributing to projects with anti-AI policies, this is straightforward: respect the project's rules. If you use AI tools in your personal workflow, switch them off for these contributions. If a project's contribution requirements feel too restrictive, contribute elsewhere. Open-source maintainers get to set the terms.
Zig's policy will likely become a reference point for the broader open-source community's reckoning with AI-generated contributions. We're still in the early innings of figuring out the norms. The projects that articulate clear, reasoned policies — whether permissive or restrictive — will fare better than those that muddle through. Expect more major projects to publish explicit stances in the coming months, likely landing across a spectrum from Zig's hard ban to more permissive "certify your understanding" models. The real test isn't the policy text — it's whether projects can maintain code quality and contributor relationships as AI tooling becomes ubiquitous in every developer's workflow.
Apparently, the noise around the AI policy came from Bun's developers saying the policy blocks upstreaming their performance PR. But the real reason seems to be that the PR's code itself isn't in great shape and introduces unhealthy complexity: https://ziggit.dev/t/bu
It seems that Zig people are following the path of ZeroMQ [1]: "To enforce collective ownership of the project, which increases economic incentive to Contributors and reduces the risk of hijack by hostile entities." A healthy contributor community is more important than mere code performance.
My issue with AI-generated OSS contributions is: if an AI improves developer productivity so much, why would maintainers of an OSS project want unknown contributors to sit in between the maintainer and the LLM? They'd be typing these queries into Claude Code themselves. To quote my colleague: …
I think it's the least hostile thing they can say, and I respect their decision for their own project. That said, it still feels like they are unnecessarily hobbling their project. LLMs are tools and they can help you think, research, and code. You can overuse them, yes, but you should embrace them…
From https://kristoff.it/blog/contributor-poker-and-ai/: "Unfortunately the reality of LLM-based contributions has been mostly negative for us, from an increase in background noise due to worthless drive-by PRs full of hallucinations (that wouldn’t even compile, let alone…