The editorial argues the kernel chose a 'third path' between banning AI and allowing everything: rigorous human accountability. By leveraging the existing DCO sign-off process, the policy makes contributors personally certify that they understand, have tested, and take responsibility for any AI-assisted code, sidestepping the IP quagmire that has paralyzed other projects.
The editorial notes that kernel maintainers spent two years dealing with a rising tide of AI-generated submissions, including high-profile incidents where LLM-generated patches introduced subtle bugs or ignored kernel-specific constraints. The policy is a direct response to demonstrated quality problems, not a theoretical exercise.
By surfacing the official kernel documentation on coding assistants to the HN community (garnering 228 points and 157 comments), the submitter signals this as a landmark policy moment. The Linux kernel's outsized influence — running on Android, 90% of cloud servers — means its stance will likely shape norms across the open-source ecosystem.
The Linux kernel — the single most important open-source project on Earth, running on everything from Android phones to 90% of cloud servers — has merged an official policy document governing the use of AI coding assistants. The file, `Documentation/process/coding-assistants.rst`, now sits in Torvalds' tree alongside decades of accumulated wisdom on patch submission, coding style, and maintainer etiquette.
The document doesn't arrive in a vacuum. Over the past two years, kernel maintainers have dealt with a rising tide of AI-generated patch submissions — some useful, many not. Several high-profile incidents saw contributors submit clearly LLM-generated patches that introduced subtle bugs or failed to account for kernel-specific constraints. The new policy is not a ban on AI tools. It's something more interesting: a framework that holds humans accountable for every line of AI-assisted output.
The timing matters. With AI coding tools now embedded in most developers' workflows — GitHub Copilot alone claims 1.8 million paying subscribers — the kernel project needed to stake out a position. Rather than choosing between "ban it" and "allow everything," the kernel community chose a third path: rigorous human accountability.
The core principle is deceptively simple: you sign off on it, you own it. The kernel's existing Developer Certificate of Origin (DCO) process already requires contributors to certify that they have the right to submit code and that it's their work. The new document makes explicit that using an AI tool does not change this obligation. If you submit a patch with a `Signed-off-by` tag, you are personally certifying that you understand the code, have tested it, and take responsibility for it — regardless of whether a human or a machine wrote the first draft.
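The mechanics here are worth seeing concretely. A minimal sketch, using a throwaway repository and placeholder identity (the name, email, and file below are illustrative, not from any real submission), of how `git commit -s` attaches the `Signed-off-by` trailer that the DCO certification rides on:

```shell
# Throwaway repo purely to demonstrate the sign-off mechanics.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.name  "Jane Developer"
git config user.email "jane@example.com"

echo "int x;" > fix.c
git add fix.c

# -s appends a Signed-off-by trailer derived from user.name/user.email.
git commit -q -s -m "example: illustrate DCO sign-off"

# The trailer is part of the commit message itself.
git log -1 --format=%B
```

Whether a human or an assistant wrote the first draft, that trailer is the contributor's personal certification under the DCO.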
This matters because it sidesteps the intellectual property quagmire that has paralyzed other projects. Rather than trying to determine whether AI-generated code is copyrightable, who owns it, or whether training data was properly licensed, the kernel policy puts the burden squarely on the contributor. You vouch for it. Full stop.
The document also addresses a practical concern that maintainers have voiced repeatedly: quality degradation. AI tools are excellent at generating plausible-looking code that passes a cursory review but breaks under edge cases — exactly the kind of failure mode that causes kernel panics in production. The policy gives maintainers explicit cover to reject patches that show signs of AI generation without corresponding evidence of human comprehension. Maintainers don't need to prove a patch was AI-generated to reject it — they just need to determine that the contributor doesn't understand their own submission.
The Hacker News discussion (228+ points) reveals a community largely supportive of the approach but divided on enforcement. Some kernel developers argue that distinguishing AI-assisted from human-written code is already impossible and will only get harder. Others point out that the policy is less about detection and more about setting expectations: if your patch has a bug that suggests you didn't actually test it on real hardware, the AI question becomes irrelevant — it's getting rejected either way.
A recurring theme in the discussion is the contrast with corporate open-source policies. Several commenters noted that companies like Google, Microsoft, and Meta have internal AI coding policies for their open-source contributions, but none have published anything this explicit for an upstream project. The kernel's move may force other foundations — Apache, Eclipse, Linux Foundation sub-projects — to follow suit.
If you contribute to the Linux kernel — or to any project that may adopt similar policies — here's what changes practically:
Your workflow can include AI, but your review process can't delegate to it. Using Copilot or Claude to draft a patch is fine. Submitting that draft without reading every line, understanding the memory model implications, and testing on actual hardware (or at minimum, in a VM with the relevant config) is not. The bar hasn't changed; the policy just makes it explicit.
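The submission path itself is unchanged by any of this: patches are still generated and mailed the traditional way, and the sign-off travels with them. A sketch in a throwaway repo (file name and commit message are hypothetical) showing that `git format-patch` carries the `Signed-off-by` trailer into the patch that gets sent to the list:

```shell
# Throwaway repo; paths and messages are illustrative only.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.name  "Jane Developer"
git config user.email "jane@example.com"

echo "int y;" > driver.c
git add driver.c
git commit -q -s -m "example: hypothetical driver fix"

# format-patch emits the mbox-style patch that is mailed to maintainers;
# the sign-off trailer is embedded in it.
git format-patch -1 --stdout | grep "Signed-off-by"
```

The point: there is no separate "AI track" in the tooling. The same patch, with the same trailer, carries the same personal certification.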
Expect this pattern to spread. The Linux kernel is the canary in the coal mine for open-source governance. When the kernel project formalizes a policy, it creates gravitational pull. Within 12 months, expect most major open-source foundations to publish similar AI contribution guidelines — and the kernel's "human accountability" model will be the template. Projects using DCO or CLA processes already have the mechanism; they just need the documentation.
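For projects that want to wire the mechanism into automation, here is one possible sketch, not taken from any foundation's actual tooling: a small shell function, modeled on the kernel's `Signed-off-by` convention, that a CI job could run against each commit message to reject submissions missing a well-formed sign-off trailer.

```shell
# Sketch of a DCO gate for CI. The trailer format is an assumption
# modeled on the kernel convention: "Signed-off-by: Name <email>".
has_dco_signoff() {
    # Returns 0 if the commit message on stdin carries a sign-off trailer.
    grep -Eq '^Signed-off-by: .+ <.+@.+>$'
}

# Example usage against the two possible outcomes:
if printf 'fix: thing\n\nSigned-off-by: Jane Developer <jane@example.com>\n' \
        | has_dco_signoff; then
    echo "signed-off commit: accepted"
fi

if ! printf 'fix: thing\n\nno trailer here\n' | has_dco_signoff; then
    echo "unsigned commit: rejected"
fi
```

In a real pipeline the message would come from `git log -1 --format=%B` for each commit in the series; the check enforces only the mechanism, while the accountability it certifies remains a human matter.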
For maintainers of your own projects, this is a blueprint worth studying. The kernel's approach is notable for what it *doesn't* do: it doesn't require disclosure of AI tool usage (which would be unenforceable), it doesn't ban specific tools (which would be counterproductive), and it doesn't create a separate review track for AI-assisted code (which would be unsustainable). It simply says: the same standards apply, the same accountability applies, the same consequences apply.
For teams using AI coding tools internally, the kernel's framework offers a useful principle to adopt: AI is a power tool, not a co-author. The human holding the tool is responsible for what it produces. This maps cleanly to code review processes — if a reviewer can't explain why a particular approach was chosen, it doesn't matter whether a human or GPT-4 wrote it.
The Linux kernel has always been a bellwether for how the open-source world handles hard governance questions — from licensing (GPL v2 vs v3) to code of conduct to export controls. Its AI policy will be studied, copied, and adapted by thousands of projects. The smart bet is that the "human accountability" model wins out over both outright bans and unrestricted AI usage, because it's the only approach that scales without creating an enforcement nightmare. The real test comes not from the policy document itself, but from the first time a maintainer rejects a patch from a major corporate contributor on AI-quality grounds. That's when we'll learn whether the words on the page have teeth.
Linux is funded by all these big companies. Linus couldn't block AI pushes from them forever.
> Signed-Off ...
>
> The human submitter is responsible for:
>
> - Reviewing all AI-generated code
> - Ensuring compliance with licensing requirements
> - Adding their own Signed-off-by tag to certify the DCO
> - Taking full responsibility for the contribution
>
> Attribution: ... Contributio
Glad to see the common-sense rule that only humans can be held accountable for code generated by AI agents.
This does nothing to shield Linux from responsibility for infringing code. This is essentially like a retail store saying the supplier is responsible for eliminating all traces of THC from their hemp when they know that isn't a reasonable request to make. It's a foreseeable consequence. You don't get
Basically the rules are that you can use AI, but you take full responsibility for your commits and the code must satisfy the license. That's... refreshingly normal? Surely something most people acting in good faith can get behind.