The official policy document treats AI assistants identically to any other development tool. It requires no disclosure tags, no separate review queues, and no bans on specific models — only that the human signing off takes full legal and technical responsibility. The maintainers' position is that the kernel's existing multi-round mailing list review process is rigorous enough to catch problems regardless of how code was generated.
The editorial argues that the most notable aspect of the policy is its conspicuous silence on AI disclosure requirements. By not mandating an 'AI-generated' tag or creating separate review tracks, the kernel project is making a deliberate philosophical statement: code quality is judged on its own merits, not on how it was produced. This stands in contrast to other projects and organizations that have introduced mandatory AI labeling or outright bans.
The editorial emphasizes that because the Linux kernel is the most consequential open-source project — running on everything from Android phones to Mars helicopters — its governance decisions cascade downstream. Thousands of projects look to the kernel for process precedent, so this minimalist AI policy is likely to be adopted or adapted as the default stance across the open-source world.
The policy anchors accountability to the Developer Certificate of Origin: whoever adds their Signed-off-by certifies GPL-2.0 compliance and assumes full responsibility. This neatly sidesteps unresolved legal questions about AI copyright by declaring that AI cannot be an author and tools don't sign legal documents. The human contributor must review all generated code and personally vouch for its correctness and licensing.
The kernel documentation was submitted to Hacker News, where it drew 337 points and 244 comments, indicating strong community interest in how the kernel frames AI-assisted contributions under existing legal accountability structures such as the DCO sign-off.
The Linux kernel project has merged a new documentation file — `Documentation/process/coding-assistants.rst` — into Linus Torvalds' mainline tree. The document establishes the kernel's official position on AI-assisted contributions: you can use them, but the human whose name goes on the `Signed-off-by` line owns everything that follows.
The policy doesn't introduce any new process, tooling gates, or disclosure requirements beyond what already exists. Contributors using AI tools must review all generated code, ensure compliance with the kernel's licensing requirements (primarily GPL-2.0), add their own sign-off certifying the Developer Certificate of Origin, and take full responsibility for the contribution. The entire policy can be summarized in one sentence: AI is a tool, not an author, and tools don't get to sign legal documents.
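Mechanically, the sign-off is just a trailer on the commit message, and it works the same whether or not AI was involved. A minimal sketch in a throwaway repository (the name and email are placeholders, not a real contributor identity):

```shell
# Create a throwaway repo and make a signed-off commit, as the DCO requires.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
echo 'int x;' > fix.c
git add fix.c
# -s appends a Signed-off-by trailer built from the committer identity
git -c user.name='Jane Developer' -c user.email='jane@example.com' \
    commit -q -s -m 'example: placeholder change'
git log -1 --format=%B   # message ends with: Signed-off-by: Jane Developer <jane@example.com>
```

The trailer is what the policy hangs accountability on; `git commit -s` only automates adding it, while the certification it represents remains the human's.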
There's no mandatory "AI-generated" tag. No special review queue. No ban on specific models or providers. The kernel's existing code review process — which is already one of the most rigorous in open source — is considered sufficient.
The Linux kernel is the most consequential open-source project in existence. It runs on everything from Android phones to AWS servers to Mars helicopters. When the kernel project establishes a policy, it becomes a de facto standard for thousands of downstream projects that look to it for governance precedent.
What makes this policy notable isn't what it says — it's what it deliberately doesn't say. It doesn't require disclosure of AI tool usage. It doesn't mandate that AI-generated patches go through additional review. It doesn't create a two-tier system where human-written code is treated differently from AI-assisted code. The kernel maintainers are betting that their existing review process — which already catches subtle bugs, style violations, and license issues through multiple rounds of mailing list review — is robust enough to handle whatever an LLM produces.
As HN commenter qsort put it: "That's... refreshingly normal? Surely something most people acting in good faith can get behind." The community reaction has been overwhelmingly positive, with most developers relieved that the policy is pragmatic rather than performative.
But the policy also has a harder edge that's easy to miss. The kernel project is explicitly pushing all liability for AI-generated code onto the individual contributor. If an LLM trained on BSD-licensed or proprietary code produces a snippet that violates GPL-2.0, the person who submitted the patch is on the hook — not the AI vendor, not the kernel project, and not the company that employs the contributor. One commenter (sarchertech) pushed back on this framing, arguing it's analogous to "a retail store saying the supplier is responsible for eliminating all traces of THC from their hemp when they know that isn't a reasonable request to make." The license laundering problem — where AI models regurgitate code from training data with incompatible licenses — remains unsolved, and this policy doesn't pretend otherwise. It just makes clear who pays the bill.
There's also a pragmatic corporate reality underneath the policy. As commenter feverzsj noted, the Linux Foundation is funded by Google, Microsoft, Meta, Intel, and dozens of other companies that are aggressively deploying AI coding tools internally. A blanket ban on AI-assisted contributions would have created an impossible compliance burden for corporate contributors who may not even know whether their IDE's autocomplete is powered by an LLM. The policy sidesteps this by refusing to distinguish between AI-assisted and unassisted code at the process level.
If you maintain an open-source project, this is worth studying as a governance template. The kernel's approach — don't create new process, reinforce existing accountability mechanisms, push liability to the contributor — is the lowest-friction option that still preserves legal clarity. You could adopt nearly identical language by updating your project's `CONTRIBUTING.md` to state that your existing DCO or CLA applies to all code regardless of how it was generated.
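Such a `CONTRIBUTING.md` addition can be only a few lines. The wording below is an illustrative sketch, not the kernel's actual text; adapt the DCO/CLA reference to whatever your project already uses:

```shell
# Append a minimal AI policy section to a project's CONTRIBUTING.md
# (illustrative wording only; run from your project root in practice).
cd "$(mktemp -d)"
touch CONTRIBUTING.md
cat >> CONTRIBUTING.md <<'EOF'

## AI-assisted contributions

You may use AI coding tools. All contributions, however generated, are
subject to the same review standards and the same DCO sign-off: by adding
your Signed-off-by line you certify the code's licensing and correctness
and take full responsibility for it.
EOF
```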
For developers using Copilot, Cursor, or Claude in their daily workflow, the practical takeaway is simple: if you wouldn't sign off on code you didn't understand, don't sign off on code an AI wrote that you didn't understand. The review burden doesn't change. The accountability doesn't change. The only thing that changes is the provenance of the first draft.
The license compliance angle deserves more attention than most teams give it. If you're contributing to GPL-licensed projects using AI tools, you should be aware that most major LLMs were trained on code with mixed licenses. GitHub Copilot has a filter for this (`suggestions matching public code`), but it's not perfect, and other tools offer even less protection. Running a license scanner on AI-generated patches before submission is cheap insurance — tools like `scancode-toolkit` or `licensee` can flag suspicious snippets in seconds.
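As a crude first pass before reaching for a full scanner, you can grep a patch for explicit SPDX tags that aren't GPL-2.0. The patch file below is a stand-in for a real one, and this check catches far less than `scancode-toolkit` or `licensee` (it only sees explicit tags, not copied code), so treat it as illustration:

```shell
# Crude pre-submission check: flag added lines carrying a non-GPL SPDX tag.
# example.patch is a hypothetical sample; real scanners do much more.
cd "$(mktemp -d)"
printf '+/* SPDX-License-Identifier: BSD-3-Clause */\n' > example.patch
if grep '^+.*SPDX-License-Identifier:' example.patch | grep -qv 'GPL-2.0'; then
    echo 'WARNING: non-GPL-2.0 license tag in patch'
fi
```

For this sample patch the warning fires, since the added line declares a BSD-3-Clause tag.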
The kernel's policy will almost certainly become the default template for serious open-source projects. Its core insight — that accountability is a human problem, not a tooling problem — is both legally sound and operationally practical. The projects that will struggle are the ones that try to build elaborate AI-detection or AI-disclosure mechanisms instead of simply enforcing the review standards they should have had all along. Whether the policy is sufficient to handle the license laundering risk at scale remains an open question, but it's the right starting point: clear rules, clear liability, no theater.
> Signed-Off ...
>
> The human submitter is responsible for:
>
> - Reviewing all AI-generated code
> - Ensuring compliance with licensing requirements
> - Adding their own Signed-off-by tag to certify the DCO
> - Taking full responsibility for the contribution
>
> Attribution: ...
This is the right way forward for open-source. Correct attribution - by tightening the connection between agents and the humans behind them, and putting the onus on the human to vet the agent output. Thank you Linus.
Glad to see the common-sense rule that only humans can be held accountable for code generated by AI agents.
How is one supposed to ensure license compliance while using LLMs which do not (and cannot) attribute sources having contributed to a specific response?
Basically the rules are that you can use AI, but you take full responsibility for your commits and code must satisfy the license. That's... refreshingly normal? Surely something most people acting in good faith can get behind.