Argues that while Anthropic's and GitHub's terms of service assign output ownership to users, US copyright law requires human authorship — a requirement reinforced by the Copyright Office's 2023 guidance and the Thaler v. Perlmutter ruling. Contract language cannot create copyright where the law says none exists, meaning verbatim AI-generated code may sit in a legal no-man's land.
Frames AI code ownership as a spectrum: accepting 200 lines of verbatim AI-generated CRUD is likely uncopyrightable, while using AI suggestions as fragments within a human-designed architecture may qualify for protection. The key variable is the level of meaningful human creative input in shaping the final output.
With AI tools now generating 30-60% of committed code at some organizations, the editorial argues this isn't a theoretical future problem but a current liability hiding in plain sight. Companies relying on AI-generated code may be building products on a foundation they cannot legally protect from competitors copying it wholesale.
A detailed legal analysis titled "Who owns the code Claude Code wrote?" hit the top of Hacker News this week, scoring over 300 points and sparking a debate that most engineering teams have been quietly ignoring. The question isn't hypothetical. With AI coding tools now generating 30-60% of committed code at some organizations, the ownership gap isn't a future problem — it's a current liability hiding in plain sight.
The article dissects the legal frameworks governing AI-generated code, focusing on Anthropic's Claude Code as a case study — though the analysis applies equally to GitHub Copilot, Cursor, and every other LLM-powered coding tool. The core tension: your AI tool's terms of service say you own the output, but US copyright law may say nobody does.
Anthropic's usage policy states that users retain ownership of outputs generated through their products. GitHub's Copilot terms are similar. But these contractual provisions bump against a fundamental constraint: copyright law in the United States requires human authorship, and no amount of contract language can create a copyright interest where the law says none exists.
The US Copyright Office has been unusually clear on this point, which is rare for an agency that usually communicates in carefully hedged footnotes. In its 2023 guidance on AI-generated works, the Office stated that works produced by AI without meaningful human creative input are not copyrightable. The landmark *Thaler v. Perlmutter* decision reinforced this: when Stephen Thaler tried to register a copyright for an AI-generated image, the court ruled that human authorship is a bedrock requirement of copyright protection.
For code, this creates a spectrum. On one end: you write a detailed prompt, Claude Code produces 200 lines of boilerplate CRUD, you accept it verbatim. That output likely isn't copyrightable. On the other end: you write the architecture, Claude suggests fragments, you heavily refactor and integrate the result into a larger human-authored system. That composite work likely is copyrightable — but only the human-authored portions.
The uncomfortable middle ground is where most real-world usage falls: you prompted it, you reviewed it, you maybe tweaked a variable name, and you shipped it. That level of human involvement probably isn't enough to clear the authorship bar. The Copyright Office has compared it to a person who gives instructions to a commissioned artist — the person giving the instructions isn't the author; the person executing them is. When the executor is an AI, there may be no author at all.
The Hacker News discussion surfaced a common misconception: "Anthropic's terms say I own the output, so I'm covered." This confuses contractual rights with statutory rights. Anthropic can promise not to claim ownership of Claude Code's output — and that promise is meaningful as between you and Anthropic. But Anthropic cannot grant you a copyright that copyright law doesn't recognize.
Think of it like a quitclaim deed in real estate. Someone can quitclaim you a piece of the moon. The document is valid as a contract. But it doesn't actually convey ownership of the moon, because the grantor never owned it. When Anthropic assigns you rights to AI-generated output, they may be assigning you rights to something that has no copyright protection — a legal nullity dressed up in reassuring contract language.
Several HN commenters with legal backgrounds noted an additional wrinkle: the work-for-hire doctrine, which is how most employer-employee code ownership works, doesn't apply here either. Work-for-hire requires either an employer-employee relationship or a specially commissioned work under a written agreement — and AI models are neither employees nor independent contractors under current law. So the corporate fallback of "we own everything our people produce" doesn't automatically extend to what their AI tools produce.
This ownership uncertainty creates a specific headache for open source. If AI-generated code isn't copyrightable, it can't be meaningfully licensed under open source licenses — because those licenses are copyright licenses. You can't attach a GPL or MIT license to something you don't hold a copyright on.
The practical consequence: if a substantial portion of contributions to an open source project are AI-generated, the copyright status of the whole project becomes murky. An adversarial actor could argue that AI-generated portions are in the public domain, potentially undermining the copyleft protections the project depends on. This isn't theoretical — the Linux kernel's Developer Certificate of Origin already asks contributors to certify they have the right to submit the code, and AI-generated code puts that certification on shaky ground.
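The DCO mechanics themselves are simple: each contributor appends a `Signed-off-by:` trailer (git adds one with `git commit -s`) certifying they have the right to submit the code. A team that wants an audit trail for AI involvement could layer its own trailer on top. A sketch of what such a commit message might look like — the `AI-Assisted:` trailer is a hypothetical team convention, not part of the DCO or any git standard:

```text
Implement rate limiter for the API gateway

The token-bucket parameters and the counter design are human-authored;
the serialization boilerplate started as assistant output and was then
substantially rewritten during review.

AI-Assisted: yes
Signed-off-by: Jane Developer <jane@example.com>
```

Trailers like this don't change the legal analysis, but they give a project a record of which contributions need closer scrutiny if the copyright status of AI-generated code is ever challenged.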
The Hacker News discussion was most useful when it shifted from "who owns this" to "what should I actually do." Several practical strategies emerged:
Trade secrets are your strongest protection. Unlike copyright, trade secret protection doesn't require human authorship. If your AI-generated code provides commercial value and you take reasonable steps to keep it confidential, trade secret law applies regardless of whether a human or an AI wrote it. The catch: you need to actually treat it as a trade secret. That means access controls, NDAs, and not pushing it to a public GitHub repo.
Document the human contribution. Teams that want copyright protection should build practices that increase and document human involvement. Don't just accept raw Claude output. Refactor it. Integrate it into a human-designed architecture. The more you can demonstrate that the final code reflects human creative choices — structure, algorithm selection, optimization decisions — the stronger your copyright claim.
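One lightweight way to build that documentation habit is a standing section in every pull request description. A possible template — the wording and headings here are our own suggestion, not an established standard:

```text
## Human contribution summary
- Design decisions made by the author (architecture, algorithms, data model): ...
- AI-generated portions accepted largely verbatim: ...
- AI-generated portions substantially refactored (what changed and why): ...
```

Filled in consistently, this creates contemporaneous evidence of the human creative choices a copyright claim would rest on.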
Contractual protection fills gaps. Even if copyright doesn't apply, contracts do. Your employment agreements, client contracts, and vendor agreements can all include provisions about AI-generated code. These won't give you copyright, but they give you enforceable contractual rights against specific parties.
Watch the patent angle. Code that isn't copyrightable may still be patentable if it implements a novel and non-obvious process. Patent law has its own authorship requirements, but the USPTO has indicated that AI can be used as a tool in the inventive process as long as a human made a significant contribution to the invention.
If your team uses Claude Code, Copilot, or Cursor — and statistically, they do — you have an IP hygiene problem that your legal team may not have caught up to yet.
First, audit your AI-generated code ratio. If you're in a regulated industry or planning an acquisition or IPO, expect due diligence questions about what percentage of your codebase was AI-generated and what IP protection it carries. Having no answer is worse than having a complicated one.
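If commits carry an AI-involvement trailer like the hypothetical `AI-Assisted: yes` convention above, the audit reduces to counting trailers. A minimal sketch — the trailer name is an assumption, and in a real repository you would feed the function from `git log --format=%B%x00` split on the NUL separator:

```python
import re

# Hypothetical convention: commits touched by an AI assistant carry an
# "AI-Assisted: yes" trailer. Nothing enforces this; it only works if
# the team applies it consistently.
AI_TRAILER = re.compile(r"^AI-Assisted:\s*yes\s*$", re.IGNORECASE | re.MULTILINE)

def ai_assist_ratio(commit_messages):
    """Fraction of commit messages carrying the AI-assistance trailer."""
    if not commit_messages:
        return 0.0
    flagged = sum(1 for msg in commit_messages if AI_TRAILER.search(msg))
    return flagged / len(commit_messages)

# Inline samples standing in for real `git log` output.
sample = [
    "Add login endpoint\n\nAI-Assisted: yes",
    "Refactor session cache",
    "Fix typo in README\n\nAI-Assisted: no",
]
print(ai_assist_ratio(sample))  # 1 of 3 commits flagged
```

A per-commit count is a crude proxy — it says nothing about how many lines survived human refactoring — but it is a defensible first answer to a due-diligence question, which is better than no answer.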
Second, establish a contribution workflow that defaults to substantial human modification. This doesn't mean rejecting AI tools — it means using them as first-draft generators rather than finish-line producers. The developer who reviews, refactors, and integrates AI output is creating a stronger ownership position than one who prompts and commits.
Third, if trade secrets are your primary protection, act like it. That means internal codebases stay internal. It means access controls. It means thinking carefully before open-sourcing code that was substantially AI-generated.
Congress, the Copyright Office, and the courts are all moving on this — slowly. The Copyright Office has an open rulemaking on AI-generated works. Several bills have been introduced. But legislation takes years, and the technology is moving in months. The most likely near-term outcome is more guidance from the Copyright Office establishing a sliding scale of human involvement, not a binary own-it-or-don't test. In the meantime, the developers and teams that treat AI-generated code as legally distinct from human-written code — and build their workflows accordingly — are the ones who'll have the fewest surprises when the rules finally crystallize.
> The US Copyright Office confirmed this in January 2025, and the Supreme Court declined to disturb it in March 2026 when it turned away the Thaler appeal. Works predominantly generated by AI without meaningful human authorship are not eligible for copyright protection, and that rule is now settled.
> Personally, I think that the human directing the agent owns the copyright for whatever is produced, but the ability for the agent to build it in the first place is based off of stolen IP. I'm concerned about the copyright 'washing' this enables though, especially in OSS…
> I want this question to have an interesting answer, but everyone knows that if this question ever goes to the courts, ownership will go to the people in charge with the money. The idea that Anthropic may not own Claude Code just because Claude wrote it is wishful thinking.
> This is the same shape as the image cases. Zarya of the Dawn already settled it for Midjourney output: human-written elements were protected, AI-generated images were not. The character design didn't get copyright even though the human picked, prompted, and curated. Code isn't different…
> I find it distasteful and disturbing that copyright infringement by the people training the LLM in violation of a license is considered contamination by the licensed code. It's not contamination. The code didn't seep into your codebase. If the LLM was trained in such a way that portions of code long…