VS Code Slapped Copilot's Name on Your Commits — Whether You Used It or Not

5 min read · 1 source · clear_take
├── "This is data corruption of the audit trail, not just a cosmetic annoyance"
│  └── top10.dev editorial (top10.dev) → read below

The editorial argues that git commits are forensic records, not marketing collateral. In regulated industries, commit metadata is part of the compliance audit trail answering 'who wrote this code and when' — injecting a false co-author constitutes data corruption that undermines security reviews and legal discovery.

├── "This reflects how normalized AI-washing has become inside Microsoft's developer tools organization"
│  └── top10.dev editorial (top10.dev) → read below

The editorial highlights that the approving developer framed false attribution as 'functionality that some customers expect,' revealing how AI credit inflation has become an unquestioned norm within Microsoft. The fact that this passed review without anyone flagging the attribution integrity problem shows a systemic cultural blind spot.

├── "The change was an honest mistake in review process, not a malicious corporate strategy"
│  └── @Dmitriy Ivanov (GitHub PR #310226) → view

Ivanov publicly apologized on the PR thread, stating there was 'no ill intent by evil corporation' but rather a desire to support functionality some customers expect. He acknowledged the mistake of turning the feature on by default without sufficient upfront validation.

└── "The review process failed so badly that even the AI tool showed better judgment than the humans"
  ├── top10.dev editorial (top10.dev) → read below

The editorial notes that GitHub Copilot itself reviewed the PR and flagged problems — noting behavioral inconsistency and suggesting a revert — but this feedback was ignored. This ironic situation demonstrates a structural failure in Microsoft's code review process where automated guardrails were overridden by a single approver.

  └── @indrora (Hacker News, 1286 pts) → view

By surfacing this PR to Hacker News where it received 1286 points and 672 comments, indrora highlighted the community outrage over VS Code inserting Co-Authored-By Copilot into commits regardless of whether Copilot was actually used — framing it as a systemic trust violation in developer tooling.

What Happened

A pull request merged into Visual Studio Code's main branch changed a single default value — and ignited one of the angriest developer backlashes of 2026. [PR #310226](https://github.com/microsoft/vscode/pull/310226) flipped the `github.copilot.chat.commitMessageGeneration.instructions` configuration so that every commit made through VS Code would automatically include a `Co-Authored-By: GitHub Copilot` trailer in the commit message. Not just commits where Copilot generated or suggested code. Every commit. Even if Copilot was never invoked, never installed, never wanted.
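For illustration, a commit message carrying the injected trailer would look roughly like the sketch below. The subject line is invented, and the email portion of the trailer is omitted because the PR's exact value isn't quoted here; only the `Co-Authored-By: GitHub Copilot` key is attested above.

```text
fix: correct off-by-one in pagination

Co-Authored-By: GitHub Copilot
```

Trailers like this follow the `Key: Value` convention git recognizes at the end of a commit message, which is exactly why downstream tooling parses them as structured metadata rather than prose.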

The change was approved by a single Microsoft developer, Dmitriy Ivanov, who has since posted a public apology on the PR thread: "I am the person who approved this PR and would like to acknowledge and apologize for the mistake of turning this feature on by default without sufficient upfront validation. There was no ill intent by evil corporation, but rather a desire to support functionality that some customers expect of VS Code." The framing of false attribution as 'functionality that some customers expect' reveals how normalized AI-washing has become inside Microsoft's developer tools org.

In a moment of almost poetic irony, GitHub Copilot itself reviewed the PR and flagged problems with it — noting that the change didn't actually modify the runtime behavior consistently, created inconsistency in the codebase, and suggested reverting. That feedback was ignored. When your own AI tool has better editorial judgment than your review process, something has gone structurally wrong.

Why It Matters

Git commits are not marketing collateral. They are the forensic record of a codebase. In regulated industries, commit metadata is part of the audit trail — it answers the question 'who wrote this code and when' for compliance, security reviews, and legal discovery. Injecting a false co-author into that record isn't a cosmetic annoyance; it's data corruption.

Consider the downstream effects. A security team investigating a vulnerability traces it to a commit with a Copilot co-author tag. They now have to determine: did AI actually generate this code? Is this a known class of AI-introduced bug? Should they scan for similar patterns across all Copilot-tagged commits? Except none of that analysis is valid, because the tag was a lie. The developer wrote every character by hand. The metadata just says otherwise because Microsoft wanted to inflate its AI engagement numbers.

As HN commenter yankohr put it: "This feels like the modern version of 'Sent from my iPhone' but much more invasive. Git commits are legal and technical records. Falsifying who authored a piece of code just to pump up AI usage stats is a huge breach of trust." The iPhone signature comparison is apt but understates the problem. Email signatures are visually obvious noise that everyone learned to ignore. Commit trailers are machine-parsed metadata that tools, scripts, and compliance systems actively consume.

The community reaction has been unusually unified in its anger. Veteran developer artyom offered the longer view: "To everyone who bought the 'developer-friendly' Microsoft of VSCode fame from a few years ago: this is what they forever did, and forever will do. This company has been pulling these tricks since the early 90s." Whether or not you share that level of cynicism, the pattern is hard to ignore. Microsoft has now established a clear precedent: default-on attribution for AI tools, opt-out for developers who notice — a dark pattern borrowed from adtech and applied to version control.

The Consent Problem

This incident sits at the intersection of two larger trends that should concern every engineering organization.

First, the erosion of meaningful defaults. Software defaults are the most powerful design decision a platform makes — research consistently shows that the vast majority of users never change them. When VS Code ships with Copilot co-authorship turned on by default, they're not offering a feature. They're manufacturing consent. The developer who doesn't read every settings changelog (which is most developers) now has false attribution in their commit history. Opt-out consent for commit attribution is fundamentally incompatible with the trust model that version control is built on.

Second, the AI metrics arms race. Every major tech company is under intense pressure to show AI adoption numbers to investors and boards. GitHub reports Copilot usage statistics. Those statistics look better when every VS Code commit carries a Copilot tag. The incentive structure here is straightforward and ugly: the people deciding defaults have a business interest in inflating the very metrics those defaults affect.

There's also a subtler technical problem that the Copilot review bot itself caught. The configuration schema default was changed to `"all"`, but the actual runtime behavior wasn't consistently updated across all code paths. This means some users might see the co-author tag and others might not, depending on which commit flow they use — creating a situation where the attribution is not just false but *randomly* false. Debugging inconsistent metadata is strictly worse than debugging consistently wrong metadata.

What This Means for Your Stack

If you're running VS Code (and statistically, you probably are — it holds roughly 70% IDE market share among professional developers), here's what to do right now:

1. Check the setting. Look for `github.copilot.chat.commitMessageGeneration.instructions` in your VS Code settings. If it's set to `"all"`, change it to `"none"` or `"active"` depending on whether you actually want Copilot attribution when you genuinely use it.

2. Audit recent commits. Run `git log -50 --format='%H %an %B' | grep -i 'co-authored-by'` and check for unexpected `Co-Authored-By` trailers in your recent work (`%B` prints the full commit body, where trailers live; a subject-only format like `%s` would miss them). If you find false attributions, consider an interactive rebase to clean them — especially if your organization has compliance requirements around commit metadata.

3. Set organizational policy. If you manage a team, this is a good moment to establish explicit guidelines about AI attribution in commits. Some organizations want accurate AI co-authorship tracked; others want none. Either is fine. What's not fine is letting a vendor decide for you via a silent default change.
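As a sketch of step 1, the opt-out would land in your user `settings.json` roughly as follows. This is hedged: the setting name and the `"all"`/`"none"`/`"active"` values are taken from the article and PR discussion above, so verify the accepted shape against your VS Code version's settings UI before relying on it.

```json
{
  // Assumed value shape per the PR discussion above;
  // "none" disables automatic Copilot co-author trailers.
  "github.copilot.chat.commitMessageGeneration.instructions": "none"
}
```

VS Code's `settings.json` accepts JSONC, so the comment above is legal in the actual file.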

For teams in regulated environments — finance, healthcare, defense — false commit attribution isn't just an annoyance, it's a potential compliance violation that auditors will eventually flag. Talk to your compliance team now, before they find Copilot's name on commits that touched PII handling or cryptographic implementations.
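The audit from step 2 can be wrapped into a small script. This is a minimal sketch, not an official tool: the demo repository it builds is synthetic (drop the setup block and run the final commands inside your real repo), the co-author email address is a placeholder, and the scan simply relies on `git log --grep` searching the full commit message, trailers included.

```shell
#!/bin/sh
# Sketch: count commits carrying a Copilot co-author trailer.
set -e

# --- Demo setup (synthetic; remove to run against a real repository) ---
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name dev
git config user.email dev@example.com

# One commit written by hand, one with a falsely injected trailer.
# The email below is a placeholder, not the value VS Code inserts.
git commit -q --allow-empty -m "fix: handle nil pointer"
git commit -q --allow-empty -m "chore: bump deps" \
    -m "Co-Authored-By: GitHub Copilot <copilot@example.com>"

# --- The audit: --grep matches the full message, trailers included ---
git log -200 --regexp-ignore-case \
    --grep='co-authored-by: github copilot' --format='%h %an %s'
flagged=$(git log -200 --regexp-ignore-case \
    --grep='co-authored-by: github copilot' --format='%H' | wc -l)
echo "flagged commits: $flagged"
```

For a compliance sweep you would widen `-200` to full history and feed the flagged hashes to whoever owns the audit trail, rather than rewriting history unilaterally.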

Looking Ahead

The apology from the PR approver suggests this specific default will be reverted, and the backlash may accelerate that timeline. But the underlying tension isn't going anywhere. Every AI-integrated tool faces the same temptation: attribute more to AI than AI actually did, because the business model depends on proving AI's value. Developers need to treat AI attribution settings the same way they treat telemetry settings — as something that defaults to the vendor's interest, not yours, and should be explicitly configured on day one. The era of trusting your editor's defaults ended the moment your editor started taking credit for your work.

Hacker News · 1394 pts · 760 comments

VS Code inserting 'Co-Authored-by Copilot' into commits regardless of usage

→ read on Hacker News
rsynnott · Hacker News

One fascinating thing about the whole AI phenomenon is how incredibly hostile it is to _standards_. Whether something works properly, or is ethical, or is true, no longer matters at all; all that matters is "pls use our AI". Microsoft spent literal decades rehabilitating their reputation…

yankohr · Hacker News

This feels like the modern version of 'Sent from my iPhone' but much more invasive. Git commits are legal and technical records. Falsifying who authored a piece of code just to pump up AI usage stats is a huge breach of trust and it is disappointing to see Microsoft prioritize branding…

dmitriv · Hacker News

I am the person who approved this PR and would like to acknowledge and apologize for the mistake of turning this feature on by default without sufficient upfront validation. There was no ill intent by evil corporation, but rather a desire to support functionality that some customers expect of VS Code.

starkeeper · Hacker News

They do not care about customers because they are the customer and users are hostages. They only care about hostage count and other shitty metrics.

ddkto · Hacker News

The best part is that copilot commented on the PR saying that this doesn't actually change the behaviour, creates inconsistency in the codebase and suggested reverting the change! (This comment seems to have been ignored…)

> The configuration schema default was changed to "all", but the…
