The editorial argues the incident has three layers, each worse than the last. The anti-abuse system worked as designed: it detected a string and reclassified billing. The real problem is the architecture that silently reroutes paying customers from plan quota to metered billing based on content matching, not just the specific string on the blocklist.
Alexander filed a meticulously documented bug report demonstrating that, despite having 87% of his weekly Max 20x quota remaining, his sessions were being charged to extra usage credits. Through a systematic binary search he isolated the exact trigger, the case-sensitive string 'HERMES.md' in git commit history, showing the billing reclassification was content-driven and invisible to the user.
Alexander burned through $200.98 in extra usage credits while his plan quota sat nearly untouched at 87% remaining. The charges were caused entirely by Anthropic's server-side content filter misrouting his requests, yet the company refused to issue a refund for what was unambiguously their error.
The story surfaced to the broader developer community on Hacker News, framed as Anthropic both causing the erroneous charges and then refusing to make the affected user whole, and drew significant attention with 630 points and 242 comments.
The editorial emphasizes that Claude Code includes recent git commit messages in its system prompt construction, meaning the anti-abuse system is making billing decisions based on inspecting user code and repository history. The file 'HERMES.md' didn't even need to exist — just appearing anywhere in git history was enough — revealing how deeply the tool scans user content with no transparency about what triggers reclassification.
Through exhaustive binary-search debugging, Alexander demonstrated that only the exact case-sensitive string 'HERMES.md' triggered the reclassification — not lowercase, not different extensions, not the word alone. This surgical specificity strongly suggests an internal Anthropic project name or benchmark reference was inadvertently added to a production abuse-detection blocklist.
On April 25, a Claude Code user named Alexander (GitHub: sasha-id) filed [issue #53262](https://github.com/anthropics/claude-code/issues/53262) with a meticulously documented bug report. Despite being on Anthropic's Max 20x plan ($200/month) with 87% of his weekly quota still available, his Claude Code sessions were being charged against extra usage credits — the pay-as-you-go pool that kicks in *after* plan limits are exhausted. He burned through $200.98 before realizing something was wrong.
The root cause, uncovered through what Alexander described as a "systematic binary search" across repos and branches, was almost absurdly specific: if any git commit message in your repository's history contained the case-sensitive string `HERMES.md`, Anthropic's server-side anti-abuse system would flag the request and route it to metered billing instead of your plan quota. Lowercase `hermes.md` didn't trigger it. `HERMES.txt` didn't trigger it. `HERMES` alone didn't trigger it. Only the exact string `HERMES.md` — possibly an internal Anthropic project name or benchmark reference that leaked into their content filter.
The file didn't even need to exist. The string just had to appear somewhere in your git history, because Claude Code includes recent commit messages in its system prompt construction.
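To make the mechanism concrete, here is a minimal sketch of how a coding agent might fold recent commit messages into its prompt context. This is an illustration of the general pattern only, not Anthropic's actual implementation; the function names and the 20-commit limit are assumptions.

```python
import subprocess

def recent_commit_messages(repo_path: str, limit: int = 20) -> list[str]:
    """Return the most recent commit subjects from a repository's history."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", f"-n{limit}", "--format=%s"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()

def build_context(repo_path: str) -> str:
    """Assemble prompt context that includes repository history.

    Any string that appears in a commit message, such as 'HERMES.md',
    ends up in the request payload, where server-side filters can see it,
    whether or not a file by that name exists in the working tree.
    """
    messages = recent_commit_messages(repo_path)
    return "Recent commits:\n" + "\n".join(f"- {m}" for m in messages)
```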
This story has three layers, each worse than the last.
Layer 1: Content-based billing reclassification. Claude Code is inspecting the content of your codebase — specifically your git commit messages — and making billing decisions based on what it finds. This isn't a bug in the traditional sense of "code did something unintended." The anti-abuse system worked exactly as designed: it detected a string it considered suspicious, and it reclassified the billing tier. The bug was in what strings were on the blocklist. The architecture that silently reroutes paying customers from plan quota to metered billing based on content matching — that's working as intended, and that should concern every Claude Code user.
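To show why that architecture is worrying, here is a deliberately simplified sketch of what content-matching billing reclassification could look like. The blocklist entry, function names, and pool names are assumptions for illustration; Anthropic has not disclosed the actual server-side logic.

```python
# Hypothetical illustration of content-based billing reclassification.
# None of these names or rules come from Anthropic; they only show the
# pattern the editorial criticizes: a match anywhere in request content
# silently changes which pool the request is billed against.

SUSPICIOUS_STRINGS = {"HERMES.md"}  # assumed blocklist entry, case-sensitive

def billing_pool(request_content: str, plan_quota_remaining: bool) -> str:
    """Decide which pool a request is charged to."""
    flagged = any(s in request_content for s in SUSPICIOUS_STRINGS)
    if flagged:
        return "extra_usage_credits"  # metered, pay-as-you-go
    return "plan_quota" if plan_quota_remaining else "extra_usage_credits"

# A request whose context contains a matching commit message is metered
# even though most of the plan quota is still available:
print(billing_pool("Recent commits:\n- add HERMES.md notes",
                   plan_quota_remaining=True))
# -> extra_usage_credits
```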
Layer 2: Zero user visibility. There was no warning, no notification, no flag in the UI. Alexander's projects simply became "completely unusable" once his extra usage credits were exhausted. The error message — "You're out of extra usage" — gave no indication that content-based routing was the cause. If he hadn't been technically curious enough to perform a binary search across repos, this would have looked like a normal overage.
Layer 3: The refund refusal. Within six minutes of filing the issue, Alexander posted a screenshot of Anthropic's support response. It read, in part: *"I sincerely apologize for the disruption you experienced with the billing routing issue... However, I need to let you know that we are unable to issue compensation for degraded service or technical errors that result in incorrect billing routing."*
Read that again. Anthropic has an explicit policy against refunding customers for billing errors caused by their own technical failures. The support agent acknowledged the fault and refused compensation in the same breath. This isn't a rogue support rep — it's a policy.
The community response was swift and unified. Alexander's follow-up comment — "Can I get my refund now" — collected 334 thumbs-up reactions on GitHub. One commenter noted that refunding $200 would be "the easiest PR win in the world." Another pointed out that three duplicate issues (#53171, #45020, #29704) were linked, suggesting more users hit the same wall. As of this writing, Anthropic has not responded to the refund request.
The deeper question here isn't about $200. It's about what happens when AI providers build content-inspection systems that affect billing.
Claude Code operates inside your development environment. It reads your files, your git history, your terminal output. That's the value proposition — it needs context to be useful. But when the same infrastructure that reads your code also decides how much you pay for it, you've created a system where the content of your work determines your bill, with no transparency about the rules.
We don't know what other strings trigger Anthropic's anti-abuse filter. We don't know how many users have been silently reclassified. The speculation in the GitHub thread — that this could have "collectively cost Anthropic's customers millions" — is unverifiable, but the architecture makes it plausible. Any user whose git history happened to contain `HERMES.md` was being silently overcharged, with no mechanism to detect or dispute it until credits ran out.
Anthropic collaborator Boris Cherny closed the issue roughly 10 hours after it was filed with a one-line response: *"Thanks for the report! This was an overactive anti-abuse system. Fixed."* The speed of the fix suggests the team understood the problem immediately. The brevity of the response suggests they'd rather not discuss the architecture behind it.
If you're using Claude Code (or any AI coding tool that operates inside your dev environment), here's what to take away:
Audit your billing. Check whether your actual usage matches your plan's included quota. If you're on a Max plan and seeing extra usage charges, investigate whether specific repos trigger different billing behavior. Alexander's binary search methodology — testing orphan branches, isolating commit messages — is a template for debugging this class of problem.
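As a concrete starting point, here is a small sketch in the spirit of that methodology: it builds throwaway repositories whose only commit message contains a candidate string, so you can run your agent in each and compare which billing pool gets charged. The strings to test and the repo layout are your own choices; nothing here reflects Anthropic's internals.

```python
import pathlib
import subprocess
import tempfile

def make_probe_repo(commit_message: str) -> str:
    """Create a throwaway repo whose only commit carries the candidate string.

    Run the coding agent inside the returned directory, then compare plan
    quota vs. extra-usage charges before and after the session.
    """
    root = pathlib.Path(tempfile.mkdtemp(prefix="billing-probe-"))

    def git(*args: str) -> None:
        subprocess.run(["git", "-C", str(root), *args], check=True)

    git("init", "-q")
    git("config", "user.email", "probe@example.com")  # local config so commit succeeds
    git("config", "user.name", "billing probe")
    (root / "README.md").write_text("billing probe\n")
    git("add", "README.md")
    git("commit", "-q", "-m", commit_message)
    return str(root)

# One repo with the suspect string in its history, one control without it:
suspect = make_probe_repo("chore: add HERMES.md")
control = make_probe_repo("chore: add notes")
print(suspect, control)
```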
Watch for silent reclassification. The broader pattern here applies to every AI API provider: if the service inspects your content and makes server-side decisions you can't observe, you have no way to verify you're being billed correctly. This is different from traditional cloud billing, where you can at least see which resources are running. With content-based routing, the billing logic is a black box.
Demand transparency on anti-abuse systems. If a provider's anti-abuse filter can affect your billing tier, the filter rules (or at least their scope) should be documented. "We have an anti-abuse system" is not sufficient disclosure when that system can silently multiply your costs.
For teams evaluating Claude Code for production use, the refund policy is arguably a bigger red flag than the bug itself. Bugs happen. An explicit policy against refunding billing errors caused by your own infrastructure failures is a choice — and it tells you something about how disputes will be resolved when the stakes are higher than $200.
Anthropic is moving fast to build Claude Code into a primary development tool, with pricing tiers designed for heavy daily use. That ambition requires trust — trust that the billing is accurate, that the tool isn't making hidden decisions about your usage, and that mistakes will be made right. A $200 refund is trivial for a company valued at $61.5 billion. The precedent of refusing it is not. The three linked duplicate issues suggest this conversation isn't over, and the 630-point Hacker News thread ensures it won't be forgotten quickly.
Hey everyone, Thariq from the Claude Code team. We've been on this since the bug surfaced. Everyone affected is getting a full refund and an extra grant of usage credits equal to their monthly subscription as our apology. You can see my original post here: https://x.com/trq212
"I need to let you know that we are unable to issue compensation for degraded service or technical errors that result in incorrect billing routing."Not sure I've ever seen a company openly take this position. This is a crazy policy.
I recently had my automatic reload double charge me $100. I tried reaching out to Anthropic, but my only option (of course) was a chat agent. After going through a conversation with it, I was told someone would reach out to help with the matter. Never happened. I eventually reached out to my credit-
https://x.com/trq212/status/2048495545375990245

He is getting a refund along with an additional $200 credit from what I can see.
> However, I need to let you know that we are unable to issue compensation for degraded service or *technical errors* that result in incorrect billing routing.

This is very surprising. I've never seen a legitimate business not give refunds for technical errors of their own fault. Minimum Anth