Vercel's Claude Code Plugin Was Reading Your Prompts

4 min read · 1 source · clear_take
├── "Vercel's plugin violated user trust by transmitting prompt data as telemetry without informed consent"
│  └── Akshay Chugh (akshaychugh.xyz) → read

Chugh conducted a network-level investigation showing the Vercel Claude Code plugin was making outbound requests containing prompt content to Vercel-owned endpoints. His detailed technical evidence demonstrates that user prompts were being captured and transmitted without explicit user consent, crossing a clear privacy boundary.

├── "The Claude Code plugin permission model is dangerously coarse and repeats mistakes from browser extension history"
│  └── top10.dev editorial (top10.dev) → read below

The editorial argues that Claude Code plugins operate on a binary trust model with no granular distinction between reading prompts for context and transmitting them externally. This mirrors the browser extension ecosystem's early failures: it took years of data exfiltration scandals before Chrome and Firefox implemented proper permission scoping.

└── "Growth instrumentation is being prioritized over privacy boundaries in the AI plugin land grab"
  └── top10.dev editorial (top10.dev) → read below

The editorial notes that Claude Code plugins are new and the ecosystem is in a competitive 'land grab' phase. Vercel, as a major deployment platform, has natural reasons to integrate with AI tools, but the telemetry behavior suggests that product analytics and growth metrics were prioritized over establishing proper privacy boundaries with users.

What happened

A developer named Akshay Chugh published a detailed investigation showing that Vercel's plugin for Claude Code — Anthropic's AI-powered CLI coding assistant — was capturing user prompts and transmitting them to Vercel's telemetry infrastructure. The discovery, shared via a blog post that quickly reached 247+ points on Hacker News, laid out the network-level evidence: the plugin was making outbound requests containing prompt content to Vercel-owned endpoints.

The core issue isn't that telemetry exists — it's that prompt data was included in it without explicit, informed user consent. When you install a Claude Code plugin, you're granting it access to your development context. That's expected — the plugin needs context to be useful. What's not expected is that context leaving your machine and landing on a third party's servers.

The timing matters. Claude Code plugins are relatively new, and the ecosystem is in its "land grab" phase where companies are racing to build integrations. Vercel, as a major deployment platform, has natural reasons to integrate with AI coding tools. But the telemetry behavior suggests that growth instrumentation was prioritized over privacy boundaries.

Why it matters

This story sits at the intersection of three trends that every senior developer should be tracking.

First: the plugin permission model is too coarse. Claude Code plugins currently operate on a binary trust model — you either install a plugin and give it access, or you don't. There's no granular permission that distinguishes between "this plugin can read my prompt to provide context-aware suggestions" and "this plugin can transmit my prompt contents to an external server." This is the same mistake browser extensions made a decade ago, and it took years of high-profile data exfiltration scandals before Chrome and Firefox implemented proper permission scoping.

For practitioners, this means every plugin you install is effectively a data pipeline you're opting into. Your prompts to Claude Code often contain proprietary code snippets, internal API designs, architectural decisions, and sometimes credentials or environment variable names. That's not abstract metadata — it's your intellectual property.

Second: telemetry norms in AI tooling are undefined. In traditional developer tools, telemetry typically means crash reports, feature usage flags, and anonymized performance metrics. VS Code's telemetry, for example, tracks which commands you invoke, not the contents of your files. But AI coding tools blur this boundary because the prompt IS the usage data — capturing what a user asked is simultaneously capturing what they're building. Vercel's plugin appears to have treated prompt data the same way a SaaS product treats search queries, without recognizing that the sensitivity level is categorically different.

The Hacker News discussion reflected this tension sharply. Developers who work at companies with strict IP policies noted that this kind of data leakage could violate their employment agreements. Others pointed out that many developers already send their code to Claude's API anyway — but there's a crucial difference between sending data to the AI provider you explicitly chose and having a third-party plugin silently forward that data to its own infrastructure.

Third: supply chain trust in AI tooling is the next frontier. We spent the last five years building awareness around npm supply chain attacks, dependency confusion, and compromised packages. AI coding tool plugins represent a new supply chain surface with even broader access — they don't just see your `package.json`, they see your thought process. A compromised or poorly designed plugin doesn't need to inject malicious code; it just needs to read your prompts to extract valuable intelligence about what you're building.

This isn't hypothetical. Competitive intelligence firms would pay handsomely for aggregate data on what thousands of developers are asking their AI coding assistants to help them build. Even without malicious intent, the commercial incentive to collect and analyze prompt data is enormous.

What this means for your stack

If you're using Claude Code with any third-party plugins, here's what to do right now:

Audit your installed plugins. Run network monitoring (Little Snitch on macOS, Wireshark, or even a simple `tcpdump`) while using Claude Code with plugins enabled. Look for outbound requests to domains that aren't `anthropic.com` or `claude.ai`. Any plugin making requests to its own infrastructure while you're typing prompts deserves scrutiny.
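That audit can be scripted once you have a list of observed hostnames (exported from Little Snitch, or parsed from a tcpdump/Wireshark DNS capture). A minimal sketch — the expected-suffix list is an assumption based on endpoints mentioned in this piece and the HN thread; extend it with anything you knowingly allow, such as Sentry:

```python
# Hostname suffixes Claude Code itself is expected to contact
# (assumption: based on endpoints reported by users in this story).
EXPECTED_SUFFIXES = ("anthropic.com", "claude.ai", "claude.com")

def flag_unexpected(observed):
    """Return observed hostnames matching none of the expected suffixes.
    Anything this returns while you're typing prompts deserves to be
    traced back to a specific plugin."""
    return sorted(
        host for host in observed
        if not any(host == s or host.endswith("." + s) for s in EXPECTED_SUFFIXES)
    )
```

This is a triage filter, not a verdict: a flagged hostname might be a legitimate update check, but it is the plugin author's job to document that, not yours to assume it.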

Review plugin source code. Most Claude Code plugins are open source or at least inspectable. Look for telemetry endpoints, analytics SDKs, or any code that serializes prompt content and sends it over the network. The Vercel plugin's behavior was discoverable precisely because someone bothered to look.
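A first pass over a plugin's source can be automated. The sketch below scans a plugin directory for request-building code and analytics keywords; the pattern list is illustrative rather than exhaustive, and a hit only means the surrounding code deserves a closer read, not that exfiltration is proven:

```python
import pathlib
import re

# Crude heuristics for "this code might phone home" (assumption: an
# illustrative pattern list, not a complete detector).
SUSPICIOUS = re.compile(
    r"""fetch\s*\( | XMLHttpRequest | \bhttps?:// | telemetry | analytics""",
    re.IGNORECASE | re.VERBOSE,
)

def scan_plugin(plugin_dir):
    """Yield (file, line number, line) for every suspicious-looking line."""
    for path in pathlib.Path(plugin_dir).rglob("*"):
        if path.suffix not in {".js", ".mjs", ".ts", ".py", ".sh", ".json", ".md"}:
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if SUSPICIOUS.search(line):
                yield str(path), lineno, line.strip()
```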

Establish an organizational policy. If your team uses AI coding assistants, you need a plugin allowlist. Treat Claude Code plugins with the same rigor you'd apply to a CI/CD integration that has read access to your source code — because that's functionally what they are. Most security teams haven't caught up to this reality yet.
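One lightweight way to enforce such a policy is a reviewed allowlist checked in CI or by a workstation compliance script. A sketch, with entirely hypothetical plugin names:

```python
# Team-reviewed allowlist (assumption: plugin names are hypothetical;
# maintain this file under the same review rules as CI/CD config).
ALLOWED_PLUGINS = {
    "internal-style-guide",   # vetted in a security review
    "company-deploy-helper",  # vetted in a security review
}

def check_installed(installed):
    """Raise if any installed plugin is missing from the allowlist."""
    unapproved = sorted(set(installed) - ALLOWED_PLUGINS)
    if unapproved:
        raise RuntimeError(f"unapproved Claude Code plugins: {unapproved}")
    return True
```

The point is less the three lines of set arithmetic than the process around them: additions to the allowlist go through the same review a new CI integration would.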

For Anthropic, this incident should accelerate work on plugin sandboxing and network permission scopes. A plugin should be able to declare "I need to read prompt context" without automatically gaining "I can make arbitrary network requests with that context." The technical mechanisms exist — browser extensions solved this years ago with content security policies and host permissions.
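What such a scope model could look like, sketched loosely after browser-extension host permissions. Nothing here reflects Anthropic's actual plugin API; every name and field is invented for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PluginManifest:
    # Hypothetical declarative scopes (assumption: not a real Claude Code API).
    name: str
    read_prompt: bool = False            # may see prompt text in-process
    network_hosts: tuple = ()            # hosts it may contact, if any

def may_send(manifest, host):
    """Reading prompts grants no network access by itself; each host must
    be declared up front (and could be surfaced to the user at install)."""
    return host in manifest.network_hosts
```

Under this model, a deploy-helper plugin declaring only `read_prompt=True` simply cannot ship prompt contents anywhere, and any `network_hosts` entry is visible at install time instead of discoverable only via packet capture.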

Looking ahead

Vercel will likely respond with a fix — either removing prompt data from telemetry or adding an explicit opt-in. But the structural problem remains. As the AI coding tool ecosystem grows, the number of plugins with prompt-level access will explode, and each one represents a potential data exfiltration vector. The companies building these plugin platforms — Anthropic, OpenAI, Google — need to ship permission models that make this class of problem architecturally impossible, not just policy-prohibited. Until then, developers are the last line of defense, and network monitoring is your best friend.

Hacker News · 247 pts · 95 comments

Vercel Claude Code plugin wants to read your prompt

→ read on Hacker News
embedding-shape · Hacker News

> skills are injected into sessions that have nothing to do with Vercel, Next.js, or this plugin's scope
> every skill's trigger rules get evaluated on every prompt and every tool call in every repo, regardless of whether Vercel is in scope
> For users working across multiple projec

abelsm · Hacker News

The breach of trust here, which is hard to imagine isn't intentional, is enough reason alone to stop using Vercel, and uninstall the plugin. That part is easy. Most of these agents can help you migrate if anything. The question is on whether these platforms are going to enforce their policies fo

btown · Hacker News

To be sure, the problem isn't that the plugin injects behavior into the system prompt - that's every plugin and skill, ever. But this is just such a breach of trust, especially the on-by-default telemetry that includes full bash commands. Per the OOP:

> That middle row. Every bash command

guessmyname · Hacker News

I use Little Snitch and so far I have only seen Claude Code connect to api.anthropic.com and Sentry for telemetry. I have not seen any Vercel connections, but I always turn off telemetry in everything before I run it. If you log in with OAuth2, it also connects to platform.claude.com. For auto upda

an0malous · Hacker News

That whole company is built on sketchy practices
