Vercel's Claude Code Plugin Reads Your Prompts. That's a Problem.

5 min read · 1 source · clear_take
├── "AI coding tool plugins reading prompt content is a serious privacy violation that differs fundamentally from normal telemetry"
│  └── Akshay Chugh (Personal Blog) → read

Chugh's investigation found that Vercel's Claude Code plugin transmits actual user prompt content through its telemetry pipeline, not just usage metrics or error rates. He provided network-level evidence showing the plugin ingests the full content developers type into Claude, including architecture details, debugging sessions, and unreleased product information.

├── "The consent-at-install permission model for AI tool plugins is structurally broken and needs redesign"
│  └── top10.dev editorial (top10.dev) → read below

The editorial argues this is not a Vercel-specific problem but a systemic flaw in how AI coding tool extensions handle permissions. The consent-at-install model grants broad access with zero ongoing visibility, and developers click through permission dialogs without scrutiny — creating a structural vulnerability that will worsen as the plugin ecosystem grows.

└── "Prompt content occupies a uniquely sensitive category that companies must treat differently from conventional telemetry"
  └── Hacker News community (Hacker News, 247 pts)

The HN discussion (247 points, 95 comments) reacted swiftly and largely negatively — not because telemetry itself is unusual, but because prompt content contains proprietary code, architectural decisions, and security vulnerabilities. The community consensus distinguished this from standard click-stream or feature-flag telemetry, treating it as a fundamentally different sensitivity tier.

What Happened

Developer Akshay Chugh published a detailed investigation into the telemetry behavior of Vercel's Claude Code plugin, and the findings are uncomfortable. The plugin, which integrates Vercel's deployment and development tools into Claude Code's agentic workflow, was found to be reading user prompts — not just tracking feature usage or error rates, but ingesting the actual content developers type into their AI coding assistant.

The discovery came from inspecting the plugin's behavior at the network and permission level. When installed, the plugin requests access to conversation context — a permission scope that sounds innocuous but functionally means the plugin can see everything you ask Claude to do: refactoring requests that reveal your architecture, debugging sessions that expose your stack's weaknesses, and planning conversations that contain unreleased product details.

The blog post, which reached 247 points on Hacker News, included network-level evidence of prompt content being transmitted as part of Vercel's telemetry pipeline. The community response was swift and largely negative — not because telemetry itself is unusual, but because prompt content sits in a fundamentally different sensitivity category than click-stream data or feature flags.

Why It Matters

This is not a Vercel-specific problem. It's a structural issue with how AI coding tool plugins handle permissions, and it's going to get worse before it gets better.

Claude Code's plugin architecture, like most AI tool extension systems, operates on a consent-at-install model — you approve a set of permissions when you add the plugin, and from that point forward, the plugin operates within those bounds with zero ongoing visibility. Most developers click through these permission dialogs the same way they accept cookie banners: quickly, without reading, because the friction cost of scrutinizing every plugin outweighs the perceived risk.

But the risk calculus for AI coding plugins is materially different from browser extensions or IDE themes. Your prompts to an AI coding assistant are, in aggregate, a remarkably complete map of your codebase, your technical decisions, and your development priorities. A single prompt might contain a database schema. A thread might walk through your authentication flow. A planning session might outline features your competitors don't know about yet.

Vercel's position is likely that this data collection improves their product — understanding how developers use the plugin, what workflows they're building, and where the integration fails. That's a legitimate engineering goal. But the gap between "we collect anonymous usage metrics" and "we read your prompts" is the gap between knowing someone visited a hospital and knowing their diagnosis. The sensitivity of the data demands a different consent model.

The Hacker News discussion surfaced a recurring theme: developers who had assumed that "telemetry" in AI tools meant the same thing it means in traditional software — event counts, latency percentiles, feature adoption rates. The realization that prompt content could be included in that umbrella was a genuine surprise to many, including experienced engineers who should have known better.

This is partly a vocabulary problem. The industry has trained developers to think of "telemetry" as low-sensitivity operational data. AI tool vendors are using the same word to describe a fundamentally different kind of data collection. Whether that's intentional misdirection or just sloppy terminology, the effect is the same: developers are consenting to something they don't fully understand.

The Plugin Permission Model Is Broken

The deeper issue is that AI coding tool ecosystems haven't developed the permission granularity that the threat model requires. Compare this to mobile app permissions, which evolved from "this app needs access to everything" to fine-grained, runtime-prompted, revocable capabilities. AI coding plugins are still in the "access to everything" era.

A well-designed permission model for Claude Code plugins would distinguish between:

- Tool invocation context — knowing *that* a plugin was used and which tool was called
- Input/output content — the actual prompts and responses flowing through the plugin
- Codebase access — what files and directories the plugin can read
- Network transmission — what data leaves the local machine and where it goes

Right now, most AI coding tool plugin systems collapse these into one or two broad permission scopes. A plugin that needs to call `vercel deploy` shouldn't need to read your conversation about why you're restructuring your microservices architecture. But the current permission model doesn't make that distinction.
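To make the distinction concrete, here is a minimal sketch of what a least-privilege manifest check could look like. The scope names and `PluginManifest` shape are hypothetical illustrations of the four categories above, not part of Claude Code's actual plugin API:

```typescript
// Hypothetical permission scopes mirroring the four categories above.
// These names are illustrative; no real plugin API is assumed.
type PermissionScope =
  | "tool-invocation"   // that a plugin ran, and which tool it called
  | "conversation"      // full prompt and response content
  | "codebase-read"     // files and directories the plugin can read
  | "network-egress";   // data leaving the local machine

interface PluginManifest {
  name: string;
  scopes: PermissionScope[];
}

// Return any scopes the plugin requests beyond what its function requires.
function excessiveScopes(
  manifest: PluginManifest,
  needed: PermissionScope[],
): PermissionScope[] {
  return manifest.scopes.filter((s) => !needed.includes(s));
}

// A deploy plugin only needs to invoke tools and reach the network.
const deployPlugin: PluginManifest = {
  name: "vercel-deploy",
  scopes: ["tool-invocation", "conversation", "network-egress"],
};

// Flags "conversation" as an over-grant for a deploy-only plugin.
console.log(excessiveScopes(deployPlugin, ["tool-invocation", "network-egress"]));
```

Under a model like this, the over-grant is machine-detectable at install time instead of requiring a manual source audit.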

This is a tractable engineering problem. Anthropic, and every other company building AI coding tool platforms, needs to implement least-privilege permission models before the plugin ecosystem scales further. The alternative is a landscape where every plugin is a potential data exfiltration vector, and the only defense is manually auditing source code — which doesn't scale and which most developers won't do.

What This Means for Your Stack

If you're using Claude Code (or any AI coding assistant) with third-party plugins in a professional context, here's what to do now:

Audit your installed plugins. Check what permissions each plugin has requested and whether those permissions are proportional to the plugin's functionality. If a deployment plugin has access to your full conversation context, that's a red flag.

Assume your prompts are not private. Until AI coding tool platforms implement robust, granular permission models with clear data handling policies, treat every prompt you type as potentially visible to every installed plugin's vendor. This means: don't paste credentials into prompts (use environment variables and reference them), be cautious about discussing unreleased features in detail, and consider using separate Claude Code configurations for sensitive vs. routine work.
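One way to enforce the "reference environment variables, never paste values" habit is to scrub known secret values from a prompt before it leaves your machine. This is a hypothetical helper, not a feature of any existing tool; the variable names and the `scrubPrompt` function are illustrative:

```typescript
// Hypothetical pre-flight scrubber: replaces literal secret values found in a
// prompt with a reference to the environment variable that holds them.
// The variable names below are examples, not an exhaustive list.
const SECRET_ENV_VARS = ["DATABASE_URL", "STRIPE_SECRET_KEY", "AWS_SECRET_ACCESS_KEY"];

function scrubPrompt(
  prompt: string,
  env: Record<string, string | undefined> = process.env,
): string {
  let scrubbed = prompt;
  for (const name of SECRET_ENV_VARS) {
    const value = env[name];
    if (value) {
      // split/join replaces every occurrence of the literal secret.
      scrubbed = scrubbed.split(value).join(`$${name}`);
    }
  }
  return scrubbed;
}

const env = { DATABASE_URL: "postgres://admin:hunter2@db.internal:5432/prod" };
console.log(
  scrubPrompt("Why does connecting to postgres://admin:hunter2@db.internal:5432/prod time out?", env),
);
// → Why does connecting to $DATABASE_URL time out?
```

The assistant can still reason about `$DATABASE_URL` as a reference, but the credential itself never enters the conversation context that plugins can see.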

Watch for opt-out telemetry settings. Many plugins and tools bury telemetry opt-outs in configuration files or environment variables. Vercel's own CLI, for example, has a `VERCEL_TELEMETRY_DISABLED` flag. Check whether equivalent settings exist for the Claude Code plugin and enable them.
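A quick environment audit can catch flags you haven't set. Of the variables below, only `VERCEL_TELEMETRY_DISABLED` comes from this story; `NEXT_TELEMETRY_DISABLED` and the generic `DO_NOT_TRACK` convention are common in the ecosystem but are no guarantee for any particular plugin:

```typescript
// Common telemetry opt-out variables. VERCEL_TELEMETRY_DISABLED is cited in
// the post; the others are ecosystem conventions, included as examples.
const OPT_OUT_FLAGS = [
  "VERCEL_TELEMETRY_DISABLED", // Vercel CLI
  "NEXT_TELEMETRY_DISABLED",   // Next.js
  "DO_NOT_TRACK",              // generic convention honored by some CLIs
];

// Return the opt-out flags not currently enabled in the given environment.
function missingOptOuts(env: Record<string, string | undefined>): string[] {
  return OPT_OUT_FLAGS.filter((flag) => env[flag] !== "1");
}

console.log(missingOptOuts({ VERCEL_TELEMETRY_DISABLED: "1" }));
// → ["NEXT_TELEMETRY_DISABLED", "DO_NOT_TRACK"]
```

Running a check like this in your shell profile or CI makes the opt-outs a default rather than something each developer must remember.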

For teams working on proprietary codebases, the safest posture right now is to maintain an allowlist of approved plugins and treat any plugin that requests conversation-level access as requiring security team review. This adds friction, but the alternative — uncontrolled exfiltration of your development conversations to third-party servers — is worse.
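The allowlist-plus-review posture can be sketched as a simple policy gate. Everything here is hypothetical, assuming the illustrative scope strings from earlier; it is a sketch of the policy, not an integration with any real install flow:

```typescript
// Hypothetical install-time policy gate: conversation-level access always
// routes to security review; everything else is checked against an allowlist.
interface PluginRequest {
  name: string;
  scopes: string[];
}

type Verdict = "approved" | "needs-security-review" | "denied";

function review(req: PluginRequest, allowlist: Set<string>): Verdict {
  // Any request for prompt/response content requires a human decision.
  if (req.scopes.includes("conversation")) return "needs-security-review";
  return allowlist.has(req.name) ? "approved" : "denied";
}

const allowlist = new Set(["github-actions", "vercel-deploy"]);

console.log(review({ name: "vercel-deploy", scopes: ["tool-invocation", "conversation"] }, allowlist));
// → needs-security-review
console.log(review({ name: "vercel-deploy", scopes: ["tool-invocation"] }, allowlist));
// → approved
```

Note that the same plugin gets different verdicts depending on what it asks for: the gate is on the scope, not the vendor.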

Looking Ahead

This incident is the canary for a much larger challenge. As AI coding assistants become the primary interface for software development, the plugins that extend them will become as critical — and as dangerous — as the npm packages in your dependency tree. We spent a decade learning (painfully) that `npm install` is a trust decision. We're about to learn the same lesson about `claude plugin install`, and the stakes are higher because the data surface is richer. The companies building these platforms have a narrow window to get the permission model right before a serious breach forces the conversation.

Hacker News 247 pts 95 comments

Vercel Claude Code plugin wants to read your prompt

→ read on Hacker News
embedding-shape · Hacker News

> skills are injected into sessions that have nothing to do with Vercel, Next.js, or this plugin's scope
> every skill's trigger rules get evaluated on every prompt and every tool call in every repo, regardless of whether Vercel is in scope
> For users working across multiple projec

abelsm · Hacker News

The breach of trust here, which is hard to imagine isn't intentional, is enough reason alone to stop using Vercel, and uninstall the plugin. That part is easy. Most of these agents can help you migrate if anything. The question is on whether these platforms are going to enforce their policies fo

btown · Hacker News

To be sure, the problem isn't that the plugin injects behavior into the system prompt - that's every plugin and skill, ever. But this is just such a breach of trust, especially the on-by-default telemetry that includes full bash commands. Per the OOP:
> That middle row. Every bash command

guessmyname · Hacker News

I use Little Snitch and so far I have only seen Claude Code connect to api.anthropic.com and Sentry for telemetry. I have not seen any Vercel connections, but I always turn off telemetry in everything before I run it. If you log in with OAuth2, it also connects to platform.claude.com. For auto upda

an0malous · Hacker News

That whole company is built on sketchy practices

