AI Broke the Social Contract Behind Vulnerability Disclosure

5 min read 1 source multiple_viewpoints
├── "AI breaks coordinated disclosure by flooding vendors with reports faster than they can fix them"
│  └── Jeff Kaufman (jefftk.com) → read

Kaufman argues that coordinated disclosure worked when vulnerability discovery was expensive and slow, creating a manageable inbound rate. AI tools have dropped the cost of finding vulnerabilities by orders of magnitude while the cost of fixing them remains unchanged, creating an unsustainable flooding problem for vendors.

├── "Full disclosure's pressure model loses its logic when AI makes independent rediscovery near-certain"
│  └── Jeff Kaufman (jefftk.com) → read

Kaufman contends that full disclosure historically justified immediate publication because secrecy benefited attackers who might have already found the same bug independently. When AI makes independent rediscovery trivially easy and near-certain, publishing immediately no longer provides a meaningful information advantage to defenders — attackers with AI tools will find the same bugs regardless.

└── "Both vulnerability cultures rest on a now-broken assumption that discovery requires expensive human expertise"
  └── Jeff Kaufman (jefftk.com) → read

Kaufman's central thesis is that coordinated and full disclosure, despite their disagreements, share a foundational assumption: that finding vulnerabilities is slow, costly, and demands specialized skill. AI has invalidated this premise entirely, meaning both cultures need fundamental rethinking rather than incremental adjustment.

What Happened

Jeff Kaufman — a senior engineer with deep roots in infrastructure and security thinking — published an essay arguing that AI is simultaneously undermining the two dominant cultures around software vulnerability handling. The piece climbed past 400 points on Hacker News, hitting a nerve with practitioners who've watched AI tooling reshape the vulnerability landscape in real time.

The two cultures in question have coexisted (sometimes uneasily) for decades. Coordinated disclosure — the dominant professional norm — says you find a bug, report it privately to the vendor, and give them a window (typically 90 days, popularized by Google's Project Zero) to ship a fix before you go public. Full disclosure says you publish immediately, because vendors won't prioritize fixes without public pressure, and because secrecy benefits attackers who may have already found the same bug independently.

Both cultures share an implicit assumption: vulnerability discovery is expensive, slow, and requires specialized human expertise. That assumption is now wrong.

Why It Matters

### The Coordinated Disclosure Model Is Drowning

Coordinated disclosure worked when the inbound rate of vulnerability reports was manageable. A vendor might handle dozens or hundreds of reports per quarter from a relatively small community of security researchers with established relationships and reputations.

AI-assisted vulnerability discovery changes the math entirely. Tools like large language models can scan codebases, identify common vulnerability patterns (buffer overflows, injection flaws, authentication bypasses), and generate proof-of-concept exploits with minimal human guidance. The cost of finding a vulnerability has dropped by orders of magnitude, but the cost of fixing one hasn't changed at all.
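To make the cost collapse concrete, here's a rough sketch of what a minimal LLM-assisted scanner looks like. It assumes an OpenAI-compatible client with an API key in the environment; the model name, prompt, and target directory are placeholders, not anything from Kaufman's essay.

```python
# Hypothetical sketch: point an LLM at every C file in a repo and ask it
# to flag common vulnerability patterns. Illustrative only.
import pathlib

from openai import OpenAI  # assumes an API key is configured in the environment

client = OpenAI()

PROMPT = (
    "Review the following source file for common vulnerability patterns "
    "(buffer overflows, injection flaws, authentication bypasses). "
    "List each suspected issue with a line number and a one-sentence rationale."
)

def scan_file(path: pathlib.Path) -> str:
    source = path.read_text(errors="ignore")[:20_000]  # crude truncation to fit context
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any code-capable model works
        messages=[{"role": "user", "content": f"{PROMPT}\n\n{source}"}],
    )
    return resp.choices[0].message.content

for path in pathlib.Path("target_repo").rglob("*.c"):
    print(f"== {path} ==")
    print(scan_file(path))
```

Twenty-odd lines of glue code, an API key, and patience: that's the entire barrier to entry now, and it's the asymmetry the rest of the argument hangs on.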

This creates a flooding problem. When thousands of people can independently discover the same class of bugs using the same AI tools, the coordinated disclosure pipeline doesn't scale. Vendors can't maintain private relationships with an unbounded set of AI-equipped finders. The 90-day window assumes a world where the finder and the vendor are the only parties who know about the bug — but when discovery is cheap, that exclusivity evaporates.

Google's Project Zero team has already noted increasing overlap between their findings and independently discovered bugs. The coordination overhead grows while the information advantage of early private disclosure shrinks.

### Full Disclosure Becomes Weaponizable

The full disclosure camp argued that publishing vulnerabilities openly levels the playing field — defenders and attackers get the information simultaneously, and public pressure forces rapid patching. This was a reasonable position when writing a working exploit from a vulnerability description required significant skill and time.

AI collapses the gap between "published vulnerability description" and "working exploit in the wild" from days or weeks to hours or minutes. An LLM can read a CVE advisory, understand the vulnerable code pattern, generate exploit code, and even adapt it to specific target configurations. Full disclosure in an AI-powered world doesn't level the playing field — it tilts it dramatically toward offense.

The asymmetry is structural: generating an exploit is a simpler task than deploying a patch across a heterogeneous production environment. AI accelerates the former far more than the latter. Defenders still need to test patches, schedule maintenance windows, coordinate across teams, and handle the organizational friction of change management. Attackers just need working code.

### The AI/ML Community Has Its Own Problem

There's a parallel cultural collision happening within AI research itself. The machine learning community developed its own norms around "vulnerabilities" — jailbreaks, prompt injections, alignment failures — and those norms look nothing like traditional infosec. Researchers post jailbreak techniques on Twitter. Model exploits circulate on Reddit and Discord. There's no CVE system for prompt injection, no coordinated disclosure process for alignment bypasses.

This casual approach to AI vulnerabilities worked when the attack surface was limited to chatbots generating inappropriate text. As AI systems gain agency — executing code, making API calls, managing infrastructure — the consequences of model-level vulnerabilities start looking a lot more like traditional security vulnerabilities, but the disclosure culture hasn't caught up.

The infosec community looks at AI researchers sharing jailbreaks publicly and sees reckless full disclosure. The AI research community looks at 90-day embargoes and sees a system designed to protect vendor interests, not users. Neither side is entirely wrong.

What This Means for Your Stack

Assume faster exploitation timelines. Whatever your current assumptions about the window between public disclosure and active exploitation, cut them in half. Then cut them again. Automated patching, aggressive WAF rules, and runtime protection aren't optional hardening anymore — they're baseline.
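One concrete way to shrink that window is to make advisory checks a scheduled job instead of a human habit. A minimal sketch, assuming the pip-audit CLI is installed and a placeholder webhook for alerts; the exact JSON shape varies by pip-audit version.

```python
# Minimal sketch of a scheduled dependency check that surfaces known-vulnerable
# packages so patching can start without waiting on someone to read advisories.
# The webhook URL is a placeholder; wire this to whatever alerting you use.
import json
import subprocess
import urllib.request

def audit_dependencies() -> list[dict]:
    # pip-audit compares installed packages against public advisory databases.
    out = subprocess.run(
        ["pip-audit", "--format", "json"],
        capture_output=True, text=True, check=False,  # nonzero exit just means vulns found
    )
    report = json.loads(out.stdout or "{}")
    return [d for d in report.get("dependencies", []) if d.get("vulns")]

def alert(findings: list[dict]) -> None:
    if not findings:
        return
    body = json.dumps(
        {"text": f"{len(findings)} packages have known vulnerabilities"}
    ).encode()
    req = urllib.request.Request(
        "https://hooks.example.com/security-alerts",  # placeholder endpoint
        data=body, headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

if __name__ == "__main__":
    alert(audit_dependencies())
```

Run the same check in CI and in image builds so it fires wherever dependencies change, not just on a timer.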

Rethink your vulnerability intake process. If you maintain any kind of bug bounty or security reporting channel, prepare for volume. AI-generated reports are already showing up in bug bounty programs, and the quality varies wildly. You'll need automated triage, deduplication, and severity scoring before a human ever looks at a report. HackerOne and Bugcrowd have both acknowledged this shift.
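What that pre-human layer can look like, very roughly: cluster incoming reports by a normalized fingerprint so near-identical AI-generated submissions collapse into one bucket, then attach a crude severity score so humans see ranked clusters instead of raw volume. The report fields and weights below are illustrative assumptions, not anyone's production triage logic.

```python
# Sketch of pre-human triage for an AI-era report queue:
# deduplicate by fingerprint, then rank buckets by a rough severity score.
import hashlib
import re
from dataclasses import dataclass

@dataclass
class Report:
    endpoint: str        # e.g. "/api/v1/orders/42"
    vuln_class: str      # e.g. "sqli", "xss", "auth-bypass"
    description: str
    has_poc: bool

def fingerprint(r: Report) -> str:
    # Normalize away numeric IDs so near-identical reports about the
    # same underlying bug land in the same bucket.
    path = re.sub(r"\d+", "{id}", r.endpoint.lower())
    return hashlib.sha256(f"{path}|{r.vuln_class}".encode()).hexdigest()[:16]

def severity(r: Report) -> int:
    base = {"auth-bypass": 9, "sqli": 8, "xss": 6}.get(r.vuln_class, 4)
    return base + (1 if r.has_poc else 0)  # a working PoC bumps priority

def triage(reports: list[Report]) -> dict[str, tuple[int, list[Report]]]:
    buckets: dict[str, list[Report]] = {}
    for r in reports:
        buckets.setdefault(fingerprint(r), []).append(r)
    # Each bucket is ranked by the highest-severity report it contains.
    return {k: (max(severity(r) for r in v), v) for k, v in buckets.items()}
```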

Watch the AI-specific attack surface. If you're deploying LLM-powered features — and statistically, you probably are or soon will be — treat prompt injection and tool-use exploits with the same seriousness as SQL injection. The fact that the AI research community treats these casually doesn't mean your production systems can afford to. Apply traditional security engineering discipline: input validation, least-privilege tool access, output sanitization, and monitoring for anomalous behavior.
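A minimal sketch of what least-privilege tool access can mean in practice for an LLM-driven feature: an explicit allowlist, argument validation before execution, and scrubbing of tool output before it flows back into the prompt. Tool names and rules here are illustrative, not a standard.

```python
# Sketch of guarded tool execution for an LLM agent: only allowlisted tools,
# only validated arguments, and output scrubbed of instruction-like text.
import re
from typing import Callable

ALLOWED_TOOLS: dict[str, Callable[[str], str]] = {
    "lookup_order": lambda arg: f"order status for {arg}",
    "get_weather": lambda arg: f"weather in {arg}",
    # Deliberately no shell, filesystem, or network-write tools.
}

SAFE_ARG = re.compile(r"^[\w\- ]{1,64}$")  # reject paths, quotes, control characters

def run_tool_call(tool_name: str, raw_arg: str) -> str:
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {tool_name!r} is not allowlisted")
    if not SAFE_ARG.match(raw_arg):
        raise ValueError("argument failed validation; possible injection attempt")
    result = ALLOWED_TOOLS[tool_name](raw_arg)
    # Crude output sanitization: strip text that reads as a directive before
    # it re-enters the prompt. Real defenses are broader than one regex.
    return re.sub(r"(?i)ignore (all|previous) instructions", "[removed]", result)
```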

Invest in detection over prevention. In a world where zero-days are cheaper to find, the perimeter-defense model weakens further. Assume breach. Instrument your systems for rapid detection and containment rather than relying on patching before exploitation.

Looking Ahead

The vulnerability disclosure system that served the industry for 25 years was a social technology — it worked because a small community of researchers and vendors agreed on norms and mostly followed them. AI is dissolving the preconditions that made those norms functional: scarcity of discovery, the cost of exploitation, and the existence of a bounded community that could self-govern. What replaces it isn't clear yet, but the answer probably involves more automation on the defense side (AI-assisted patching, automated exploit detection), new institutional structures (mandatory disclosure timelines codified in regulation rather than social norms), and a painful convergence between the infosec and AI research communities' very different instincts about openness. The transition period — which we're in right now — is the dangerous part.

Hacker News 415 pts 167 comments

AI Is Breaking Two Vulnerability Cultures

→ read on Hacker News
