Apple Left Their AI Prompt Instructions in a Production App

3 min read 1 source clear_take
├── "This reveals Apple's AI development stack is more Anthropic-dependent than publicly advertised"
│  └── top10.dev editorial (top10.dev) → read below

The editorial argues that Apple Intelligence has been marketed primarily around on-device models and OpenAI's ChatGPT integration for Siri, but finding CLAUDE.md files in a shipping consumer app shows Anthropic's tools are woven into Apple's actual development workflow. This suggests Claude isn't just an API call but an integrated coding assistant Apple developers interact with daily.

├── "The CLAUDE.md files specifically prove Claude Code adoption, not just generic AI usage"
│  └── top10.dev editorial (top10.dev) → read below

The editorial emphasizes that CLAUDE.md is a convention specific to Claude Code, Anthropic's agentic coding tool — not generic prompts you'd write for any LLM. These are structured instruction files that persist across sessions, giving Claude context about project architecture and conventions, which indicates hands-on agentic coding tool integration rather than one-off prompt experimentation.

├── "This is a build pipeline hygiene failure — dev artifacts should never ship in production bundles"
│  └── top10.dev editorial (top10.dev) → read below

The editorial notes that finding CLAUDE.md in a shipping app means nobody caught the file in the build pipeline, framing it as a process failure. The files were visible to anyone who decompiled or inspected the app's contents, suggesting Apple's build and review process lacks filtering for AI development artifacts.

└── "This is part of a broader industry pattern of AI coding tools leaving traces in production software"
  ├── top10.dev editorial (top10.dev) → read below

The editorial contextualizes the discovery by noting it 'mirrors a pattern we've seen across the industry,' referencing GitHub's own engineering practices as a parallel. The argument is that as AI coding assistants become standard developer tooling, accidental leakage of prompt files and AI artifacts into production will become an increasingly common class of oversight.

  └── Aaron P. (@aaronp613) (Twitter/X) → read

As the security researcher who discovered and publicly disclosed the CLAUDE.md files in the Apple Support app binary, Aaron P. surfaced the finding for community scrutiny. His decompilation work drew 342 points on Hacker News, demonstrating significant community interest in how AI development artifacts are making their way into production software.

What happened

Security researcher Aaron P. (@aaronp613) discovered that Apple's official Apple Support app shipped with CLAUDE.md files still included in the production bundle. These files — the convention used to give Claude AI persistent project-level instructions — were left in the app binary, visible to anyone who decompiled or inspected the app's contents.

The discovery, posted on Twitter/X, quickly hit Hacker News where it scored 342 points. The files confirm that Apple's development teams are using Anthropic's Claude with structured prompt instructions as part of their software development workflow — not just for internal experiments, but in the pipeline that produces shipping consumer apps.

While Apple hasn't publicly commented on the finding, the presence of CLAUDE.md files (as opposed to generic prompt text) specifically indicates usage of Claude Code or a similar Anthropic tool that reads project-level markdown instructions to guide AI behavior during development.

Why it matters

Apple's AI stack is more Anthropic-dependent than advertised. Apple Intelligence has been marketed as a blend of on-device models and partnerships (primarily OpenAI for Siri's ChatGPT integration). Finding Claude instruction files in a production app suggests Anthropic's tools are woven into Apple's actual development workflow — not just as an API call, but as an integrated coding assistant that developers interact with daily.

The CLAUDE.md convention is specific to Claude Code, Anthropic's agentic coding tool. These aren't generic prompts you'd write for any LLM. They're structured instruction files that persist across sessions, giving Claude context about the project's architecture, conventions, and constraints. Finding one in a shipping app means a developer (or team) was using Claude Code to build or maintain the Apple Support app, and nobody caught the file in the build pipeline.
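For readers who haven't used Claude Code: a CLAUDE.md file is plain markdown kept at the project root, which the tool reads at the start of each session. The file below is a hypothetical sketch of the genre (every module name and rule in it is invented for illustration; Apple's actual file contents haven't been published in full):

```markdown
# Project instructions for Claude

## Architecture
- SwiftUI app; networking lives in `SupportKit/` (hypothetical module name).

## Conventions
- Prefer async/await over completion handlers.
- Never log customer identifiers.

## Constraints
- Do not modify anything under `Generated/`; those files are build outputs.
```

Even this toy example shows why shipping such a file matters: it names internal modules and states security-relevant rules.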

This mirrors a pattern we've seen across the industry. GitHub's own engineers have been caught with Copilot artifacts in commits. Amazon's internal tools have leaked prompt structures. What's notable here isn't that Apple uses AI — it's that the tooling has become so normalized that prompt files are treated like any other config file, making them easy to accidentally ship.

The Hacker News discussion split predictably: one camp argued this is a non-event ("of course Apple uses AI tools"), while another noted the operational security implications. If prompt instructions contain architectural details, internal API names, or security constraints, shipping them in a public app creates an information disclosure risk that goes beyond mere embarrassment.

What this means for your stack

Your build pipeline needs a prompt file exclusion rule. If you're using Claude Code, Cursor, or any tool that reads local instruction files, those files will end up in version control (often intentionally, since they're meant to be shared across the team). From there, they're one missing packaging exclusion away from shipping to users.

Here's the minimum hygiene checklist:

- Add `CLAUDE.md`, `.cursorrules`, `.github/copilot-instructions.md`, and similar files to your production build exclusion list (not just `.gitignore` — your actual packaging step)
- Treat prompt instruction files like `.env` files: they contain information about your system's architecture and constraints that you don't want in production artifacts
- If you're on a team larger than 5 engineers using AI coding tools, audit your last 3 releases for accidentally included prompt or instruction files
- Consider a CI check that fails the build if known AI instruction file patterns appear in the output artifact
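The last checklist item can be sketched in a few lines of Python. This is a minimal example, not a hardened scanner: the pattern list is illustrative and should be extended for whatever tools your team uses, and it assumes your build artifact is either a directory or a zip-style bundle (`.ipa`, `.apk`, `.jar`):

```python
import fnmatch
import sys
import zipfile
from pathlib import Path

# Illustrative patterns for known AI instruction files; extend for your tools.
AI_INSTRUCTION_PATTERNS = [
    "CLAUDE.md",
    "AGENTS.md",
    ".cursorrules",
    "copilot-instructions.md",
]

def find_ai_artifacts(artifact_path):
    """Return paths inside a build artifact that match known AI instruction files."""
    path = Path(artifact_path)
    if path.is_file() and path.suffix in {".zip", ".ipa", ".apk", ".jar"}:
        # Zip-style bundles: scan member names without extracting.
        names = zipfile.ZipFile(path).namelist()
    else:
        # Plain directory: walk it recursively.
        names = [str(p.relative_to(path)) for p in path.rglob("*") if p.is_file()]
    hits = []
    for name in names:
        base = name.rsplit("/", 1)[-1]  # last path component
        if any(fnmatch.fnmatch(base, pat) for pat in AI_INSTRUCTION_PATTERNS):
            hits.append(name)
    return hits

if __name__ == "__main__":
    leaked = find_ai_artifacts(sys.argv[1])
    if leaked:
        print("AI instruction files found in artifact:", *leaked, sep="\n  ")
        sys.exit(1)  # fail the build
```

Run it as the final step of your packaging job, pointed at the artifact you're about to ship; a non-zero exit fails the build.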

The irony is thick: CLAUDE.md files exist to make AI assistants more effective by giving them project context. That same context — architectural decisions, API patterns, security boundaries — is exactly what you don't want leaking. Apple's slip-up is a reminder that the boundary between "development tooling" and "production artifact" has gotten dangerously thin in the age of AI-assisted coding.

Looking ahead

Expect this class of leak to become more common before it becomes less common. AI coding tools are proliferating faster than teams can update their build hygiene. The fix is straightforward (exclude patterns in build configs), but the cultural shift — treating prompt files as sensitive configuration rather than casual documentation — will take longer to land. Apple will quietly patch this in the next app update, but the signal is clear: even the most security-conscious organizations are still catching up to the operational realities of AI-assisted development.

Hacker News 342 pts 276 comments

Apple accidentally left Claude.md files in the Apple Support app

→ read on Hacker News
internet2000 · Hacker News

> Apple runs on Anthropic at this point. Anthropic is powering a lot of the stuff Apple is doing internally in terms of product development, a lot of their internal tools… They have custom versions of Claude running on their own servers internally. --Mark Gurman, Bloomberg

ramon156 · Hacker News

Unrelated:Yuck. a lot of those replies have LLM smells. Do people love being a hollow puppet for LLMs to fill in? Have people lost their identity?

ryandrake · Hacker News

I wouldn't even think that CLAUDE.md would make it into source control, let alone into the product. I don't AI-code for a living, so I don't know what is considered best practices, but I would think that CLAUDE.md, AGENTS.md, REQUIREMETNS.md, MY_PLAN.md, THIS_STUFF.md, THAT_THING.md,

zombot · Hacker News

If that's the backdoor that gets Apple to accidentally write documentation again, I'm all for it.

suyavuz · Hacker News

People become so lazy after ai. Even they don't check what they commit.

// share this

// get daily digest

Top 10 dev stories every morning at 8am UTC. AI-curated. Retro terminal HTML email.