Claude Code Routines Turn Your AI Assistant Into a Background Daemon

5 min read · 1 source · explainer
├── "Routines represent a fundamental architectural shift — AI coding moves from interactive tool to background daemon"
│  └── top10.dev editorial (top10.dev) → read below

The editorial argues the most significant aspect isn't any single feature but the paradigm change: Claude Code is no longer a tool you use but a process that runs. The Unix daemon metaphor captures how AI coding assistants are evolving from interactive sessions to autonomous background processes that don't require human initiation.

├── "The real bottleneck being solved is human prompting availability, not model capability"
│  └── top10.dev editorial (top10.dev) → read below

The editorial contends that the limiting factor in AI-assisted development has quietly shifted from "can the model write good code" to "who's around to prompt it." Routines attack the initiation side of that equation by fully automating when and why Claude runs, while keeping humans in the loop only for the review/approval stage via PRs.

└── "Running autonomous agents with full shell access and no approval gates raises significant safety concerns"
  └── top10.dev editorial (top10.dev) → read below

The editorial highlights that each routine runs as a full cloud session with shell access, no permission prompts, and no approval gates — it can clone repos, execute commands, call external services, and push branches autonomously. This design trades safety guardrails for automation throughput, leaving only post-hoc review via session URLs and PR approvals as safeguards.

What happened

Anthropic shipped Claude Code Routines, a feature that turns Claude Code from an interactive coding assistant into an autonomous background agent. A routine is a saved configuration — a prompt, one or more GitHub repositories, an execution environment, and a set of MCP connectors — that runs on Anthropic-managed cloud infrastructure without any human in the loop.

Routines support three trigger types: scheduled (hourly, daily, weekdays, weekly, or custom cron), API (HTTP POST to a per-routine endpoint with bearer token auth), and GitHub events (pull requests, pushes, issues, workflow runs, and 15+ other webhook categories). A single routine can combine all three. The feature is available on Pro, Max, Team, and Enterprise plans, managed at claude.ai/code/routines or via the `/schedule` CLI command.
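The API trigger is a plain HTTP POST, which makes it easy to wire into existing tooling. A minimal sketch in Python: the bearer-token auth and the `experimental-cc-routine-2026-04-01` beta header come from the docs, but the endpoint path and routine ID here are assumptions, since the docs only say each routine gets its own endpoint:

```python
import urllib.request

# Hypothetical endpoint shape -- the docs only say there is a
# per-routine endpoint, so this exact path is an assumption.
FIRE_URL = "https://api.anthropic.com/v1/routines/{routine_id}/fire"

def build_fire_request(routine_id: str, token: str) -> urllib.request.Request:
    """Build the POST that fires one routine run."""
    return urllib.request.Request(
        FIRE_URL.format(routine_id=routine_id),
        method="POST",
        data=b"{}",
        headers={
            "Authorization": f"Bearer {token}",  # per-routine bearer token
            "anthropic-beta": "experimental-cc-routine-2026-04-01",  # dated beta header
        },
    )

# req = build_fire_request("rtn_123", "sk-...")
# urllib.request.urlopen(req)  # response includes a session URL to review
```

Because the trigger is just authenticated HTTP, anything that can make a POST (an alerting tool, a CD pipeline step, a chatops bot) can start a run.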

Each routine runs as a full Claude Code cloud session with shell access, no permission prompts, and no approval gates. It clones your repos, executes commands, calls external services through MCP connectors, and can push branches and open PRs. When it finishes, you get a session URL to review what it did.

Why it matters

### The daemon pattern comes to AI coding

The most significant thing about Routines isn't any individual feature — it's the architectural shift. Until now, AI coding assistants have been interactive: you open a session, type a prompt, watch it work, approve or reject. Routines break that model entirely. This is Anthropic positioning Claude Code not as a tool you use, but as a process that runs. The Unix metaphor is apt: Claude Code just grew a daemon mode.

This matters because the bottleneck in AI-assisted development has quietly shifted. The limiting factor is no longer "can the model write good code" — it's "who's around to prompt it and review the output." Routines attack the prompting side of that equation. The review side remains human (you still approve PRs), but the initiation is now fully automated.

### GitHub-native event handling is the real power

The scheduled and API triggers are useful but unsurprising — cron jobs and webhooks are table stakes. The GitHub trigger support is where Routines get genuinely interesting. The supported event matrix is extensive: pull requests (with filters for author, labels, base branch, draft status, fork origin), pushes, releases, issues, discussions, check runs, workflow completions, merge queue entries, and more.

The filter system for pull requests alone — author, title, body, base branch, head branch, labels, draft status, merge status, fork origin — suggests Anthropic studied how teams actually triage PRs and built the routing layer to match. You can set up a routine that only fires on non-draft PRs from forks targeting `main` with the `needs-review` label. That's not a toy demo; that's a real open-source maintainer workflow.
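As a mental model, that routing layer is just a predicate over the incoming webhook payload. A rough sketch of the filter combination from the example above, using field names from GitHub's `pull_request` event schema (how Anthropic implements it internally is not documented):

```python
def should_fire(event: dict) -> bool:
    """Fire only for non-draft PRs from forks targeting main
    that carry the needs-review label."""
    pr = event["pull_request"]
    labels = {label["name"] for label in pr["labels"]}
    return (
        not pr["draft"]                        # skip draft PRs
        and pr["base"]["ref"] == "main"        # targeting main
        and pr["head"]["repo"]["fork"]         # PR originates from a fork
        and "needs-review" in labels           # explicitly flagged for review
    )
```

Composing a handful of such predicates per routine is what turns a firehose of repo events into a narrow, intentional trigger.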

### The trust boundary is explicit

Anthropic made some deliberate security choices. By default, routines can only push to `claude/`-prefixed branches — a guardrail that prevents an autonomous agent from force-pushing to `main` at 3 AM. You can unlock unrestricted branch access per repo, but the default is conservative. Routines inherit your GitHub identity, so commits and PRs show your name. This is honest (no hiding behind a bot account) but also means you're accountable for what your daemon does.
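The branch guardrail is easy to reason about because the policy is a one-line check: unless a repo has been explicitly unlocked, pushes outside the `claude/` namespace are rejected. A toy version of the logic (the real enforcement is server-side; this only illustrates the policy):

```python
def push_allowed(branch: str, unrestricted: bool = False) -> bool:
    """Default policy: an autonomous session may only push to
    claude/-prefixed branches unless the repo opts out."""
    return unrestricted or branch.startswith("claude/")

# push_allowed("claude/fix-login")        -> True  (default namespace)
# push_allowed("main")                    -> False (blocked by default)
# push_allowed("main", unrestricted=True) -> True  (per-repo opt-out)
```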

The MCP connector model means routines can read Slack channels, create Linear tickets, and post to external services — all using your linked accounts. The documentation explicitly advises scoping access to what the routine needs. The implicit message: Anthropic expects teams to point these at production-adjacent systems, and they want the blast radius to be a conscious choice, not a default.

### How it compares to the alternatives

GitHub Copilot has its own agent mode and Copilot Workspace. Cursor has background agents. Devin pitches itself as a fully autonomous software engineer. But Routines occupy a different niche: they're not trying to be a standalone engineer. They're a headless Claude Code session with a trigger attached. The prompt is yours. The repos are yours. The connectors are yours. It's closer to writing a cron job than hiring a junior dev.

This is actually a strength. The failure mode of autonomous coding agents is usually scope creep — the agent tries to do too much, makes cascading bad decisions, and you spend more time cleaning up than you saved. Routines constrain scope by design: one prompt, specific repos, defined triggers, branch-prefixed output. The constraint is the feature.

What this means for your stack

### Immediate use cases worth trying

The documentation suggests several patterns, and some are more compelling than others:

- PR review with custom checklists: High signal. Every team has review standards that humans forget to apply consistently. A routine that checks security patterns, performance anti-patterns, and style compliance on every PR — then leaves inline comments — is genuinely useful today. The key is writing a prompt that's specific enough to avoid false positives.

- Docs drift detection: Medium signal. A weekly routine that scans merged PRs, flags stale docs, and opens update PRs is the kind of thing everyone wants but nobody builds. The value scales with codebase size.

- Alert triage to draft PR: High signal but high risk. Having a routine that reads a Sentry alert, correlates it with recent commits, and opens a fix PR is compelling. But autonomous code changes triggered by production errors need serious guardrails. Start with diagnosis-only (post analysis to Slack) before enabling fix-generation.

- Cross-repo library porting: Niche but powerful. If you maintain SDKs in multiple languages, a routine that watches merges in one repo and opens matching PRs in another could save significant toil.

### What to watch out for

Routines are in research preview. The documentation is upfront: "Behavior, limits, and the API surface may change." The API trigger ships under a dated beta header (`experimental-cc-routine-2026-04-01`), and Anthropic commits to supporting the two most recent header versions during transitions. That's reasonable, but if you're wiring routines into production CI/CD, build in fallback paths.
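One concrete fallback path: since two header versions are supported during a transition, a caller can try the newest header and retry with the previous one if the API rejects it. A sketch under loud assumptions: only the `2026-04-01` header value appears in the docs; the older value, and the idea that a rejected header surfaces as an error you can catch, are both hypothetical here:

```python
# Only the first value is documented; the second is a hypothetical
# previous version used to illustrate the fallback.
BETA_HEADERS = [
    "experimental-cc-routine-2026-04-01",  # current
    "experimental-cc-routine-2025-11-15",  # hypothetical prior version
]

def fire_with_fallback(fire, headers=BETA_HEADERS):
    """Try each supported beta header in order.

    `fire` is your own function that makes the request with the given
    header and raises ValueError (stand-in for an HTTP 4xx) on rejection.
    """
    last_err = None
    for header in headers:
        try:
            return fire(header)
        except ValueError as err:
            last_err = err  # header rejected; try the next supported version
    raise last_err
```

The same shape works for any versioned beta surface: pin the newest version, keep the previous one reachable, and alert when the fallback actually fires so you notice the deprecation.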

Daily run caps exist but aren't published in the documentation — you see your limits in the dashboard. For teams evaluating this for high-volume use (every PR across a large org), the economics need checking. Runs count against subscription usage, and Enterprise plans with extra usage can run on metered overage.

Looking ahead

Routines represent Anthropic's bet that the next phase of AI coding tools isn't smarter autocomplete — it's unattended execution. The feature set is deliberately conservative for a v1: no session reuse across GitHub events, no routine sharing between teammates, no complex branching logic within a routine. But the foundation — cloud-hosted autonomous sessions with trigger-based activation and scoped permissions — is the kind of infrastructure that compounds. If Anthropic nails reliability and teams build trust through low-risk routines (reviews, triage, docs), the high-value patterns (autonomous fixes, cross-repo migrations, continuous refactoring) become a matter of expanding the prompt, not rebuilding the system.

Hacker News · 690 pts · 390 comments

Claude Code Routines

→ read on Hacker News
joshstrange · Hacker News

LLMs and LLM providers are massive black boxes. I get a lot of value from them and so I can put up with that to a certain extent, but these new "products"/features that Anthropic are shipping are very unappealing to me. Not because I can't see a use-case for them, but because I h…

andai · Hacker News

I'm a little confused on the ToS here. From what I gathered, running `claude -p <prompt>` on cron is fine, but putting it in my Telegram bot is a ToS violation (unless I use per-token billing) because it's a 3rd party harness, right? (`claude -p` being a trivial workaround for the…

MyUltiDev · Hacker News

The trigger matrix here is actually the most interesting part. Schedule plus API plus GitHub event on the same routine unlocks some nice patterns, and the /fire endpoint returning a session URL means you can wire this into alerting tools or a CD pipeline from almost anywhere. The part that is n…

comboy · Hacker News

Unrelated, but Claude was performing so tragically last few days, maybe week(s), but days mostly, that I had to reluctantly switch. Reluctantly because I enjoy it. Even the most basic stuff, like most python scripts it has to rerun because of some syntax error. The new reality of coding took away one…

Eldodi · Hacker News

Anthropic is really good at releasing features that are almost the same but not exactly the same as other features they released the week before


// get daily digest

Top 10 dev stories every morning at 8am UTC. AI-curated. Retro terminal HTML email.