Claude Code Routines: Anthropic Wants Your AI to Work the Night Shift

4 min read · 1 source · explainer
├── "Routines represent a meaningful shift because they are infrastructure-native, not IDE-native"
│  └── top10.dev editorial (top10.dev) → read below

The editorial argues that the highest-value automation — nightly backlog grooming, deploy verification, alert triage — needs to happen when your laptop is closed. By running on Anthropic's cloud rather than in a local IDE, Routines sidestep the fundamental limitation that held back prior attempts from GitHub Copilot Workspace, Cursor, and Devin: an IDE-based agent only works while you do.

├── "The competitive landscape is intensifying — Anthropic is staking its claim in a crowded autonomous agent market"
│  └── top10.dev editorial (top10.dev) → read below

The editorial frames Routines against two years of industry efforts toward autonomous coding agents, citing GitHub Copilot Workspace, Cursor's background agents, Devin, and numerous startups all competing for the same slot. Anthropic's entry is notable because it bundles triggers (cron, API, GitHub events), MCP connectors, and full shell access into a single managed offering rather than requiring users to stitch together separate tools.

└── "Running autonomous agents with no approval prompts under your identity raises significant trust and security concerns"
   └── "top10.dev editorial (top10.dev) → read below

The editorial highlights that each routine runs as a full Claude Code cloud session with shell commands and all attached connectors available, with no approval prompts during execution. Everything the routine does — commits, PRs, Slack messages — appears under the user's identity, raising questions about accountability, auditability, and the blast radius of autonomous actions gone wrong.

What happened

Anthropic shipped Claude Code Routines — a new capability that turns Claude Code from an interactive pair-programming tool into an autonomous agent that runs on Anthropic-managed infrastructure. The feature, currently in research preview, lets you define a prompt, attach one or more GitHub repositories, wire up MCP connectors (Slack, Linear, Google Drive, etc.), and set triggers to run the whole thing unattended.

Three trigger types are supported: scheduled (hourly, daily, weekdays, weekly, or custom cron), API (an HTTP POST endpoint with bearer token auth, so you can fire routines from deploy pipelines or alerting systems), and GitHub events (pull requests and releases, with filters on author, branch, labels, draft status, and more). A single routine can combine all three. You create routines from the web UI at claude.ai/code/routines, via `/schedule` in the CLI, or from the Desktop app.
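As a sketch, a routine combining all three trigger types might look like the following. The field names and structure are invented for illustration; they are not Anthropic's actual schema.

```python
# Hypothetical routine definition combining all three trigger types.
# Every field name here is illustrative, not Anthropic's documented schema.
routine = {
    "name": "pr-checklist-review",
    "prompt": "Apply the team checklist to the PR and leave inline comments.",
    "repos": ["acme/backend"],
    "triggers": [
        # scheduled: weekdays at 03:00 UTC (custom cron)
        {"type": "schedule", "cron": "0 3 * * 1-5"},
        # API: exposes an HTTP POST endpoint with bearer token auth
        {"type": "api"},
        # GitHub event: non-draft pull requests labeled needs-review
        {"type": "github", "events": ["pull_request"],
         "filters": {"draft": False, "labels": ["needs-review"]}},
    ],
}

trigger_types = {t["type"] for t in routine["triggers"]}
```

One definition, three independent ways to fire it: that is the composability the docs emphasize.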

Each routine runs as a full Claude Code cloud session — shell commands, committed skills, and all attached connectors are available, with no approval prompts during execution. The session clones your repos, starts from the default branch, and pushes to `claude/`-prefixed branches by default (unrestricted branch pushing is opt-in per repo). Everything the routine does — commits, PRs, Slack messages — appears under your identity.
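The default push restriction can be pictured as a simple guard. This is illustrative logic, not Anthropic's actual enforcement:

```python
def push_allowed(branch: str, unrestricted: bool = False) -> bool:
    """Model of the stated default: a routine may push only to
    claude/-prefixed branches unless unrestricted pushing has been
    opted into for that repo. Illustrative, not the real mechanism."""
    return unrestricted or branch.startswith("claude/")
```

The point of the default is blast-radius control: flipping `unrestricted` on is what lets a misbehaving routine reach `main`.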

Why it matters

The developer tooling market has been circling this idea for two years: what if AI agents didn't need a human sitting in the loop? GitHub Copilot Workspace, Cursor's background agents, Devin, and a dozen startups have been angling for the "autonomous coding agent" slot. Anthropic's entry is notable for three reasons.

First, it's infrastructure-native, not IDE-native. Routines don't run on your machine. They run on Anthropic's cloud. This matters because the highest-value automation — nightly backlog grooming, deploy verification, alert triage — needs to happen when your laptop is closed. By hosting the runtime themselves, Anthropic sidesteps the fundamental limitation of every IDE-based agent: it only works when you're working.

Second, the trigger model is genuinely composable. The same routine can fire on a GitHub PR, on a schedule, and from a curl command in your CD pipeline. This isn't a toy demo — it's the shape of real operational automation. Consider the alert triage example from Anthropic's docs: monitoring fires, hits the API endpoint with a stack trace, the routine correlates it with recent commits and opens a draft PR with a fix. That's a Sentry-to-PR pipeline with zero glue code. Whether it works reliably at 3am is another question, but the design is right.
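On the alerting side, that pipeline is just an authenticated POST. The endpoint URL, token, and payload fields below are assumptions for illustration, and the request is constructed but never sent:

```python
import json
import urllib.request

ROUTINE_URL = "https://api.example.com/v1/routines/alert-triage/fire"  # hypothetical endpoint
TOKEN = "rt_example_token"  # hypothetical bearer token

def build_fire_request(service: str, stack_trace: str) -> urllib.request.Request:
    """Build (but do not send) the POST an alerting tool would fire
    at a routine's API trigger. The payload shape is an assumption."""
    body = json.dumps({"context": {"service": service, "stack_trace": stack_trace}})
    return urllib.request.Request(
        ROUTINE_URL,
        data=body.encode("utf-8"),
        headers={"Authorization": f"Bearer {TOKEN}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = build_fire_request("checkout", "TypeError: 'NoneType' object is not subscriptable")
```

Anything that can make an HTTP call at alert time — a Sentry webhook relay, a PagerDuty handler, a CD step — can sit on the left side of this request.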

Third, the connector model (MCP) makes this extensible without Anthropic building every integration. Routines inherit whatever MCP connectors you've set up — Slack, Linear, Jira, Google Drive. The routine doesn't just read and write code; it can post summaries, create tickets, update docs. This is the difference between "AI that writes code" and "AI that participates in your operational workflow."

The Hacker News discussion reflects the split you'd expect. Enthusiasts see this as the logical next step — finally, the cron job that understands context. Skeptics raise legitimate concerns: routines run under your identity, so a hallucinated commit or a bad Slack message is *your* hallucinated commit. The `claude/`-prefixed branch restriction is a sensible default, but the opt-in unrestricted mode means one misconfigured routine can push to main.

What this means for your stack

If you're evaluating this, start with the low-risk, high-signal use cases: documentation drift detection (weekly scan of merged PRs against docs, open update PRs), PR review augmentation (apply your team's checklist on every opened PR, leave inline comments), and deploy smoke tests (fire the routine from your CD pipeline, let it scan error logs). These are tasks where a false positive is a minor annoyance, not a production incident.

The riskier use cases — alert triage that opens PRs, library porting across repos, backlog management that assigns owners — require high trust in the model's judgment and thorough prompt engineering. Treat the routine's prompt like you'd treat a runbook for a new on-call engineer: explicit success criteria, explicit boundaries, explicit escalation paths ("if you're not sure, create a draft PR and tag @human-reviewer").
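Made concrete, a runbook-style prompt might read like this; the team handle and all wording are invented:

```python
# Hypothetical runbook-style routine prompt: explicit success criteria,
# explicit boundaries, explicit escalation path. All names are invented.
TRIAGE_PROMPT = """\
Task: investigate the incoming alert payload and propose a fix.

Success criteria:
- Root cause linked to a specific recent commit, OR
- a draft PR opened with a candidate fix and a regression test.

Boundaries:
- Only push to claude/-prefixed branches.
- Do not touch migration files or CI configuration.

Escalation:
- If you're not sure, open a draft PR, state your uncertainty,
  and tag @human-reviewer instead of proceeding.
"""
```
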

Practical constraints to know: the minimum cron interval is one hour. Daily run caps exist (tied to your subscription tier, visible in settings). GitHub triggers currently support pull requests and releases only — no issue events, no push-to-branch, no workflow dispatch. Sessions are independent per event, so two PR updates produce two separate sessions with no shared state. And the whole feature is in research preview, meaning the API surface, rate limits, and billing model can change.
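The one-hour floor is worth checking before you ship a schedule. A rough pre-flight check, deliberately simplified (real cron validation has many more cases):

```python
def at_least_hourly(cron: str) -> bool:
    """Rough check for a one-hour minimum interval: any minute field
    other than a single fixed value (e.g. '*', '*/15', '0,30', '0-5')
    fires more than once per hour. Simplified on purpose."""
    minute = cron.split()[0]
    return not any(ch in minute for ch in "*,-/")
```
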

Billing is worth watching. Routines draw from your existing subscription usage. A Max plan user running 20 routines that each trigger 5 times daily is burning 100 sessions per day on background automation. Anthropic offers metered overage for organizations, but individual developers on Pro plans may hit caps quickly if they're aggressive with scheduling.
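The arithmetic above, spelled out:

```python
# Back-of-envelope session burn from the scenario in the text.
routines = 20
fires_per_routine_per_day = 5

sessions_per_day = routines * fires_per_routine_per_day  # 100
sessions_per_30_days = sessions_per_day * 30             # 3000
```
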

Looking ahead

Routines represent Anthropic's bet that the next phase of AI coding tools isn't about making the IDE smarter — it's about making the agent autonomous. The competitive implication is clear: if your AI assistant only works when you're typing, it's a tool; if it works when you're sleeping, it's a team member. The research preview caveats are real (limited GitHub event types, daily caps, potential breaking changes behind dated beta headers), but the architecture — prompt + repos + triggers + connectors, running on managed infrastructure — is the right shape for where this market is heading. The question isn't whether autonomous coding agents will become standard. It's whether the trust model catches up to the capability model before someone's routine pushes a bad migration to production at 4am.

Hacker News · 690 pts · 390 comments

Claude Code Routines

→ read on Hacker News
joshstrange · Hacker News

LLMs and LLM providers are massive black boxes. I get a lot of value from them and so I can put up with that to a certain extent, but these new "products"/features that Anthropic are shipping are very unappealing to me. Not because I can't see a use-case for them, but because I h…

andai · Hacker News

I'm a little confused on the ToS here. From what I gathered, running `claude -p <prompt>` on cron is fine, but putting it in my Telegram bot is a ToS violation (unless I use per-token billing) because it's a 3rd party harness, right? (`claude -p` being a trivial workaround for the…

MyUltiDev · Hacker News

The trigger matrix here is actually the most interesting part. Schedule plus API plus GitHub event on the same routine unlocks some nice patterns, and the /fire endpoint returning a session URL means you can wire this into alerting tools or a CD pipeline from almost anywhere. The part that is n…

comboy · Hacker News

Unrelated, but Claude was performing so tragically last few days, maybe week(s), but days mostly, that I had to reluctantly switch. Reluctantly because I enjoy it. Even the most basic stuff, like most python scripts it has to rerun because of some syntax error. The new reality of coding took away one…

Eldodi · Hacker News

Anthropic is really good at releasing features that are almost the same but not exactly the same as other features they released the week before
