Reco.ai argues that JSONata's interpretation overhead — parsing expressions, walking ASTs, and dynamic type resolution on every evaluation — compounded into enormous infrastructure costs when processing millions of security events. By replacing these interpreted expressions with AI-generated custom code, they eliminated that overhead and saved approximately $500,000 per year in cloud spend.
Reco.ai highlights that the rewrite took only about a day of engineering time, enabled by AI code generation. They position this as a demonstration that AI excels at automating mechanical translation tasks — converting well-defined JSONata expressions into equivalent native code — where the input and output semantics are clear and testable.
The editorial contextualizes JSONata as a powerful and widely-used tool whose performance trade-off — runtime interpretation for expressiveness — is invisible for most applications processing a few hundred documents per request. The problem only emerges at Reco.ai's scale of millions of events per day, where marginal CPU costs compound into six-figure bills, making this a cautionary tale about architecture choices at scale rather than an indictment of JSONata itself.
Reco.ai, a SaaS security company that processes large volumes of JSON event data, published a postmortem on replacing JSONata — a popular JSON query and transformation language — with custom-built code generated primarily by AI. The project took roughly a day of engineering time, and the company reports saving approximately $500,000 per year in infrastructure costs as a result.
JSONata is a powerful, expressive query language for JSON. Think of it as XPath for JSON, but with richer transformation capabilities — you can filter, map, reduce, and reshape nested JSON structures using concise expressions. It's used widely in Node-RED, IBM's integration tools, and countless backend pipelines where JSON needs to be sliced and transformed at runtime.
The core problem: JSONata is an interpreted expression engine, and at Reco.ai's scale — processing millions of security events — the interpretation overhead was eating their compute budget alive. Every evaluation parses the expression, walks an AST, and performs dynamic type resolution. That's elegant for flexibility. It's expensive when you're running the same expressions millions of times per hour.
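To make that cost concrete, here is a toy sketch in Go (the language Reco.ai ported to) contrasting what an expression engine does per event with the equivalent direct code. The AST shape here is invented for illustration and is far simpler than JSONata's actual evaluator:

```go
package main

import "fmt"

// A toy AST for the predicate "severity > 3", standing in for what an
// interpreted engine walks on every evaluation.
type node interface{ eval(env map[string]int) int }

type field struct{ name string }
type lit struct{ v int }
type gt struct{ l, r node }

func (f field) eval(env map[string]int) int { return env[f.name] }
func (l lit) eval(env map[string]int) int   { return l.v }
func (g gt) eval(env map[string]int) int {
	// Dynamic dispatch plus a map lookup on every event: the per-event
	// overhead the article describes, in miniature.
	if g.l.eval(env) > g.r.eval(env) {
		return 1
	}
	return 0
}

func main() {
	// Interpreted path: build the tree once, walk it per event.
	ast := gt{field{"severity"}, lit{3}}
	env := map[string]int{"severity": 5}
	fmt.Println(ast.eval(env) == 1)

	// Compiled path: the same predicate as direct code, no tree, no lookups.
	severity := 5
	fmt.Println(severity > 3)
}
```

Run millions of times per hour, the interface dispatch, allocations, and map lookups on the interpreted path are pure overhead relative to the one-line comparison below it.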
This story lands at the intersection of two trends that matter to every backend team: the rising pressure to optimize cloud spend, and the emerging practice of using AI to automate tedious-but-well-scoped rewrites.
The performance economics of interpreted DSLs. JSONata's design trades runtime performance for expressiveness and ease of use. For most applications — transforming a few hundred documents per request — that trade-off is invisible. But Reco.ai sits in the security event processing space, where data volumes are measured in millions of events per day. At that scale, the marginal CPU cost of interpreting expressions instead of executing compiled transformations compounds into six-figure annual cloud bills. This is a well-understood pattern: ORMs, template engines, rule engines, and expression evaluators all hit the same wall when you push enough volume through them.
The $500k figure is striking but not implausible. A security analytics platform processing high-volume JSON streams likely runs dozens of compute instances. If JSONata evaluation consumed, say, 30-40% of CPU across those instances, replacing it with direct code that runs 10-50x faster could easily halve the fleet. At $40-80k/year per instance for always-on compute, the math adds up.
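A back-of-envelope check of those numbers, where the instance count and cost are illustrative assumptions drawn from the stated ranges, not Reco.ai's actual fleet:

```go
package main

import "fmt"

// savingsIfHalved computes annual savings if the fleet shrinks by half.
func savingsIfHalved(instances, costPerInstance int) int {
	return instances * costPerInstance / 2
}

func main() {
	// Illustrative assumptions only: a modest always-on fleet at the
	// midpoint of the $40-80k/year per-instance range.
	instances := 20
	costPerInstance := 60000 // $/year
	fmt.Printf("savings if fleet halves: $%d/yr\n",
		savingsIfHalved(instances, costPerInstance))
	// 20 * $60k / 2 = $600k/yr, in the ballpark of the reported $500k.
}
```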
AI as a rewrite accelerator, not a rewrite engine. The "we did it in a day with AI" framing is the headline, but the more interesting detail is what made this feasible at all. Reco.ai didn't ask an LLM to design a query engine from scratch. They had a finite set of JSONata expressions used in their pipeline — likely dozens to low hundreds — and they needed each one converted to equivalent native code. That's a translation task with clear inputs, clear expected outputs, and an existing test suite to validate against — exactly the kind of well-scoped problem where current AI coding tools genuinely deliver.
This is an important distinction. The AI didn't architect a replacement system. It automated the mechanical work of translating known expressions into equivalent imperative code. A senior engineer could have done the same translations manually; it would have taken weeks instead of a day. The AI compressed the tedious middle — the part where you stare at `$map($filter(events, function($e) { $e.severity > 3 }), function($e) { $e.name })` and write the equivalent `for` loop for the 47th time.
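As an illustration of that mechanical translation, here is roughly what one such `for` loop might look like in Go, assuming events decode into a simple struct. The `Event` type and field names are invented for this sketch, not Reco.ai's schema:

```go
package main

import "fmt"

// Event is an illustrative stand-in for a decoded security event.
type Event struct {
	Name     string
	Severity int
}

// highSeverityNames is a hand-written equivalent of the JSONata expression
// $map($filter(events, function($e) { $e.severity > 3 }), function($e) { $e.name }):
// keep events with severity > 3, then project their names.
func highSeverityNames(events []Event) []string {
	names := make([]string, 0, len(events))
	for _, e := range events {
		if e.Severity > 3 {
			names = append(names, e.Name)
		}
	}
	return names
}

func main() {
	events := []Event{
		{Name: "login-anomaly", Severity: 5},
		{Name: "routine-sync", Severity: 1},
		{Name: "privilege-escalation", Severity: 4},
	}
	fmt.Println(highSeverityNames(events))
}
```

The translation is trivial in isolation; the value of AI here is doing it correctly dozens or hundreds of times without an engineer's attention drifting.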
The Hacker News skepticism is warranted — and also beside the point. Community reactions to posts like this tend to split into two camps. The first says "this is just vendor marketing dressed up as a tech post." The second says "you replaced a battle-tested library with AI-generated code? Good luck maintaining that." Both concerns are valid. But the underlying technical insight — that interpreted DSLs are often the biggest hidden cost in high-throughput pipelines — stands regardless of how you feel about the messenger.
The maintenance question is the real long-term risk. JSONata is a community-maintained library with years of edge-case handling, Unicode support, error reporting, and regression tests. A bespoke replacement inherits none of that. If Reco.ai's JSON schemas change, or they need new transformation patterns, they're now maintaining custom code instead of updating a version pin. The $500k/year savings needs to be weighed against the ongoing cost of that maintenance burden.
Profile before you rewrite. The actionable lesson here isn't "rewrite your dependencies with AI." It's "profile your hot paths, and check whether a general-purpose library is the bottleneck." Most teams have never profiled their JSON transformation layer, their ORM query generation, or their template rendering. If you're running at scale and your cloud bill is growing faster than your traffic, there's a good chance a general-purpose library is eating cycles you don't need to spend.
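Go ships a CPU profiler in the standard library, so checking this is cheap. A minimal sketch, where `transform` is a hypothetical stand-in for whatever transformation layer you suspect:

```go
package main

import (
	"fmt"
	"os"
	"runtime/pprof"
)

// transform stands in for the JSON-transformation hot path under suspicion.
func transform(n int) int {
	sum := 0
	for i := 0; i < n; i++ {
		sum += i % 7
	}
	return sum
}

func main() {
	f, err := os.Create("cpu.prof")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Capture a CPU profile around the suspected hot path, then inspect
	// the result with `go tool pprof cpu.prof` to see where cycles go.
	if err := pprof.StartCPUProfile(f); err != nil {
		panic(err)
	}
	total := 0
	for i := 0; i < 1000; i++ {
		total += transform(100000)
	}
	pprof.StopCPUProfile()

	fmt.Println(total > 0)
}
```

If the profile shows a general-purpose library dominating, you have a candidate for the compile-what-you-interpret treatment below; if not, the rewrite was never worth doing.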
The pattern is: compile what you interpret. If you have a fixed set of expressions, rules, or templates that run millions of times, generating static code from them — whether by hand, by codegen script, or by AI — will almost always be faster than interpreting them at runtime. This applies to JSONata, JMESPath, JSONPath, Jinja templates, ORM query builders, and rule engines. The technique predates AI by decades (it's basically partial evaluation), but AI makes the mechanical translation step fast enough to be worth doing for mid-priority optimizations that wouldn't have justified weeks of manual effort.
Validate ruthlessly. If you do use AI to generate replacement code, the test suite is non-negotiable. Generate the code, then run every existing expression through both the old library and the new code, comparing outputs on real production data. Property-based testing is your friend here — generate random JSON inputs and verify that the old and new implementations agree. The "one day" timeline only works if you already have comprehensive test coverage for the behavior you're replacing.
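A minimal sketch of that differential check in Go, with simplified stand-ins for both sides (in a real migration, `oldImpl` would invoke the original JSONata evaluator):

```go
package main

import (
	"fmt"
	"math/rand"
)

// oldImpl stands in for the interpreted library being replaced.
func oldImpl(severities []int) []int {
	var out []int
	for _, s := range severities {
		if s > 3 {
			out = append(out, s)
		}
	}
	return out
}

// newImpl stands in for the generated replacement code.
func newImpl(severities []int) []int {
	out := make([]int, 0, len(severities))
	for _, s := range severities {
		if s > 3 {
			out = append(out, s)
		}
	}
	return out
}

func equal(a, b []int) bool {
	if len(a) != len(b) {
		return false
	}
	for i := range a {
		if a[i] != b[i] {
			return false
		}
	}
	return true
}

// agree feeds random inputs to both implementations and checks their
// outputs match: a minimal form of property-based differential testing.
func agree(rounds int) bool {
	r := rand.New(rand.NewSource(1)) // fixed seed so failures reproduce
	for i := 0; i < rounds; i++ {
		in := make([]int, r.Intn(20))
		for j := range in {
			in[j] = r.Intn(10)
		}
		if !equal(oldImpl(in), newImpl(in)) {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(agree(1000))
}
```

Replaying captured production payloads through the same harness catches the schema quirks that random generation misses.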
Expect to see more of these "we replaced library X with AI-generated code" stories. The pattern works best for well-scoped, high-volume, performance-sensitive code paths where a general-purpose library's flexibility is overkill for your actual usage. The risk is that teams over-index on the "AI wrote it in a day" narrative and under-invest in the validation and maintenance work that makes these rewrites safe. The $500k headline is real. The question every team should ask is whether their replacement will still be correct — and maintainable — in year two.
Some background on one of the other two golang implementations mentioned in the comments. Years ago I hired an Upwork contractor to port v1.5.3 to golang as best he could. He did a great job and it served us well; however, it was far, far from perfect and it couldn't pass most of the JS test suite
> At Reco, we have a policy engine that evaluates JSONata expressions against every message in our data pipeline - billions of events, on thousands of distinct expressions.

The original architecture choice and price almost gave me a brain aneurysm, but the "build it with AI" solution is
> The approach was the same as Cloudflare's vinext rewrite: port the official jsonata-js test suite to Go, then implement the evaluator until every test passes.

The first question that comes to mind is: who takes care of this now? You had a dependency with an open source project. Now your translated
The headline seems to be flashy indeed, but AI didn't really solve this imo. They just seemed to fix their technology choices and got the benefits. There's existing golang versions of jsonata, so this could have been achieved with those libraries too in theory. There's nothing written a
The key point for me was not the rewrite in Go or even the use of AI, it was that they started with this architecture:

> The reference implementation is JavaScript, whereas our pipeline is in Go. So for years we've been running a fleet of jsonata-js pods on Kubernetes - Node.js processes that our