Hanov runs multiple $10K MRR SaaS products on a single ~$20/month VPS with no container orchestration, managed databases, or multi-region failover. His argument is that the entire operational surface area fitting in one person's head is the point — simplicity reduces cognitive load and maximizes time spent on features rather than infrastructure firefighting.
The Hacker News thread (849 points, 468 comments) largely backed the core argument: a $20 VPS you fully understand has a lower total lifetime cost than a $200/month AWS setup requiring periodic firefighting, surprise bills from forgotten resources, and hours lost to IAM policy debugging. Each additional managed service adds billing dimensions, API surfaces that can break, and configuration drift.
The editorial acknowledges that for companies with millions of users and five-nines SLA requirements, auto-scaling groups, multi-AZ deployments, managed databases with automated failover, and dedicated SRE teams are genuinely necessary. The minimal approach works specifically for solo founders with hundreds of paying customers — not as a universal prescription.
Steve Hanov published a detailed breakdown of his infrastructure for running multiple SaaS products, each generating over $10,000 in monthly recurring revenue, on a combined hosting bill of roughly $20 per month. The post hit 849 points on Hacker News, reigniting one of the oldest debates in software engineering: how much infrastructure do you actually need?
Hanov's stack is aggressively minimal. We're talking a single VPS — likely a Hetzner or OVH box — running everything. No container orchestration. No managed database services. No multi-region failover. No Terraform. No Kubernetes. No separate staging environment that costs more than production. The entire operational surface area fits in one person's head, which is precisely the point.
The products themselves are profitable enough that he could trivially afford a more complex setup. He chooses not to — and the reasoning is more rigorous than "I'm cheap."
The cloud infrastructure industry has spent a decade convincing developers that reliability requires complexity. The argument goes: you need auto-scaling groups, multi-AZ deployments, managed databases with automated failover, and a dedicated SRE team — or you're being negligent. For companies with millions of users and five-nines SLA requirements, this is correct.
For a solo founder running a $10K MRR product with hundreds of paying customers, every additional service is a liability, not an asset. Each managed service adds a billing dimension, an API surface that can break, a configuration drift that can bite you at 3 AM, and a cognitive load tax that slows feature development.
Hanov's implicit argument is economic: the real cost of infrastructure isn't the invoice, it's the time. A $20 VPS that you fully understand costs less over its lifetime than a $200/month AWS setup that demands periodic firefighting sessions, surprise bills from forgotten resources, and half a day lost to IAM policy debugging every quarter.
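To make that time-cost claim concrete, here is a back-of-the-envelope comparison. The hourly rate and monthly ops-hours figures are illustrative assumptions, not numbers from Hanov's post:

```python
# Back-of-the-envelope total cost of ownership over three years.
# All inputs are illustrative assumptions, not figures from the post.

def three_year_cost(monthly_bill, ops_hours_per_month, hourly_rate=100):
    """Cash cost plus the opportunity cost of founder time."""
    months = 36
    cash = monthly_bill * months
    time = ops_hours_per_month * hourly_rate * months
    return cash + time

# A well-understood VPS vs. a managed-everything cloud setup
vps = three_year_cost(monthly_bill=20, ops_hours_per_month=1)
aws = three_year_cost(monthly_bill=200, ops_hours_per_month=6)

print(f"VPS: ${vps:,}")  # $720 cash + $3,600 time = $4,320
print(f"AWS: ${aws:,}")  # $7,200 cash + $21,600 time = $28,800
```

Even if you quibble with the exact inputs, the shape of the result is robust: at solo-founder scale, the time term dominates the cash term.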
The Hacker News discussion exposed the predictable fault line. Engineers at large companies pointed out that a single server means a single point of failure, no disaster recovery, and potential data loss. Solo founders fired back with actual uptime numbers — many reporting 99.9%+ on single-box setups because modern hardware simply doesn't fail that often, and when it does, a restore-from-backup takes under an hour.
The uncomfortable truth: most SaaS products don't need five-nines availability because their customers wouldn't notice four-nines. If your app is down for 4 minutes per month and you fix it within an hour of any outage, no customer is churning over that. They churn over missing features, bad UX, and slow support — all things that suffer when you're maintaining infrastructure instead of shipping product.
This isn't a new observation. Pieter Levels famously ran Nomad List on a single PHP server. Basecamp (now 37signals) moved off the cloud entirely and published the receipts: $7M over five years returned to the bottom line. What makes Hanov's post resonate in 2026 is the scale multiplier — not one product, but multiple, all on the same $20 box.
If you're a solo founder or small team running a B2B SaaS product under $50K MRR, here's the honest calculation:
Do the napkin math on your actual requirements. How many concurrent users do you have at peak? For most early-stage B2B products, the answer is under 100. A $20 VPS with 4 cores and 8GB RAM can handle thousands of concurrent connections with any competent web framework. You are almost certainly not compute-constrained.
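The napkin math itself fits in a few lines. The request rate and per-request CPU cost below are assumptions chosen to be typical of a database-backed B2B app, not measurements:

```python
# Napkin math: can a 4-core/8GB VPS handle your peak load?
# Inputs are illustrative assumptions for a typical B2B app.

peak_concurrent_users = 100
requests_per_user_per_min = 6   # one request every 10 seconds
avg_request_cpu_ms = 20         # a DB-backed page or API call

peak_rps = peak_concurrent_users * requests_per_user_per_min / 60
cpu_seconds_needed = peak_rps * avg_request_cpu_ms / 1000
cores_available = 4

utilization = cpu_seconds_needed / cores_available
print(f"Peak load: {peak_rps:.0f} req/s, CPU utilization ~ {utilization:.1%}")
# Peak load: 10 req/s, CPU utilization ~ 5.0%
```

At 5% utilization you have a 20x headroom margin before the box even starts to sweat, which is why the compute-constrained fear rarely survives contact with arithmetic.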
Count your operational dependencies. Every external service you depend on is a potential 3 AM wakeup. A Postgres database running on the same box with hourly backups to object storage is more reliable in practice than a managed database service, because you control the failure modes and the recovery playbook fits in your head. The caveat: you must actually do the backups, and you must test restores.
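A minimal sketch of that playbook, assuming Postgres, a cron-driven Python script, and rclone as the object-storage uploader. The database name and bucket path are hypothetical:

```python
# Sketch of an hourly Postgres backup job, meant to run from cron.
# Database name, paths, and the use of rclone are assumptions.
import datetime
import subprocess

DB = "app"                      # hypothetical database name
BUCKET = "remote:backups/app"   # hypothetical rclone destination

def backup_command(ts: str) -> list[str]:
    """pg_dump in custom format to a timestamped file."""
    return ["pg_dump", "--format=custom",
            f"--file=/var/backups/{DB}-{ts}.dump", DB]

def run_backup():
    ts = datetime.datetime.now(datetime.timezone.utc).strftime("%Y%m%dT%H%M")
    subprocess.run(backup_command(ts), check=True)
    subprocess.run(["rclone", "copy",
                    f"/var/backups/{DB}-{ts}.dump", BUCKET], check=True)

# The other half of the playbook: periodically pg_restore the latest
# dump into a scratch database and run a sanity query, because an
# untested backup is not a backup.
```

The whole recovery story is two commands and one timestamped file, which is exactly the kind of playbook that fits in one person's head.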
Identify your actual SLA requirement. If your contract doesn't specify uptime guarantees, your customers' revealed preference (measured by churn correlation with downtime) is your real SLA. For most products at this scale, it's far more forgiving than engineers assume.
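Converting an availability target into allowed downtime is a one-liner, and it is worth doing once to see what each extra nine actually buys:

```python
# Convert an availability target ("nines") into allowed downtime,
# to sanity-check what an SLA number actually promises.

def downtime_per_month(availability_pct: float) -> float:
    """Minutes of allowed downtime in an average month (30.44 days)."""
    minutes_in_month = 30.44 * 24 * 60
    return minutes_in_month * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99, 99.999):
    print(f"{pct}% -> {downtime_per_month(pct):.1f} min/month")
# 99.0%   -> 438.3 min/month
# 99.9%   -> 43.8 min/month
# 99.99%  -> 4.4 min/month
# 99.999% -> 0.4 min/month
```

The gap between three nines and four nines is forty minutes a month; the gap between four and five is four minutes. Ask which of those your customers would ever notice.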
The approach doesn't scale to every situation. If you're handling financial transactions, medical data, or serving latency-sensitive APIs to millions of users, this isn't your architecture. But if you're building B2B tools where correctness matters more than availability, the $20 stack is not reckless — it's disciplined.
The cloud repatriation trend that 37signals catalyzed in 2023 continues to gain legitimacy. What's shifted isn't the technology — VPS hosting has been viable forever — it's the cultural permission. Senior engineers are now comfortable saying "I run this on a single server" without the reflexive "but you should really..." from peers. As AI tools make it easier to build features fast, the bottleneck shifts even further from infrastructure to product judgment — and the $20 stack becomes more defensible, not less.
If this sounds like basic advice, consider that there are a lot of people out there who believe they have to start with serverless, Kubernetes, fleets of servers, planet-scale databases, multi-zone high-availability setups, and many other "best practices". Saying "you can just run things o…
> I use Linode or DigitalOcean. Pay no more than $5 to $10 a month. 1GB of RAM sounds terrifying to modern web developers, but it is plenty if you know what you are doing.

If you get one dedicated server for multiple separate projects, you can still keep the costs down but relax those constraints.
Nice list! I'd say the SQLite with WAL is the biggest money saver mentioned. One note: you can absolutely use Python or Node just as well as Go. Hetzner offers machines with 4GB RAM, 2 CPUs, and 10TB of network traffic (then $1/TB egress) for $5. Two disclaimers for VPS: If you're using a d…
There are zero reasons to limit yourself to 1GB of RAM. By paying $20 instead of $5 you can get at least 8GB of RAM. You can use it for caches or a database that supports concurrent writes. The $15 difference won't make any financial difference if you are trying to run a small business. Thinking abou…
> The enterprise mindset dictates that you need an out-of-process database server. But the truth is, a local SQLite file communicating over the C-interface or memory is orders of magnitude faster than making a TCP network hop to a remote Postgres server.

I don't want to diss SQLite because it…
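For reference, a minimal sketch of the SQLite-with-WAL setup the comments are pointing at; the file and table names are illustrative. WAL mode is what lets readers proceed while a single writer commits, which makes one local file viable as a small SaaS backend:

```python
# Minimal SQLite-with-WAL sketch. File and table names are illustrative.
import sqlite3

conn = sqlite3.connect("app.db")
conn.execute("PRAGMA journal_mode=WAL")    # readers no longer block on writes
conn.execute("PRAGMA synchronous=NORMAL")  # common WAL pairing: fsync at checkpoints
conn.execute(
    "CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, payload TEXT)"
)
conn.execute("INSERT INTO events (payload) VALUES (?)", ("signup",))
conn.commit()
print(conn.execute("SELECT COUNT(*) FROM events").fetchone()[0])
```

No connection pool, no network hop, no credentials to rotate: the database is a file on the same disk as the application, and the backup strategy is copying that file (or using SQLite's online backup API) to object storage.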