- Argues that cloud vendor lock-in, planned obsolescence, and software bloat all create artificial pressure to replace hardware long before its useful life ends, and that most developers and small teams migrate to cloud or upgrade hardware far earlier than necessary, driven by industry incentive structures rather than actual need.
- Points to concrete cost comparisons: a used 64GB RAM server from 2019 costs $200-400 on eBay while the equivalent cloud instance runs $150-300/month, so the hardware pays for itself within a few months. Argues that the era of cheap cloud compute is over as AWS, Azure, and GCP have all raised prices or reduced discounts, widening the cost gap in favor of owned hardware.
- Contends that older hardware is perfectly capable for most workloads and that perceived sluggishness is caused by increasingly bloated software, not aging CPUs or RAM. This reframes the upgrade question: instead of buying new hardware or renting cloud instances, developers should scrutinize whether their software stack is unnecessarily heavy.
- Notes that the sustainability conversation around e-waste has moved from niche concern to mainstream awareness, framing hardware longevity not merely as a cost optimization but as an environmental responsibility. The post's timing and resonance (600+ HN points) suggest the developer community increasingly sees premature hardware replacement as wasteful in both economic and ecological terms.
A blog post titled "Hold on to Your Hardware" from kévin.com surfaced on Hacker News this week and quickly climbed to over 600 points — a signal that the argument resonated deeply with the developer community. The post makes a straightforward case: most developers and small teams are replacing hardware or migrating to cloud services far earlier than the hardware's actual useful life demands.
The argument lands at a moment when cloud costs are rising, subscription fatigue is real, and the sustainability conversation around e-waste has moved from niche concern to mainstream awareness. The core thesis is that hardware longevity is underrated because the industry's incentive structures — cloud vendor lock-in, planned obsolescence, software bloat — all push toward premature replacement.
The post joins a growing body of practitioner writing that pushes back against the default assumption that newer is better and cloud is inevitable.
### The economics have shifted
For years, the conventional wisdom was clear: let someone else manage the hardware, move to the cloud, and focus on your code. That advice made sense when AWS, GCP, and Azure were aggressively pricing to gain market share. But the era of cheap cloud compute is over. AWS has raised prices on multiple services, Azure's margins are widening, and Google Cloud's "sustained use discounts" aren't what they used to be. A developer running a modest workload on a 5-year-old server at home or in a colo is now paying a fraction of what the equivalent cloud instance costs — and the gap is widening, not narrowing.
The numbers are stark for small-to-medium workloads. A used server with 64GB RAM and a decent Xeon from 2019 costs $200-400 on eBay. The equivalent cloud instance runs $150-300/month. The hardware pays for itself in weeks to a few months, not years.
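The break-even arithmetic is simple enough to sketch. A minimal Python calculation using the post's example price ranges (the figures are illustrative, not vendor quotes):

```python
# Back-of-the-envelope payback calculation for the figures cited above.
# Prices are the post's eBay/cloud examples, not quotes from any vendor.

def payback_months(hardware_cost: float, cloud_monthly: float) -> float:
    """Months of cloud spend needed to equal the one-time hardware cost."""
    return hardware_cost / cloud_monthly

# Worst case for owned hardware: priciest server, cheapest cloud instance.
worst = payback_months(400, 150)
# Best case: cheapest server, priciest cloud instance.
best = payback_months(200, 300)

print(f"payback: {best:.1f} to {worst:.1f} months")
```

Even the worst case pays back in under three months, and that ignores the server's resale value at the end.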
### Software bloat is the real bottleneck
The post touches on a point that experienced engineers know intuitively but rarely articulate: most "slow hardware" complaints are actually slow software complaints. A 2018-era laptop running a lightweight Linux distribution feels snappier than a 2024 MacBook running Electron apps. The browser alone consumes more RAM than entire operating systems did a decade ago.
The uncomfortable truth is that much of the software industry's growth depends on hardware churn — if your old machine keeps working fine, there's less pressure to buy a new one or rent more cloud capacity. This creates a misaligned incentive where software gets heavier not because it needs to, but because the assumption is that hardware will absorb the cost.
This resonated strongly on Hacker News, where commenters shared examples of machines running for 8, 10, even 15 years with minimal maintenance. Server hardware in particular is built for longevity — enterprise drives, ECC RAM, and redundant power supplies are designed for decade-plus lifespans.
### The environmental angle is no longer optional
E-waste is now the fastest-growing waste stream globally. The average laptop generates roughly 300-400kg of CO2 during manufacturing — dwarfing its lifetime energy consumption. Every year of additional use from existing hardware avoids that manufacturing footprint entirely. For companies making ESG commitments, extending hardware life is one of the highest-leverage moves available — and unlike carbon credits, it's impossible to fake.
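The amortization logic behind that claim is easy to make concrete. A sketch using the midpoint of the 300-400kg range cited above; the usage-phase figures (annual energy draw and grid carbon intensity) are illustrative assumptions, not measurements:

```python
# Amortizing a laptop's manufacturing footprint over its years of use.
# 350 kg is the midpoint of the 300-400 kg range cited above; the
# usage-phase numbers (50 kWh/year, 0.4 kg CO2 per kWh) are assumed
# for illustration and vary by device and grid.

MANUFACTURING_KG = 350.0
KWH_PER_YEAR = 50.0
KG_CO2_PER_KWH = 0.4

def lifetime_kg_per_year(years_of_use: float) -> float:
    """Average annual footprint: manufacturing emissions spread over
    the device's life, plus the electricity it draws each year."""
    return MANUFACTURING_KG / years_of_use + KWH_PER_YEAR * KG_CO2_PER_KWH

for years in (3, 5, 8):
    print(f"{years} years of use: {lifetime_kg_per_year(years):.0f} kg CO2/year")
```

Under these assumptions, stretching a laptop from three years to eight cuts its average annual footprint by more than half, because the manufacturing term dominates.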
### Self-hosting is having a moment
The convergence of rising cloud costs, mature self-hosting tools (Coolify, Dokku, Kamal), and cheap used enterprise hardware is creating a genuine inflection point. If you're running a SaaS with under 10,000 users, a single refurbished server can handle your entire stack — database, application, cache, and backups — for a one-time cost less than one month of your current AWS bill.
This doesn't mean everyone should flee the cloud. Autoscaling, global distribution, and managed services still justify cloud spending for many workloads. But the default assumption — that cloud is always the right call — deserves serious reexamination for small and medium workloads.
### Practical steps for holding on
If you're evaluating whether to extend your hardware's life, the checklist is short:
- Storage: SSDs don't wear out as fast as the industry suggests. A drive with 30-40% of its rated TBW remaining has years left for typical workloads. Monitor with SMART data, and replace proactively when wear indicators cross 80%.
- RAM: If your workload fits in current RAM, there's no upgrade needed. If it doesn't, used ECC DIMMs are absurdly cheap.
- CPU: Unless you're doing ML training or heavy compilation, a 2018-era processor handles modern web workloads without breaking a sweat. Single-threaded performance gains have been marginal for years.
- Network: This is the one area where upgrades matter. If your hardware only supports gigabit ethernet and your workload is network-bound, a $30 PCIe card solves it.
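The storage check above is just a ratio against the drive's rated endurance. A minimal sketch; the TBW rating and bytes-written figures would normally come from SMART data, and the numbers here are illustrative:

```python
# SSD wear check along the lines of the checklist above. The rated TBW
# and bytes-written figures would come from SMART data in practice
# (e.g. via smartctl); the example values are illustrative.

def wear_percent(tb_written: float, rated_tbw: float) -> float:
    """Percentage of the drive's rated write endurance already consumed."""
    return 100.0 * tb_written / rated_tbw

def should_replace(tb_written: float, rated_tbw: float,
                   threshold: float = 80.0) -> bool:
    """Flag the drive once wear crosses the threshold (80% per the
    checklist above)."""
    return wear_percent(tb_written, rated_tbw) >= threshold

# A drive rated for 600 TBW with 390 TB written is 65% worn: keep it.
print(should_replace(390, 600))   # False
# The same drive at 500 TB written is ~83% worn: time to swap.
print(should_replace(500, 600))   # True
```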
### The maintenance trap to avoid
The counterargument deserves honest acknowledgment: old hardware does fail, and the failure mode is often sudden. The answer isn't to run without backups on aging disks and hope for the best. The answer is to treat old hardware like you'd treat any production system: monitor it, maintain it, and have a failover plan. The tools for this — Prometheus, Grafana, simple cron scripts checking SMART data — are free and well-documented.
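A cron-driven SMART check can be little more than a parser over `smartctl -A` output. The sketch below parses a captured attribute table rather than invoking `smartctl` directly, and the sample text and the alert threshold are illustrative, not a recommendation for any particular drive:

```python
import re

# Minimal parser for the attribute table printed by `smartctl -A`.
# A real cron job would feed it live output (e.g. via
# subprocess.run(["smartctl", "-A", "/dev/sda"], ...)); here we parse
# a captured, illustrative sample instead.

SAMPLE = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  0
177 Wear_Leveling_Count     0x0013   072   072   000    Pre-fail  312
"""

def normalized_values(smartctl_output: str) -> dict:
    """Map attribute name -> normalized VALUE column (100 = like new)."""
    values = {}
    for line in smartctl_output.splitlines():
        m = re.match(r"\s*\d+\s+(\S+)\s+0x[0-9a-fA-F]+\s+(\d+)", line)
        if m:
            values[m.group(1)] = int(m.group(2))
    return values

attrs = normalized_values(SAMPLE)
# Alert when the wear-leveling value drops below a chosen floor (here: 20).
if attrs.get("Wear_Leveling_Count", 100) < 20:
    print("ALERT: drive approaching end of rated wear")
```

Pipe the alert into mail or a chat webhook from cron and you have the monitoring loop the paragraph describes, without any paid tooling.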
The real risk isn't hardware failure. It's that maintaining your own hardware requires a skill set many developers never built, because cloud abstractions made it unnecessary. If you don't know how to replace a failed drive in a RAID array, the cloud premium might be worth paying.
The pendulum between centralized and decentralized computing has been swinging for decades. Mainframes gave way to PCs, PCs gave way to thin clients and cloud, and now the economics are nudging a subset of workloads back toward local hardware. This doesn't signal the end of cloud computing any more than cloud signaled the end of on-prem. But it does mean the reflexive "just put it in the cloud" answer is getting more expensive, and the "just keep what you have" answer is getting more viable. For senior engineers evaluating infrastructure decisions, the most valuable skill right now might be knowing when the boring option — keeping what works — is the right one.
Top 10 dev stories every morning at 8am UTC. AI-curated. Retro terminal HTML email.