- Draws a direct parallel between Western manufacturing offshoring (1970s onward) and modern software development trends. Argues that low-code platforms, managed services, and AI code generation are compounding layers of abstraction that don't just raise the floor but thin the population who understands what's beneath it — mirroring how the West didn't just lose factories but lost the ability to build factories.
- Emphasizes that no single tool — whether low-code, managed services, or AI — is dangerous on its own. The concern is structural: the compounding effect creates an industry that increasingly can't explain its own systems, just as COVID-19 revealed that reshoring manufacturing required billions to relearn forgotten institutional knowledge.
- Notes that the essay's power lies in reframing the AI-coding debate away from "will AI take jobs" toward a structural question about organizational capability. Calls the manufacturing parallel "uncomfortably precise," citing the U.S. share of global manufacturing dropping from 28% in 1985 to roughly 16% today as evidence the pattern is real.
- The post garnered 1,040 points and 716 comments — exceptional engagement for a think piece with no product launch or security disclosure. This level of response suggests the framing struck a nerve with practitioners who recognize the pattern in their own organizations but hadn't had a structural vocabulary for it beyond the usual "AI taking jobs" discourse.
An essay titled "The West Forgot How to Make Things. Now It's Forgetting How to Code" hit the top of Hacker News this week, pulling 1,040 points and over 700 comments — numbers that put it in rare company for a think piece with no product launch or security disclosure attached. The author draws a direct line between the deindustrialization of Western economies over the past 40 years and what they see happening now in software development.
The argument runs like this: from the 1970s onward, Western nations systematically offshored manufacturing. First the low-margin work, then the mid-tier, then — almost without noticing — the institutional knowledge required to do any of it. The U.S. and U.K. didn't just lose factories; they lost the *ability to build factories*. Engineers who understood metallurgy, tooling, and process control retired. The educational pipelines dried up. When COVID-19 exposed how fragile the resulting supply chains were, the response was billions in reshoring subsidies — essentially paying to relearn what had been forgotten.
The essay's core claim is that software is now on the same trajectory: each layer of abstraction doesn't just raise the floor, it thins the population of people who understand what's below it. Low-code platforms, managed services, and now AI code generation are presented not as individual tools but as a compounding trend. The worry isn't that any single tool is dangerous — it's that the cumulative effect is an industry that increasingly can't explain its own systems.
The essay resonated because it reframes the AI-and-coding debate away from the tired "will AI take developer jobs" framing and toward a structural question: what happens to organizational capability when understanding becomes optional?
This isn't theoretical. The manufacturing parallel is uncomfortably precise. The U.S. share of global manufacturing output dropped from 28% in 1985 to roughly 16% today, according to UN data. More telling than the output numbers is the skills gap: a 2023 Deloitte study found 2.1 million U.S. manufacturing jobs could go unfilled by 2030 due to a shortage of workers with the necessary technical knowledge. The capability didn't move — it evaporated.
In software, the early signs look similar. Stack Overflow's 2024 developer survey showed that 76% of developers use or plan to use AI tools in their workflow. GitHub reports that Copilot writes over 46% of code in files where it's enabled. These numbers are celebrated as productivity gains, and they are. But productivity and understanding are different metrics.
The HN comment thread — often more interesting than the essays it discusses — split into roughly three camps. The first agreed with the thesis wholesale and pointed to junior developers who can prompt an LLM into producing a working API but can't explain what an HTTP status code means. The second camp pushed back hard: every abstraction layer in computing history, from assembly to C to Python to React, triggered identical complaints, and the industry kept growing. The third camp offered the most nuanced take: the difference this time is speed. Previous abstractions took a decade to commoditize; AI is compressing that timeline to months, which may not give institutions enough time to adapt their training and hiring.
That speed argument deserves attention. When companies moved from on-prem to cloud, they had years to retrain ops teams into DevOps engineers. When jQuery gave way to React, developers had a migration path. AI-assisted coding isn't replacing one tool with another — it's changing the relationship between the developer and the artifact. You don't need to understand a line of code to generate it, and more importantly, you don't get the *incidental learning* that comes from writing it by hand.
The practical implications split along two axes: individual and organizational.
For individual developers, the essay is a useful corrective to the "just vibe-code everything" enthusiasm. AI code generation is leverage, not replacement — but leverage amplifies whatever understanding you bring to the table. Zero understanding, leveraged, is still zero. The developers who will thrive are those who use AI to move faster through problems they genuinely comprehend, not those who use it to avoid comprehension entirely. If you can't read a stack trace, Copilot won't save you when production breaks at 3 AM.
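That stack-trace point is worth making concrete. A minimal Python sketch (the function names and the bug are invented for illustration) of the bottom-up reading habit the essay worries is disappearing:

```python
import traceback

def load_config(raw: dict) -> str:
    # Bug: the caller passes "db_url", but we look up "database_url" —
    # exactly the kind of plausible-looking mismatch generated code ships with.
    return raw["database_url"]

def connect(settings: dict) -> str:
    return load_config(settings)

try:
    connect({"db_url": "postgres://localhost/app"})
except KeyError:
    trace = traceback.format_exc()
    print(trace)

# Reading the trace bottom-up:
#   - the last line names the failure: KeyError: 'database_url'
#   - the frame just above it points at load_config, the line that raised
#   - the frames above that show how execution got there (connect -> load_config)
```

Nothing exotic — but if the habit of walking a trace from the exception upward never forms, AI-generated code becomes unreviewable the moment it misbehaves.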
Concretely: if your team has stopped doing code reviews because "the AI wrote it," you've already started down the deindustrialization path. AI-generated code needs *more* review, not less, precisely because no human built the mental model while writing it.
For engineering organizations, the question is whether you're building a team that understands your systems or a team that operates your systems. These sound similar but diverge sharply under stress. An operations-only team can keep the lights on during normal conditions but collapses during novel failures — the same way a country that assembles imported components can't pivot when the supply chain breaks.
The tactical response isn't to ban AI tools (good luck). It's to invest deliberately in what manufacturing calls "process knowledge" — the deep understanding of *why* systems work, not just *that* they work. This means:
- Pair programming with AI outputs: Treat generated code as a first draft that a human must understand and defend, not a finished product.
- Architectural decision records: Document the reasoning behind system design, so institutional knowledge survives personnel changes.
- Debugging-first interviews: Shift hiring toward candidates who can reason about failure modes, not just produce working code. The ability to write code is becoming commoditized; the ability to debug it is not.
- Rotation through the stack: Periodically have developers work one layer below their usual abstraction. Frontend engineers should understand the network layer. Backend engineers should read query plans. This is the intellectual equivalent of reshoring.
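An architectural decision record needs no heavy tooling. A sketch of one minimal template — the field names follow a common community convention, and the example content is invented for illustration:

```markdown
# ADR-0007: Use PostgreSQL row locking for the job queue

## Status
Accepted (2024-03-12)

## Context
We need at-most-once job dispatch across multiple workers. A dedicated
message broker would add an operational dependency no one on the team
has run in production.

## Decision
Claim jobs with `SELECT ... FOR UPDATE SKIP LOCKED` inside a transaction,
rather than introducing a broker.

## Consequences
- No new infrastructure to operate or monitor.
- Queue throughput is bounded by Postgres write capacity.
- Revisit if dispatch volume exceeds what a single primary can absorb.
```

The value is the Context and Consequences sections: they are exactly the "why" that evaporates when the people who made the call leave.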
The manufacturing analogy isn't perfect — software doesn't require rare earth minerals or ocean freight, and the barriers to "reshoring" understanding are lower than rebuilding a chip fab. But the essay's core warning stands: capability is a use-it-or-lose-it asset, whether you're making steel or making software. The West spent decades learning that lesson in manufacturing at a cost of trillions. The question for the software industry is whether it will learn from that history or repeat it at compiler speed. The 1,040-point Hacker News discussion suggests that at least a few thousand developers are paying attention. Whether their organizations are is another matter entirely.
I still code daily without any coding assistance, mostly because I believe this is the way to not forget how things are done, even trivial things. My main point against using AI is that I do not want to depend basically on anything when I'm in front of the screen (obviously not including documen…
“Money was never the constraint. Knowledge was.” The irony is how difficult it is to read this obviously AI-generated article due to its unnatural prose and choppy flow full of LLM-isms. The ability to write is also a skill that atrophies. Even when AI is understandably used due to language fluency, I…
> They can’t tell you what the AI got wrong.

AI code generators are trolls. They confidently produce plausible content which is partly wrong. Then humans try to find their errors. This is not fun. It has no flow.
I highly question the ability of companies to gauge the level of experience of any dev. The distinction between junior, mid, senior, lead is a facade. It is a soft gradient that spans multiple areas, but is tainted and skewed by the technology du jour. Technically you don't have to be an employed…
The real issue, in my view, is not AI itself. The problem is a management pattern: removing people and organizational slack because they don’t generate immediate profit, and then expecting the knowledge to still be there when it’s needed. Short-term cost cutting leads to fewer junior hires, and remove…