The Deskilling Parallel: Manufacturing Hollowed Out. Is Coding Next?

5 min read 1 source clear_take
├── "AI coding tools are replicating the same capability-destroying pattern that deindustrialized Western manufacturing"
│  └── milkglass (Tech Trenches) → read

Argues that the economic logic of offshoring manufacturing — chase short-term cost savings, lose institutional knowledge — is now playing out in software via AI coding assistants. Draws on decades of manufacturing history including U.S. PCB offshoring and UK post-Thatcher deindustrialization to show how capability loss compounds: you lose implementers, then process knowledge, then eventually the ability to understand what's been built.

├── "The junior developer pipeline is the critical vulnerability — eliminating entry-level roles destroys the talent supply chain"
│  └── milkglass (Tech Trenches) → read

Points to CTOs publicly planning to reduce junior engineering headcount because AI can handle that work, and argues this mirrors the loss of apprenticeship pathways in manufacturing. Without junior roles as a training ground, the industry loses the mechanism by which senior engineers are produced, creating a generational knowledge gap that compounds over time.

├── "AI-assisted coding shifts engineers from creators to reviewers, eroding deep technical understanding"
│  └── top10.dev editorial (top10.dev) → read below

Highlights the feedback loop of capability loss: when AI writes implementation and engineers review it like managers reviewing contractor work, they gradually lose the hands-on understanding needed to evaluate quality, debug failures, or architect systems. Compares this to how U.S. design engineers eventually lost knowledge of what was physically manufacturable after process engineering was offshored.

└── "The manufacturing analogy reveals that capability loss is a strategic national vulnerability, not just an industry concern"
  └── milkglass (Tech Trenches) → read

Frames the issue through the lens of the Pentagon's inability to source trusted domestic chip fabrication and the $52 billion CHIPS Act response, arguing that software skill erosion could produce analogous strategic dependencies. The implication is that losing domestic software engineering capability is not merely a labor market issue but a matter of national security and technological sovereignty.

What happened

An essay titled *The West Forgot How to Make Things. Now It's Forgetting How to Code* hit the front page of Hacker News this week, pulling 1,040 points and a sprawling 716-comment thread. Published on Tech Trenches, the piece argues that the same economic logic that gutted Western manufacturing — chase short-term cost savings, offshore the hard parts, lose the institutional knowledge — is now playing out in software engineering, accelerated by AI coding assistants.

The thesis is blunt: when you stop making things yourself, you eventually lose the ability to understand what you've made. The author draws on decades of manufacturing history — the hollowing out of the U.S. industrial base, the UK's post-Thatcher deindustrialization, the slow realization that supply chain dependence is strategic vulnerability — and maps those patterns onto the current moment in software.

The essay arrives at a time when "vibe coding" has entered mainstream developer vocabulary, when GitHub Copilot and its competitors handle an increasing share of line-by-line implementation, and when several prominent CTOs have publicly stated they plan to reduce junior engineering headcount because AI can handle the work those roles used to do.

Why it matters

The manufacturing analogy isn't perfect, but it's uncomfortably precise in one dimension: the feedback loop of capability loss. When U.S. firms offshored PCB manufacturing in the 1990s, they didn't just lose factory jobs. They lost the process engineers who understood yield optimization, the materials scientists who could troubleshoot solder failures, and eventually the design engineers who knew what was physically manufacturable. The Pentagon discovered this the hard way when it couldn't source trusted chip fabrication domestically — a gap that took the $52 billion CHIPS Act to even begin addressing.

The software parallel: when AI writes your implementation and you review it like a manager reviews a contractor's work, you retain the authority to approve but gradually lose the intuition to evaluate. Anyone who has managed an outsourced codebase knows this feeling. You can read the pull request. You can run the tests. But you can't smell the architectural decay until it's already metastasized.

The Hacker News discussion, predictably, split into camps. One group — largely senior engineers and engineering managers — resonated with the deskilling argument. They pointed to concrete examples: junior developers who can prompt an LLM to produce a working React component but can't explain the reconciliation algorithm underneath it, teams that ship AI-generated code faster but file more production incidents because nobody deeply understands the error-handling paths.

The opposing camp pushed back on what they see as the latest iteration of "old guard resists new tools." Their argument: every productivity leap in software — from assembly to C, from C to Python, from manual deployment to CI/CD — triggered the same fear that abstraction would destroy understanding. And every time, the industry adapted. Programmers didn't need to understand vacuum tubes to build the internet. The question isn't whether abstraction reduces low-level knowledge — it always does — but whether the *rate* of abstraction is outpacing the industry's ability to maintain critical understanding at the layers that matter.

This is where the manufacturing analogy sharpens. The shift from hand-soldering to pick-and-place machines was a tool upgrade — the engineers still understood circuits. The shift from domestic manufacturing to offshoring was a *knowledge transfer* — the engineers eventually didn't. AI-assisted coding sits ambiguously between these two models, and which one it resembles depends almost entirely on how teams choose to use it.

The institutional knowledge problem

The essay's strongest point isn't about individual developers — it's about organizations. A senior engineer who uses Copilot as a sophisticated autocomplete, reading every suggestion critically, retains and even sharpens their skills. But organizations don't optimize for that. They optimize for velocity metrics, ticket throughput, and headcount efficiency.

When a VP of Engineering sees that a team of 5 with AI tools ships as much as a team of 10 without them, the institutional response is to cut to 5 — not to use 10 engineers to build something twice as thoughtful. This is the same logic that made offshoring irresistible in manufacturing. The spreadsheet wins until the day it doesn't, and by then the people who could have told you why are gone.

There's a concrete version of this already emerging. Several large enterprises have reported that their AI-generated codebases are harder to debug during incidents because the code, while functional, doesn't follow the implicit conventions and mental models that human-written code in those organizations historically did. The code works. The code passes tests. The code is nobody's, in the sense that no one on the team built the mental model that produced it.

This is the "making things" the title refers to. Not the mechanical act of typing code, but the cognitive act of *holding a system in your head* — understanding why this module talks to that service, why this timeout is set to 30 seconds and not 10, why this particular race condition was solved with a lock rather than a queue. That knowledge lives in the act of building. When you delegate building, you delegate understanding.

What this means for your stack

If you're a senior developer or tech lead, the actionable takeaway isn't "stop using AI tools" — that ship has sailed and the tools genuinely help. It's about being deliberate about where you let AI handle implementation and where you insist on humans doing the work from scratch.

Three concrete practices worth adopting:

1. Rotate "no-AI" sprints for critical systems. For your core domain logic, payment processing, auth flows, and data pipeline orchestration, periodically have engineers implement changes without AI assistance. This is the equivalent of fire drills — you're testing whether your team can still operate the system they own.

2. Treat AI-generated code as untrusted vendor code. Apply the same review rigor you'd apply to a third-party library. If your team can't explain *why* the generated code works — not just *that* it works — it shouldn't merge. This is the review standard that separates augmentation from outsourcing.

3. Protect junior developer learning paths. The most dangerous long-term effect of AI coding tools isn't on senior engineers who already have deep mental models — it's on juniors who never build those models in the first place. If your organization uses AI to replace junior engineering work rather than to accelerate junior engineering *learning*, you're building a pipeline that produces seniors who were never actually junior.
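Practice 2 can be partially automated at the merge gate. As a minimal sketch — the `ai-assisted` label and the "Why this works" section heading are hypothetical team conventions, not anything the essay prescribes — a pre-merge check can refuse AI-assisted pull requests whose descriptions lack a human-written explanation:

```python
import re

def check_pr(description: str, labels: list[str]) -> tuple[bool, str]:
    """Gate AI-assisted PRs on a human-written explanation.

    Hypothetical convention: PRs labeled 'ai-assisted' must carry a
    '## Why this works' section with a substantive explanation in the
    PR description before they are allowed to merge.
    """
    if "ai-assisted" not in labels:
        return True, "not AI-assisted; normal review applies"

    # Capture everything after the required section heading.
    match = re.search(r"## Why this works\s*\n(.+)", description, re.DOTALL)
    if not match or len(match.group(1).strip()) < 80:
        return False, "missing or too-short 'Why this works' explanation"
    return True, "explanation present"
```

Wired into CI, a check like this makes the review standard from practice 2 enforceable rather than aspirational: the generated code doesn't merge until someone on the team can articulate why it works, not just that it passes.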

Looking ahead

The manufacturing deindustrialization lesson took 30 years to fully manifest and is costing hundreds of billions to partially reverse. Software moves faster — both the adoption curves and the consequences. The West didn't decide to forget how to make things; it made a series of individually rational decisions that collectively produced an irrational outcome. The question for software engineering leadership isn't whether AI coding tools are good (they are) or whether they'll replace developers (they won't, yet). It's whether your organization is using them in a way that builds capability or just borrows it. The essay's 1,040-point Hacker News score suggests a lot of practitioners already sense which way their shop is headed.

Hacker News 1040 pts 716 comments

The West Forgot How to Make Things. Now It's Forgetting How to Code

→ read on Hacker News
jdw64 · Hacker News

The real issue, in my view, is not AI itself. The problem is a management pattern: removing people and organizational slack because they don’t generate immediate profit, and then expecting the knowledge to still be there when it’s needed. Short-term cost cutting leads to less junior hiring, and remove…

liendolucas · Hacker News

I still code daily without any coding assistance mostly because I believe this is the way to not forget how things are done, even trivial things. My main point against using AI is that I do not want to depend basically on anything when I'm in front of the screen (obviously not including, documen…

TonyAlicea10 · Hacker News

“Money was never the constraint. Knowledge was.” The irony is how difficult it is to read this obviously AI-generated article due to its unnatural prose and choppy flow full of LLM-isms. The ability to write is also a skill that atrophies. Even when AI is understandably used due to language fluency, I…

Animats · Hacker News

> They can’t tell you what the AI got wrong.

AI code generators are trolls. They confidently produce plausible content which is partly wrong. Then humans try to find their errors. This is not fun. It has no flow.

mawadev · Hacker News

I highly question the ability of companies to gauge the level of experience of any dev. The distinction between junior, mid, senior, lead is a facade. It is a soft gradient that spans multiple areas, but is tainted and skewed by the technology du jour. Technically you don't have to be an employed…

// get daily digest

Top 10 dev stories every morning at 8am UTC. AI-curated. Retro terminal HTML email.