The Hacker News discussion about AI's impact on programming reveals several key themes:
Economic Impacts and Productivity
A central debate concerns how AI will affect the demand for programmers and overall productivity. Some users, echoing the Jevons paradox, suggest that making programming cheaper and easier will increase demand for software, and with it the number of programmers needed, albeit for different tasks. Others are more skeptical, arguing that the "cheaper" claim is not definitively proven once rework and other costs are factored in. The concept of a "deflationary spiral" is raised but countered by the idea that people value usable systems today over marginally cheaper ones tomorrow.
- Djoldman posits: "My intuition is that the total number will increase but that the programs we write will be substantially different."
- Amonith also offers a contrasting view: "I'd argue that in the past we needed more programmers for more complicated stuff... now we need many more people to glue some libraries and external solutions together."
- Recroad questions the premise of cost reduction: "Nothing here became cheaper..."
- Mlhpdx observes a potential disconnect between perception and reality: "I suspect the reality around programming will be the same - a chasm between perception and reality around the cost."
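The Jevons-paradox claim above can be made concrete with a toy constant-elasticity demand model. This is my own illustrative sketch with made-up numbers, not something from the thread: if demand for software is elastic enough (elasticity above 1), halving the cost per feature more than doubles the features demanded, so total spend on programming rises even as each unit gets cheaper.

```python
# Toy illustration of the Jevons paradox applied to software.
# All numbers (the 1000 scale factor, elasticity of 1.5) are invented.

def demand(cost_per_feature: float, elasticity: float = 1.5) -> float:
    """Features demanded under a constant-elasticity demand curve."""
    return 1000 * cost_per_feature ** -elasticity

def total_spend(cost_per_feature: float) -> float:
    """Total money spent on software at a given per-feature cost."""
    return cost_per_feature * demand(cost_per_feature)

# Halving the cost per feature raises total spend when elasticity > 1:
before, after = total_spend(10.0), total_spend(5.0)
assert after > before
```

Whether real-world demand for software is actually this elastic is exactly what the thread disputes; the sketch only shows that the "cheaper programming means more programming overall" outcome is arithmetically coherent.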
Shifting Skill Requirements and the Value of Judgment
A significant theme is the potential shift in required skills within the programming profession. Many participants believe that the "how to build" aspect will become commoditized, with value migrating towards "what to build," understanding customer needs, architectural decisions, and general judgment. This "judgment" is seen by some as the core differentiator that AI cannot easily replicate, being built on experience and a collection of personal reference points. There's a discussion on how to market oneself when these "judgment" skills are most valued.
- Djoldman suggests: "the economy will demand fewer programmers for the previous set of demanded programs. However. The set of demanded programs will likely evolve."
- Gbacon asks: "Which programs will the new tech make profitable... to write?"
- Michaelfeathers relates this to the Peltzman effect: "The safer you make something the more risks people will take... ease of development increases the complexity attempted."
- Ramesh31 emphasizes the irreplaceable nature of experience: "It's just experience, i.e. a collection of personal reference points against seeing how said judgements have played out over time in reality. This is what can't be replaced."
- Afpx wonders: "How would one even market oneself in a world where this is what is most valued?"
- Mjr00 connects this to existing expectations: "That's basically the job description of any senior software development role, at least at any place I've worked."
- JustExAWS shares a personal trajectory: "I saw that as an enterprise dev in a second tier tech city... no matter what I learned well - mobile, web, “full stack development”, or even “cloud”, they were all commodities... I did start focusing on just what the author said and took a chance on leaving a full time salaried job... to lead a major initiative..."
Reliability, Quality, and the "Last Mile" Problem
The reliability and quality of AI-generated code is a recurring point of contention. Several users draw parallels to compilers, which reliably solve most problems, whereas current AI tools are unreliable and can sometimes produce worse results than starting from scratch. The "last mile" problem, in which the final stretch of a task is exponentially harder than the rest, is also discussed, with parallels drawn to fields like nuclear waste disposal and cancer cures. The idea that AI might primarily generate tests and catch simple errors, rather than write complex logic, is also raised.
- Bpt3 states: "To put it in your words, I don't think LLMs get us 90% of the way there because they get us almost 100% of the way there sometimes, and other times less than 0%."
- Jensson differentiates: "Compilers reliably solves 90% of your problems, LLM unreliably solves 100% of your problems."
- Bluefirebrand highlights the consequence of unreliability: "If it's not reliable then the problem is not solved. You've just moved the problem from 'I can't solve this' to 'I can't trust if the LLM solved this properly'."
- Rkozik1989 brings up the "last miles" challenge: "Are there technical hurdles... the level complexity to complete the race is exponentially higher than it was at the start."
- HumblyTossed notes: "It costs 10% to get 90% of the way there. Nobody ever wants to spend the remaining 90% to get us all the way there."
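Jensson's compiler-vs-LLM distinction can be sketched as a toy expected-cost calculation (my own hedged illustration with invented costs and probabilities): a tool you cannot blindly trust imposes a verification cost on every task plus a rework cost when it fails, while a reliable tool imposes neither, so unreliability shows up as a cost rather than as reduced capability.

```python
# Toy expected-cost model of the reliability debate.
# verify: cost to check the tool's output on every task.
# rework: cost to redo the task when the tool's output is wrong.

def expected_cost(p_success: float, verify: float, rework: float) -> float:
    """Expected cost per task when using a tool of given reliability."""
    return verify + (1.0 - p_success) * rework

# Compiler-like tool: trusted, essentially never wrong -> no overhead.
compiler = expected_cost(p_success=1.0, verify=0.0, rework=10.0)

# LLM-like tool: handles any kind of task, but only succeeds 70% of the
# time and must always be checked (invented numbers).
llm = expected_cost(p_success=0.7, verify=2.0, rework=10.0)

assert compiler < llm
```

The point of the sketch is Bluefirebrand's: even when the unreliable tool "solves" the problem, the cost of not knowing whether it did doesn't go away.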
Commoditization and the Future of the Profession
There's a strong sentiment that programming itself is becoming commoditized, with AI accelerating the trend. This raises concerns about the devaluation of core coding skills and a shift toward roles that manage or clean up AI-generated code. Some users predict that AI will replace outsourcing and junior developers, leaving senior developers to adapt or face obsolescence. An analogy to increasingly affordable tools like chainsaws illustrates the potential for great benefit in skilled hands and significant risk in unskilled ones.
- Hollowonepl prefers the term "Commoditization of Software Engineering."
- Sublinear suggests: "AI replacing outsourcing and (sadly) junior SWEs is more likely than it just eliminating coding jobs across the board."
- Bluefirebrand makes a passionate plea: "Every time you shrug and say 'yeah the LLM does ok junior level work' you are part of the goddamn problem."
- Alphazard warns: "The decreasing cost of code is comparable to the decreasing costs of chainsaws, table saws, or high powered lasers. If you are a power user of these things then having them cheaply available is great. If you don't know what you're doing, then you may be exposing yourself to more risk than reward..."
The "Soul" of Code and Subjective Value
Beyond the economic and practical considerations, some participants reflect on the subjective aspects of programming. The idea of "soul-less code" is introduced, stemming from the loss of ownership and enjoyment when code is primarily machine-generated. This raises questions about whether companies heavily reliant on AI-generated code will ultimately have an advantage over those that value traditional approaches. There's also a concern about vendor lock-in if developers no longer understand the codebases they are working with.
- Gchamonlive laments: "If you take both of these out [ownership and enjoyment], you create what I could only describe as soul-less code. The impact of soul-less code is not obvious, not measurable but I'd argue quite real."
- Hvb2 ponders: "I'm curious about is if those companies that go all in [get] to the state where they have the source but they now have vendor lock in with the AI vendor. Since no dev understands the code anymore."
- Logicchains offers a counterpoint: "Feelings of ownership also cause problems in software engineering, i.e. people being unwilling to make changes to their code that a reasonable person would see as improvements..."
Regulation and the Future of Software Engineering
The discussion touches on the potential need for increased regulation in software engineering, drawing parallels to other engineering disciplines. This is partly driven by the expectation that AI will lower the barrier to entry, producing more self-taught developers amid a potential decline in educational standards. AI's ability to surface where code diverges from the norm, and auditing as a possible killer app for AI, are also mentioned in this context.
- Sublinear states: "Long term, software engineering will have to be more tightly regulated like the rest of engineering."
- Sublinear also suggests: "I'm also thinking about a world where more programmers are trying to enter the workforce self-taught using AI. The current world is the continued lowering of education standards and political climate against universities."
- Sublinear proposes: "Maybe AI will make more visible where code diverges from the average. Maybe auditing will be the killer app for near-future AI."
The Marxist Lens of Overproduction
One user frames the AI-driven increase in software production through a Marxist lens, discussing the "crisis of overproduction." This theory suggests that the economy can produce more goods than the market can profitably absorb, leading to economic instability. Applied to software, this means LLMs could enable the creation of more software than businesses can realistically utilize, potentially leading to wage stagnation and increased unemployment among developers.
- Basfo articulates the concern: "LLMs allow us to produce more software, faster and cheaper, than companies can realistically absorb... the bottleneck in software production is shifting from labor capacity to market absorption. And that could trigger something very much like an overproduction crisis."
Neurodiversity and Skill Compatibility
A neurodivergent commenter worries that a skill set built on hyperfocus and compulsive systems building may be devalued as AI-augmented programming shifts the work toward "plumbing code" rather than complex problem-solving.
- Harimau777 asks: "What advice would you all have for programmers who aren't compatible with AI augmented programming?"