Essential insights from Hacker News discussions

Survey: a third of senior developers say over half their code is AI-generated

This discussion reflects a diverse range of opinions on the use of AI in software development, particularly regarding "vibe coding" and AI-assisted coding. The central themes revolve around the perceived benefits and drawbacks of these tools, their impact on developer skill and productivity, and the evolving definition of "vibe coding" itself.

Skepticism and Concerns about AI-Generated Code Quality

A significant portion of the discussion expresses caution and outright skepticism regarding the reliability and quality of code generated by AI. Many developers report instances of AI producing "dumb" or "devious" bugs, fabricating APIs, or making fundamental mathematical errors.

  • Erroneous and Inconsistent Output: Developers frequently report AI making mistakes that they themselves would not. "binarymax" shares, "Even then I’ve mostly given up. I’ve seen LLMs change from snake case to camel case for a single method and leave the rest untouched. I’ve seen them completely fabricate APIs to non existent libraries. I’ve seen them get mathematical formulae completely wrong."
  • Difficulty in Review and Debugging: The effort required to thoroughly review and debug AI-generated code is often seen as negating the supposed time savings. "Gigachad" states, "Reviewing code properly is so much harder than writing it from scratch," and elaborates on a common sentiment: "If I have to know and understand the code being generated, it's easier to just write it myself. The AI tools can just spit out function names and tools I don't know off the top of my head, and the only way to check they are correct is to go look up the documentation, and at that point I've just done the hard work I wanted to avoid." "platevoltage" echoes, "This is exactly my experience, but I guess generating code with depreciated methods is useful for some people."
  • Loss of Control and Nuance: Some users feel that AI struggles with complex systems and nuanced requirements, leading to suboptimal or incorrect implementations. "Chris_Newton" articulates this: "The actual work I do is too deep in business knowledge to be AI coded directly, but I do use it to write tests to cover various edge cases, trace current usage of existing code, and so on. ... But as 'manoDev' says in the parent comment, deeper work is often a specification problem. The valuable part is often figuring out the what and the why rather than the how, and so far that isn’t something AI has been very good at."

The Value of Struggle and Skill Development

A counter-argument to the efficiency gains offered by AI is the intrinsic value of struggle and the necessary process of problem-solving for developing expertise.

  • Expertise Through Struggle: Some argue that avoiding difficulty hinders skill development. "globnomulous" posits, "> Anything that makes development faster or easier is going to be welcomed by a good developer. I strongly disagree. Struggling with a problem creates expertise. Struggle is slow, and it's hard. Good developers welcome it."
  • Stunted Professional Growth: There's a concern that over-reliance on AI can lead to a stagnation of a developer's learning and growth. "jasonjmcghee" expresses this fear: "If you build a product with it, suddenly everyone is an engineering manager and no one is an expert on it. And growth as an engineer is stunted." "darkwater" uses an analogy: "using AI agent IMO is like going to bike in the mountain with an electrical bike. Yes, you keep seeing the wonderful vistas but you are not really training your legs."

The Evolving Definition of "Vibe Coding"

A significant portion of the discussion is dedicated to clarifying and debating the term "vibe coding," which has drifted from its original, narrower meaning of purely AI-driven code generation toward a looser synonym for AI-assisted development in general.

  • Ambiguity and Interchangeability: Several users noted the term's loose usage. "csbrooks" asks, "Is "vibe coding" synonymous with using AI code-generation tools now? I thought vibe coding meant very little direct interaction with the code, mostly telling the LLM what you want and iterating using the LLM." "ladyprestor" agrees: "Yeah, for some reason the term has been used interchangeably for a while, which is making it very hard to have a conversation about it since many people think vibe coding is just using AI to assist you."
  • Broadening Definition: The term has expanded well beyond its original meaning, partly because critics adopted it as a catch-all label. "crazygringo" suggests, "I think what happened is that a lot of people started dismissing all LLM code creation as "vibe coding" because those people were anti-LLM, and so the term itself became an easy umbrella pejorative," adding, "Also because we don't really have any other good memorable term for describing code built entirely with LLMs from the ground up, separate from mere autocomplete AI or using LLMs to work on established codebases."
  • "Agentic Coding" as an Alternative: "actsasbuffoon" offers a more precise term: "“Agentic coding” is probably more accurate, though many people (fairly) find the term “Agentic” to be buzz-wordy and obnoxious."
  • Delegating vs. Abdicating Responsibility: Some users draw a distinction between using AI as a tool for assistance and "abdicating" responsibility. "biglyburrito" defines it as: "My personal definition of "vibe coding" is when a developer delegates -- abdicates, really -- responsibility for understanding & testing what AI-generated code is doing and/or how that result is achieved. I consider it something that's separate from & inferior to using AI as a development tool."

AI as a Productivity Booster for Specific Tasks

Despite the concerns, many users acknowledge AI's utility for specific, often mundane or repetitive, coding tasks.

  • Boilerplate and Scaffolding: AI is frequently cited as helpful for generating boilerplate code, scaffolding new projects, and creating unit tests. "marcyb5st" states, "I use the LLM to create all the scaffolding, test fixtures, ... because that is mental energy that I can use elsewhere." "com2kid" adds, "The useful part is generating the mocks. The various auto mocking frameworks are so hit or miss I end up having to manually make mocks which is time consuming and boring. LLMs help out dramatically and save literally hours of boring error prone work." (A sketch of such a hand-written mock follows this list.)
  • Context Switching and Information Retrieval: For developers working across diverse domains or on unfamiliar codebases, AI can act as a helpful assistant for quickly understanding context or finding information. "INTPenis" explains, "My work is pretty broadly around DevOps, automation, system integration, so the topics can be very wide range. So no I don't mind it at all... it's great at context switching between different topics." "felipeerias" agrees, "It is very useful to be able to ask basic questions about the code that I am working on, without having to read through dozens of other source files. It frees up a lot of time to actually get stuff done."
  • Assistance for Obscure Languages/Tools: When working with less familiar languages or tools, AI can provide a starting point or syntax assistance, though experiences are mixed. "LarryMade2" shares, "I tried it - didn't like it. Had an LLM work on a backup script since I don't use Bash very often. Took a bunch of learning the quirks of bash to get the code working properly. While I'll say it got me started, it wasn't a snap of the fingers and a quick debug to get something done."
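
For illustration, the kind of hand-written test double "com2kid" describes might look like the minimal Python sketch below. The PaymentGateway interface, the checkout function, and every name here are hypothetical, invented purely for this example; it is the repetitive shape of the code, not any specific API, that commenters say LLMs handle well.

    # A minimal, hypothetical example of a hand-written mock: the kind of
    # repetitive, error-prone test code that commenters say LLMs are good
    # at generating. None of these names come from the discussion.
    class FakePaymentGateway:
        """Stands in for a real payment service during tests."""

        def __init__(self):
            self.charges = []  # record every charge for later assertions

        def charge(self, amount):
            self.charges.append(amount)
            return {"status": "ok"}

    def checkout(cart_total, gateway):
        # Code under test: delegates the charge to whatever gateway it is
        # given, real or fake.
        return gateway.charge(cart_total)

    def test_checkout_charges_cart_total():
        gateway = FakePaymentGateway()
        assert checkout(42, gateway)["status"] == "ok"
        assert gateway.charges == [42]

    test_checkout_charges_cart_total()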

The Role of the Developer in an AI-Assisted Workflow

The consensus is that AI tools are most effective when used by experienced developers who can guide, validate, and correct the AI's output. The developer shifts from being solely a code writer to a director, reviewer, and editor.

  • Human Oversight is Crucial: "binarymax" describes giving up on AI-generated code and returning to hand-coding, while others argue the problems call for oversight rather than abandonment. "baq" notes, "The original definition of vibe coding meant that you just let the agent write everything, and if it works then you commit it. Your code review and security check turned this from vibe coding into something else."
  • "Coding in the Small" vs. "Coding in the Large": AI is generally seen as more effective for small, well-defined tasks ("coding in the small") rather than entire features or complex architectural designs ("coding in the large"). "manoDev" explains, "“AI” is great for coding in the small, it’s like having a powerful semantic code editor, or pairing with a junior developer who can lookup some info online quickly. The hardest part of the job was never typing or figuring out some API bullshit anyway."
  • Expertise in Prompting: The skill of crafting effective prompts and guiding the AI has become a new form of developer expertise. "jmull" remarks, "For me, success with LLM-assisted coding comes when I have a clear idea of what I want to accomplish and can express it clearly in a prompt." "matula" shares, "I've also learned enough about LLMs that I know how to write that CLAUDE.md so it sticks to best practices."
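
As a purely illustrative sketch of what "matula" describes, a project-conventions file for Claude Code might contain entries like the following. The specific rules are invented for this example; any real CLAUDE.md would be tailored to its project.

    ## Coding conventions (CLAUDE.md -- hypothetical contents)
    - Use snake_case for all Python identifiers; never mix naming styles.
    - Do not add new dependencies without asking first.
    - Run the test suite after every change; fix failures before finishing.
    - Prefer small, pure functions over shared mutable state.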

Cost and Business Implications

The financial aspect of AI tools, particularly large language models (LLMs), is also a point of discussion, with concerns about escalating costs and the business case for their adoption.

  • High Costs of Usage: Some users reported significant daily costs for using LLMs, sparking debate about their economic viability. "philip1209" mentions, "I looked at our anthropic bill this week. Saw that one of our best engineers was spending $300/day on Claude. Leadership was psyched about it." "merlincorey" notes the potential for recurring high costs: "Claude is making $72k a year for a consistent $300/day spend" ($300 × 240 working days = $72,000).
  • Return on Investment: Questions arise about whether the productivity gains justify the expenses, especially when factoring in the time spent on review and correction. "pydry" expresses frustration: "I was told that I wasnt using it enough by one arm of the company and that I was spending too much by another. Meanwhile, try as I might I couldnt prevent it from being useless."
  • Vendor Lock-in and Rate Hikes: Concerns were raised about future price increases or limitations imposed by AI providers. "PhantomHour" speculates, "One imagines Leadership won't be so pleased after the inevitably price hike."

The Debugging Debate: Printf vs. Interactive Debuggers

A tangential but significant sub-theme is the long-standing debate between printf-style logging and interactive debuggers. While not directly about AI, it touches on developer workflow and the evolution of tools.

  • "Old Fogeys" and printf: Some perceived older developers as sticking to printf out of stubbornness, while others defended it as a valid and sometimes superior method, especially for concurrent programming or when attaching a debugger is difficult. "unconed" suggests, "The old fogeys don't rely on printf because they can't use a debugger, but because a debugger stops the entire program and requires you to go step by step. Printf gives you an entire trace or log you can glance at, giving you a bird's eye view of entire processes." Conversely, "lordnacho" argues for logging frameworks, and "cbanek" supports logging as a more robust method.
  • Debugger Utility: Others highlight the power of interactive debuggers for stepping through code, inspecting variables, and understanding complex call stacks. "shmerl" states, "debugger is really powerful if you use it more than superficially," and "TheRoque" emphasizes the inefficiency of rerunning to insert print statements. "LandR" notes the utility of conditional breakpoints.
  • Complementary Tools: Many concluded that both methods have their place and are not mutually exclusive. "VectorLock" summarizes, "Interactive debuggers and printf() are both completely valid and have separate use-cases with some overlap. If you're trying to use, or trying to get people to use, exclusively one, you've got some things to think about."
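
As a minimal Python sketch of the trade-off described above, the toy program below emits a printf-style trace from several threads, with a commented-out conditional breakpoint showing the debugger-style alternative. The worker function and all names are invented for this illustration.

    # Hypothetical concurrent program illustrating the two debugging styles.
    import threading

    def worker(n, results):
        value = n * n
        # printf-style: each thread leaves a trace line, so after the run
        # you can scan the full interleaved log without pausing anything.
        print(f"[worker {n}] computed {value}")
        # Debugger-style alternative: a conditional breakpoint that fires
        # only for the interesting case (uncomment to drop into pdb).
        # if n == 3:
        #     breakpoint()
        results[n] = value

    results = {}
    threads = [threading.Thread(target=worker, args=(i, results)) for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(results)  # e.g. {0: 0, 1: 1, 2: 4, 3: 9}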

The Survey and its Methodology

The survey data itself was also a subject of scrutiny, with questions raised about sample size, methodology, and potential biases.

  • Statistical Significance: Questions were raised about the representativeness of the survey sample. "mr90210" commented, "We have got to stop. In a universe of well over 25 million programmers a sample of 791 is not significant enough to justify such headlines." Others countered that sample size alone does not determine validity: "oasisaimlessly" explained, "You should read more about statistical significance. Under some reasonable assumptions, you can confidently deduce things with small sample sizes," and "spmurrayzzz" provided a more specific statistical perspective. (A worked margin-of-error calculation follows this list.)
  • Self-Reporting Bias: The possibility that participants self-selected based on their enthusiasm for AI was highlighted. "goosejuice" noted, "This is self reported unless I missed something. I bet that skews these results quite a bit." "platevoltage" added, "I would imagine a "Senior Developer" who is super into AI assisted coding would be more likely to come across this survey and want to participate."
  • Marketing vs. Data: There was a sentiment that the article might be promotional rather than purely informative. "pera" stated, "This submission was written by the marketing department of a company with commercial interests on the given topic and with almost no information about the "survey" itself, it's essentially blogspam." "thegrim33" also pointed out the selective reporting of positive aspects.
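
To make that counter-argument concrete: for a simple random sample, the 95% margin of error depends on the sample size, not the population size, so 791 respondents out of 25 million programmers is not inherently meaningless. The sketch below shows the standard calculation, assuming (charitably, and contrary to the self-selection concerns above) a random and unbiased sample.

    import math

    n = 791    # survey sample size
    p = 0.5    # worst-case proportion, which maximizes the margin
    z = 1.96   # z-score for 95% confidence

    # Standard margin of error for a proportion; the 25-million population
    # barely matters (the finite-population correction is ~1 at this scale).
    margin = z * math.sqrt(p * (1 - p) / n)
    print(f"95% margin of error: +/-{margin:.1%}")  # about +/-3.5%

In other words, the real threats to the headline are the sampling and self-reporting biases the commenters raise, not the raw respondent count.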