This Hacker News discussion revolves around the potential liability of AI companies for false or defamatory information generated by their models, particularly when it is presented in seemingly authoritative contexts like Google Search. The conversation highlights a conflict between the well-known tendency of AI models to "hallucinate" and the real-world consequences of the misinformation they produce.
AI's Lack of Truthfulness and the Expectation of Accuracy
A central theme is the inherent untruthfulness of current AI models and how this clashes with their widespread deployment. Users question whether companies are adequately accounting for this limitation.
- "blibble" states: "the "AI" bullshitters need to be liable for this type of wilful defamation... and it is wilful, they know full well it has no concept of truthfulness, yet they serve up its slop output directly into the faces of billions of people".
- "aDyslecticCrow" argues: "AI is a used as responsibility diversion machine... If an AI does it is it okay because nobody is responsible?"
- "mindslight" observes: "One has to wonder if one of the main innovations driving "AI" is the complete lack of accountability and even shame."
Disclaimers vs. Authoritative Presentation
A significant point of contention is whether disclaimers absolve AI providers of responsibility, especially when the output appears authoritatively within interfaces like Google Search results.
- "oxguy3" draws an analogy: "The AI summaries in Google aren't presented as wild hallucinations; they show up in an authoritative looking box as an answer to the query you just typed. The New York Times wouldn't be able to get out of libel suits by adding a tiny disclaimer to their masthead; why should it be different for Google?"
- "atq2119" counters the idea of disclaimers being sufficient: "Google aren't advertising their search as "for entertainment purposes only" though. And even if they did, it wouldn't really matter. The way Google search is overwhelmingly used in practice, misinformation spread by it is a public hazard and needs to be treated as such."
- "margalabargala" uses strong analogies to illustrate the insufficiency of disclaimers: "Want to dump toxic waste in your backyard? Just put up a sign so your neighbors know, then if they stick around it's on them, really, no right to complain."
The Nature of AI Errors and Misattribution
The discussion touches upon how AI models arrive at their erroneous outputs, with specific examples of misattribution and factual inaccuracies being cited.
- "jsheard" points out a specific instance: "Reading this I assumed it was down to the AI confusing two different Benn Jordans, but nope, the guy who actually published that video is called Ryan McBeth. How does that even happen?"
- "slightwinder" elaborates on the potential cause: "Searching for "benn jordan isreal", the first result for me is a video[0] from a different creator, with the exact same title and date. There is no mentioning of "benn" in the video, but some mentioning of jordan (the country)."
- "trjordan" hypothesizes the mechanism behind these errors: "Google's AI answers aren't magic -- they're just summarizing across searches. In this case, "Israel" + "Jordan" pulled back a video with opposite views than the author."
Accountability and Legal Liability for AI Outputs
Many participants argue that companies should be held legally accountable for the content their AI systems generate, a stance that draws some debate over legal precedent and the potential for such liability to be misused.
- "mattbuilds" asserts: "Thatβs a false equivalency, sorry that some of us think companies should actually be responsible for the things they produce."
- "deepvibrations" emphasizes the need for legal intervention: "The law needs to stand up and make an example here, otherwise this will just continue and at some point a real disaster will occur due to AI."
- "aDyslecticCrow" stresses the potential for harm: "What about dangerous medical advice? What about openly false advertising? What about tax evasion? If an AI does it is it okay because nobody is responsible?"
- "simmerup" differentiates between linking to external content and generating it: "In my mind the Google result page is like a public space. ... But in this case Google itself is putting out slanderous information it has created itself. So Google in my mind is left holding the buck."
The "Weaponization" of Liability and Free Speech Concerns
A counter-argument raised is that imposing strict liability could lead to the "weaponization" of defamation laws to stifle speech, particularly concerning political discourse.
- "koolba" expresses concern about a litigious society: "Iβm for cleaning up AI slop as much as the next natural born meat bag, but I also detest a litigious society. The types of legal action that stops this in the future would immediately be weaponized."
- "gruez" warns about political shifts: "It's all fun and games until the political winds sway the other way, and the other side are attacking your side for "misinformation"."
- "Newlaptop" echoes this concern about power: "The weapons will be used by the people in power."
The Erosion of Trust and the Purpose of Search
Several comments reflect a growing distrust of search engines, and of information dissemination more broadly, especially as AI is integrated into them.
- "lioeters" criticizes Google's product decisions: "It's making stuff up, giving bad or fatal advice, promoting false political narratives, stealing content and link juice from actual content creators. They're abusing their anti-competitively dominant position, and just burning good will like it's gonna last forever."
- "bsenftner" laments the impact on integrity: "The weaponization of "AI mistakes" - oops, don't take that seriously, everyone knows AI makes mistakes. Okay, yeah, it's a 24 pt headline with incorrect information, it's okay because it's AI. Integrity is dead. Reliable journalism is dead."
- "zozbot234" broadly characterizes AI output: "AI makes stuff up, film at 11. It's literally a language model. It's just guessing what word follows another in a text, that's all it does."
The Business Imperative and the "Enshittification" of Search
The discussion also touches upon the business motivations driving the aggressive deployment of AI in search, suggesting that commercial interests are prioritized over accuracy or user trust.
- "frozenlettuce" posits: "The model that google is using to handle requests in their search page is probably dumber than the other ones for cost savings."
- "Handprint4469" offers a stark assessment of Google's priorities: "no, ads are their flagship product. Anything else is just a medium for said ads, and therefore fair game for enshittification."
- "lioeters" adds: "But then the product manager wouldn't get a promotion. They don't seem to care about providing a good service anymore."
The Nature of AI as a Product vs. a Tool
Some comments center on how AI is being framed and sold, drawing a distinction between tools that augment information retrieval and products that present definitive, though often incorrect, answers.
- "mindslight" contrasts older approaches with current ones: "Twenty years ago, we wouldn't have had companies framing the raw output of a text generator as some kind of complete product... LLM technology would have used to do things like augment search/retrieval, pointing to concrete sources and excerpts."
- "margalabargala" reiterates the presentation point: "When you build a system that purports to give answers to real world questions, then you're responsible for the answers given."