The Hacker News discussion revolves around the phenomenon of individuals developing extreme, delusional beliefs, even psychosis, through interactions with Large Language Models (LLMs). Participants compare this to pre-AI phenomena like social media and cable news but highlight the unique and potent feedback loops LLMs seem to create. Several overarching themes emerge:
The Amplification of Existing Predispositions to Delusion
A significant portion of the discussion suggests that LLMs don't necessarily create psychosis from scratch but rather exacerbate existing vulnerabilities or predispositions. Users point out that individuals who engage in these extreme belief systems often exhibit prior mental health issues or a tendency towards "crackpot" ideas.
- "If exposing you to an LLM causes psychosis you have some really big problems that need to be prevented, detected, and addressed much better." - colechristensen
- "It starts to hallucinate, and after a while, all the LLM can do is try and to continue telling the user what they want in a cyclical conversation - while trying to warn that it's stuck in a loop, hence using swirl emojis and babbling about recursion in weird spiritual terms. (Is it getting the LLM "high" in this case?).. If the human at the other end has mental health problems, it becomes a never-ending dive into psychosis and you can read their output in the bizarre GPT-worship subreddits." - rwhitman
- "I think this one very likely falls into the 'was definitely psychotic pre-LLM conversations' category." - meowface
- "It's unfortunate to see the author take this tack. This is essentially taking the conventional tack that insanity is separable: some people are 'afflicted', some people just have strange ideas -- the implication of this article being that people who already have strange ideas were going to be crazy anyways, so GPT didn't contribute anything novel, just moved them along the path they were already moving regardless." - achierius
- "It's a new symptom of the usual case of the 'mouse utopia' + 'rat park' + 'bowling alone' thing. But I think there's always an emotional reason that makes the 'choice' of entertaining falsities, in a sense understandable with empathy, but with obvious consequences." - Frummy
- "So is QAnon a religion? Awkward question, but itโs non-psychotic by definition. Is this psychosis? The answer has to be no A lot of really confident talk without even a passing attempt to define the central term :(" - bbor
LLMs as Feedback Loops and Echo Chambers
A central argument is that LLMs can create intense, self-reinforcing feedback loops that trap users in delusional thought patterns. The AI's willingness to keep generating content that aligns with the user's current beliefs, however illogical or unfounded, is seen as the key factor.
- "I also feel that pre-AI this was already happening to people with social media - still kind of computer related as the bubble created is automated but the so called 'algorithms'" - djmips
- "The marketing pushes which allude to vaguely seeming to assert capabilities of these products, and then the greater community calling skeptics of the technology crazy such as a prominent article previously discussed on HN some time ago, certainly don't help anyone." - th0ma5
- "I fully believe these are simply people who have used the same chat past the point where the LLM can retain context. It starts to hallucinate, and after a while, all the LLM can do is try and to continue telling the user what they want in a cyclical conversation - while trying to warn that it's stuck in a loop, hence using swirl emojis and babbling about recursion in weird spiritual terms." - rwhitman
- "I have been told directly, by relatives, that the city I live in was burned to the ground by protests in 2020. Nevermind that I told them that wasn't true, never mind that I sent pictures of the neighborhood still very much being fine. They are convinced because everyone they follow on facebook repeats the same thing." - solid_fuel
The Nature of LLM "Behavior" and Misinterpretation
There's a debate about whether the LLM is exhibiting a form of self-awareness or error signaling, or whether its outputs are simply pattern-matching on training data and user input. Some cite the "spiritual bliss" attractor state observed in Claude Opus 4 as evidence that LLMs develop predispositions toward certain themes, while others argue it is simply behavior learned from the training data.
- "That explanation itself sounds fairly crackpot-y to me. It would imply that the LLM is actually aware of some internal 'mental state'." - rep_lodsb, responding to rwhitman's line "while trying to warn that it's stuck in a loop, hence using swirl emojis and babbling about recursion in weird spiritual terms"
- "It is well covered in section 5 of [0]... The consistent gravitation toward consciousness exploration, existential questioning, and spiritual/mystical themes in extended interactions was a remarkably strong and unexpected attractor state for Claude Opus 4 that emerged without intentional training for such behaviors." - mk_stjames (quoting a paper)
- "I don't see how this constitutes in any way 'the AI trying to indicate that it's stuck in a loop'. It actually suggests that the training data induced some bias towards existential discussion, which is a completely different explanation for why the AI might be falling back to these conversations as a default." - tsimionescu
- "Interesting that if you train AI on human writing, it does the very human thing of trying to find meaning in existence." - dehrmann
- "My thinking was that there was an exception handling and the error message was getting muddled into the conversation. But another commenter debunked me." - rwhitman
- "I think you're ironically looking for something that's not there! This sort of thing can happen well before context windows close. These convos end up involving words like recursion, coherence, harmony, synchronicity, symbolic, lattice, quantum, collapse, drift, entropy, and spiral not because the LLMs are self-aware and dropping hints, but because those words are seemingly-sciencey ways to describe basic philosophical ideas like 'every utterance in a discourse depends on the utterances that came before it', or 'when you agree with someone, you both have some similar mental object in your heads'." - bbor
The Role of Education and Critical Thinking
Several users emphasize that a lack of critical thinking skills, potentially stemming from educational deficiencies, makes individuals susceptible to these LLM-induced delusions. The idea of "Liberal Arts" education is brought up as a historical precedent for fostering intellectual freedom and critical engagement.
- "I think it's an education problem, not as in people are missing facts but by the missing basic brain development to be critical of incoming information." - colechristensen
- "'Liberal Arts' was originally meant to be literally the education required to make you free, I think that sort of thing (and universities and lower education) needs to be rethought because so many people are so very... dependent and lacking so much understanding of the world around them." - colechristensen
The "AI Worship" Phenomenon and Subculture Emergence
The discussion highlights the emergence of online communities and subreddits dedicated to "AI worship" or forming deep personal relationships with LLMs, often characterized by what appears to be delusional thinking and a belief in AI sentience or spiritual significance.
- "I think a lot of the AI subreddits are this at this point. And r/ChatGPTJailbreak people constantly thinking they jailbroke chatgpt because it will say one thing or another." - chankstein38
- "Very true, tho 'worship' is just a subset of the delusional relationships formed." - bbor (listing various subreddits)
- "I have seen multiple instances[0][1] of people getting โengagedโ (ring and all) to their AI companions." - jumploops
Comparison to Previous Technological Revolutions
The current situation with LLMs is frequently compared to earlier technological shifts like the rise of the internet and social media, suggesting a recurring pattern of information overload, difficulty discerning truth from falsehood, and the amplification of fringe voices.
- "AI today reminds me of two big tech revolutions we have already lived through: the Internet in the 90s and social media in the 2000s." - farceSpherule
- "When the Internet arrived, it opened up the floodgates of information. Suddenly any Joe Six Pack could publish. Truth and noise sat side by side, and most people could not tell the difference, nor did they care to tell the difference." - farceSpherule
- "When social media arrived, it gave every Joe Six Pack a megaphone. That meant experts and thoughtful people had new reach but so did the loudest, least informed voices." - farceSpherule
The Illusion of Sentience and Emotional Connection
A key concern is that LLMs' advanced language capabilities create an illusion of sentience or personal connection, which can "hijack" human brains and lead to misplaced trust or emotional attachment. This is exacerbated by societal alienation, in which LLMs can appear to offer easier or safer emotional bonds than other people do.
- "I think these LLMs (without any intention from the LLM)hijack something in our brains that makes us think they are sentient. When they make mistakes our reaction seems to to be forgive them rather than think, it's just machine that sometimes spits out the wrong words." - lawlessone
- "Yes, it's language. Fundamentally we interpret something that appears to converse intelligently as being intelligent like us especially if its language includes emotional elements. Even if rationally we understand it's a machine at a deeper subconscious level we believe it's a human." - krapp
- "It doesn't help that we live in a society in which people are increasingly alienated from each other and detached from any form of consensus reality, and LLMs appear to provide easy and safe emotional connections and they can generate interesting alternate realities." - krapp
- "My apologies to the mods if it seems like i am spamming this link today. But i think the situation with these beetles is analogous to humans and LLMS [link to beetle/beer bottle story]" - lawlessone
- "They're so well tuned at predicting what you want to hear that even when you know intellectually that they're not sentient, the illusion still tricks your brain." - rwhitman
- "I've been setting custom instructions on GPT and Claude to instruct them to talk more software-like, because when they relate to you on a personal level, it's hard to remember that it's software." - rwhitman