Essential insights from Hacker News discussions

The Four Fallacies of Modern AI

This Hacker News discussion revolves around the current state of Artificial Intelligence, particularly Large Language Models (LLMs), and the challenges in defining and achieving true artificial general intelligence (AGI). The conversation touches upon the limitations of current AI, the nature of intelligence itself, and the use of human-centric language to describe AI capabilities.

The Elusive Goal of True Self-Driving Cars

A significant portion of the discussion acknowledges that while companies like Waymo and Tesla have made strides in autonomous driving, a fully self-driving car that can handle all conditions remains a distant goal. The core challenge lies in the "long tail" of edge cases and unpredictable situations that humans navigate with ease.

  • "There’s a big difference being able to navigate the 80% of everyday driving situations and doing the 20% most people manage just fine but cars struggle with." (belZaah)
  • "There’s a road in these parts: narrow, twisty in three dimensions, unmarked, trees close to the road. Gets jolly slippery in the winter. I can drive that road in the middle of the night in sleet. Can an autonomous car?" (belZaah)
  • "Waymo doesn’t drive on highways and needs huge break in periods to even expand its boundaries in cities it’s already operating in." (kortilla)
  • One user recounted a personal negative experience: "I let them know today — when i laid on my horn while passing a Waymo stopped at a green light blocking the left turn lane — with its right blinker on." (joshribakoff)
  • Another user shared a more serious grievance: "Re: Tesla, this company paid me nearly $250,000 under multiple lemon law claims for their “self driving” software issues i identified that affected safety." (joshribakoff)

The Disconnect Between Current AI and Human Cognition

Several users express skepticism about whether current AI, especially LLMs, truly "understands" or possesses genuine intelligence, as opposed to sophisticated pattern matching. This leads to discussions about embodiment, common sense, and the potential for AI to replicate human-like experiences and motivations.

  • "I think the characterization in the article is fair, “self driving” is not quite there yet." (joshribakoff)
  • "I need to ask because I'm curious, are you using em-dashes ironically, habitually from the Before Times, or did you run your comment through chatgpt first? Or have I been brainwashed into emdash == AI always?" (Cthulhu_)
  • "I've always had the feeling that AI researchers want to build their own human without having to change diapers being part of the process. Just skip to adulthood please, and learn to drive a car without having experience in bumping into things and hurting yourself." (theturtlemoves)
  • "The challenge isn't to abandon our powerful alchemy in search of a pure science of intelligence." (degamad quoting Melanie Mitchell) This led to a counterpoint: "But alchemy was wrong and chasing after the illusions created by the frauds who promoted alchemy held back the advancement of science for a long time. We absolutely should have abandoned alchemy as soon as we saw that it didn't work, and moved to figuring out the science of what worked." (degamad)
  • "This reminds me Douglas Hofstadter, of the GĂśdel, Escher, Bach fame. He rejected all of this statistical approaches towards creating intelligence and dug deep into the workings of human mind..." (shubhamjain)
  • "This article seems to fall straight into the trap it aims to warn us about. All this talk about "true" understanding, embodiment, etc. is needless antropomorphizing." (entropyneur)
  • "AIPedant: "Making predictions about the world" is a reductive and childish way to describe intelligence in humans.Did David Lynch make Mulholland Drive because he predicted it would be a good movie? The most depressing thing about AI summers is watching tech people cynically try to define intelligence downwards to excuse failures in current AI." (AIPedant)
  • "It has absolutely nothing to do with reasoning, and I don't understand how anyone could think it's"close enough". Reasoning models are simply answering the same question twice with a different system prompt. It's a normal LLM with an extra technical step. Nothing else." (iLoveOncall)
  • "I would add a fifth fallacy: assuming what we humans do can be reduced to “intelligence”. We are actually very irrational. Humans are driven strongly by Will, Desire, Love, Faith, and many other irrational traits. Has an LLM ever demonstrated irrational love? Or sexual desire? How can it possibly do what humans do without these?" (myflash13)
  • "For all its advanced capabilities, the LLM remains a glorified natural language interface. It is exceptionally good at conversational communication and synthesizing existing knowledge, making information more accessible and in some cases, easier to interact with. However, many of the more ambitious applications, such as so-called "agents," are not a sign of nascent intelligence. They are simply sophisticated workflows—complex combinations of Python scripts and chained API calls that leverage the LLM as a sub-routine. These systems are clever, but they are not a leap towards true artificial agency. We must be cautious not to confuse a powerful statistical tool with the dawn of genuine machine consciousness." (alwinaugustin)

The Role of Language in Shaping Reality vs. Describing It

A philosophical tangent emerged questioning the statement "Language doesn't just describe reality; it creates it." This sparked a debate on whether language is merely descriptive or actively constructs our understanding of the world, and how this applies to AI.

  • "I wonder if this is a statement from the discussed paper or from the blog author. Haven't found the original paper yet, but this blog post very much makes me want to read it." (theturtlemoves)
  • "I never under stand these kinds of statements. Does the sun not exist until we have a word for it, did "under the rock" not exist for dinosaurs?" (ta20240528)
  • "There are some folks (like Donald Hoffman) that believe that consciousness is what creates reality. He believes consciousness is the base layer of reality and then we make up physical reality." (rolisz)
  • "The sun can mean different things to different people. We usually think of it as the physical star, but for some ancient civilizations it may have been seen as a person or a god. Living with these different representations can, in a very real way, shape the reality around you. If you did not have a word for freedom, would as many desire it?" (cpa)
  • "I am not sure how your sun example relates. Language is not whole of reality, but it is clearly part of reality. Memory engram of Coca-Cola is encoded in billions of human brains all over the world, and they are arrangement of atoms." (sanxiyn)
  • "I think create is the wrong word choice here. Shaping reality is a better one, as it doesn't hold the implication that before language, nothing existed." (keiferski)
  • "Where it really gets interesting, IMO, is when these divisions (which originally were mostly just linguistic categories) start shaping what's actually in the world. The concept of property is a good example. Originally it's just a legal term, but over time, it ends up reshaping the actual face of the earth, ecosystems, wars, migrations, on and on." (keiferski)

Debate on AI's Strengths: Computation vs. Understanding

The "Bitter Lesson" essay by Rich Sutton, which emphasizes the success of general methods leveraging massive-scale computation over attempts to build in human-like cognitive structures, was brought up. This led to a discussion about whether brute-force computation is the path to AGI or if deeper understanding and human-like learning processes are necessary.

  • "Everyone always something won’t work until it does. That’s not that interesting." (renewiltord)
  • "But it's just very likely, it will be through brute-force computation (unfortunately). So much for fifty years of observing Freudian slips." (shubhamjain)
  • "Brute force will always be part of the story, but it's not the solution. It just allows us to take an already working solution and make it better." (CuriouslyC)
  • "I think the Stochastic Parrots idea is pretty outdated and incorrect. LLMs are not parrots, we don't even need them to parrot, we already have perfect copying machines. LLMs are working on new things, that is their purpose, reproducing the same thing we already have is not worth it." (visarga)
  • "LLMs are more like pianos than parrots, or better yet, like another musician jamming together with you, creating something together that none would do individually." (visarga)
  • "We are not individually intelligent, it is a social, environment based process, not a pure-brain process." (visarga)
  • "The fact that it can copy smartly exactly ONE of the information in a given prompt (which is a complex sentence only humans could process before) and not others is absolutely a progress in computer science, and very useful. I’m still amazed by that everyday, I never thought I’d see an algorithm like that in my lifetime." (ttoinou)

The Nature and Definition of Intelligence

A recurring theme is the difficulty and subjectivity in defining intelligence. Users grapple with whether intelligence is about prediction, reasoning, consciousness, or a combination of human-like traits. There's also a concern that the field might be "defining intelligence downwards" to fit current AI capabilities.

  • "Are swimming and sailing the same, because they both have the result of moving through the water? I'd say, no, they aren't, and there is value in understanding the different processes (and labeling them as such), even if they have outputs that look similar/identical." (keiferski)
  • "It has absolutely nothing to do with reasoning, and I don't understand how anyone could think it's"close enough"." (iLoveOncall)
  • "AIPedant: "Making predictions about the world" is a reductive and childish way to describe intelligence in humans. Did David Lynch make Mulholland Drive because he predicted it would be a good movie? The most depressing thing about AI summers is watching tech people cynically try to define intelligence downwards to excuse failures in current AI." (AIPedant)
  • "The most depressing thing about AI summers is watching tech people cynically try to define intelligence downwards to excuse failures in current AI." (AIPedant)
  • "I look at it the complete opposite way: humans are defining intelligence upwards to make sure they can perceive themselves better than a computer." (koonsolo)
  • "The social aspect also plays a big role, parallelizing exploration and streamlining exploitation of discoveries. We are not individually intelligent, it is a social, environment based process, not a pure-brain process." (visarga)
  • "It may be reductive but that doesn't make it incorrect. I would certainly agree that creating and appreciating art are highly emergent phenomena in humans (as is for example humour) but that doesn't mean I don't think they're rooted in fitness functions and our evolved brains desire for approval from our tribal peer group." (MrScruff)
  • "It matters if your civilizational system is built on assigning rights or responsibilities to things because they have consciousness or "interiority." Intelligence fits here just as well." (keiferski)
  • "Is embodiment a requirement to hold identity and is identity a pre-requisite for intelligence?" (retrocog)
  • "How would you define intelligence? Surely not by the ability to make a critically acclaimed movie, right?" (pu_pe)
  • "I don't get the problem with this really. I think LLM's "reasoning" is a very fair and proper way to call it. It takes time and spits out tokens that it recursively uses to get a much better output than it otherwise would have. Is it actually really reasoning using a brain like a human would? No. But it is close enough so I don't see the problem calling it "reasoning". What's the fuss about?" (simianwords)
  • "I don't know what "the lowest form of intelligence" is, nobody has a clue what cognition means in lampreys and hagfish." (AIPedant)
  • "They don't need to reach equal human intelligence, the just need to reach an acceptable of intelligence so corporation can reduce labor cost. sure it bad at certain things but you know what ??? most of real world job didn't need a genius either" (tonyhart7)

The Pragmatic View: AI as a Tool for Labor Cost Reduction

A more grounded perspective is offered by those who see AI not as a path to sentience or true intelligence, but as a highly effective tool for automating tasks and reducing labor costs, regardless of whether it "understands" in a human sense.

  • "They don't need to reach equal human intelligence, the just need to reach an acceptable of intelligence so corporation can reduce labor cost. sure it bad at certain things but you know what ??? most of real world job didn't need a genius either" (tonyhart7)