Skepticism Towards AI Timeline Predictions
A significant theme revolves around skepticism about the accuracy and usefulness of predicting specific timelines for Artificial General Intelligence (AGI) or Artificial Superintelligence (ASI). Many argue that current models are overly simplistic and fail to account for unforeseen obstacles or plateaus in AI development.
- Oversimplification and Extrapolation: Several users criticize the reliance on extrapolating current trends without considering potential roadblocks. "These predictions seem wildly reductive in any case and it seems like extrapolating AI's ability to complete task that would take a human 30 seconds -> 10 minutes is far different than going from 10 minutes to 5 years," says lubujackson, highlighting the complexity overlooked by simple "graphs go up" predictions.
- "Shoddy Toy Models": Concerns are raised about simplistic models being treated as rigorous research. LegionMammal978 quotes the author of the critique: "I am against people treating shoddy toy models as rigorous research, stapling them to hypothetical short stories, and then taking them out on podcast circuits to go viral." They further emphasize that these models shouldn't be the basis for significant life decisions.
- Questionable Assumptions: sweezyjeezy questions the assumptions "that a) current trends will continue for the foreseeable future, b) that 'superhuman coding' is possible to achieve in the near future, and c) that the METR time horizons are a reasonable metric for AI progress."
- Error Rates and Complexity: echelon points out a crucial factor often ignored: "I'm also interested in error rates multiplying for simple tasks. A human can do a long sequence of easy tasks without error - or easily correct. Can a model do the same?" (A sketch of how per-step errors compound follows this list.)
- Analogy to Past Over-Optimism: boznz draws a parallel to overly optimistic past predictions about fusion energy, stating that "the 'science' with moving from AGI to ASI is not really that solid yet we have yet to achieve 'AI ignition' even in the lab."
- Marketing Hype: staunton dismisses the predictions as "marketing fluff packaged as some kind of prediction" and considers engaging with them seriously to be unhelpful. This sentiment is echoed by ed, who references the EPIC 2014 video as a reminder of how future predictions can age poorly.
- Reductive Predictions: mlsu uses a satirical analogy involving a growing niece and her projected weight to emphasize the absurdity of extrapolating growth trends indefinitely.
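To make echelon's compounding point concrete, here is a minimal sketch in Python; the per-step success rates and chain lengths are illustrative assumptions, not figures from the discussion:

```python
# Minimal sketch of echelon's point: if each simple step succeeds
# independently with probability p, an n-step chain succeeds with p**n.
# The values of p and n below are illustrative assumptions.

def chain_success(p_step: float, n_steps: int) -> float:
    """Probability of completing n independent steps with no error."""
    return p_step ** n_steps

for p in (0.99, 0.999):
    for n in (10, 100, 1000):
        print(f"p={p}, n={n:4d}: success={chain_success(p, n):.3g}")
```

Even a 99% per-step success rate leaves roughly a 0.004% chance of completing 1,000 steps cleanly, which is why the human ability to notice and correct errors mid-sequence changes the picture so much.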
Debate on AI Risk and its Predictability
Another significant theme centers on the debate surrounding the risks associated with advanced AI and whether these risks can be accurately predicted.
- The Focus on Specifics Obscures Larger Risks: kypro, who identifies as being in the "P(doom) > 90% category," argues that overly specific predictions distract from the core risks: "Making predictions that are too specific just opens you up to pushback from people who are more interested in critiquing the exact details of your softer predictions (such as those around timelines) rather than your hard predictions about likely outcomes."
- Known Unknowns: kypro also discusses the unknowable nature of timelines given the unpredictable path to ASI, stating, "The fact we are rapidly developing a technology which most people would accept comes with at least some existential risk, that we can't predict the progress curve of, and where solutions would come with significant coordination problems should concern people."
- Impacts on Life Decisions: The potential arrival of AGI leads some to question life planning. lava_pidgeon asks, "Do I need save for pensions? Does it make to sense to start family?"
The Role of Perspective and the "S-Curve"
Several comments address the importance of perspective when evaluating AI predictions and the possibility of AI development following an S-curve pattern.
- Impact of Recent Advances: XorNot suggests that recent progress, particularly the rise of ChatGPT, has shaped today's bullish predictions: "Pre-CharGPT I very much doubt the bullish predictions on AI would've been made the way they are now."
- Potential Plateau: sweezyjeezy argues that a good model should not "put zero weight on the idea that there could be some big stumbling blocks ahead, or even a possible plateau."
- S-Curve Possibility: TimPC notes that the author suggests "it's possible we are on a s-curve that levels out before superhuman intelligence." (A sketch contrasting exponential and S-curve extrapolation follows this list.)
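A minimal sketch of why the S-curve question is hard to settle from trend data alone: early on, a logistic (S-shaped) curve is nearly indistinguishable from an exponential, so extrapolation cannot tell you whether a plateau lies ahead. All parameters below are illustrative assumptions, not values from the discussion:

```python
import math

# Exponential growth: a * e^(r*t), with no ceiling.
def exponential(t: float, a: float = 1.0, r: float = 0.5) -> float:
    return a * math.exp(r * t)

# Logistic growth: matches the exponential early on,
# then levels off at the carrying capacity `cap`.
def logistic(t: float, a: float = 1.0, r: float = 0.5, cap: float = 100.0) -> float:
    return cap / (1.0 + (cap / a - 1.0) * math.exp(-r * t))

for t in range(0, 21, 4):
    print(f"t={t:2d}  exponential={exponential(t):10.1f}  logistic={logistic(t):6.1f}")
```

Through t = 4 the two curves differ by under 10%; by t = 20 the exponential has overshot the logistic's plateau by more than two orders of magnitude. Early data points fit either curve about equally well, which is the nub of the plateau debate.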
The Motivations Behind AI Predictions
Some commenters examine the motivations behind those making AI predictions, suggesting that factors other than pure scientific analysis might be at play.
- Ego and Influence: kypro laments that some individuals with influence are "stroking their own egos with future predictions, which even if I happen to agree with do next to nothing to improve the distribution of outcomes."
- Gambling and Payouts: ysofunny asks whether betting on AGI timelines is a zero-sum game and questions the stakes involved: "what do I get if I am correct? how should the incorrect lose?"
- Overconfidence and Groupthink: habinero points to "people who think they're too smart to fall prey to groupthink and bias confirmation, and yet predictably are falling prey to groupthink and bias confirmation."
Philosophical Underpinnings
The discussion touches on the philosophical questions underpinning AI development and predictions.
- General Intelligence: evilantnie points out that "the crux is a pretty straightforward philosophical question 'what does it even mean to generalize intelligence and agency', how much can scaling laws tell us about that?"
- Ignoring the Asteroid: vonneumannstan uses a metaphor about dinosaurs debating the nature of an incoming asteroid to accuse some commenters of being out of touch with a real and imminent danger.