Essential insights from Hacker News discussions

Anthropic raises $13B Series F

This discussion primarily revolves around the extremely high valuations of AI companies like Anthropic, questioning their sustainability and comparing them to past tech bubbles and historical economic principles. A secondary but significant theme is the pervasive fraud and questionable ethics in the cryptocurrency space, with commenters frequently drawing parallels to the ambition and risk-taking seen in the AI sector.

Unrealistic Valuations and AI Hype

A dominant sentiment is that current valuations for AI companies, particularly Anthropic, are excessively high and possibly driven by hype rather than sound fundamentals. This concern is frequently voiced through comparisons to Alphabet's market capitalization and revenue.

  • "These numbers seem made up at times / difficult to comprehend what they expect is happening ..." said duxup.
  • "We're in a VC bubble; any project that mentions AI gets tons of money." stated perks_12, linking to a post as an example.
  • seneca commented, "That genuinely feels like satire. I guess the beauty of good satire is that it borders on reality. The Juicero of the AI era." This sentiment is echoed by edm0nd describing a GPT wrapper app with "100 downloads" and "dozens of buzzwords" as "obviously just bs."
  • The sheer scale of the valuations is highlighted by StopDisinfo910: "Alphabet 2024 revenue: 350 billions dollars Anthropic 2024 revenue: 1 billion dollars. Unreasonable doesn’t even start to capture it. Anthropic being worth 10% of Alphabet is beyond insane."
  • However, some argue that the valuation is more forward-looking, with YetAnotherNick pointing out: "So 10% of valuation for 1.5% of revenue, which grew 5x in last 6 months. Doesn't seem as unrealistic as you put it, if it has good gross margin which some expects to be 60%. Also Google was valued at $350B when it had $5B revenue." (A back-of-the-envelope version of this arithmetic is sketched after the list.)
  • nostrademons provided a historical analogy with Nvidia vs. Intel to illustrate how forward-looking investors can be rewarded, stating, "Investors are forward looking, and market conditions can change abruptly. If Anthropic actually displaces Google, it's amazingly cheap at 10% of Alphabet's market cap."
  • The debate also touches on the concept of "moats" in the AI space, with datadrivenangel suggesting, "If Anthropic's internal version of Claude Code gets so good that they can recreate all of google's products quickly there's no moat anymore. If AI is winner take all, then the value is effectively infinite." This leads to a discussion about user lock-in as a moat, with SirMaster questioning, "Is there no moat for previous account and user buy-in? Convincing billions of users to make a new account and do all their e-mail on a new domain?"
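For concreteness, here is a back-of-the-envelope version of the arithmetic behind the StopDisinfo910 and YetAnotherNick comments above. The revenue figures come from the quoted comments; the ~$2T Alphabet market cap, the $183B post-money headline, and the ~$5B Anthropic run rate are rough outside assumptions used only to show where the "10%" and "1.5%" ratios come from.

```python
# Rough ratios behind the thread's valuation debate. All inputs are
# approximate and illustrative; see the lead-in for which are assumptions.

alphabet_market_cap = 2_000e9  # ~$2T (assumption, rough 2025 figure)
alphabet_revenue    = 350e9    # "Alphabet 2024 revenue: 350 billions dollars"
anthropic_valuation = 183e9    # Series F post-money headline (assumption)
anthropic_run_rate  = 5e9      # ~$5B annualized after the quoted 5x growth (assumption)

valuation_ratio = anthropic_valuation / alphabet_market_cap
revenue_ratio = anthropic_run_rate / alphabet_revenue

print(f"valuation ratio: {valuation_ratio:.1%}")  # ~9%, the "10% of Alphabet" claim
print(f"revenue ratio:   {revenue_ratio:.1%}")    # ~1.4%, the "1.5% of revenue" claim
```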

The Nature of Money and Investment in the AI Era

There's a recurring theme that current economic metrics and the very concept of money are becoming detached from tangible reality, especially in the context of massive AI investments.

  • "It's a post-money valuation, so that suggests the money involved has transcended beyond actual moneyness into some other post-meaningful realm." mused isoprophlex.
  • AlienRobot humorously outlined a progression: "Step 1: burn billions of dollars. Step 2: achieve AGI. Step 3: ? Step 4: transcend money."
  • usrnm expressed a general sentiment: "I feel like the money itself makes less and less sense these days. It's just numbers that are becoming increasingly detached from the real world."
  • fullshark countered this by suggesting a lack of alternatives: "The real world sees no other opportunities for outsized returns. Too much money chasing too little opportunity."
  • prasadjoglekar elaborated: "Yup! Public markets are at all time highs. Other hard assets are also at all time highs. This sort of speculative investment only makes sense when nothing else is attractive."
  • ACCount37 characterized some perspectives as naive: "A man looks at economics. Understands nothing. Thinks it must be all fake and made up. He must be so smart for seeing it through!"
  • IshKebab agreed with the premise but not the conclusion: "It _is_ all fake and made up, and the numbers _are_ detached from the real world, but it's not like the market doesn't know that."
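As a brief aside on the term isoprophlex is riffing on: a post-money valuation is just the pre-money valuation plus the cash raised in the round. A minimal sketch, assuming the round's reported headline figures of $13B raised at a $183B post-money:

```python
# Post-money arithmetic under the headline figures (assumptions, not audited numbers).
raised = 13e9        # the Series F amount from the headline
post_money = 183e9   # reported post-money valuation (assumption)

pre_money = post_money - raised
print(f"implied pre-money valuation: ${pre_money / 1e9:.0f}B")   # ~$170B
print(f"new investors' rough stake:  {raised / post_money:.1%}")  # ~7%
```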

AI Infrastructure Costs and Sustainability

The immense cost of developing and running advanced AI models is a significant concern, leading to questions about the sustainability of the current "cash furnace" model.

  • "The compute moat is getting absolutely insane. We're basically at the point where you need a small country's GDP just to stay in the game for one more generation of models." lamented llamasushi.
  • "What gets me is that this isn't even a software moat anymore - it's literally just whoever can get their hands on enough GPUs and power infrastructure," llamasushi continued, highlighting the critical role of hardware and infrastructure providers.
  • "If GPT-5/Opus-4 class probably $1B+? At this rate GPT-7 will need its own sovereign wealth fund," llamasushi forecasted the escalating costs.
  • duxup questioned the proportionality of improvements: "It's not clear to me that each new generation of models is going to be 'that' much better vs cost. Anecdotally moving from model to model I'm not seeing huge changes in many use cases."
  • renegade-otter observed, "We do seem to be hitting the top of the curve of diminishing returns. Forget AGI - they need a performance breakthrough in order to stop shoveling money into this cash furnace."
  • mikestorrent offered a more optimistic view on efficiency: "Inference performance per watt is continuing to improve, so even if we hit the peak of what LLM technology can scale to, we'll see tokens per second, per dollar, and per watt continue to improve for a long time yet."
  • reissbaker provided a nuanced view of profitability: "According to Dario, each model line has generally been profitable: i.e. $200MM to train a model that makes $1B in profit over its lifetime. But, since each model has been more and more expensive to train, they keep needing to raise more money to train the next generation of model, and the _company_ balance sheet looks negative." dom96 was skeptical, saying this amounts to a "ponzi scheme" if the next model doesn't justify the investment. (A toy cash-flow sketch of this dynamic follows the list.)
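reissbaker's point is easier to see with a toy cash-flow model: each model line earns back a multiple of its own training cost, yet the company's cash position hits a deeper trough with every generation, because the next training bill lands before earlier models have finished paying back. The sketch below takes the $200M-cost, $1B-lifetime-profit figures from the quote; the 5x cost growth per generation, two-year payback window, and six-month training cadence are illustrative assumptions, not Anthropic's actual finances.

```python
# Toy model: per-model profitability vs. company-level cash flow.
# All figures are illustrative assumptions (see lead-in).

train_cost = 200e6        # first model: "$200MM to train"
payback_multiple = 5.0    # lifetime profit = 5x training cost ("$1B in profit")
lifetime_quarters = 8     # profit arrives evenly over ~2 years (assumption)
cost_growth = 5.0         # each new generation costs ~5x more to train (assumption)
quarters_between = 2      # a new training run starts every ~6 months (assumption)

cash = 0.0
earning = []  # [quarters_remaining, profit_per_quarter] for each shipped model

for gen in range(4):
    cash -= train_cost  # pay this generation's training bill up front
    print(f"gen {gen}: cash right after the ${train_cost / 1e6:,.0f}M training bill: "
          f"${cash / 1e6:,.0f}M")
    earning.append([lifetime_quarters, train_cost * payback_multiple / lifetime_quarters])
    for _ in range(quarters_between):  # collect profit until the next run starts
        for model in earning:
            if model[0] > 0:
                cash += model[1]
                model[0] -= 1
    train_cost *= cost_growth

# Each trough is deeper than the last (-$200M, -$950M, -$4,450M, -$21,700M):
# that is the "keep needing to raise more money" part, even though every
# individual model eventually returns 5x its training cost.
```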

The Perilous World of Crypto and Fraud

A significant portion of the discussion touches upon the ethical failings and fraudulent activities prevalent in the cryptocurrency space, often drawing parallels to the risk-taking in AI. The downfall of Sam Bankman-Fried (SBF) and FTX is a recurring reference point.

  • The initial trigger for this theme was a comment about Anthropic's funding rounds "twisting the knife deeper in SBF," prompting discussion about what "could have been" if he had "survived the downturn."
  • hn_throwaway_99 strongly objected to equating SBF's situation with mere bad luck: "But SBF got into the situation he was in due to his _egregious fraud_. The accounting at FTX was a criminal joke... His empire collapse was pretty inevitable IMO if you look at what a clown show FTX was under the covers."
  • arduanika agreed, differentiating between normal bankruptcy and FTX's case: "Companies go bust all the time... But going bust and stealing billions. Whether by negligence or intent, FTX was arranged so that they couldn't go bust without stealing."
  • FinnLobsien raised the possibility that other companies have had "similarly sketchy situations, cleaned up their act and nobody ever noticed," suggesting a broader pattern of "faking it till they made it."
  • llamasushi pointed to Tether and Bitfinex as examples of entities that "got to where they are by 'faking it till they made it' long enough to actually make it," citing Tether's stalled audits.
  • The discussion then delves into whether SBF's actions were inherently criminal regardless of outcome: "Things working out in the end doesn't make what he did not a crime at the time," argued ramesh31. yunwal countered that "Practically speaking, it does. He would not have seen jail time."
  • m101's joke, "what do you call a rogue trader that makes money? Managing director," succinctly captured the idea that profitability can gloss over questionable practices.
  • The actions of the FTX trustee in liquidating assets were debated, with paulpauper suggesting they "sold Anthropic out at the bottom" and that liquidators should "manage assets for best eventual return rather than just convert everything to cash." ealexhudson countered that the "issue wasn't that crypto markets in general were down at that point; the issue was they were doing frauds," and dgacmu added that the trustee's job "is not to commit further fraud by gambling with the remaining funds."
  • The "outlier" nature of successful companies compared to fraud was discussed. "Are they the exceptions or the rules, that's the question," asked lm28469. zigurd dismissed examples like Amazon and Tesla as potentially irrelevant or misunderstood in this context.

Questionable Business Practices and Founder Stories

The conversation also touches on broader themes of entrepreneurial risk, the sanitization of founder narratives, and the potential for fraud to be masked by success.

  • The FedEx founder's story of winning money gambling to save the company was brought up as an example of "unreasonable risk." matheist questioned whether the public story was a sanitized version of reality, and askafriend agreed: "that seems very likely since so many 'founder stories' are heavily spun tales."
  • FireBeyond criticized the FedEx story's omission of potentially unethical behavior: "What this version of the FedEx story doesn't mention is that Fred was already stiffing his pilots on their salaries. Taking the last money in the company and deciding that the best use for it was the blackjack table... worked well, but it was a gamble, let's be clear, not a calculated decision."
  • The "Quantum Billionaire Trick" LessWrong post was referenced as a strategy that seems to align with high-risk gambles.
  • rncesvalles noted the difference in risk between "many small 51% bets" and "a single all-or-nothing 51% bet" (a quick simulation of the distinction follows this list).
  • arduanika cautioned that this distinction might be blurred by those "right when they're about to do fraud."
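The risk distinction rncesvalles draws can be made concrete with a small Monte Carlo sketch: the two strategies below have the same 51% edge and essentially the same expected value, but spreading the edge across many small bets makes ruin vanishingly unlikely, while a single all-or-nothing bet loses everything 49% of the time. The bet structure (double-or-nothing, 1% of bankroll per small bet) is an illustrative assumption, not anything proposed in the thread.

```python
import random

# Compare many small 51% bets against one all-or-nothing 51% bet.
# "Ruined" = ending below half the starting bankroll, or at zero for the all-in case.
random.seed(0)
TRIALS = 20_000
P_WIN = 0.51

def many_small_bets(n_bets=100, stake_fraction=0.01):
    """Repeatedly risk 1% of the current bankroll on a 51% double-or-nothing bet."""
    bankroll = 1.0
    for _ in range(n_bets):
        stake = bankroll * stake_fraction
        bankroll += stake if random.random() < P_WIN else -stake
    return bankroll

def single_all_in_bet():
    """Risk the entire bankroll once on the same 51% double-or-nothing bet."""
    return 2.0 if random.random() < P_WIN else 0.0

small = [many_small_bets() for _ in range(TRIALS)]
all_in = [single_all_in_bet() for _ in range(TRIALS)]

print(f"many small bets: mean {sum(small) / TRIALS:.3f}, "
      f"ruined {sum(b < 0.5 for b in small) / TRIALS:.1%}")
print(f"single all-in:   mean {sum(all_in) / TRIALS:.3f}, "
      f"ruined {sum(b == 0.0 for b in all_in) / TRIALS:.1%}")
```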

Economic Principles and Market Dynamics

Underlying many of these discussions is a debate about fundamental economic principles, money creation, and market behavior.

  • The role of governments and central banks in money creation was debated: Printerisreal asserted that "Governments, CBs and investment banks... print $trillions," while arcticbull countered by explaining how commercial banks create money through lending, and how the Fed influences, but does not directly print, money for government operations.
  • The discussion also touched on the idea that low interest rates maintained by "governments, CBs and investment banks" help push money into these speculative investments.
  • The nature of economics itself was debated, with eatsyourtacos and ACCount37 offering starkly different views on its rigor and predictability. luisfmh argued the "gap between the levels of statistical significance you get in economics vs physics is massive."
  • pembrook defended venture capital as a structured process of making high-risk, high-reward bets, comparing it to the hundreds of automotive startups in the early 1900s. He argued that "Everybody involved knows exactly the high risk level of the bets they are making."

The Future of AI and Technology

Finally, there are forward-looking thoughts on the trajectory of AI and its impact on technology and society.

  • The potential for China to lead in AI due to a less capitalistic approach was raised by sgnelson.
  • AI hardware, one comparison suggested, becomes obsolete like old cars rather than retaining value like enduring infrastructure such as fiber optic cables.
  • The potential for locally run, uncensored video models to be a "watershed moment" was discussed, alongside the legal and ethical implications of AI-generated content (e.g., NSFW or illegal content). giancarlostoro and yieldcrv debated the societal and legal challenges of dynamically generated avatars and explicit content.
  • The discussion concluded with thoughts on the increasing cost of AI infrastructure, reminiscent of semiconductor manufacturing, and the potential for AI companies to follow similar boom-and-bust cycles.