Essential insights from Hacker News discussions

Are OpenAI and Anthropic losing money on inference?

Here's a summary of the themes found in the discussion:

The Viability of AI Business Models and Profitability

A central theme is whether AI companies like OpenAI and Anthropic are actually profitable, particularly with respect to "inference" (the computational cost of running a trained model to generate responses). Much of the debate concerns what "unit economics" should cover and whether the massive upfront training and R&D costs belong in the calculation. Some argue that focusing solely on inference costs, excluding training, presents a misleadingly optimistic picture; a back-of-the-envelope sketch after the quotes below shows how the amortization question plays out.

  • "These articles (of which there are many) all make the same basic accounting mistakes. You have to include all the costs associated with the model, not just inference compute." - JCM9
  • "The issue behind the growing concerns about a giant AI bubble is if that margin is sufficient to cover the costs of everything else." - JCM9
  • "Unit economics needs to include the cost of the thing being sold, not just the direct cost of selling it." - JCM9
  • "The model is what’s being sold. You can’t just sell “inference” as a thing with no model." - JCM9
  • "The article is answering a specific question, and has excluded this on purpose. If you have a sunk training cost you still want to know if you can at least operate profitably." - philipallstar
  • "The article is about unit economics and marginal costs of inferences and this comment thread is trying to criticize the article based on a misunderstanding of what unit economics means." - Aurornis
  • "The cost of “manufacturing” an AI response is the inference cost, which this article covers." - Aurornis
  • "Amortizing the training per-query really doesn't meaningfully change the unit economics." - jsnell
  • "Fact remains when all costs are considered these companies are losing money and so long as the lifespan of a model is limited it’s going to stay ugly." - JCM9

Monetization Strategies and Consumer Impact

The discussion turns to how AI companies might monetize their services and what that could mean for consumers. Advertising inside chatbot output is the most frequently mentioned option, though some users are skeptical that people would accept it. Rising API prices and more aggressive rate limits are also flagged as concerns for users and developers who depend on these services; a rough sketch of the ads-versus-inference-cost arithmetic follows the quotes below.

  • "Yeah but they can probably monetize them with ads." - martinald
  • "Inserting ads into chatbot output is like inserting ads into email. People are more reluctant to tolerate that than web or YouTube ads (which are hated already)." - bgwalter
  • "If they insert stealth ads, then after the third sponsored bad restaurant suggestion people will stop using that feature, too." - bgwalter
  • "API prices are going up and rate limits are getting more aggressive (see what's going on with cursor and claude code)" - kelp6063
  • "Costs will go up to levels where people will no longer find this stuff as useful/interesting. It’s all fun and games until the subsides end." - JCM9
  • "Billions of people use Google, YouTube, Facebook, Tiktok, Instagram, etc and accept the ads. Getting similar ad rates would make OpenAI fabulously profitable." - jsnell
  • "If they start showing ads based on your prompts, and your history of "chats", it will erode the already shaky trust that users have in the bots." - efficax

The Role and Cost of Training and R&D

The immense cost and necessity of continuous model training and R&D are highlighted as major factors in the financial health of AI companies. Some argue that without this investment, companies risk falling behind rapidly evolving competitors. The "arms race" in developing increasingly capable models is seen by some as a driver of unsustainable costs, while others suggest that models can become powerful enough without constant, massive retraining.

  • "The issue behind the growing concerns about a giant AI bubble is if that margin is sufficient to cover the costs of everything else." - JCM9
  • "Using that apartment building analogy it’s like having to knock down and rebuild the building every 6 months to stay relevant..." - JCM9
  • "It’s all fun and games until the subsides end." - JCM9
  • "The cost of training is not a factor in the marginal cost of each inference or each new customer." - Aurornis
  • "The cost of training is not a factor in the marginal cost of each inference or each new customer." - Aurornis
  • "But they don't have to be retained frequently at great cost. Right now they are retrained frequently because everyone is frequently coming out with new models and nobody wants to fall behind." - wongarsu
  • "The only companies that'll reliably print money off AI are TSMC and NVIDIA because they'll get paid either way." - churchill
  • "And if you ever stop/step off the treadmill and jack up prices to reach profitability, a new upstart without your sunk costs will immediately create a 99% solution and start competing with you." - churchill
  • "The LLM scene is an insane economic bloodbath right now. The tech aside, the financial moves here are historical. It's the ultimate wet dream for consumers - many competitors, face-ripping cap-ex, any missteps being quickly punished, and a total inability to hold back anything from the market." - Workaccount2
  • "If we didn't pay for training, we'd be a very profitable company." - Sam Altman (quoted)
  • "Also admitting it would make this business impossible if they had to respect copyright law, so the laws shall be adjusted so that it can be a business." - hirako2000
  • "it’s comical that something like this was even uttered in the conversation. It really shows how disconnected the tech sector is from the real world. Imagine Intel CEO saying 'If we didn't have to pay for fabs, we'd be a very profitable company.'" - metalliqaz
  • "Since inference costs have been plummeting, plans have had tweaked quotas, and usage patterns can change." - JimDabell (referring to changing statements from Sam Altman)
  • "The real expense is salary for talent." - NoahZuniga

The Debate on Unit Economics Definition and Application

A significant portion of the thread is devoted to clarifying and debating the definition of "unit economics." Some argue that it strictly refers to the incremental costs per additional customer or transaction, excluding fixed costs like R&D and training. Others contend that a realistic assessment of the business must include the cost of the "product" itself, which in this case is the model. This definitional dispute underlies much of the disagreement about AI company profitability; the sketch after the quotes below puts illustrative numbers on the two views.

  • "Unit economics is about the incremental value and costs of each additional customer. You do not amortize the cost of software into the unit economics calculations. You only include the incremental costs of additional customers." - Aurornis
  • "That isn't what unit economics is. The purpose of unit economics is to answer: 'How much money do I make (or lose) if I add one more customer or transaction?' Since adding an additional user/transaction doesn't increase the cost of training the models you would not include the cost of training the models in a unit economics analysis. The entire point of unit economics is that it excludes such 'fixed costs'." - voxic11
  • "The parent commenter’s responses are all based on a wrong understanding of what unit economics means. You don’t include fixed costs in the unit economics. Unit economics is about incremental costs." - Aurornis
  • "The cost of training is not a factor in the marginal cost of each inference or each new customer." - Aurornis
  • "The article is about unit economics and marginal costs of inferences and this comment thread is trying to criticize the article based on a misunderstanding of what unit economics means." - Aurornis
  • "I think the nuance here is what people consider the “cost” of “inference.” Purely on compute costs and not accounting for the cost of the model (which is where the article focuses) it’s not bad." - JCM9

The Landscape of AI Competition and Innovation

Commenters discuss the competitive landscape, the role of open-source models, and the rapid pace of innovation. The potential for startups to emerge and challenge established players is noted, as is the difficulty of building a sustainable business solely on commoditized open-source models. The talent churn within major AI labs and the emergence of new competitors founded by leading researchers are also seen as significant factors.

  • "Self hosting LLMs isn’t completely out of the realm of feasibility." - chasd00
  • "But what about running Deepseek R1 or (insert other open weights model here)? There is no training cost for that." - martinald
  • "“Open source” is great but then it’s just a commodity. It would be very hard to build a sustainable business purely on the back of commoditized models." - JCM9
  • "There is plenty of money to be made from hosting open source software." - scarface_74
  • "Like we've seen with Karparthy & Murati starting their own labs, it's to be expected that over the next 5 years, hundreds of engineers & researchers at the bleeding edge will quit and start competing products." - churchill
  • "This talent diffusion guarantees that OpenAI and Anthropic will have to keep sinking in ever more money to stay at the bleeding edge, or upstarts like DeepSeek and incumbents like Meta will simply outspend you/hire away all the Tier 1 talent to upstage you." - churchill
  • "The models are powerful as they are, most of the knowledge in them isn't going to rapidly obsolete..." - wongarsu

Trust and Transparency in CEO Statements

A recurring theme is the skepticism towards statements made by AI company CEOs, particularly Sam Altman. His past actions and changing statements regarding OpenAI's profitability have led many commenters to distrust his pronouncements on the company's financial health. The lack of public financial transparency for private companies further fuels this skepticism.

  • "So at least for OpenAI, the answer is “no” [profitable if you exclude training]. They did say it was close." - lkjdsklf (referring to a past statement attributed to Sam Altman)
  • "You have to figure out what exactly they are losing money on... Is that inference or new R&D? ... Except the startup isn't purely financed by VC etc, but also by a profitable inference company." - rich_sasha
  • "I think the nuance here is what people consider the “cost” of “inference.” Purely on compute costs and not accounting for the cost of the model (which is where the article focuses) it’s not bad." - JCM9
  • "Any missteps being quickly punished, and a total inability to hold back anything from the market." - Workaccount2
  • "It’s wild and, while they're all guilty, Gemini is a particularly egregious offender. What really surprises me is that they don't even consider it a bug if you can predictably get it to generate copyrighted content." - ethagnawl
  • "He makes money by convincing people to buy OpenAI stock. If OpenAI goes down tomorrow, he will be fine. His incentive is to sell the stock, not actually build and run a profitable business." - achenet
  • "Anyone paying attention should have zero trust in what Sam Altman says." - chairhairair
  • "It's wild and, while they're all guilty, Gemini is a particularly egregious offender." - ethagnawl
  • "With the heat turning up on AI companies to explain how they will land on a viable business model some of this is starting to look like WeWork’s “Community Adjusted EBITA” arguments..." - JCM9