
Read the full article here

The journalist is an AI skeptic, but does solid financial investigations. Details below:

  • 2024 Revenue: According to reporting by The Information, OpenAI's revenue was likely somewhere in the region of $4 billion.
  • Burn Rate: The Information also reports that OpenAI lost $5 billion after revenue in 2024, excluding stock-based compensation, which OpenAI, like other startups, uses as compensation on top of cash. Nevertheless, the more equity it gives away, the less it has for future capital raises. To put this in blunt terms: based on reporting by The Information, running OpenAI cost $9 billion in 2024. The cost of the compute to train models alone ($3 billion) obliterates the entirety of its subscription revenue, and the compute from running models ($2 billion) takes the rest, and then some. It doesn’t just cost more to run OpenAI than it makes — it costs the company a billion dollars more than the entirety of its revenue just to run the software it sells, before any other costs.
  • OpenAI also spends an alarming amount of money on salaries — over $700 million in 2024 before you consider stock-based compensation, a number that will also have to increase because it’s “growing” which means “hiring as many people as possible,” and it’s paying through the nose.
  • How Does It Make Money: The majority of its revenue (70+%) comes from subscriptions to premium versions of ChatGPT, with the rest coming from selling access to its models via its API.
    • The Information also reported that OpenAI now has 15.5 million paying subscribers, though it's unclear what level of OpenAI's premium products they're paying for, or how “sticky” those customers are, or the cost of customer acquisition, or any other metric that would tell us how valuable those customers are to the bottom line. Nevertheless, OpenAI loses money on every single paying customer, just like with its free users. Increasing paid subscribers also, somehow, increases OpenAI's burn rate. This is not a real company.
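The 2024 figures above reduce to simple arithmetic. Here is a rough sketch using The Information's estimates as quoted in this piece; the $4 billion "other costs" line is an assumption on my part — just the residual needed to reach the reported $9 billion total:

```python
# Rough 2024 P&L sketch for OpenAI, in billions of USD, using the figures
# reported by The Information as cited above. "other_costs" is the residual
# (salaries, sales, etc.) implied by the $9B total cost figure.
revenue = 4.0            # ~$4B total 2024 revenue
training_compute = 3.0   # compute to train models
running_compute = 2.0    # compute to run models for users
other_costs = 4.0        # residual: $9B total minus $5B of compute

total_costs = training_compute + running_compute + other_costs
loss_after_revenue = total_costs - revenue
compute_vs_revenue = (training_compute + running_compute) - revenue

print(f"Total 2024 costs: ${total_costs:.0f}B")                        # $9B
print(f"Loss after revenue: ${loss_after_revenue:.0f}B")               # $5B
print(f"Compute alone exceeds revenue by: ${compute_vs_revenue:.0f}B") # $1B
```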

The New York Times reports that OpenAI projects it'll make $11.6 billion in 2025, and assuming that OpenAI burns at the same rate it did in 2024 — spending $2.25 to make $1 — OpenAI is on course to burn over $26 billion in 2025 for a loss of $14.4 billion. Who knows what its actual costs will be, and as a private company (or, more accurately, entity, as for the moment it remains a weird for-profit/nonprofit hybrid) it’s not obligated to disclose its financials. The only information we’ll get will come from leaked documents and dogged reporting, like the excellent work from The New York Times and The Information cited above. 
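The projection works out like this — a back-of-the-envelope sketch using the article's own figures (the small difference from the quoted $14.4 billion loss is rounding):

```python
# 2024: $9B of costs on $4B of revenue means $2.25 spent per $1 earned.
costs_2024, revenue_2024 = 9.0, 4.0        # billions of USD
burn_ratio = costs_2024 / revenue_2024     # 2.25

# Apply the same ratio to the $11.6B 2025 revenue projection
# reported by The New York Times.
revenue_2025 = 11.6
projected_costs_2025 = revenue_2025 * burn_ratio
projected_loss_2025 = projected_costs_2025 - revenue_2025

print(f"Burn ratio: ${burn_ratio:.2f} spent per $1 of revenue")
print(f"Projected 2025 costs: ${projected_costs_2025:.1f}B")  # ~$26.1B
print(f"Projected 2025 loss: ${projected_loss_2025:.1f}B")    # ~$14.5B
```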

It's also important to note that OpenAI's costs are partially subsidized by its relationship with Microsoft, which provides cloud compute credits for its Azure service, which is also offered to OpenAI at a discount. Or, put another way, it’s like OpenAI got paid with airmiles, but the airline lowered the redemption cost of booking a flight with those airmiles, allowing it to take more flights than another person with the equivalent number of points. At this point, it isn’t clear whether OpenAI is still drawing down the billions in credits it received from Microsoft in 2023 or whether it’s had to start paying with cold, hard cash. 

Until recently, OpenAI exclusively used Microsoft's Azure services to train, host, and run its models, but recent changes to the deal mean that OpenAI is now working with Oracle to build out further data centers to do so. The end of the exclusivity agreement is reportedly due to a deterioration of the once-chummy relationship between OpenAI and Redmond, according to The Wall Street Journal, with the latter allegedly growing tired of OpenAI’s constant demands for more compute, and the former feeling as though Microsoft had failed to live up to its obligations to provide the resources needed for OpenAI to sustain its growth.

It is unclear whether this partnership with Oracle will work in the same way as the Microsoft deal. If not, OpenAI’s operating costs will only go up. Per reporting from The Information, OpenAI pays just over 25% of the cost of Azure’s GPU compute as part of its deal with Microsoft — around $1.30-per-GPU-per-hour versus the regular Azure cost of $3.40 to $4.
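As a quick sanity check on those per-hour figures — note that $1.30 against a $3.40-$4.00 list rate works out to closer to a third of list price than a quarter:

```python
# Reported Azure GPU rates, in USD per GPU per hour (per The Information).
openai_rate = 1.30
regular_low, regular_high = 3.40, 4.00

share_of_high = openai_rate / regular_high  # fraction of the $4.00 rate
share_of_low = openai_rate / regular_low    # fraction of the $3.40 rate

print(f"{share_of_high:.1%} of the high-end list rate")  # 32.5%
print(f"{share_of_low:.1%} of the low-end list rate")    # 38.2%
```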

On User Numbers

OpenAI recently announced that it has 400 million weekly active users.

Weekly Active Users can refer to any seven-day period in a month, meaning that OpenAI can effectively use any spike in traffic to say that it’s “increased its weekly active users,” because it can choose the best seven-day period in a month. This isn’t to say they aren’t “big,” but these numbers are easy to game.

When I asked OpenAI to define what a “weekly active user” was, it responded by pointing me to a tweet by Chief Operating Officer Brad Lightcap that said “ChatGPT recently crossed 400M WAU, we feel very fortunate to serve 5% of the world every week.” It is extremely questionable that it refuses to define this core metric, and without a definition, in my opinion, there is no way to assume anything other than the fact that OpenAI is actively gaming its numbers.

There are likely two reasons it focuses on weekly active users:

  1. As I described, these numbers are easy to game.
  2. The majority of OpenAI’s revenue comes from paid subscriptions to ChatGPT.

The latter point is crucial, because it suggests OpenAI is not doing anywhere near as well as it seems based on the very basic metrics used to measure the success of a software product.

The Information reported on January 31st that OpenAI had 15.5 million monthly paying subscribers, and immediately added that this was a “less than 5% conversion rate” of OpenAI’s weekly active users — a statement that is much like dividing the number 52 by the letter A. This is not an honest or reasonable way to evaluate the success of ChatGPT’s (still unprofitable) software business, because the actual calculation would have to divide paying subscribers by MONTHLY active users, a number that would be considerably higher than 400 million.

Based on data from market intelligence firm Sensor Tower, OpenAI’s ChatGPT app (on Android and iOS) is estimated to have had more than 339 million monthly active users, and based on traffic data from market intelligence company Similarweb, ChatGPT.com had 246 million unique monthly visitors. There’s likely some crossover, with people using both the mobile and web interfaces, though how big that group is remains uncertain. 

Though not every person that visits ChatGPT.com becomes a user, it’s safe to assume that ChatGPT’s Monthly Active Users are somewhere in the region of 500-600 million.

That’s good, right? Its actual users are higher than officially claimed? Er, no. First, each user is a financial drain on the company, whether they’re a free or paid user. 

It would also suggest a conversion rate of 2.583% from free to paid users on ChatGPT — an astonishingly bad number, one made worse by the fact that every single user of ChatGPT, regardless of whether they pay, loses the company money.
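The 2.583% figure is simply paying subscribers divided by the high end of the estimated MAU range — a rough calculation, since both the subscriber count and the MAU estimates carry uncertainty:

```python
# Conversion rate: 15.5M paying subscribers (per The Information) against
# the 500-600M monthly active user estimate derived above.
paying_subscribers = 15.5e6

for mau in (500e6, 600e6):
    rate = paying_subscribers / mau
    print(f"{mau / 1e6:.0f}M MAU -> {rate:.3%} conversion")
# 500M MAU -> 3.100% conversion
# 600M MAU -> 2.583% conversion
```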

It also feeds into a point I’ve repeatedly made in this newsletter, and in my podcast. Generative AI isn’t that useful. If Generative AI was genuinely this game-changing technology that makes it possible to simplify your life and your work, you’d surely fork over the $20 monthly fee for unlimited access to OpenAI’s more powerful models. I imagine many of those users are, at best, infrequent, opening up ChatGPT out of curiosity or to do basic things, and don’t have anywhere near the same levels of engagement as with any other SaaS app. 

While it's quite common for Silicon Valley companies to play fast and loose with metrics, this particular one is deeply concerning, and I hypothesize that OpenAI choosing to go with Weekly versus Monthly Active Users is an intentional attempt to avoid people calculating the conversion rate of its subscription products. As I will continue to repeat, these subscription products lose the company money.

Mea Culpa: My previous piece focused entirely on web traffic to ChatGPT.com, and did not have the data I now have related to app downloads. Nevertheless, it isn't obvious whether OpenAI is being honest about its weekly active users, because it won't even define how it measures them.

On Product Strategy

  • OpenAI makes most of its money from subscriptions (approximately $3 billion in 2024) and the rest on API access to its models (approximately $1 billion).
  • As a result, OpenAI has chosen to monetize ChatGPT and its associated products in an all-you-can-eat software subscription model, or otherwise make money by other people productizing it. In both of these scenarios, OpenAI loses money.
  • OpenAI's products are not fundamentally differentiated or interesting enough to be sold separately. It has failed — as with the rest of the generative AI industry — to meaningfully productize its models due to their massive training and operational costs and a lack of any meaningful "killer app" use cases.
  • The only product that OpenAI has succeeded in scaling to the mass market is the free version of ChatGPT, which loses the company money with every prompt. This scale isn't a result of any kind of product-market fit. It's entirely media-driven, with reporters making "ChatGPT" synonymous with "artificial intelligence."
    • As a result, I do not believe that generative AI is a "real" industry — which I define as one with multiple competitive companies with sustainable revenue streams and meaningful products with actual market penetration — because it is entirely subsidized by a combination of venture capital and hyperscaler cloud credits.
    • ChatGPT is popular because it is the only well-known product, one that's mentioned in basically every article on artificial intelligence. If this were a "real" industry, other competitors would have similar scale — especially those run by hyperscalers — but as I'll get to later, data suggests that OpenAI is the only company with any significant user base in the entire generative AI industry, and it is still wildly unprofitable and unsustainable.
  • OpenAI's models have been almost entirely commoditized. Even its reasoning model o1 has been commoditized by both DeepSeek's R1 model and Perplexity's R1 1776 model, both of which offer similar outcomes at a much-discounted price, though it's unclear (and in my opinion unlikely) that these models are profitable to run.
  • OpenAI, as a company, is piss-poor at product. It's been two years and ChatGPT mostly does the same thing as it used to, still costs more to run than it makes, and ultimately does the same thing as every other LLM chatbot from every other generative AI company.
  • Moreover, OpenAI (like every other generative AI model developer) is incapable of solving the critical flaw with ChatGPT, namely its tendency to hallucinate — where it asserts something to be true, when it isn’t. This makes it a non-starter for most business customers, where (obviously) what you write has to be true.
    • Case in point: A BBC investigation just found that half of all AI-generated news articles have some kind of “significant” issue, whether that be hallucinated facts, editorialization, or references to outdated information.
    • And the reason why OpenAI hasn’t fixed the hallucination problem isn’t because it doesn’t want to, but because it can’t. They’re an inevitable side-effect of LLMs as a whole.
  • The fact that nobody has managed to make a mass market product by connecting OpenAI's models also suggests that the use cases just aren't there for mass market products powered by generative AI.
  • Furthermore, the fact that API access is such a small part of its revenue suggests that the market for actually implementing Large Language Models is relatively small. If the biggest player in the space only made a billion dollars in 2024 selling access to its models (unprofitably), and that amount is the minority of its revenue, there may not actually be a real industry here.
  • These realities — the lack of utility and product differentiation — also mean that OpenAI can’t raise its prices above the breakeven point, which would also likely make its generative AI unaffordable and unattractive to both business and personal customers. 

Counterpoint: OpenAI has a new series of products that could open up new revenue streams, such as Operator, its "agent" product, and "Deep Research," its research product.

  • On costs: Both of these products are very compute intensive.
  • On Product-Market Fit:
    • Using Operator or Deep Research currently requires ChatGPT Pro, OpenAI's $200-a-month subscription.
    • As a product, Operator barely works. As I covered a few weeks ago, this product — which claims to control your computer and does not appear to be able to do so consistently — is not even close to ready for prime time, nor do I think it has a market.
    • Deep Research has already been commoditized, with Perplexity and xAI launching their own versions almost immediately.
    • Deep Research is also not a good product. As I covered last week, the quality of writing that you receive from a Deep Research report is terrible, rivaled only by the appalling quality of its citations, which include forum posts and Search Engine Optimized content instead of actual news sources. These reports are neither "deep" nor well researched, and cost OpenAI a great deal of money to deliver.
  • On Revenue
    • Both Operator and Deep Research currently require you to pay for a $200-a-month subscription that loses the company money.
    • Neither product is sold on its own, and while they may drive revenue to the ChatGPT Pro product, as said above, said product loses OpenAI money.
    • These products are compute-intensive and have questionable outputs, making each prompt from a user both expensive and likely to be followed up with further prompts to get the outputs the user desired. As generative models don't "know" anything and are probabilistically generating answers, they are poor arbiters of quality information.

In summary, both Operator and Deep Research are expensive products to maintain, are sold through an expensive $200-a-month subscription that (like every other service provided by OpenAI) loses the company money, and due to the low quality of their outputs and actions are likely to increase user engagement to try and get the desired output, incurring further costs for OpenAI.

On The Future Prospects for OpenAI

  • A week or two ago, Sam Altman announced the "updated roadmap for GPT-4.5 and GPT-5."
    • GPT-4.5 will be OpenAI's "last non-chain-of-thought model" — chain-of-thought being the core functionality of its reasoning models.
    • GPT-5 will be, and I quote Altman, "a system that integrates a lot of our technology, including o3."
      • Altman also vaguely suggests that paid subscribers will be able to run GPT-5 at "a higher level of intelligence," which likely refers to being able to ask the models to spend more time computing an answer. He also suggests that it will "incorporate voice, canvas, search, deep research, and more."
    • Both of these statements vary from vague to meaningless, but I hypothesize the following:
      • GPT-4.5 will be an upgraded version of GPT-4o, OpenAI's foundation model, now codenamed Orion.
      • GPT-5 (which used to be called Orion) could be just about anything, but one thing that Altman mentioned in the tweet is that OpenAI's model offerings had gotten too complicated, and that it would be doing away with the ability to pick what model you use, gussying this up by claiming this was "unified intelligence."
      • As a result of doing away with the model picker, I hypothesize that OpenAI will now attempt to moderate costs by picking what model will work best for a prompt — a process it will automate to questionable results.
    • I believe that this announcement is a very bad omen for OpenAI. Orion has been in the works for more than 20 months and was meant to be released at the end of last year, but was delayed due to multiple training runs that resulted in, to quote the Wall Street Journal, "software [that] fell short of the results researchers were hoping for."
      • As an aside, The Wall Street Journal refers to Orion as "GPT-5," but based on the copy and Altman's comments, I believe "Orion" refers to the foundation model. OpenAI appears to be calling a hodgepodge of different other models "GPT-5" now.
      • The Journal further adds that as of December Orion "perform[ed] better than OpenAI’s current offerings, but [hadn't] advanced enough to justify the enormous cost of keeping the new model running," with each six-month-long training run — no matter its efficacy — costing around $500 million.
      • OpenAI also, like every generative AI company, is running out of high-quality training data necessary to make the model "smarter" (based on benchmarks specifically made to make LLMs seem smart) — and note that "smarter" doesn't mean "new functionality."
      • Sam Altman demoting Orion from GPT-5 to GPT-4.5 suggests that OpenAI has hit a wall with making its next model, requiring him to lower expectations for a model that OpenAI Japan president Tadao Nagasaki had suggested would "aim for 100 times more computational volume than GPT-4," which some took to mean "100 times more powerful" when it actually means "it will take way more computation to train or run inference on it."
      • If Sam Altman, a man who loves to lie, is trying to reduce expectations for a product, you should be worried.
    • Large Language Models — which are trained by feeding them massive amounts of training data and then reinforcing their understanding through further training runs — are hitting the point of diminishing returns. In simple terms, to quote Max Zeff of TechCrunch, "everyone now seems to be admitting you can’t just use more compute and more data while pretraining large language models and expect them to turn into some sort of all-knowing digital god."
    • It's unclear what the functionality of GPT-4.5 or GPT-5 will be. Does the market care about an even-more-powerful Large Language Model if said power doesn't lead to an actual product? Does the market care if "unified intelligence" just means stapling together various models to produce outputs?

As it stands, OpenAI has effectively no moat beyond its industrial capacity to train Large Language Models and its presence in the media. It can have as many users as it wants, but it doesn't matter because it loses billions of dollars, and appears to be continuing to follow the money-losing Large Language Model paradigm, guaranteeing it’ll lose billions more.

Comments

I feel like the numbers are interesting, but the opinions are much less useful.

Startups work by investing money into growth. Many tech companies are in the red for a very long time; that's largely the plan. This analysis doesn't capture how unusual OpenAI's and Anthropic's position is versus previous tech companies that followed this basic strategy, e.g., Amazon, Uber, and Tesla.

I imagine many people around here think that Anthropic's $60B valuation is on the low side. If there's a decent market size of investors who believe that AI could really go crazy, then you get a valuation in that range. 

This is clarifying context, thanks. It's a common strategy to go red for years while tech start-ups build a moat around themselves (particularly through network effects). Amazon built a moat in terms of drawing in vendors and buyers into its platform while reducing logistics costs, and Uber in drawing in taxi drivers and riders onto its platform. Tesla started out with a technological edge. 

Currently, I don't see a strong case that OpenAI and Anthropic are building up a moat.
–> Do you have any moats in mind that I missed? Curious.

Network effects aren't much of a moat here, since their users are mostly using the tools by themselves (though their prompts are used to improve the tools; I'm not sure how much). It doesn't seem a big deal for most users to switch to another competing chat tool or image generation tool, say. Potentially, current ChatGPT or Claude users could later move to new model-based tools that are profitable for those AI companies. But as it stands, OpenAI and Anthropic are losing money on existing users on one end, while being under threat of losing users to cheap model alternatives on the other. It's not clear that the head start they got on releasing increasingly extractive, general-use models will make them 'winners'. Maybe their researchers will be the ones to come up with new capability breakthroughs, which will somehow be used to maintain an industry edge (incl. in e.g. military applications). But over the last two years, there has been more of a closing of the gap between the user functionality of newer versions of Claude and ChatGPT and cheaper competing models (like Meta’s and DeepSeek’s). OpenAI sunk hundreds of millions of dollars over 18 months into a model that was not worth calling GPT-5, and meanwhile other players caught up with the model functionality of GPT-4.

OpenAI seems reflective of an industry where investment far outstrips user demand, as happened during the dotcom bubble.

This is not to say that there could not be large-model developers with at least tens of billions of dollars in yearly profit within the next decade. That is what current investments and continued R&D are aimed towards. It seems the default scenario. Personally, I'll work hard to prevent that scenario, since at that point restricting the development of increasingly unscoped (and harmful) models will basically be intractable.

I think there are serious risks for LLM development (e.g., a better DeepSeek could be released at any point), but also some serious opportunities.

1. The game is still early. It's hard to say what moats might exist 5 years from now. This is a chaotic field.
2. ChatGPT/Claude spend a lot of attention on their frontends, the API support, documentation, monitoring, moderation, lots of surrounding tooling. It's a ton of work to make a high production-grade service, besides just having one narrow good LLM.
3. There's always the chance of something like a Decisive Strategic Advantage later. 

Personally, if I were an investor, both would seem promising to me. Both are very risky - high chances of total failure, depending on how things play out. But that's common for startups. I'd bet that there's a good chance that moats will emerge later. 
