We estimate that, as of June 12, 2024, OpenAI has an annualized revenue (ARR) of:

  • $1.9B for ChatGPT Plus (7.7M global subscribers),
  • $714M from ChatGPT Enterprise (1.2M seats),
  • $510M from the API, and
  • $290M from ChatGPT Team (from 980k seats).

(Full report in app.futuresearch.ai/reports/3Li1, methods described in futuresearch.ai/openai-revenue-report.)
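As a quick arithmetic check on the breakdown above, the four components should sum to roughly the $3.4B figure discussed below, and the per-seat lines imply average prices that can be compared against public list prices. A minimal sketch, using only the figures above (nothing else is assumed):

```python
# Sanity check on the ARR components above (figures copied from the estimate).
components = {
    "ChatGPT Plus":       (1.9e9, 7_700_000),   # (ARR in $, global subscribers)
    "ChatGPT Enterprise": (714e6, 1_200_000),   # (ARR in $, seats)
    "API":                (510e6, None),
    "ChatGPT Team":       (290e6, 980_000),     # (ARR in $, seats)
}

total_arr = sum(arr for arr, _ in components.values())
print(f"Total: ${total_arr / 1e9:.2f}B")        # ~$3.41B, consistent with the $3.4B claim

# Implied average revenue per seat per month for the subscription products.
for name, (arr, seats) in components.items():
    if seats:
        print(f"{name}: ~${arr / seats / 12:.0f}/seat/month")
```

The implied Plus figure (~$21/subscriber/month) sits close to the $20/month list price, and the Team figure (~$25/seat/month) is close to the advertised annual-plan price; Enterprise pricing isn't public, so that line is just an implied average.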

We looked into OpenAI's revenue because financial information should be a strong indicator of the business decisions they make in the coming months, and hence an indicator of their research priorities.

Our methods in brief: we searched exhaustively for public information on OpenAI's finances and filtered it down to reliable data points. From these, we selected the method of calculation that required inferring the least missing information.

To infer the missing information, we used standard forecasting techniques: Fermi estimates, and base rates / analogies.

We're fairly confident that the true values are relatively close to what we report. We're still working on methods to assign confidence intervals on the final answers given the confidence intervals of all of the intermediate variables.
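For illustration, one standard approach would be Monte Carlo propagation: sample each intermediate variable from a distribution matching its confidence interval, and read an interval off the distribution of the final answer. A minimal sketch with placeholder intervals (these are illustrative, not our actual estimates):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Placeholder 90% intervals on each ARR component, in $B (illustrative only).
intervals = {
    "Plus":       (1.6, 2.2),
    "Enterprise": (0.55, 0.90),
    "API":        (0.35, 0.70),
    "Team":       (0.22, 0.37),
}

def sample_lognormal(lo, hi, size):
    """Lognormal samples whose 5th/95th percentiles match (lo, hi)."""
    z = 1.6449  # 95th percentile of the standard normal
    mu = (np.log(lo) + np.log(hi)) / 2
    sigma = (np.log(hi) - np.log(lo)) / (2 * z)
    return rng.lognormal(mu, sigma, size)

total = sum(sample_lognormal(lo, hi, N) for lo, hi in intervals.values())
print(np.percentile(total, [5, 50, 95]))  # 90% interval on total ARR
```

The catch is that this treats the components as independent, and many of the intermediate variables (e.g. seat counts and growth rates) are correlated, which is part of what makes assigning a final interval less straightforward than it looks.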

Inside the full report, you can see which of our estimates are most speculative: e.g. using the ratio of Enterprise seats to Team seats from comparable apps, inferring the US-to-non-US subscriber split across platforms from mobile subscriber numbers, or inferring growth rates from just a few data points.
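To give a sense of what "inferring growth rates from just a few data points" looks like, here is the shape of such a calculation on two widely reported figures (December's ~$2B ARR and the June 12 ~$3.4B ARR). This is only an illustration of the arithmetic, not our actual growth model, and the six-month gap is approximate:

```python
# Compound monthly growth implied by two ARR data points (illustrative).
arr_dec, arr_jun = 2.0, 3.4   # $B, December 2023 and June 12, 2024
months = 6                    # approximate gap between the two data points

monthly_growth = (arr_jun / arr_dec) ** (1 / months) - 1
print(f"Implied compound growth: ~{monthly_growth:.1%}/month")  # ~9.2%/month
```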

Overall, these numbers imply to us that:

  • Sam Altman's surprising claim of $3.4B ARR on June 12 seems quite plausible, despite skepticism people raised at the time.
  • Apps (consumer and enterprise) are much more important to OpenAI than the API.
  • Consumers are much more important to OpenAI than enterprises, as reflected in all their recent demos, but the enterprise growth rate is so high that this may change abruptly.
     

Comments

We looked into OpenAI's revenue because financial information should be a strong indicator of the business decisions they make in the coming months, and hence an indicator of their research priorities
 


Is this really true? I am quite surprised by this, given how much of the expected financial value of OpenAI (and the valuation of AI companies more generally) lies not in the next couple of months, but in being at the frontier of a technology with enormous future potential.

 

Definitely. I think all of these contribute to their thinking: their current finances, the growth rates, and the expected value of their future plans that don't generate any revenue today.

Hi, thanks for this! Any idea how this compares to total costs?

Hi! We currently don't have a reliable estimate of the cost, but we might include it in the future.

We estimate that

Point of clarification: it seems like FutureSearch is largely powered by calls to AI models. When you say "we", what do you mean? Has a human checked the entire reasoning process that led to the results you present here?

There were humans in the loop, yes.

I didn't check whether you addressed this, but an article from The Information claims that OpenAI's API ARR reached $1B as of March: https://www.theinformation.com/articles/a-peek-behind-openais-financials-dont-underestimate-china?rc=qcqkcj

A separate The Information article claims that OpenAI receives $200MM ARR as a cut of MSFT's OpenAI model-based cloud revenue, which I'm not sure is included in your breakdown: https://www.theinformation.com/articles/openais-annualized-revenue-doubles-to-3-4-billion-since-late-2023?rc=qcqkcj

These articles are not public though - they are behind a paywall.

The source for the $1B API revenue claim is given as "someone who viewed internal figures related to the business".

It's not completely implausible, but the implications for OpenAI's revenue growth curve would be a little surprising. 

We have fairly reliable numbers from the start of April for ChatGPT Enterprise revenue (based on an official announcement of seats sold, together with the price per seat quoted to someone who inquired) and ChatGPT Plus revenue (from e-receipt data); these sum to about $1.9B. It's reasonable to add another $300M to this to account for other smaller sources – early ChatGPT Team revenue, Azure (which we did indeed ignore), and custom models.

So, with an extra $1B from the API on top of all that, we'd see only $200M revenue growth between the start of April and the middle of June, when it was announced as $3.4B – contrast with $1.2B between the start of January (December's ARR was $2B) and March (estimated $3.2B).
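For concreteness, here is that consistency check spelled out (all figures in $B, taken from the numbers above):

```python
# Rough consistency check on The Information's $1B API ARR claim (all figures in $B).
plus_and_enterprise = 1.9   # fairly reliable estimates as of the start of April
other_small_sources = 0.3   # allowance for early Team revenue, Azure cut, custom models
claimed_api         = 1.0   # The Information's claimed API ARR as of March

implied_april_arr  = plus_and_enterprise + other_small_sources + claimed_api  # ~3.2
growth_apr_to_june = 3.4 - implied_april_arr                                  # ~0.2
growth_jan_to_mar  = 3.2 - 2.0   # March estimate minus December's $2B ARR    # ~1.2

print(round(implied_april_arr, 1), round(growth_apr_to_june, 1), round(growth_jan_to_mar, 1))
```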
