JoshYou

Data Analyst @ Epoch AI
636 karma · Working (6-15 years) · Brooklyn, NY, USA

Comments (82)

If they time the subsidized user push right their model of expected annual recurring revenue is $10B/y and $11B in 2025 is possible

OpenAI says they already hit $10B annual recurring revenue, for what it's worth. They don't provide a breakdown, but they do say this excludes major one-time deals and the licensing fees Microsoft pays to use OpenAI models in its own products. (That's a substantial source of revenue, but I'm guessing OpenAI excludes it to avoid being accused of using wash transactions to juice their numbers: they in turn pay Microsoft for the servers to train and run their models.)

Based on OpenAI only having 3M Team+Enterprise+Edu subscribers in May, I don't think this $10B/year rate was achieved via $1 Team trial subscriptions.
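Rough arithmetic for why (the per-seat prices here are my assumptions: $1/month for the trial and ~$25/seat/month for full-price Team billed annually; Enterprise and Edu pricing isn't public):

```python
# How much could 3M Team/Enterprise/Edu seats plausibly add to ARR?
seats = 3_000_000

trial_arr = seats * 1 * 12        # all seats on $1/month trials: ~$36M/yr
full_price_arr = seats * 25 * 12  # all seats at ~$25/seat/month: ~$900M/yr

print(f"all seats on $1 trials:  ${trial_arr / 1e6:.0f}M/yr")
print(f"all seats at full price: ${full_price_arr / 1e9:.1f}B/yr")
```

Either way, business seats are a small fraction of $10B/year, so the bulk has to be consumer subscriptions and API usage rather than $1 trials.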

I think you are reading too much into the growth rate of free users. OpenAI has made a recent push into acquiring lots of new free users, e.g. by making signups easier and putting ChatGPT on WhatsApp, which makes their conversion rate look worse. But their revenue, which comes from paid subscribers and API usage, is still growing at a very healthy and relatively steady rate (3x from $3.4B last year, and 10x from $1B in August 2023) and my guess is that it will continue to grow rapidly.

(comment originally posted on Twitter, Cheryl's response here)

I'll flag that estimating firm-level training compute with [Epoch AI's] notable models dataset will produce big underestimates. E.g., with your methodology, OpenAI spent ~4e25 FLOP on training and ~1.3e25 FLOP on research in 2023 and 2024. The latter would cost ~$30 million, but we know OpenAI spent at least $1 billion on research in 2024! (And note that that $1 billion is research compute after amortizing the cost on an undisclosed schedule.)

But I don't have a great sense of how sensitive your results are to this issue.

(This raises other questions: what did OpenAI spend $3 billion in training compute on in 2024? That's enough for ~50 GPT-4-sized models. Maybe my cost accounting is quite different from OpenAI's. A lot of that "training" compute might really be more experimental.)
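To make the conversion explicit, here's the back-of-the-envelope I'm doing, backing the implied $/FLOP out of the ~$30M research figure above (the ~2e25 FLOP estimate for GPT-4 is Epoch's published estimate; everything else follows from the numbers already quoted):

```python
# Implied price per FLOP from the figures above: ~$30M for 1.3e25 FLOP.
cost_per_flop = 30e6 / 1.3e25            # ~$2.3e-18 per FLOP

training_flop = 4e25                     # notable-models estimate, 2023-24
print(f"implied training spend: ${training_flop * cost_per_flop / 1e6:.0f}M")  # ~$92M

# How many GPT-4-scale runs would $3B of training compute buy?
gpt4_flop = 2e25                         # Epoch's GPT-4 compute estimate
runs = 3e9 / (gpt4_flop * cost_per_flop)
print(f"GPT-4-sized runs from $3B: ~{runs:.0f}")  # ~65, same ballpark as the ~50 above
```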

Answer by JoshYou

Note that Thorstad's arguments apply more against strong longtermism, i.e. that future generations are overwhelmingly or astronomically more important than current generations, not merely that they are important or even much more important than current generations. 

They could deploy their best models exclusively internally, or limit the volume of inference available to external users, if running automated AI researchers for R&D is compute-intensive.

There are already present-day versions of this dilemma. OpenAI claims that DeepSeek used OpenAI model outputs to train its own models, and OpenAI doesn't reveal their reasoning models' full chains of thought to prevent competitors from using it as training data. 

Kinda weird that the story contains an intelligence explosion that happens both incredibly fast and incredibly soon but glosses over how it happens in a single paragraph, in favor of descriptions of nanobots dematerializing people.

This is worth considering, but FWIW, 50 GW would be around 10% of US electricity if it runs continuously (the US consumes electricity at an average rate of about 500 GW: total annual consumption divided by the hours in a year). If the new capacity is as clean as the overall electric grid, that would be about 2.5% of US emissions (25% of US emissions come from electricity) and 0.35% of global emissions (US emissions are about 1/7 of global emissions).

I'm not going to do this math now, but I think if the new capacity is 100% natural gas, then it's about as carbon-intense as the US electric grid as a whole, or maybe somewhat worse (the US has a lot of clean energy, but it also has coal plants, which are >2x more carbon-intense than gas). 100% natural gas would be the worst case, because there is no scenario where the US builds new coal plants. (Edit: it's not quite the worst case, because increased power demand could delay coal plant retirements, but I don't think this changes the conclusion all that much.)
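For concreteness, here's the arithmetic with my assumed inputs (~4,400 TWh/year of US electricity consumption; electricity at ~25% of US emissions; the US at ~1/7 of global emissions):

```python
HOURS_PER_YEAR = 8760

us_consumption_twh = 4400                 # assumed annual US electricity use
avg_us_power_gw = us_consumption_twh * 1000 / HOURS_PER_YEAR  # ~500 GW

new_capacity_gw = 50
share_of_electricity = new_capacity_gw / avg_us_power_gw       # ~10%
share_of_us_emissions = share_of_electricity * 0.25            # if as clean as the grid
share_of_global_emissions = share_of_us_emissions / 7

print(f"average US power draw:   {avg_us_power_gw:.0f} GW")
print(f"share of US electricity: {share_of_electricity:.0%}")
print(f"share of US emissions:   {share_of_us_emissions:.1%}")
print(f"share of global:         {share_of_global_emissions:.2%}")
```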

Answer by JoshYou

R1 is probably not 6x cheaper than o1-mini or 30x cheaper than o1 in terms of the actual, underlying cost (meaning DeepSeek probably charges a much lower gross margin on its API than OpenAI does). R1 has 37B active parameters (though its 671B total parameters are also relevant). We don't know how many parameters o1-mini or o1 have, but IMO they're probably a lot less than the ~200B and ~1T, respectively, that those price ratios would imply.
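To spell out where the ~200B and ~1T figures come from: if API prices tracked underlying cost, and per-token cost scaled roughly linearly with active parameters, the price gaps would require o1-mini and o1 to be this large (a simplification that ignores total-parameter memory costs and differences in serving efficiency):

```python
# Active parameter counts that the price ratios would imply, if price
# tracked cost and cost scaled linearly with active parameters.
r1_active_b = 37                       # R1: 37B active (671B total, MoE)

implied_o1_mini_b = r1_active_b * 6    # R1 priced ~6x below o1-mini -> ~220B
implied_o1_b = r1_active_b * 30        # R1 priced ~30x below o1     -> ~1,110B

print(implied_o1_mini_b, implied_o1_b)  # 222 1110
# If o1-mini and o1 are actually much smaller than this, the price gap
# mostly reflects DeepSeek's thinner gross margin, not a 6-30x cost gap.
```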

I'm not proposing any sort of hard rule against concluding that some people's lives are net negative/harmful. As a heuristic, you shouldn't think it's bad to save the lives of ordinary people who seem to be mostly reasonable, but who contribute to harmful animal agriculture.

The pluralism here is between human viewpoints in general. Very naively, if you think every human has equal insight into morality you should maximize the lifespan and resources that go to any and all humans without considering at all what they will do. That's too much pluralism, of course, but I think refraining from cheaply saving human lives because they'll eat meat is too far in the other direction.

Answer by JoshYou

I think if you put some weight on viewpoint pluralism you should mostly not conclude that other peoples' lives aren't valuable because those people will make the wrong moral choices.
