Thanks a lot for engaging!
One general point: My rough guess is that acceptance rates have stayed largely constant across AI safety programs over the last ~2 years because capacity has scaled with interest. For example, Pivotal grew from 15 spots in 2024 to 38 in 2025. While the 'tail' likely became more exceptional, my sense is that the bar for the marginal admitted fellow has stayed roughly the same.
They might (as I am) be making as many applications as they have energy for, such that the relevant counterfactual is another application, rather than free time.
The model does assume that most applicants aren't spending 100% of their time/energy on applications. However, even if they were, I feel like a lot of this is captured by how much they value their time. I think that the counterfactual of how they spend their time during the fellowship period (which is >100x more hours than the application process) is the much more important variable to get right.
you also need to consider the intangible value of the counterfactual
This is correct. I assumed most people would take this into account (e.g. subtract their current job's networking value from the fellowship's value), but I might add a note to make this explicit.
you also ought to consider the information value of applying for whatever else you might have spent the time on
I’m less worried about this one. Since we set the fixed Value of Information quite conservatively already, and most people aren't constantly working on applications, I suspect this is usually small enough to be noise in the final calculation.
there is a psychological cost to firing out many low-chance applications
I agree this is real, but I think it's covered in the Value of Your Time. If you earn £50/hr but find applying on the weekend fun/interesting, you might set the Value of Your Time at £5/hr. If you are unemployed but find applying extremely aversive, you might price your time at e.g., £200/hr.
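For concreteness, here's a minimal sketch of how a value-of-your-time adjustment plays out in an expected-value calculation of this kind. All function names, parameters, and numbers are illustrative assumptions, not the actual model's:

```python
# Sketch of a per-application expected-value calculation (illustrative only).

def application_ev(p_accept, fellowship_value, counterfactual_value,
                   hours_to_apply, value_of_time_per_hour, value_of_information):
    """Rough expected value (in £) of submitting one application."""
    upside = p_accept * (fellowship_value - counterfactual_value)
    cost = hours_to_apply * value_of_time_per_hour
    return upside - cost + value_of_information

# Someone who finds applying extremely aversive prices their hours high...
ev_averse = application_ev(
    p_accept=0.05, fellowship_value=20_000, counterfactual_value=5_000,
    hours_to_apply=10, value_of_time_per_hour=200, value_of_information=100,
)
# ...while someone who finds it fun/interesting prices the same hours low.
ev_enjoys = application_ev(
    p_accept=0.05, fellowship_value=20_000, counterfactual_value=5_000,
    hours_to_apply=10, value_of_time_per_hour=5, value_of_information=100,
)
print(round(ev_averse), round(ev_enjoys))  # → -1150 800
```

With every other input held fixed, the psychological-cost adjustment alone flips the sign of the calculation, which is the sense in which it's "covered" by the Value of Your Time.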
the opportunity to make more direct contact with the reality of the dynamics presently shaping frontier AI development – dynamics about which I’ve been writing from a greater distance for many years.
You doing this well could be very valuable for the AI safety field imo. It’s hard to form accurate beliefs about these dynamics from the outside, and I see many people unsure how much to trust Anthropic. Helping clarify this could enable others to make more confident and informed decisions in situations where their view of Anthropic matters.
because EAs are the primary culprits in EA’s recent reputational dip
I agree that EA was unusually fertile ground for a self-inflicted reputational dip, but I don't think that "jumping ship" is very explanatory (outside maybe AI policy circles). EAs were self-critical before EAs did bad things, and many people (incl. me, guilty) have always felt uncomfortable identifying as EA. Many prominent figures also never seemed very committed to a single, persistent EA community. See for example this short exchange between Owen Cotton-Barrett and Will MacAskill from 2017 (~4:30-5:30):
Owen Cotton-Barrett: When science was still relatively small, everyone could be in touch with everybody else. But now science works as a global discipline where lots of people subscribe to a scientific mindset. But there isn't a science community, there are lots of science communities. And I think in the long term we need something like this with effective altruism.
Will MacAskill: This sounds pretty plausible in the long-run. The question is at what stage are we analogously to scientific development?
Owen Cotton-Barrett: In the spirit of being bold, I think this is something we should be paying attention to within a decade.
Will MacAskill: Ok, that seems reasonable.
When I first encountered EA, the ethos was very much focused around earning to give and where to donate. There was a sense that we were fans/supporters of these orgs rather than competitors for jobs at them, and that all of us were on equal footing no matter how much we earned, gave, or followed the news.
I’m curious what fraction of early earn-to-givers now donate to organisations their peers founded vs. still giving to 'old' charities (AMF, The Humane League). My loose impression is that it's pretty low, which could be because (a) they don't see EA startups reaching their impact bar, (b) those startups aren’t (perceived as) funding constrained, or (c) factors you describe here.
I’d also guess that eating more protein improves public health in countries where high body weight causes health problems, since protein makes it easier to eat fewer calories.
But the largest increases in animal protein consumption are likely coming from countries that aren’t (yet) facing issues with obesity?
The nonprofit will be compensated tens of billions by the for-profit entity for the removal of the caps.
- False — The nonprofit is getting $130 billion, more than I expected, but only because OpenAI’s valuation skyrocketed.
Why is this false? The valuation in Oct. 2024 was $157B, which means it has grown ~3.1x since. So wouldn't the compensation of 130/3.1 = ~$42B still be "tens of billions" in May 2025 terms?
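A quick sanity check of that arithmetic, using only the figures cited above (not independently verified):

```python
# Deflate the nonprofit's compensation back into Oct-2024-valuation terms.
valuation_oct_2024 = 157e9   # OpenAI valuation, Oct 2024 (as cited above)
growth_multiple = 3.1        # approximate valuation growth since then
compensation_now = 130e9     # nonprofit's compensation in current terms

compensation_2024_terms = compensation_now / growth_multiple
print(f"~${compensation_2024_terms / 1e9:.0f}B")  # → ~$42B
```

~$42B is still comfortably "tens of billions", which is the point of the question.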
I've found the following diagram of the approx. stakes of OpenAI stakeholders useful for understanding the situation:
Interesting question. Some potentially tangential considerations:
The Stop AI response posted here seems maybe fine in isolation. This might have largely happened due to the Stop AI co-founder having a mental breakdown. But I would hope for Stop AI to deeply consider their role in this as well. The response of Remmelt Ellen (who is a frequent EA Forum contributor and advisor to Stop AI) doesn't make me hopeful, especially the bolded parts: