Software engineer interested in AI safety.
Here is a new blog post from 2025 on the subject. The new estimates are 600 technical AI safety FTEs and 500 non-technical AI safety FTEs (1100 in total).
Thanks for your feedback, Sean.
Estimating the number of FTEs at the non-technical organizations is not straightforward, since often only a fraction of the individuals there are focused on AI safety. For each organization, I estimated what fraction of its total FTEs are focused on AI safety, though I may have overestimated in some cases (e.g., for CFI I can decrease my estimate).
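To make the method concrete, here is a minimal sketch of the fraction-weighted estimate. The organization names, headcounts, and fractions are hypothetical placeholders, not the figures used in the post.

```python
# Rough sketch of the fraction-weighted FTE estimate described above.
# The organizations, headcounts, and fractions below are made-up
# placeholders, not the post's actual data.
orgs = {
    # name: (total_ftes, estimated fraction focused on AI safety)
    "Org A": (40, 0.25),
    "Org B": (15, 1.0),
    "Org C": (120, 0.10),
}

ai_safety_ftes = sum(total * frac for total, frac in orgs.values())
print(f"Estimated AI safety FTEs across these organizations: {ai_safety_ftes:.0f}")
```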
Also, I'll include more frontier labs in the list of non-technical organizations.
The technical AI safety organizations cover a variety of areas, including AI alignment, AI security, interpretability, and evals. The most FTEs are working on empirical AI safety topics such as LLM alignment, jailbreaks, and robustness, which cover a range of risks including misalignment and misuse.
Thanks for your feedback, Ben.
I totally agree with point 1, and you're right that this post is really estimating the total number of people who work at AI safety organizations and then using that number as a proxy for the size of the field. As you said, there are a lot of people who aren't completely focused on AI safety but still make significant contributions to the field. For example, an AI researcher might consider themselves an "LLM researcher" and split their time between non-safety work like evaluating models on benchmarks and AI safety work like developing new alignment methods. Such a researcher would not be counted in this post.
I might add an "other" category to the estimate to avoid this form of undercounting.
Regarding point 2, I collected the list of organizations and estimated the number of FTEs at each using a mixture of Google Search and Gemini Deep Research. The lists are my attempt to find as many AI safety organizations as possible, though of course I may be missing a few. If you can think of any that aren't in the list, I would appreciate it if you shared them so that I can add them.
I would like to see a push towards increasing donations to x-risk reduction and longtermist charities. Last time I checked, only about 10% of GWWC donations were going to longtermist funds like the Long-Term Future Fund. Consequently, I think the x-risk and AI safety funding landscapes have been more reliant on big donors than they should be.
Hi Sanjay, I tried answering that question in this comment. In short, I think a few thousand FTEs seem like a minimally sufficient number, based on the resources needed to solve similar historical problems.