Give me feedback! :)
As part of MATS' compensation reevaluation project, I scraped the publicly disclosed employee compensation data from ProPublica's Nonprofit Explorer for many AI safety and EA organizations (data here), covering 2019-2023. US nonprofits are required to disclose compensation information for certain highly paid employees and contractors on their annual Form 990 tax return, which becomes publicly available. This includes compensation for officers, directors, trustees, key employees, and the highest-compensated employees earning over $100k annually. My data therefore excludes many individuals earning under $100k, but this doesn't seem to affect the yearly medians much, as the data appears to follow a lognormal distribution, with a mode of ~$178k in 2023, for example.
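For intuition on why the $100k reporting cutoff shouldn't move the medians much, here is a minimal sketch (not the original analysis) using illustrative lognormal parameters chosen so the mode lands near $178k; it compares the median of the full distribution with the median after dropping everything below $100k.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters, not fitted to the actual Form 990 data.
# For a lognormal, mode = exp(mu - sigma^2), so this puts the mode at ~$178k.
sigma = 0.5
mu = np.log(178_000) + sigma**2

salaries = rng.lognormal(mean=mu, sigma=sigma, size=1_000_000)

full_median = np.median(salaries)                          # all salaries
reported_median = np.median(salaries[salaries >= 100_000])  # only those disclosed

print(f"Median, all salaries:      ${full_median:,.0f}")
print(f"Median, only >= $100k:     ${reported_median:,.0f}")
print(f"Share earning under $100k: {np.mean(salaries < 100_000):.1%}")
```

Under these illustrative parameters, only about 5% of the distribution falls below $100k, so the median of the reported (truncated) salaries sits within a few percent of the full-distribution median.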
I generally found that AI safety and EA organization employees are highly compensated, albeit inconsistently across similar-sized organizations for equivalent roles (e.g., Redwood and FAR AI). I speculate that this is primarily due to differences in organization funding, but inconsistent compensation policies may also play a role.
I'm sharing this data to promote healthy and fair compensation policies across the ecosystem. I believe that MATS salaries are quite fair and reasonably competitive after our recent salary reevaluation, where we also used Payfactors HR market data for comparison. If anyone wants to do a more detailed study of the data, I highly encourage this!
I decided to exclude OpenAI's nonprofit salaries, as I didn't think they counted as an "AI safety nonprofit" and their highest-paid current employees are definitely employed by the LLC. I decided to include Open Philanthropy's nonprofit employees, even though their most highly compensated employees are likely those under the Open Philanthropy LLC.
If I were building a grantwriting bootcamp, my primary concerns would be:
This seems like a great initiative! However, I don't expect this "grantwriting bootcamp" model to benefit the majority of MATS alumni compared to:
I can't speak for the other AI safety research programs on your list, but from my experience at MATS, alumni are principally worried about (in order):
In the past, acceptance into the MATS extension program literally required submitting a grant proposal and receiving external funding, generally from the LTFF or Open Phil. On average, 81% of alumni who applied have received LTFF or Open Phil grants (or equivalent AI lab contracting roles) for independent research post-program, even without the current (large) RFPs! Our Research Management staff are trained to help scholars submit AI safety grant proposals, and we require all scholars to complete a Research Plan mid-program that doubles as a grant proposal. I think the optimal program for helping most MATS alumni looks less like a "grantwriting bootcamp" and more like the programs I listed above, though I'm sure the bootcamp model will benefit some early-career researchers. I'm happy to call and chat more about this if you like!
Also, Apollo Research and Leap Labs grew out of the MATS London Office (which later became LISA). I realize this was an AI safety office, not an EA office, but it feels significant.
Thanks for publishing this, Arb! I have some thoughts, mostly pertaining to MATS:
Why do we emphasize acceleration over conversion? Because we think that producing a researcher takes a long time (with a high dropout rate), often requires apprenticeship (including illegible knowledge transfer) with a scarce group of mentors (with a high barrier to entry), and benefits substantially from factors such as community support and curriculum. Additionally, MATS' acceptance rate is ~15%, and many rejected applicants are very proficient researchers or engineers, including some with AI safety research experience, who can't find better options (e.g., independent research is worse for them). MATS scholars with prior AI safety research experience generally believe the program was significantly better than their counterfactual options, or was critical for finding collaborators or co-founders (alumni impact analysis forthcoming). So, the appropriate counterfactual for MATS and similar programs seems to be, "Junior researchers apply for funding and move to a research hub, hoping that a mentor responds to their emails, while orgs still struggle to scale even with extra cash."