Give me feedback! :)
If I were building a grantwriting bootcamp, my primary concerns would be:
This seems like a great initiative! However, I don't expect this "grantwriting bootcamp" model to benefit the majority of MATS alumni compared to:
I can't speak for the other AI safety research programs on your list, but from my experience at MATS, alumni are principally worried about (in order):
In the past, acceptance into the MATS extension program literally required submitting a grant proposal and receiving external funding, generally from the LTFF or Open Phil. On average, 81% of alumni who apply have received LTFF or Open Phil grants (or equivalent AI lab contracting roles) for independent research post-program, even without the current (large) RFPs! Our Research Management staff are trained to help scholars submit AI safety grant proposals, and we require all scholars to complete a Research Plan mid-program that doubles as a grant proposal. I think the optimal program to help most MATS alumni looks less like a "grantwriting bootcamp" and more like the programs I listed above, though I'm sure the bootcamp model will benefit some early-career researchers. I'm happy to call and chat more about this if you like!
Also, Apollo Research and Leap Labs grew out of the MATS London Office (which later became LISA). I realize this was an AI safety office, not an EA office, but it feels significant.
TL;DR: MATS is fundraising for Summer 2025 and could support more scholars at $35k/scholar
Ryan Kidd here, MATS Co-Executive Director :)
The ML Alignment & Theory Scholars (MATS) Program is a twice-yearly independent research and educational seminar program that aims to provide talented scholars with talks, workshops, and research mentorship in the fields of AI alignment, interpretability, and governance, and to connect them with the Berkeley AI safety research community. The Winter 2024-25 Program will run Jan 6-Mar 14, 2025, and our Summer 2025 Program is set to begin in June 2025. We are currently accepting donations for our Summer 2025 Program and beyond, and would love to include additional interested mentors and scholars at $35k/scholar. We have substantially benefited from individual donations in the past and were able to support ~11 additional scholars thanks to Manifund donations.
MATS helps expand the talent pipeline for AI safety research by empowering scholars to work on AI safety at existing research teams, found new research teams, and pursue independent research. To this end, MATS connects scholars with research mentorship and funding, and provides a seminar program, office space, housing, research management, networking opportunities, community support, and logistical support. MATS supports mentors with logistics, advertising, applicant selection, and research management, greatly reducing the barriers to research mentorship. Immediately following each program is an optional extension phase in London where top-performing scholars can continue research with their mentors. For more information about MATS, please see our recent reports: Alumni Impact Analysis, Winter 2023-24 Retrospective, Summer 2023 Retrospective, and Talent Needs of Technical AI Safety Teams.
You can see further discussion of our program on our website and Manifund page. Please feel free to AMA in the comments here :)
MATS is now hiring for three roles!
We are generally looking for candidates who:
Please apply via this form and share via your networks.
Thanks for publishing this, Arb! I have some thoughts, mostly pertaining to MATS:
Why do we emphasize acceleration over conversion? Because we think that producing a researcher takes a long time (with a high drop-out rate), often requires apprenticeship (including illegible knowledge transfer) with a scarce group of mentors (with a high barrier to entry), and benefits substantially from factors such as community support and curriculum. Additionally, MATS' acceptance rate is ~15%, and many rejected applicants are very proficient researchers or engineers, including some with AI safety research experience, who can't find better options (e.g., independent research is worse for them). MATS scholars with prior AI safety research experience generally believe the program was significantly better than their counterfactual options, or was critical for finding collaborators or co-founders (alumni impact analysis forthcoming). So, the appropriate counterfactual for MATS and similar programs seems to be, "Junior researchers apply for funding and move to a research hub, hoping that a mentor responds to their emails, while orgs still struggle to scale even with extra cash."