Ryan Kidd

Co-Executive Director @ MATS
757 karma · Working (0-5 years) · Berkeley, CA, USA
matsprogram.org

Bio

Give me feedback! :)

Comments

Thanks for publishing this, Arb! I have some thoughts, mostly pertaining to MATS:

  1. MATS believes a large part of our impact comes from accelerating researchers who would likely enter AI safety anyway, but who would otherwise take significantly longer to spin up as competent researchers, rather than from converting people into AIS researchers. MATS highly recommends that applicants have already completed AI Safety Fundamentals, and most of our applicants come from personal recommendations or AISF alumni (though we are considering better-targeted advertising to professional engineers and established academics). Here is a simplified model of the AI safety technical research pipeline as we see it.

    Why do we emphasize acceleration over conversion? Because we think that producing a researcher takes a long time (with a high dropout rate), often requires apprenticeship (including illegible knowledge transfer) with a scarce group of mentors (with a high barrier to entry), and benefits substantially from factors such as community support and curriculum. Additionally, MATS' acceptance rate is ~15%, and many rejected applicants are very proficient researchers or engineers, including some with AI safety research experience, who can't find better options (e.g., independent research is worse for them). MATS scholars with prior AI safety research experience generally believe the program was significantly better than their counterfactual options, or was critical for finding collaborators or co-founders (alumni impact analysis forthcoming). So, the appropriate counterfactual for MATS and similar programs seems to be, "Junior researchers apply for funding and move to a research hub, hoping that a mentor responds to their emails, while orgs still struggle to scale even with extra cash."
  2. The "push vs. pull" model seems to neglect that e.g. many MATS scholars had highly paid roles in industry (or de facto offers given their qualifications) and chose to accept stipends at $30-50/h because working on AI safety is intrinsically a "pull" for a subset of talent and there were no better options. Additionally, MATS stipends are basically equivalent to LTFF funding; scholars are effectively self-employed as independent researchers, albeit with mentorship, operations, research management, and community support. Also, 63% of past MATS scholars have applied for funding immediately post-program as independent researchers for 4+ months as part of our extension program (many others go back to finish their PhDs or are hired) and 85% of those have been funded. I would guess that the median MATS scholar is slightly above the level of the median LTFF grantee from 2022 in terms of research impact, particularly given the boost they give to a mentor's research.
  3. Comparing the cost of funding a marginal good independent researcher ($80k/year) to the cost of producing a good new researcher ($40k) seems like a false equivalence if you can't have one without the other. I believe the most taut constraint on producing more AIS researchers is generally training/mentorship, not money. Even wizard software engineers generally need an on-ramp for a field as pre-paradigmatic and illegible as AI safety. If all MATS' money instead went to the LTFF to support further independent researchers, I believe that substantially less impact would be generated. Many LTFF-funded researchers have enrolled in MATS! Caveat: you could probably hire, e.g., Terry Tao for some amount of money, but that amount would likely be very large. Side note: independent researchers are likely cheaper than scholars in managed research programs or employees at AIS orgs because the latter two carry overhead costs that, in turn, benefit researcher output.
  4. Some of the researchers who passed through AISC later did MATS. Similarly, several researchers who did MLAB or REMIX later did MATS. It's often hard to appropriately attribute Shapley value to elements of the pipeline (see the toy calculation after this list), so I recommend assessing orgs that address different components of the pipeline by how well they achieve their role, and distributing funds between elements of the pipeline based on how much each is constraining the flow of new talent to later sections (anchored by elasticity to funding). For example, I believe that MATS and AISC should be assessed by their effectiveness (including cost, speedup, and mentor time) at converting "informed talent" (i.e., understands the scope of the problem) into "empowered talent" (i.e., can iterate on solutions and attract funding/get hired). That said, MATS aims to improve our advertising towards established academics and software engineers, which might bypass the pipeline in the diagram above. Side note: I believe that converting "unknown talent" into "informed talent" is generally much cheaper than converting "informed talent" into "empowered talent."
  5. Several MATS mentors (e.g., Neel Nanda) credit the program for helping them develop as research leads. Similarly, several MATS alumni have credited AISC (and SPAR) for helping them develop as research leads, much as some postdocs or PhD students take on supervisory roles on the way to a professorship. I believe the "carrying capacity" of the AI safety research field is largely bottlenecked on good research leads (i.e., people who can scope and lead useful AIS research projects), especially given how many competent software engineers are flooding into AIS. It seems a mistake not to account for this source of impact in this review.
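
To make the attribution point concrete, here is a toy Shapley calculation. This is purely illustrative: the two "players" (an on-ramp like AISC and an accelerator like MATS) and all the numbers are made up, not MATS data. The point is that when pipeline stages are complements, each stage should be credited with its average marginal contribution rather than with the whole output:

```python
from itertools import combinations
from math import factorial

def shapley(players, value):
    """Exact Shapley values for a characteristic function `value`
    defined on sets of players."""
    n = len(players)
    out = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(n):
            for coalition in combinations(others, k):
                s = frozenset(coalition)
                # Weight = probability that p joins a random ordering
                # right after exactly this coalition.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(s | {p}) - value(s))
        out[p] = total
    return out

# Hypothetical characteristic function: "empowered researchers per year".
# Most of the value only materializes when both stages exist.
def v(coalition):
    if {"on-ramp", "accelerator"} <= coalition:
        return 10
    if "on-ramp" in coalition:
        return 2  # informed, but not empowered, talent
    if "accelerator" in coalition:
        return 3  # can only accelerate already-informed applicants
    return 0

print(shapley(["on-ramp", "accelerator"], v))
# {'on-ramp': 4.5, 'accelerator': 5.5}
```

Under these made-up numbers, neither stage gets credit for all 10 empowered researchers; the joint surplus is split according to average marginal contributions, which is the spirit of the assessment scheme I'm recommending.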

If I were building a grantwriting bootcamp, my primary concerns would be:

  • Where will successful grantees work?
    • I've found that independent researchers greatly benefit from a shared office space and community, for social connection, high-quality peer feedback, and centralizing operations costs.
    • Current AI safety offices seem to be overflowing. We likely need additional high-capacity AI safety offices to support the influx of independent researchers from Open Phil's RFPs.
    • I think that, in general, employment in a highly effective organization is more impactful than independent research for the majority of projects and researchers. While I greatly support the new Open Phil RFPs, I hope that more of their grants go towards setting up highly effective organizations, like nonprofit FROs (focused research organizations), that can absorb and scale talent.
    • I see the primary benefit of the MATS extension program as providing further research mentorship (albeit with more accountability and autonomy than the main program) and longer time horizons to complete research projects. The infrastructure we provide is quite significant, and increasing the number of independent researchers without also scaling long-term support systems will likely not produce optimal results.
  • How will successful grantees obtain mentorship and high-quality feedback loops?
    • Even with the optimal project proposal, emerging researchers seem to benefit substantially from high-quality mentorship, particularly over the course of a research project. I do not believe that all of this support should be front-loaded.
    • I would support an accompanying long-term peer support or mentorship program after the grantwriting bootcamp. I apologize if you were already planning this!
  • Who will employ grantees at the conclusion of their research?
    • This is a significant question for MATS as well. I currently believe that high-quality research during the program is a strong enough output alone to justify the cost. However, ideally, most MATS alumni would find employment post-program. The main roadblocks to this employment seem to be software engineering skills (which points to ARENA-like coding bootcamps as a solution) and high-quality peer-reviewed publications (which usually need strong mentorship).
    • At the moment, I think "better grant proposals" is not a significant bottleneck to MATS alumni getting jobs. Rather, I think coding skills and high-quality publications are the limiting factors. Also, I think there are far too few jobs to go around compared to the scale of the AI safety problem, so I also support more startup accelerators.

This seems like a great initiative! However, I don't expect this "grantwriting bootcamp" model to benefit the majority of MATS alumni compared to:

  • Longer, better-resourced independent research programs like the MATS extension program, the Constellation Visiting Fellows Program, and the CHAI Research Fellowship, which provide high-quality mentorship and job security for 6-12 months;
  • Dedicated start-up incubator programs like Catalyze Impact, Entrepreneur First def/acc, and Y Combinator, which provide seed funding, co-founder matching, and start-up advice;
  • Longer, more established academic PhD/postdoc programs that provide high-quality mentorship and legible credentials, e.g., via the Vitalik Buterin PhD Fellowship in AI Existential Safety and labs like UCB CHAI, NYU ARG, MIT AAG, Mila, KASL, MIT Tegmark Group, etc.

I can't speak for the other AI safety research programs on your list, but from my experience at MATS, alumni are principally worried about (in order):

  1. Software engineering skills to pass AI lab coding tests;
  2. High-quality peer-reviewed publications at top ML conferences to get into AI lab research jobs or top PhD programs;
  3. Further high-quality mentorship;
  4. Further connections to research collaborators.

In the past, acceptance into the MATS extension program literally required submitting a grant proposal and receiving external funding, generally from the LTFF or Open Phil. On average, 81% of alumni who apply have received LTFF or Open Phil grants (or equivalent AI lab contracting roles) for independent research post-program, even without the current (large) RFPs! Our Research Management staff are trained to help scholars submit AI safety grant proposals, and we require all scholars to complete a Research Plan mid-program that doubles as a grant proposal. I think that the optimal program to help most MATS alumni looks less like a "grantwriting bootcamp" and more like the programs I listed above, though I'm sure the grantwriting bootcamp model will benefit some early-career researchers. I'm happy to call and chat more about this if you like!

Also, Apollo Research and Leap Labs grew out of the MATS London Office (what later became LISA). I realize this was an AI safety office, not an EA office, but it feels significant.

Yep, seems important. But I don't think this is particularly salient to the topic of the post: changes to AI safety priorities based on the new inference scaling paradigm.

Answer by Ryan Kidd

TL;DR: MATS is fundraising for Summer 2025 and could support more scholars at $35k/scholar

Ryan Kidd here, MATS Co-Executive Director :)

The ML Alignment & Theory Scholars (MATS) Program is a twice-yearly independent research and educational seminar program that aims to provide talented scholars with talks, workshops, and research mentorship in the fields of AI alignment, interpretability, and governance, and to connect them with the Berkeley AI safety research community. The Winter 2024-25 Program will run Jan 6-Mar 14, 2025, and our Summer 2025 Program is set to begin in June 2025. We are currently accepting donations for our Summer 2025 Program and beyond. We would love to include additional interested mentors and scholars at $35k/scholar. We have substantially benefited from individual donations in the past and were able to support ~11 additional scholars due to Manifund donations.

MATS helps expand the talent pipeline for AI safety research by empowering scholars to work on AI safety at existing research teams, found new research teams, and pursue independent research. To this end, MATS connects scholars with research mentorship and funding, and provides a seminar program, office space, housing, research management, networking opportunities, community support, and logistical support. MATS supports mentors with logistics, advertising, applicant selection, and research management, greatly reducing the barriers to research mentorship. Immediately following each program is an optional extension phase in London where top-performing scholars can continue research with their mentors. For more information about MATS, please see our recent reports: Alumni Impact Analysis, Winter 2023-24 Retrospective, Summer 2023 Retrospective, and Talent Needs of Technical AI Safety Teams.

You can see further discussion of our program on our website and Manifund page. Please feel free to AMA in the comments here :)

Yeah, we deliberately refrained from commenting much on the talent needs for founding new orgs. Hopefully, we will have more to say on this later, but it feels somewhat tied to AI safety macrostrategy, which is complicated.

Cheers, Jamie! Keep in mind, however, that these are current needs, and teenagers will likely be facing a job market shaped by future needs. As we say in the report:

...predictions about future talent needs from interviewees didn’t consistently point in the same direction.

Answer by Ryan Kidd

MATS is now hiring for three roles!

  • Program Generalist (London) (1 hire, starting ASAP);
  • Community Manager (Berkeley) (1 hire, starting Jun 3);
  • Research Manager (Berkeley) (1-3 hires, starting Jun 3).

We are generally looking for candidates who:

  • Are excited to work in a fast-paced environment and are comfortable switching responsibilities and projects as the needs of MATS change;
  • Want to help the team with high-level strategy;
  • Are self-motivated and can take on new responsibilities within MATS over time; and
  • Care about what is best for the long-term future, independent of MATS’ interests.

Please apply via this form and share it with your networks.

Cheers, Nick! We decided to change the title to "retrospective" based on this and some LessWrong comments.
