
The Stanford Existential Risks Initiative (SERI) recently opened applications for the Winter 2022 Cohort of the ML Alignment Theory Scholars (MATS) Program, which aims to help aspiring alignment researchers enter the field by facilitating research seminars, workshops, an academic community, and an independent research project with an alignment research mentor. Applications close on Oct 24 and include a written response to (potentially hard) mentor-specific selection questions, viewable on our website.

Our current mentors include Alex Turner, Andrew Critch, Beth Barnes, Dan Hendrycks, Evan Hubinger, Jesse Clifton, John Wentworth, Nate Soares, Neel Nanda, Owain Evans, Quintin Pope, Rebecca Gorman, Richard Ngo, Stuart Armstrong, Vanessa Kosoy, Victoria Krakovna, and Vivek Hebbar.

Program details

MATS is a scientific and educational seminar and independent research program, intended to serve as an introduction to the field of AI alignment and allow networking with alignment researchers and institutions. The MATS Program Winter 2022 Cohort consists of:

  • A 6-week online training program (averaging 10-20 h/week from Nov 7 to Dec 14);
  • A 2-month in-person educational seminar and independent research program in Berkeley, California for select scholars (40 h/week from Jan 3 to Feb 24); and
  • Possible ongoing 2-month extensions for select scholars, potentially in Berkeley, California or London, UK.

During the research phase of the program, mentors will meet with scholars for around 1-2 h/week to share their research agendas and supervise the scholars' research projects. Scholars' research directions will initially be chosen by their mentors, but by default, scholars are expected to develop their own research directions as the program continues. Educational seminars and workshops will be held 2-3 times per week, similar to our Summer Seminar Program.

The MATS program is a joint initiative by the Stanford Existential Risks Initiative and the Berkeley Existential Risk Initiative, with support from Lightcone Infrastructure and Conjecture. We receive financial support from the Long-Term Future Fund.

Who is this program for?

Our ideal applicant has:

  • an understanding of the AI alignment research landscape equivalent to having completed the AGI Safety Fundamentals course;
  • previous experience with technical research (e.g. ML, CS, maths, physics, neuroscience, etc.), ideally at a postgraduate level;
  • strong motivation to pursue a career in AI alignment research, particularly on longtermist grounds.

Even if you do not entirely meet these criteria, we encourage you to apply! Several past scholars applied without strong expectations and were accepted.

How to apply

The program will run several concurrent streams, each for a different alignment research agenda. Read through the descriptions of each stream below and the associated candidate selection questions. To apply for a stream, submit an application via this portal, including your resume and a response to the appropriate candidate selection questions detailed on our website. We will assess your application based on your response and prior research experience. Feel free to apply for multiple streams—we will assess you independently for each.

Please note that the candidate selection questions can be quite hard, depending on the mentor! Allow yourself sufficient time to apply to your chosen stream(s). A strong application to one stream may be of higher value than moderate applications to several streams (though we will assess each application independently).

Applications for the Winter 2022 Cohort are due by Oct 24.

Frequently asked questions

What are the key dates for MATS?

  • 9/24: Applications released
  • 10/24: Applications close
  • 11/02: Applicants accepted/rejected
  • 11/07 to 12/16: Training program (6 weeks, 10-20 h/week)
  • 1/3 to 2/24: Scholars program in Berkeley (8 weeks, 40 h/week)
  • 2/24 onwards: Potential extensions, pending mentor review, including the possibility of a London-based program

Are the key dates flexible?

We want to be flexible for applicants who have winter exams or whose school terms start early. Based on individual circumstances, we may be willing to alter the time commitment of the scholars program and allow scholars to start or leave early. Please tell us your availability when applying.

The in-person scholars program can be 20 h/week for very promising applicants with concurrent responsibilities, although we expect a strong involvement in the program and participation in most organized events.

Will this program be remote or in-person?

The training program and research sprint will be remote, and the scholars program will be in-person in Berkeley, CA. For exceptional applicants, we may be willing to offer the scholars program remotely.

What does “financial support” concretely entail?

SERI itself cannot provide any funding; however, the Long-Term Future Fund has generously offered to provide a stipend totaling $6K for completing the training program and a stipend totaling $16K for completing the scholars program.

What is the long-term goal for MATS scholars?

We anticipate that after the MATS program, scholars will either seek employment at an existing alignment organization (e.g., Aligned AI, ALTER, Anthropic, ARC, CHAI, CLR, Conjecture, DeepMind, Encultured AI, FAR, MIRI, OpenAI, Redwood Research), continue academic research, or apply to the Long-Term Future Fund or the FTX Future Fund as an independent researcher.

What if I want to apply with an agenda independent of any mentor?

There is an option to apply with your own research proposals. This option is likely to be more selective than applying under a mentor; however, we are willing to accept outstanding applicants.

What should I expect from my mentor?

During the scholars’ program, you should expect to meet with your mentor for at least one hour per week, with more frequent communication via Slack. The extent of mentor support will vary depending on the project and the mentor. Scholars will also receive support from MATS’ Technical Generalist staff, who will serve as teaching assistants and may assist with research mentorship.

What training will the program offer?

MATS places a strong emphasis on education in addition to fostering independent research. We plan to host newly developed curricula, including an advanced alignment research curriculum, mentor-specific reading lists, workshops on model-building and rationality, and more. We aim to help scholars build their alignment research toolbox by hosting seminars and workshops with alignment researchers and by providing an academic community of fellow alignment scholars and mentors with diverse research interests. MATS' main goal is to help scholars, over time, become strong, independent researchers who can contribute to the field of AI alignment.

Can I join the program from outside the US?

MATS is a scientific and educational seminar and independent research program, and therefore scholars from outside the US can apply for B-1 visas (further information here). Scholars who come from Visa Waiver Program (VWP) Designated Countries can instead apply to the VWP via the Electronic System for Travel Authorization (ESTA), which is processed in three days. Scholars accepted into the VWP can stay up to 90 days in the US, while scholars who receive a B-1 visa can stay up to 180 days. Please note that B-1 visa approval times can be significantly longer than ESTA approval times, depending on your country of origin.

Comments

I wasn't sure if there's an email for asking clarification questions about the application, but I hope you won't mind me asking here.

For Nate & Vivek's problems, it says "It is mandatory to attempt either #1a-c or #2."

I assume 1 a-c corresponds to what is actually labelled as 1.1, 1.2, and 1.3 from the contest problems?

And I suppose 2 is actually what is labelled as 3, i.e. the problem starting with "Solve alignment given these relaxations..."? Would that be correct?

(I'm helping Vivek and Nate run the consequentialist cognition MATS stream)

Yes, both of those are correct. The formatting got screwed up in a conversion, and should be fixed soon.

In the future, you could send Vivek or me a DM to contact our project specifically. I don't know what the official channel for general questions about MATS is.

The official channel for general questions about MATS is the contact form on our website.

Thank you!
