
The Stanford Existential Risks Initiative (SERI) recently opened applications for the Winter 2022 Cohort of the ML Alignment Theory Scholars (MATS) Program, which aims to help aspiring alignment researchers enter the field through research seminars, workshops, an academic community, and an independent research project supervised by an alignment research mentor. Applications close on Oct 24 and include a written response to (potentially hard) mentor-specific selection questions, viewable on our website.

Our current mentors include Alex Turner, Andrew Critch, Beth Barnes, Dan Hendrycks, Evan Hubinger, Jesse Clifton, John Wentworth, Nate Soares, Neel Nanda, Owain Evans, Quintin Pope, Rebecca Gorman, Richard Ngo, Stuart Armstrong, Vanessa Kosoy, Victoria Krakovna, and Vivek Hebbar.

Program details

MATS is a scientific and educational seminar and independent research program, intended to serve as an introduction to the field of AI alignment and allow networking with alignment researchers and institutions. The MATS Program Winter 2022 Cohort consists of:

  • A 6-week online training program (averaging 10-20 h/week from Nov 7 to Dec 14);
  • A 2-month in-person educational seminar and independent research program in Berkeley, California for select scholars (40 h/week from Jan 3 to Feb 24); and
  • Possible ongoing 2-month extensions for select scholars, potentially in Berkeley, California or London, UK.

During the research phase of the program, mentors will meet with scholars for around 1-2 h/week to share their research agenda and supervise the scholars’ research projects. Scholars' research directions will initially be chosen by their mentors, but by default, scholars are expected to develop their own research directions as the program continues. Educational seminars and workshops will be held 2-3 times per week, similar to our Summer Seminar Program.

The MATS program is a joint initiative by the Stanford Existential Risks Initiative and the Berkeley Existential Risk Initiative, with support from Lightcone Infrastructure and Conjecture. We receive financial support from the Long-Term Future Fund.

Who is this program for?

Our ideal applicant has:

  • an understanding of the AI alignment research landscape equivalent to having completed the AGI Safety Fundamentals course;
  • previous experience with technical research (e.g. ML, CS, maths, physics, neuroscience, etc.), ideally at a postgraduate level;
  • strong motivation to pursue a career in AI alignment research, particularly on longtermist grounds.

Even if you do not entirely meet these criteria, we encourage you to apply! Several past scholars applied without expecting to be accepted and were admitted to the program.

How to apply

The program will run several concurrent streams, each for a different alignment research agenda. Read through the descriptions of each stream below and the associated candidate selection questions. To apply for a stream, submit an application via this portal, including your resume and a response to the appropriate candidate selection questions detailed on our website. We will assess your application based on your response and prior research experience. Feel free to apply for multiple streams—we will assess you independently for each.

Please note that the candidate selection questions can be quite hard, depending on the mentor! Allow yourself sufficient time to apply to your chosen stream(s). A strong application to one stream may be of higher value than moderate applications to several streams (though each application will be assessed independently).

Applications for the Winter 2022 Cohort are due by Oct 24.

Frequently asked questions

What are the key dates for MATS?

  • 9/24: Applications released
  • 10/24: Applications close
  • 11/02: Applicants accepted/rejected
  • 11/07 to 12/16: Training program (6 weeks, 10-20 h/week)
  • 1/3 to 2/24: Scholars program in Berkeley (8 weeks, 40 h/week)
  • 2/24 onwards: Potential extensions, pending mentor review, including the possibility of a London-based program

Are the key dates flexible?

We want to be flexible for applicants who have winter exams or whose school terms start early. Based on individual circumstances, we may be willing to alter the time commitment of the scholars program and allow scholars to start or leave early. Please tell us your availability when applying.

The in-person scholars program can be reduced to 20 h/week for very promising applicants with concurrent responsibilities, although we still expect strong involvement in the program and participation in most organized events.

Will this program be remote or in-person?

The training program and research sprint will be remote; the scholars program will be in-person in Berkeley, CA. For exceptional applicants, we may be willing to offer the scholars program remotely.

What does “financial support” concretely entail?

SERI itself cannot provide any funding; however, the Long-Term Future Fund has generously offered to provide a stipend totaling $6K for completing the training program and a stipend totaling $16K for completing the scholars program.
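
For a rough sense of what these figures imply per hour, here is a back-of-the-envelope sketch. This is an unofficial illustration only, assuming the weekly hour ranges quoted elsewhere in this post; the actual terms are set by the fund.

```python
# Rough, unofficial estimate of implied hourly rates, assuming the
# hour ranges quoted elsewhere in this post (not official figures).
training_stipend = 6_000            # USD, for the 6-week training program
training_hours = (10 * 6, 20 * 6)   # 10-20 h/week for 6 weeks -> 60-120 hours

scholars_stipend = 16_000           # USD, for the 8-week scholars program
scholars_hours = 40 * 8             # 40 h/week for 8 weeks -> 320 hours

print(f"Training: ${training_stipend / training_hours[1]:.0f}-"
      f"{training_stipend / training_hours[0]:.0f} per hour")
print(f"Scholars: ${scholars_stipend / scholars_hours:.0f} per hour")
print(f"Total stipend: ${training_stipend + scholars_stipend:,}")
# Training: $50-100 per hour; Scholars: $50 per hour; Total stipend: $22,000
```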

What is the long-term goal for MATS scholars?

We anticipate that after the MATS program, scholars will either seek employment at an existing alignment organization (e.g., Aligned AI, ALTER, Anthropic, ARC, CHAI, CLR, Conjecture, DeepMind, Encultured AI, FAR, MIRI, OpenAI, Redwood Research), continue academic research, or apply to the Long-Term Future Fund or the FTX Future Fund as an independent researcher.

What if I want to apply with an agenda independent of any mentor?

There is an option to apply with your own research proposals. This option is likely to be more selective than applying under a mentor; however, we are willing to accept outstanding applicants.

What should I expect from my mentor?

During the scholars’ program, you should expect to meet with your mentor for at least one hour per week, with more frequent communication via Slack. The extent of mentor support will vary depending on the project and the mentor. Scholars will also receive support from MATS’ Technical Generalist staff, who will serve as teaching assistants and may assist with research mentorship.

What training will the program offer?

MATS aims to have a strong emphasis on education in addition to fostering independent research. We plan to host some newly developed curricula, including an advanced alignment research curriculum, mentor-specific reading lists, workshops on model-building and rationality, and more. We plan to help scholars build their alignment research toolbox by hosting seminars and workshops with alignment researchers and providing an academic community of fellow alignment scholars and mentors with diverse research interests. MATS’ main goal is to help scholars, over time, become strong, independent researchers who can contribute to the field of AI alignment.

Can I join the program from outside the US?

MATS is a scientific and educational seminar and independent research program, and therefore scholars from outside the US can apply for B-1 visas (further information here). Scholars who come from Visa Waiver Program (VWP) Designated Countries can instead apply to the VWP via the Electronic System for Travel Authorization (ESTA), which is typically processed within three days. Scholars admitted under the VWP can stay up to 90 days in the US, while scholars who receive a B-1 visa can stay up to 180 days. Please note that B-1 visa approval times can be significantly longer than ESTA approval times, depending on your country of origin.


Comments



I wasn't sure if there's an email for questions about the application, but I hope you won't mind me asking here.

For Nate & Vivek's problems, it says "It is mandatory to attempt either #1a-c or #2."

I assume #1a-c corresponds to what is actually labelled 1.1, 1.2, and 1.3 in the contest problems?

And I suppose 2 is actually what is labelled as 3, i.e. the problem starting with "Solve alignment given these relaxations..."? Would that be correct?

(I'm helping Vivek and Nate run the consequentialist cognition MATS stream)

Yes, both of those are correct. The formatting got screwed up in a conversion, and should be fixed soon.

In the future, you could send Vivek or me a DM to contact our project specifically. I don't know what the official channel for general questions about MATS is.

The official channel for general questions about MATS is the contact form on our website.

Thank you!
