Apply to ARBOx2: a programme to rapidly upskill in ML safety.

Join us (OAISI) in Oxford this July for the second iteration of ARBOx (Alignment Research Bootcamp Oxford), a two-week intensive designed to rapidly build skills in ML safety. We'll run a compressed version of the ARENA syllabus. During the programme, you'll build GPT-2 small from scratch, learn interpretability techniques, understand RLHF, and replicate a paper or two.

Who should apply?
• We’re looking for applicants who aren’t yet familiar with mechanistic interpretability.
• We expect basic familiarity with linear algebra, programming in Python, and AI safety.
• You don’t need to be an Oxford student to participate. The bootcamp is open to anyone who could derive meaningful professional benefit from ML safety training, regardless of their career path or background.


We think you would be a good fit if you are a postgraduate student or a working professional, though we will also consider strong undergraduate applicants.

Programme details:
Dates: June 30th - July 11th, 2025.
Benefits: Central Oxford accommodation for non-Oxford residents, lunch, and potential support with travel expenses.
We’ll have lectures in the morning covering aspects of the syllabus, and the rest of each day will be spent pair-programming. During the lunch break there will be short talks from experts in the field, and we plan to run a couple of socials for participants.

Apply now! Applications are rolling and close EOD on Sunday 25th May.
Comments:

Would you say this is more accurately an ML safety upskilling bootcamp or a mechanistic interpretability bootcamp? 

Hi! Half of the time is spent on mechanistic interpretability, the other half on other topics (RL and paper replication).
