
In the spirit of Career Conversations Week, I'm throwing out some quick questions that I hope are also relevant for others in a similar position!

I'm an early-career person aiming to have a positive impact on AI safety. For a couple of years, I've been building skills towards a career in technical AI safety research, such as: 

  • Publishing ML safety research projects
  • Doing a Master's degree in machine learning at a top ML school
  • Generally focusing on building technical ML research skills and experience at the expense of other forms of career capital.

However, I'm now much more strongly considering paths to impact that route through AI governance, including AI policy, than pure technical alignment research. Since I still feel pretty junior, I think I have room to explore a bit. However, I'm not junior enough to have a fresh degree in front of me (e.g. to choose to study public policy), and I feel like I have a strong fit for technical ML skills and knowledge, including explaining technical concepts to non-technical audiences, that I want to leverage.

What are some of the best ways for people like me to transition from technical AI safety research roles into more explicit AI governance and policy? So far, I'm only really aware of:

  • Policy fellowships that might take technical researchers without policy experience, like the Horizon, STPI, PMF, or STPF fellowships
  • Policy positions in top AI labs, which are themselves important for AI governance and could transition well into other AI governance careers
  • Policy research positions that require significant technical knowledge at organizations like GovAI
  • Some vague notion of "being a trusted scientific advisor to key decision makers in DC or London," though I'm not sure what this practically looks like or how to get there.

Any other ideas? Or for those who have been in a similar situation, how have you thought about this?

This post is part of the September 2023 Career Conversations Week. You can see other Career Conversations Week posts here.


3 Answers

Here are some different options for people with technical backgrounds to pivot into policy careers:

  • Policy fellowships (see our database here) are a great entryway into DC policy careers for technical people (and for many non-technical people!). Fellowships are especially helpful for mid-career technical folks who would otherwise struggle to make the pivot because they’re both (1) too senior for the normal entryways (e.g. policy internships [such as in Congress], a policy-oriented (under)graduate degree, junior policy jobs), and (2) have too little policy experience to qualify for mid-level or senior policy jobs. There are policy fellowships for people from all experience levels.
  • Check out our advice on policy internships (the linked post targets undergraduates, but the internship advice applies more widely), which are the most common way for junior people to test their fit for policy and build policy-relevant career capital, whether they have technical or non-technical backgrounds.
    • You might also conduct a policy-relevant research project during a summer/winter research fellowship offered by organizations like GovAI, ERA, CHERI, SERI, and XLab.
  • If you’re currently enrolled in a technical (under)graduate degree, try to gain some policy-relevant knowledge, networks, and skills by taking policy classes if you can, especially in science and technology policy, or by choosing a policy-relevant thesis project.
  • Participate in policy-relevant online programs, like the AI Safety Fundamentals Course’s Governance Track, speaker series like this, or these AI policy and biosecurity policy online courses.
  • Consider doing a policy-relevant graduate degree, particularly a policy master’s or law degree. You can often get into these degree programs even if you have only done technical work in the past (ideally, you should be able to tell a narrative about how your interest in policy work is connected to your prior technical studies and/or work experience). Even if you already have a technical graduate degree, it might make sense to do another (short/part-time) policy degree if you’re serious about pivoting into policy but are otherwise struggling to make the switch.

One brief comment on mindset: Policy jobs typically don’t require people to have a particular subject background, though there are exceptions. There are plenty of people with STEM degrees and technical work experience who have pivoted into policy roles, often focused on science and technology (S&T) policy areas, where they can leverage their technical expertise for added credibility and impact. There are certain policy roles and institutions that prefer people with technical backgrounds, such as many roles in the White House OSTP, NSF, DOE, NIH, etc. So, you shouldn't feel like it's impossible to pivot from technical to policy work, and there are resources to help you with this pivot. We particularly recommend speaking with an 80,000 Hours career adviser about this. 

This is sublime, thank you!

(Mostly I don't know.)

On policy fellowships: also RAND TASP.

I think many reasonably important policy roles, such as working for key congressional committees or federal agencies, don't require policy experience.

Reposting an anonymous addition from someone who works in policy:

Your list of options mostly matches how I think about this. I would add:

  • Based on several anecdotal examples, the main paths I’m aware of for becoming a trusted technical advisor are “start with a relevant job, such as a policy fellowship, a job doing technical research that informs policy, or a non-policy technical job, and gradually earn a reputation for being a helpful expert.” To earn that reputation, some things you can do are: becoming one of the people who knows most about some niche but important area (anecdotally, “just” a few years of learning can be sufficient for someone to become a top expert in areas such as compute governance or high-skill immigration policy, since these are areas where no one has decades of experience — though there are also generalists who serve as trusted technical advisors); taking opportunities that come your way to advise policymakers (such opportunities can be common once you have your first policy job, or if you can draw on a strong network while doing primarily non-policy technical work); and generally being nice and respecting confidentiality. You don’t need to be a US citizen to do this in the US context.
  • In addition to GovAI, other orgs where people can do technical research for AI policy include:
    • RAND and Epoch AI
    • Academia (e.g. I think the AI policy paper “What does it take to catch a Chinchilla?” was written as part of the author’s PhD work)
    • AI labs
1 Comment

Hmm, I’d be very keen to see what an answer to this might look like. I know some people I work with are interested in making a similar kind of switch.
