
In the spirit of Career Conversations Week, I'm throwing out some quick questions that I hope are also relevant for others in a similar position!

I'm an early-career person aiming to have a positive impact on AI safety. For a couple of years, I've been building skills towards a career in technical AI safety research, such as: 

  • Publishing ML safety research projects
  • Doing a Master's degree in machine learning at a top ML school
  • Generally focusing on building technical ML research skills and experience at the expense of other forms of career capital.

However, I'm now much more strongly considering paths to impact that route through AI governance, including AI policy, rather than pure technical alignment research. Since I still feel pretty junior, I think I have room to explore a bit. However, I'm not junior enough to have a fresh degree in front of me (e.g. to choose to study public policy), and I feel I have strong technical ML skills and knowledge that I want to leverage, including the ability to explain technical concepts to non-technical audiences.

What are some of the best ways for people like me to transition from technical AI safety research roles into more explicit AI governance and policy? So far, I'm only really aware of:

  • Policy fellowships that might take technical researchers without policy experience, like the Horizon, STPI, PMF, or STPF fellowships
  • Policy positions in top AI labs, which are themselves important for AI governance and could transition well into other AI governance careers
  • Policy research positions that require significant technical knowledge at organizations like GovAI
  • Some vague notion of "being a trusted scientific advisor to key decision makers in DC or London," though I'm not sure what this practically looks like or how to get there.

Any other ideas? Or for those who have been in a similar situation, how have you thought about this?

This post is part of the September 2023 Career Conversations Week. You can see other Career Conversations Week posts here.

Answers

Here are some different options for people with technical backgrounds to pivot into policy careers:

  • Policy fellowships (see our database here) are a great entryway into DC policy careers for technical people (and for many non-technical people!). Fellowships are especially helpful for mid-career technical folks who would otherwise struggle to make the pivot because they're both (1) too senior for the normal entryways (e.g. policy internships [such as in Congress], a policy-oriented (under)graduate degree, junior policy jobs), and (2) too light on policy experience to qualify for mid-level or senior policy jobs. There are policy fellowships for people at all experience levels.
  • Check out our advice on policy internships (the linked post targets undergraduates, but the internship advice applies more widely), which are the most common way for junior people to test their fit for policy and build policy-relevant career capital, whether they have technical or non-technical backgrounds.
    • You might also conduct a policy-relevant research project during a summer/winter research fellowship offered by organizations like GovAI, ERA, CHERI, SERI, and XLab.
  • If you're currently enrolled in a technical (under)graduate degree, try to gain some policy-relevant knowledge, networks, and skills by taking policy classes if you can, especially in science and technology policy, or by choosing a policy-relevant thesis project.
  • Participate in policy-relevant online programs, like the AI Safety Fundamentals Course’s Governance Track, speaker series like this, or these AI policy and biosecurity policy online courses.
  • Consider doing a policy-relevant graduate degree, particularly a policy master’s or law degree. You can often get into these degree programs even if you have only done technical work in the past (ideally, you should be able to tell a narrative about how your interest in policy work is connected to your prior technical studies and/or work experience). Even if you already have a technical graduate degree, it might make sense to do another (short/part-time) policy degree if you’re serious about pivoting into policy but are otherwise struggling to make the switch.

One brief comment on mindset: Policy jobs typically don’t require people to have a particular subject background, though there are exceptions. There are plenty of people with STEM degrees and technical work experience who have pivoted into policy roles, often focused on science and technology (S&T) policy areas, where they can leverage their technical expertise for added credibility and impact. There are certain policy roles and institutions that prefer people with technical backgrounds, such as many roles in the White House OSTP, NSF, DOE, NIH, etc. So, you shouldn't feel like it's impossible to pivot from technical to policy work, and there are resources to help you with this pivot. We particularly recommend speaking with an 80,000 Hours career adviser about this. 

This is sublime, thank you!

(Mostly I don't know.)

On policy fellowships: also RAND TASP.

I think many reasonably important policy roles, such as working for key congressional committees or federal agencies, don't require policy experience.

Reposting an anonymous addition from someone who works in policy:

Your list of options mostly matches how I think about this. I would add:

  • Based on several anecdotal examples, the main path I'm aware of for becoming a trusted technical advisor is to start with a relevant job (a policy fellowship, a job doing technical research that informs policy, or a non-policy technical job) and gradually earn a reputation for being a helpful expert. To earn that reputation, you can: become one of the people who knows most about some niche but important area (anecdotally, "just" a few years of learning can be sufficient for someone to become a top expert in areas such as compute governance or high-skill immigration policy, since these are areas where no one has decades of experience, though there are also generalists who serve as trusted technical advisors); take opportunities that come your way to advise policymakers (such opportunities can be common once you have your first policy job, or if you can draw on a strong network while doing primarily non-policy technical work); and generally be nice and respect confidentiality. You don't need to be a US citizen to do this in the US context.
  • In addition to GovAI, other orgs where people can do technical research for AI policy include:
    • RAND and Epoch AI
    • Academia (e.g. I think the AI policy paper “What does it take to catch a Chinchilla?” was written as part of the author’s PhD work)
    • AI labs
Comments

Hmm, I'd be very keen to see what an answer to this might look like. I know some people I work with are interested in making a similar kind of switch.
