
In the spirit of Career Conversations Week, throwing out some quick questions that I hope are also relevant for others in a similar position!

I'm an early-career person aiming to have a positive impact on AI safety. For a couple of years, I've been building skills toward a career in technical AI safety research, for example by:

  • Publishing ML safety research projects
  • Doing a Master's degree in machine learning at a top ML school
  • Generally focusing on building technical ML research skills and experience at the expense of other forms of career capital.

However, I'm now much more strongly considering paths to impact that route through AI governance, including AI policy, rather than pure technical alignment research. Since I still feel pretty junior, I think I have room to explore a bit. That said, I'm not junior enough to have a fresh degree in front of me (e.g. to choose to study public policy), and I feel like I have a strong fit for technical ML skills and knowledge (including explaining technical concepts to non-technical audiences) that I want to leverage.

What are some of the best ways for people like me to transition from technical AI safety research roles into more explicit AI governance and policy? So far, I'm only really aware of:

  • Policy fellowships that might take technical researchers without policy experience, like the Horizon, STPI, PMF, or STPF fellowships
  • Policy positions in top AI labs, which are themselves important for AI governance and could transition well into other AI governance careers
  • Policy research positions that require significant technical knowledge at organizations like GovAI
  • Some vague notion of "being a trusted scientific advisor to key decision makers in DC or London," though I'm not sure what this practically looks like or how to get there.

Any other ideas? Or for those who have been in a similar situation, how have you thought about this?

This post is part of the September 2023 Career Conversations Week. You can see other Career Conversations Week posts here.


3 Answers

Here are some different options for people with technical backgrounds to pivot into policy careers:

  • Policy fellowships (see our database here) are a great entryway into DC policy careers for technical people (and for many non-technical people!). Fellowships are especially helpful for mid-career technical folks who would otherwise struggle to make the pivot because they are (1) too senior for the normal entryways (e.g. policy internships [such as in Congress], a policy-oriented (under)graduate degree, or junior policy jobs) and (2) lack the policy experience needed to qualify for mid-level or senior policy jobs. There are policy fellowships for people at all experience levels.
  • Check out our advice on policy internships (the linked post targets undergraduates, but the internship advice applies more widely), which are the most common way for junior people to test their fit for policy and build policy-relevant career capital, whether they have technical or non-technical backgrounds.
    • You might also conduct a policy-relevant research project during a summer/winter research fellowship offered by organizations like GovAI, ERA, CHERI, SERI, and XLab.
  • If you’re currently enrolled in a technical (under)graduate degree, try to gain policy-relevant knowledge, networks, and skills by taking policy classes if you can (especially in science and technology policy) or by choosing a policy-relevant thesis project.
  • Participate in policy-relevant online programs, like the AI Safety Fundamentals Course’s Governance Track, speaker series like this, or these AI policy and biosecurity policy online courses.
  • Consider doing a policy-relevant graduate degree, particularly a policy master’s or law degree. You can often get into these degree programs even if you have only done technical work in the past (ideally, you should be able to tell a narrative about how your interest in policy work is connected to your prior technical studies and/or work experience). Even if you already have a technical graduate degree, it might make sense to do another (short/part-time) policy degree if you’re serious about pivoting into policy but are otherwise struggling to make the switch.

One brief comment on mindset: Policy jobs typically don’t require a particular subject-matter background, though there are exceptions. Plenty of people with STEM degrees and technical work experience have pivoted into policy roles, often focused on science and technology (S&T) policy areas, where they can leverage their technical expertise for added credibility and impact. Certain policy roles and institutions even prefer people with technical backgrounds, such as many roles at the White House OSTP, NSF, DOE, and NIH. So you shouldn't feel like it's impossible to pivot from technical to policy work, and there are resources to help you make the switch. We particularly recommend speaking with an 80,000 Hours career adviser about this.

This is sublime, thank you!

(Mostly I don't know.)

On policy fellowships: also RAND TASP.

I think many reasonably important policy roles don't require prior policy experience, e.g. working for key congressional committees or federal agencies.

Reposting an anonymous addition from someone who works in policy:

Your list of options mostly matches how I think about this. I would add:

  • Based on several anecdotal examples, the main path I’m aware of for becoming a trusted technical advisor is to start with a relevant job (like a policy fellowship, a job doing technical research that informs policy, or a non-policy technical job) and gradually earn a reputation for being a helpful expert. To earn that reputation, you can: become one of the people who knows most about some niche but important area (anecdotally, “just” a few years of learning can be sufficient to become a top expert in areas such as compute governance or high-skill immigration policy, since these are areas where no one has decades of experience; there are also generalists who serve as trusted technical advisors); take opportunities that come your way to advise policymakers (such opportunities can be common once you have your first policy job, or if you can draw on a strong network while doing primarily non-policy technical work); and generally be nice and respect confidentiality. You don’t need to be a US citizen to do this in the US context.
  • In addition to GovAI, other orgs where people can do technical research for AI policy include:
    • RAND and Epoch AI
    • Academia (e.g. I think the AI policy paper “What does it take to catch a Chinchilla?” was written as part of the author’s PhD work)
    • AI labs
1 Comment

Hmm, I’d be very keen to see what an answer to this might look like. I know some people I work with are interested in making a similar kind of switch.
