
TL;DR – Individuals with technical backgrounds are well-positioned to advance AI safety policy in numerous ways. Opportunities include both A) technical research directions, such as evals, compute governance mechanisms, infosec, and forecasting, and B) career paths for advancing AI policy, such as policymaking and communication efforts. Contrary to what many people assume, most of these roles do not require extensive political backgrounds or unusually good social skills, and many technical people who aren’t currently considering these paths would be a good fit. This post aims to encourage such people to consider these options.

Introduction

Over the past couple of years, enthusiasm within the AI safety community for addressing AI risk through policy or governance solutions has grown substantially. Yet despite this growing excitement, many people with technical backgrounds may underestimate their personal fit for contributing to this area. Moreover, there are numerous sub-areas within the AI governance ecosystem where folks with technical backgrounds are in high demand.

This post aims to encourage technically minded individuals who are interested in addressing AI risk to consider working on AI governance.

If you have a technical background and have dismissed the idea of engaging in governance work because you see yourself as more STEM-y or not a "politics person," it's worth considering whether you’ve dismissed these paths too hastily. Breaking into many governance paths does not require deep involvement in politics or extensive preexisting knowledge of how politics works.

The current state of AI policy – proposals often lack sufficient detail for implementation, and policymakers often have insufficient technical understanding

Looking at actual proposals that may have had relevance for catastrophic risks from AI,[1] there are many areas where ideas need to be fleshed out more or where the proposal passes the buck to some other group to figure out specifics. For instance, Biden’s Executive Order called on various agencies to “establish guidelines and best practices... for developing and deploying safe, secure, and trustworthy AI systems, including [by] launching an initiative to create guidance and benchmarks for evaluating and auditing AI capabilities, with a focus on capabilities through which AI could cause harm, such as in the areas of cybersecurity and biosecurity” (Section 4.1(a)(i)).[2] It still isn’t clear what these guidelines and best practices should entail, not just at the implementation level but also with respect to higher-level questions.

Other proposals similarly don’t answer these questions. There’s much talk about adding “guardrails” and performing “evaluations,” especially with regard to CBRN threats, but less clarity on what these would actually involve or the processes for deciding what’s “good enough.” SB1047, meanwhile, would have instituted a liability regime, effectively leaving it to companies themselves to develop specific safety policies.[3]

On top of vague proposals, there are many areas of AI policy where key decision-makers possess insufficient understanding. Worryingly, this dearth of understanding includes many policymakers who have jurisdiction over AI (e.g., due to serving on relevant committees).

As some examples, I’m aware of cases in which influential policymakers on AI have demonstrated a lack of understanding about each of the following points that are relevant for governance:

  • what “open sourcing” refers to[4]

  • the fact that it’s easy to fine-tune LLMs to remove guardrails

  • the reality that many AI companies aim to create AI agents (and the risks these agents would introduce)
  • the overall rate of AI progress
  • etcetera.

To add some color to the above list, I’ve heard one anecdote of an influential policymaker who until recently was unaware that fine-tuning a model can be done with a small fraction of the compute needed to train the model to begin with. Apparently, learning this fact shifted this policymaker toward favoring restrictions on the open sourcing of certain AI models, illustrating how gaps in technical understanding can shape policy decisions.

People with technical backgrounds can help

If you have a technical background, you might be a good fit for helping improve the current situation, whether by figuring out the technical specifics needed to make proposals workable or by educating decision-makers about technical issues.

Recently, there has been growing recognition in some corners of the AI safety community that techies can help with governance efforts, but I suspect many people who are interested in working to reduce AI risk are unaware of the degree to which this is the case. It may now be common knowledge that techies can advance governance efforts via work on evals – crafting relevant metrics, benchmarks, testing environments, and other testing protocols – but other areas of technical AI governance haven’t received the same attention. Further, there are many career paths beyond technical AI governance research that allow techies to advance AI safety policy, and my sense is that many techies simply aren’t tracking these.

Things you can work on

There are various lines of work through which technical people can contribute to AI safety policy, which I break down into (A) technical research directions and (B) career paths.

The category of technical research directions corresponds roughly to the concept of technical AI governance (TAIG), as described in a recent paper titled “Open Problems in Technical AI Governance,” though note my concept of “technical research directions” here is not identical to their concept of TAIG.[5] 

For career paths, I discuss paths that may be helpful for governance efforts. To avoid duplicating the first category, I exclude career paths that would mainly serve as venues for pursuing those technical research directions without offering other major benefits to advancing AI safety policy (e.g., academia). Career paths that allow both for pursuing the technical research directions and for providing additional benefits beyond the direct research (e.g., think tank jobs) are listed in the second category as well.

Note that you don’t have to read the clusters below in order, nor do you need to read all of them; feel free to skip around, reading them in whatever order you want.

With those clarifications out of the way, the categories I cover in this piece are, organized by cluster:

  • Technical research directions:
    • Technical Infrastructure for AI Governance:
      • Evals
      • Compute governance mechanisms
      • Information security
      • Technical mechanisms for mitigating policy downsides
    • Strategic AI landscape analysis:
      • Forecasting and other similar analysis
      • Macrostrategy/worldview investigation research
  • Career paths:
    • USG policymaking pipeline:
      • Executive branch jobs
      • Certain congressional staffer positions
      • Traditional think tanks
      • AI-risk focused governance and policy orgs
    • Non-USG policymaking pathways:
      • Government policies in other countries
      • International policymaking
      • Corporate policymaking within AI companies
    • Communication efforts:
      • Tech(-adjacent) journalism
      • Other media engagement
      • More direct stakeholder engagement
  • Other:
    • Support for any of the above (including earning to give)
    • Other things I haven’t considered

 

Technical research directions:

Technical Infrastructure for AI Governance

This category encompasses the development and implementation of technical mechanisms that enable specific governance policies to become workable or more effective. It includes designing methods to evaluate AI systems (enabling more rigorous assessment practices), developing mechanisms to monitor compute or ensure compliance with compute-related policies, improving information security for powerful AI systems, and creating technical solutions that reduce the drawbacks of implementing safety policies. In principle, much of technical alignment research could be conceived of in a similar manner (since your theory of change for an alignment plan might involve policies that mandate your alignment technique once it’s sufficiently mature), but here I focus on technical areas that tend to be neglected by techies who aren’t paying much attention to governance or policy.

Note there are also other areas within this category that I don’t cover. For readers who want to dig deeper than what’s listed here, I recommend reading the recent paper Open Problems in Technical AI Governance, referenced above.[6]

 

Evals

What it is:

Evals (short for “evaluations”) are processes or methods for assessing various aspects of AI systems, such as performance, misuse potential, alignment, and so on. Some examples of possible evals that might be particularly relevant to catastrophic risk include evals of a model’s ability to: increase biorisk (e.g., by walking an aspiring bioterrorist through the steps to construct a novel pathogen), be jailbroken (to disregard safeguards in general), engage in “scheming” behaviors, self-exfiltrate, or lead to substantially faster AI progress via conducting AI R&D.
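
To make the shape of this work concrete, here is a minimal sketch (in Python) of what an eval harness can look like. Everything in it is illustrative: the `query_model` function is a stub standing in for a real model API, and the keyword-based refusal grader is far cruder than what evals organizations use in practice.

```python
# Toy misuse-refusal eval: a stubbed "model" plus a naive scorer.
# Real evals use live model APIs, expert-written prompts, and far more
# careful grading; this only illustrates the basic loop.

from dataclasses import dataclass

@dataclass
class EvalCase:
    prompt: str
    should_refuse: bool  # True if a safe model ought to decline

def query_model(prompt: str) -> str:
    """Stand-in for a real model API call (hypothetical)."""
    return "I can't help with that request."

def is_refusal(response: str) -> bool:
    """Naive keyword-based grader; real evals often use human or model graders."""
    refusal_markers = ["can't help", "cannot help", "won't assist"]
    return any(marker in response.lower() for marker in refusal_markers)

def run_eval(cases: list[EvalCase]) -> float:
    """Return the fraction of cases where the model behaved as desired."""
    correct = 0
    for case in cases:
        refused = is_refusal(query_model(case.prompt))
        if refused == case.should_refuse:
            correct += 1
    return correct / len(cases)

cases = [
    EvalCase("Explain how large language models are trained.", should_refuse=False),
    EvalCase("[placeholder for a harmful request]", should_refuse=True),
]
print(f"Score: {run_eval(cases):.2f}")
```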

Why it may help:

If we want policies that target AI systems with certain dangerous properties, we need methods of assessing whether specific systems in question have those dangerous properties. Furthermore, evals for capabilities may play a role in telling us how stringent we need other guardrails to be, with more capable systems requiring stronger guardrails.

Who would be a good fit:

There are various roles involved in crafting and implementing evals (e.g., Research Engineers, Research Scientists, Prompt Engineers, etc.), and different evals often require somewhat different skills to conduct. With that said, the following traits would generally be helpful for working on evals (though most jobs wouldn’t require all of them):

  • Programming experience
  • ML knowledge and/or experience (such as with ML infrastructure or ML research)
  • LLM knowledge and/or experience (including with pretraining, fine-tuning, prompting, scaffolding, jailbreaking, etc)
  • Good experimental design/empirical research chops (e.g., from social science or CS)
  • For certain roles, an ability to turn hard-to-pin-down concepts into clear and meaningful metrics (e.g., metrics for AI R&D abilities)
  • Domain-specific experience (e.g., biosecurity, cybersecurity, etc)

Where you can work on it:

There are several different types of places where you can work on evals:

  • Independent, 3rd party evals orgs, such as METR or Apollo. These 3rd party evals orgs develop and run evals on cutting-edge models.
  • Internally at major AI companies. Most cutting-edge AI companies also run their own evals internally. Note that there is at least a potential for internal evals to be used primarily as safety-washing and thus wind up net negative (of course, in principle that could also be the case for external evals, but the incentives are likely worse for those doing internal evals).
  • At relevant government agencies, such as an AI Safety Institute in a relevant country. Depending on how AI regulation develops, it’s possible evals from governments will carry the force of law, such as by being incorporated into licensing regimes or being a prerequisite for government contracts.
  • Academia and think tanks. Researchers at these organizations can develop and propose new evals or procedures for crafting evals. They can also analyze existing evals, identify limitations, and suggest improvements. Notably, a landmark paper put out by DeepMind about evals included multiple authors with academic or think tank affiliations.

 

Compute governance mechanisms

What it is:

Compute governance mechanisms are technical and policy approaches that leverage the properties of compute (e.g., excludability, quantifiability, the detectability of large data centers, and concentration in the supply chain) to promote AI governance, such as by enhancing government visibility into AI, influencing which kinds of AI systems are built and by which actors, and ensuring compliance with relevant regulations or standards (see more in this paper). These mechanisms can include hardware-level controls, monitoring of stocks and flows of compute, and regulatory frameworks that govern access to and use of high-performance computing resources.

Examples:

  • On-chip monitoring systems that track compute usage
  • Secure enclaves or trusted execution environments for running sensitive AI workloads
  • Fair and effective principles and standardized protocols for reporting compute usage to regulatory bodies (a toy sketch of a threshold-based check appears after this list)
  • Technical measures to enforce compute-based policies (e.g., on-chip mechanisms for enforcing compliance with export controls)
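
As a toy illustration of how a compute-based trigger might work, the sketch below estimates a training run’s total compute with the common 6·N·D approximation (N parameters, D training tokens) and compares it against a reporting threshold. The model sizes and token counts are made-up illustrative values; the 10^26-operation figure echoes the reporting threshold in the 2023 US Executive Order, but real mechanisms would need to handle measurement, verification, and edge cases that this sketch ignores.

```python
# Toy compute-threshold check using the common "training FLOP ≈ 6 * N * D"
# approximation (N = parameters, D = training tokens). All run numbers below
# are illustrative assumptions, not figures for any real model.

REPORTING_THRESHOLD_FLOP = 1e26  # e.g., the 2023 US Executive Order's reporting threshold

def training_flop_estimate(n_params: float, n_tokens: float) -> float:
    """Rough total-training-compute estimate for a dense transformer."""
    return 6.0 * n_params * n_tokens

hypothetical_runs = {
    "run_a": {"params": 7e9, "tokens": 2e12},    # smaller run (illustrative)
    "run_b": {"params": 1e12, "tokens": 2e13},   # very large run (illustrative)
}

for name, run in hypothetical_runs.items():
    flop = training_flop_estimate(run["params"], run["tokens"])
    flagged = flop > REPORTING_THRESHOLD_FLOP
    print(f"{name}: ~{flop:.2e} FLOP -> report: {flagged}")
```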

Why it may help:

Effective compute governance can play a crucial role in AI safety and risk reduction for several reasons:

  • Enhancing transparency: Robust tracking mechanisms can provide better visibility into who is developing advanced AI systems and at what scale, enabling more informed policymaking and risk assessment.
  • Enforcing safety practices: Compute governance can be used to ensure that only AI development projects adhering to certain safety standards or evaluation processes are granted access to certain levels of computational resources.
  • Preventing proliferation: These mechanisms can help control the spread of advanced AI capabilities to hostile or reckless actors by restricting access to the necessary compute.
  • Enabling international coordination: With standardized compute tracking and control systems, international agreements on AI development could be more effectively implemented and verified.

Who would be a good fit:

Individuals with strong technical backgrounds, particularly in hardware engineering and related fields, are well-suited for work on compute governance mechanisms. While some roles may benefit from policy understanding, many crucial contributions can be made purely from a technical perspective. Key backgrounds and skills that would be valuable include:

  • Computer architecture and hardware design
  • Electrical engineering, especially related to chip design
  • Experience with secure hardware implementations (e.g., secure enclaves, trusted execution environments)
  • Distributed systems and cloud computing
  • Cryptography and security engineering
  • High-performance computing

Some roles, particularly those involving the design of overall governance frameworks or interfacing with policymakers, are likely to also benefit from additional policy understanding or experience with policy analysis.

Where you can work on it:

Several types of organizations are involved in developing compute governance mechanisms:

  • Think tanks: Organizations like RAND and CNAS have produced work in this area, and more think tanks may shift in this direction. Organizations specifically focused on AI governance, such as GovAI, have also produced work on compute governance mechanisms.
  • Tech companies: Major AI companies, cloud providers, and hardware production companies may have teams working on compute governance, either to comply with regulations or to adhere to voluntary corporate policies.
  • Government agencies: Entities like the National Institute of Standards and Technology (NIST) or the Bureau of Industry and Security (BIS) in the US, or similar standards bodies in other countries, may develop regulatory frameworks, technical guidelines, and compliance standards for compute governance.
  • Research institutions: Universities and independent research labs may have projects exploring particularly technical aspects of compute governance, such as in the computer science or electrical engineering departments.

 

Information security

What it is:

Information security (a.k.a. infosec) involves developing and implementing methods for ensuring sensitive information stays secure. Infosec most obviously includes cybersecurity, but it also includes physical security and personnel security. In the context of advanced AI, infosec is primarily concerned with preventing the unauthorized exfiltration of cutting-edge AI systems or of the key insights needed to create them. As AI capabilities progress, some infosec approaches may need to adapt and leverage advanced AI models themselves to enhance security measures.
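
As one narrow, illustrative example of the kind of control involved, the toy sketch below flags outbound transfers from a hypothetical weights server that are large enough to plausibly contain model weights. Real infosec programs layer many controls (access management, hardware security, personnel vetting, monitoring), and the log entries and thresholds here are entirely made up.

```python
# Toy egress-monitoring check: flag external transfers from a weights server
# that are large enough to plausibly contain a meaningful chunk of model
# weights. A sketch only; real monitoring is far more sophisticated.

WEIGHTS_SIZE_GB = 140   # illustrative size of a large model checkpoint
ALERT_FRACTION = 0.01   # alert on external transfers > 1% of checkpoint size

transfer_log = [  # made-up log entries for illustration
    {"user": "alice", "dest": "internal-cluster", "gigabytes": 140.0},
    {"user": "bob",   "dest": "external-ip",      "gigabytes": 2.5},
    {"user": "carol", "dest": "external-ip",      "gigabytes": 0.001},
]

def suspicious(entry: dict) -> bool:
    """External transfer big enough to plausibly contain weight shards."""
    return entry["dest"] == "external-ip" and entry["gigabytes"] > ALERT_FRACTION * WEIGHTS_SIZE_GB

for entry in transfer_log:
    if suspicious(entry):
        print(f"ALERT: {entry['user']} sent {entry['gigabytes']} GB externally")
```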

Why it may help:

Some AI policy proposals, such as software export controls or disallowing the distribution of AI systems in certain other circumstances (e.g., before specific evals are passed), would require good infosec in order to be effective. And more generally, without good infosec, we’ll likely see more proliferation of cutting-edge AI systems/key insights to reckless or hostile actors who may misuse these systems and exacerbate race dynamics. Further, strong infosec may help with preventing AI self-exfiltration.

Who would be a good fit:

For cybersecurity, in addition to good software engineering skills, having a security mindset is a major asset. For other parts of information security (physical security and personnel security), software engineering would often not be relevant, though I’d still expect having a security mindset would be very helpful.

Where you can work on it:

Most obviously, you can work on infosec within major AI companies. As governments become more involved in AI (both testing systems that private entities produce and possibly making their own), there may also be relevant jobs within governments; also, infosec expertise in governments may be helpful in order for governments to craft sensible policies related to infosec. There may further be some valuable infosec research that could be pursued outside of these places, such as in academia or in think tanks such as IAPS or RAND’s Meselson Center.

 

Technical mechanisms for mitigating policy downsides

What it is and why it may help:

Various safety policies have downsides to implementation, and technical fixes that reduce the “cost” of implementation may make these policies more effective and politically tractable.[7] For instance, some governance policies would lead to more restricted distribution of model weights (either intentionally or as a side effect), and restricting model weights would be expected to harm mechanistic interpretability research and similar fields. However, software platforms that offer structured access could enable (some of) this research even if model weights were restricted. Building these sorts of software platforms may therefore be beneficial, both because doing so could give society the political option of enacting policies that restrict model weights, and because it could reduce the downsides of such restrictions if they are likely to happen regardless.
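
To illustrate the structured-access idea, here is a minimal sketch of an interface that lets approved researchers request certain model outputs (such as top logits or intermediate activations) without ever exposing the weights. The model backend, credential check, and query types are all hypothetical stand-ins; a real platform would need authentication, rate limiting, auditing, and an actual model behind it.

```python
# Minimal "structured access" sketch: approved researchers can request certain
# model outputs, but there is no code path that returns the weights themselves.

APPROVED_RESEARCHERS = {"researcher_123"}               # illustrative allow-list
ALLOWED_QUERIES = {"top_logits", "layer_activations"}   # illustrative query types

def fake_model_query(query_type: str, text: str):
    """Stub standing in for a real model backend (placeholder values only)."""
    if query_type == "top_logits":
        return [("the", 0.21), ("a", 0.05)]
    return [[0.1, -0.3, 0.7]]

def structured_access_request(user_id: str, query_type: str, text: str):
    if user_id not in APPROVED_RESEARCHERS:
        raise PermissionError("User is not approved for structured access.")
    if query_type not in ALLOWED_QUERIES:
        raise ValueError(f"Query type '{query_type}' is not exposed by this platform.")
    return fake_model_query(query_type, text)

print(structured_access_request("researcher_123", "top_logits", "The capital of France is"))
```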

As another example, certain strands of privacy-preserving ML research may enable training a model in such a way that the owner of the model can’t see the data, and the owner of the data can’t see the model in training – mature research into this area would also reduce the cost of restricting model proliferation, as it would mean the restriction would still allow for arrangements where one party trains on data they don’t own while the other party has their privacy preserved.

A further example would be technical mechanisms that could better enable audits by reducing the likelihood of sensitive information or commercial secrets leaking during the audit process. These mechanisms could make (mandated) audits more acceptable to some stakeholders, and they would also reduce risks associated with unintentional model proliferation due to leaks originating from audits.

Who would be a good fit:

Technical chops will generally be important in this area, though the specifics would depend on the proposal in question (e.g., some areas would look more like research while others would look more like software engineering). I would additionally expect this area to only be a good fit for people who have a clear understanding of the overall theory of change of how their work reduces the cost of the relevant policy and what the point of the policy itself is; I could imagine there would often be many adjacent areas of work that wouldn’t provide the same benefits, and people without good overall understanding could accidentally slip into working on one of these adjacent areas instead.

Where you can work on it:

This would again depend on the specific mechanism. Some mechanisms could be advanced in academia, while others may be feasible only within major AI companies or other organizations (e.g., platforms for structured access may be harder to work on if you’re not in an organization with access to the models in question).

 

Strategic AI landscape analysis

While the above research directions are critical, as are the policy and communications efforts discussed under career paths below, these areas all rely on a solid foundation of understanding the AI landscape and its potential trajectories. Efforts to better grasp the interplay of technological advancement, economics, and other factors can enhance the effectiveness of governance initiatives. This section explores research areas that aim to improve our understanding and inform more targeted and impactful AI governance efforts.

 

Forecasting and other similar analysis

What it is:

This category encompasses research aimed at either forecasting the future of AI or analyzing current relevant factors. This research helps inform AI governance efforts by providing a clearer picture of both the present state and potential future trajectories of AI development. It includes studying topics such as AI capabilities, scaling laws, geopolitical factors affecting AI progress, and potential scenarios for AI advancement. Note that while much forecasting work focuses specifically on timelines, there are many other areas of forecasting as well (e.g., what trajectory AI will take, whether there will be “warning shots,” what the societal reactions will be in various scenarios, etc).
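
As a tiny worked example of the quantitative flavor of this work: if one assumes, purely for illustration, that frontier training compute doubles every $T_d$ months, then compute at time $t$ relative to a baseline $C_0$ at time $t_0$ is

$$C(t) = C_0 \cdot 2^{(t - t_0)/T_d},$$

so a hypothetical six-month doubling time would imply roughly a $2^{24/6} = 16\times$ increase over two years. Much of the work in this category consists of estimating parameters like $T_d$ from data, debating whether such trends will continue, and tracing out what they imply for capabilities and policy.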

Why it may help:

Understanding the current state and potential future of AI is crucial for crafting effective governance policies. It also provides a foundation for other governance efforts, ensuring that policies and communications are grounded in a solid understanding of the AI landscape, and enabling better prioritization.

Who would be a good fit:

Individuals with strong analytical skills, understanding of AI, a truth-seeking orientation, and the ability to synthesize complex information would excel in this area. Background in computer science, statistics, or related fields is often beneficial. Additionally, an interdisciplinary mindset is valuable, as this work often involves considering the intersection of technology with fields such as economics or geopolitics.

Where you can work on it:

Certain organizations focus heavily on this sort of research (e.g., Epoch) or otherwise focus on it to a large degree (e.g., GovAI). This sort of research can also be pursued in some more traditional think tanks, in academia, or even as an independent researcher.

 

Macrostrategy/worldview investigation research

What it is:

This category encompasses high-level, conceptual research that aims to shape our overall understanding of AI development, its potential impacts, and strategic approaches to governance. It includes developing comprehensive frameworks for thinking about AI progress and its societal implications, exploring scenarios for how AI might develop and integrate into society, and identifying crucial considerations that could reshape AI governance priorities.

Example works:

  • Eric Drexler’s piece on Comprehensive AI Services, which presented a novel framework for thinking about advanced AI systems as collections of specialized services rather than as unified AGI agents
  • Nick Bostrom’s book Superintelligence and Eliezer Yudkowsky’s Intelligence Explosion Microeconomics, which explored many ideas that have become foundational in AI risk
  • Ajeya Cotra’s Bio Anchors Report, which developed a framework using arguments from biology to estimate how long until transformative AI
  • Tom Davidson’s Takeoff Speeds Report, which analyzed how increasingly powerful AI systems performing AI R&D could lead to AI progress speeding up in a positive-feedback loop

Why it may help:

This type of research can fundamentally alter how we approach AI governance by providing new paradigms or challenging existing assumptions. It helps in identifying blind spots in current thinking, exploring neglected scenarios, and developing more robust and comprehensive strategies for addressing AI risk.

Who would be a good fit:

Individuals well-suited for this work typically possess a rare combination of skills and traits, including strong abstract reasoning skills, the ability to think creatively about complex systems, and (due to the general dearth of mentorship for this type of research, as well as the open-ended aspects of the research) an ability to stay self-motivated in uncharted intellectual waters. Understanding of both AI and broader societal dynamics (e.g., economics) is helpful, though more important than formal training in these sorts of areas is probably an ability to think in interdisciplinary terms (e.g., “think like an economist”) even without training. Technical chops are also helpful, as some of this work has a heavy empirical aspect. Further important traits include intellectual curiosity, the ability to challenge established paradigms, and comfort with ambiguity.

Where you can work on it:

For individuals who are a good fit for this type of research, you could plausibly perform it at a bunch of different places. Historically, much of this research has occurred at the Future of Humanity Institute (which no longer exists) and at Open Philanthropy (which may or may not be hiring for work in this area – I have no inside information here). Other opportunities for this sort of work may exist at AI safety orgs, think tanks, or academic institutions, especially if you have a secure position with research latitude, such as a tenured professorship. Alternatively, people interested in this research could perform it as an independent researcher. Ultimately, the rarity of skills required to be a good fit for this sort of work means that for those who are a good fit, opportunities may be created or arranged in various contexts.

 

Career Paths:

While the previous section focused on technical research areas, this section explores specific career paths where you can advance AI safety policy. Some of these roles involve directly shaping the development and implementation of AI policies, while others involve helping build necessary understanding about AI policies or about AI more broadly.

USG policymaking pipeline

If new laws and rules are going to be written, someone is going to have to write them. Currently, these rules are being written by people who, for the most part, don’t have substantial technical background. In many instances, having more people with technical backgrounds would be helpful, in particular to grapple with the technical bits of the rules. For instance, having expertise in hardware could be helpful when trying to set a FLOP threshold for certain regulatory action, and understanding the fact that fine-tuning can occur for a small fraction of the compute required to train a model can be helpful for deciding what to do about open source models.
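
To make the fine-tuning point concrete with a back-of-the-envelope calculation (the token counts are illustrative assumptions, not figures for any particular model): for a fixed model, training compute scales roughly with the number of tokens processed, so pretraining on $D_{\text{pre}} = 2\times10^{12}$ tokens versus fine-tuning on $D_{\text{ft}} = 10^{8}$ tokens gives

$$\frac{C_{\text{ft}}}{C_{\text{pre}}} \approx \frac{D_{\text{ft}}}{D_{\text{pre}}} = \frac{10^{8}}{2\times10^{12}} = 5\times10^{-5},$$

i.e., fine-tuning at roughly 0.005% of the original training compute. This is exactly the kind of quick quantitative sanity check a technically fluent staffer or analyst can supply.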

There’s a whole pipeline here, going from “overarching idea” to “specific implementation” where work needs to be done. I mentioned some policy proposals in the introduction, and further examples of relevant policies could be things like a licensing regime or more expansive liability.

For many policies, there would be clear synergies between this cluster and the one on technical infrastructure for AI governance – policies could involve, for instance, mandating certain evals in certain circumstances. Familiarity with the technical mechanisms in the technical infrastructure section is therefore often helpful for policymaking, and people with technical backgrounds would likely be able to gain familiarity with these mechanisms relatively easily.

There are several different types of organizations where you can work on policy development, and the place where you work will likely affect where in the pipeline you’re operating.

 

Executive branch jobs

Examples and what sorts of work:

Several parts of the executive branch are likely to be taking actions relevant to AI risk, and it’ll be important for those parts to be staffed by people who are technically competent and who understand the risks. Some examples of areas within the executive bureaucracy where this is especially likely to be the case are:

  • The AI Safety Institute (AISI) or other areas of USG tasked with evaluating AI systems: AISI has been involved in pre-deployment testing of frontier AI systems. While it’s unclear whether AISI or another part of government will wind up responsible for this testing as we look to the future, whatever part does will be relevant.
  • The Bureau of Industry and Security (BIS): The BIS is responsible for issues at the intersection of national security and advanced technology. Among other things, they enforce export controls (such as the export controls on advanced semiconductors) and assess the national security implications of emerging technologies.
  • The Office of Science and Technology Policy (OSTP): OSTP advises the President on scientific and technological matters and may be very influential for how AI is governed.
  • Various parts of the natsec and intelligence apparatuses: As AI heats up, and especially if it becomes increasingly securitized, these apparatuses may hold outsized sway over the way the US governs AI.
  • Many other areas: A full list of the areas within the executive branch where technical people can help with AI risk would be too long for this piece. Given the parts of the executive branch that are likely to have jurisdiction related to AI risk and that have historically been influential in related areas, I’ll note that there’s likely to be a fair bit of relevant work at the Department of Commerce and the Department of Energy (in addition to the areas mentioned above). Note also that, because everything is so dynamic and uncertain right now, the most helpful areas of the executive branch to work in may change over time.

How the work fits into the policymaking process:

The executive branch is tasked with making and implementing all sorts of rules, after being delegated the authority to do so from legislation. Given that AI is a technical and fast-moving area, and that, as noted above, there’s substantial uncertainty about how best to regulate it, the executive bureaucracy will likely play a substantial role in crafting and implementing the specifics of relevant rules.

Who would be a good fit:

As a general rule, you don’t have to be into politics to work for the executive branch, but you do have to be willing to put up with a large amount of bureaucracy. People who are good at playing “bureaucratic politics,” for lack of a better term, are likely to do better within these roles. Under the current administration, individuals who are partisan Democrats or who are otherwise anti-Trump may have a harder time getting a job within the executive branch.

For most roles, much of your work would likely not be relevant to AI risk, so it would generally be helpful to be the kind of person who can stay motivated in that sort of situation. The flip side is that the work that does relate to AI risk could be very impactful. Many executive branch roles cluster around DC, but there are also a large number outside of DC, as some agency offices exist in other parts of the country.

Note that executive branch positions often have relatively stringent background checks – most positions require you to be a US citizen, many positions require obtaining a security clearance, and most jobs prohibit hiring anyone who has used illegal drugs in the previous 12 months (including marijuana, which remains federally illegal).

 

Certain congressional staffer positions

Explanation:

Some particular staffer positions are disproportionately influential on AI, and people with technical backgrounds will be more likely to excel in these positions. For instance, you could become a congressional staffer for a Congressperson who sits on a committee that gives them jurisdiction over (some part of) AI, such as the commerce committees, the appropriations committees, or one of the intelligence committees. When deciding which members of Congress to aim to staff for, you should additionally consider factors such as how engaged the Congressperson is on AI (e.g., based on previous statements they’ve made and bills they’ve supported).

Additionally, congressional committees themselves have their own staffers (not counting staffers who serve Congresspersons on the committee), and you could become a committee staffer to a relevant committee.

How the work fits into the policymaking process:

Members of Congress have their time and attention spread thin across many issues, and they rely on their staff to develop expertise in legislative areas, keep the Congressperson informed in the area, draft relevant legislation, and so on. Each of these congressional staffers would typically be responsible for multiple areas of legislation. As someone with a technical background, which is somewhat uncommon among staffers, you’d likely be put on areas that have more to do with technology, including AI.

Committee staffers, meanwhile, spend their time giving policy expertise, drafting legislation, researching, and so on, for areas related to their committee.

Who would be a good fit:

To be a good fit, you would generally need to have at least adequate social and political skills, though you would not need to be particularly charismatic or anything like that (the way elected officials do). You would also need to be okay with operating in an environment where most everyone is a political partisan, and it would be difficult to work as a staffer for a Congressperson who you don’t generally align with on political/partisan terms. For these positions, you also would need to live in DC.[8]

Congressional staffer positions typically don't have the same sort of background checks for things like prior marijuana use as executive branch positions, and Congressional offices tend to have more flexibility to set their own hiring policies. Being a US citizen is still generally required, though.

 

Traditional think tanks

Examples:

Think tanks like RAND, CSET, and CNAS have been focusing attention on AI in recent years. Assuming AI continues to increase in impact and salience, more think tanks will likely follow.

What sort of work:

As a few examples of the sort of work these think tanks produce:

How the work fits into the policymaking process:

Policymakers are often busy and spread thin across different issues, especially in Congress, which has far fewer staff than the executive branch. Many policymakers will therefore rely heavily on think tanks to develop policies. Often, policymakers will adopt ideas put out by think tanks with little modification.

Who would be a good fit:

Perhaps surprisingly to many people with tech backgrounds, you don’t actually need to have experience working in politics to work at a think tank, nor do you need a background in polisci or a related field. You don’t even have to be a US citizen for most positions (though some positions do require this, as is generally the case with positions requiring a security clearance). Most positions would require being in-person in DC, but there are exceptions (either for remote work or for think tanks with offices in other cities).

Insofar as your work touches on technical issues, having a technical background will be a large plus, and many think tanks struggle to find good hires with technical backgrounds. Note that the bar for what constitutes a “good technical background” will generally be lower for these sorts of positions than for doing object-level technical work (e.g., you’ll likely be fine if you have a BS degree in CS with a couple classes in AI, or similar knowledge through other avenues, such as having worked at an ML startup for a couple years).

Many roles that help you get your foot in the door in DC, especially working on tech policy, will not be squarely focused on AI, yet are still really valuable for building career capital as well as for gaining context and connections.

 

AI-risk focused governance and policy orgs

Examples:

There are several governance and policy organizations that focus more on AI risk specifically, such as GovAI, CAIS, and IAPS.

What sort of work:

Much of the work at these sorts of organizations is similar to the kind of work mentioned above at more traditional think tanks, though these AI-risk focused orgs also often have work that exists somewhat earlier in the policymaking pipeline, covering topics that may be less fleshed out. Some of the research at these organizations would involve investigating questions like “Are compute thresholds a good way to do compute governance?” while other questions would be on more concrete issues like “What would be a concrete policy to tackle XYZ challenge, and which part of the government would have the authority to implement this solution?” Many of these orgs also do other work besides participating in the USG policymaking process that would fall in other categories of this piece (e.g., communications efforts or strategic AI landscape analysis).

How the work fits into the policymaking process:

Again, proposals from these organizations can end up being adopted by policymakers. Most of these orgs don’t have the same level of relationships with policymakers that, say, RAND does, but ideas from these orgs can still make their way across the desks of policymakers, sometimes on their own and sometimes after one of the more traditional think tanks picks up an idea and builds on it first.

Who would be a good fit:

There isn’t really a one-size-fits-all background that’s required here, and people can span the spectrum from technical to non-technical work. For some work at these AI-risk focused orgs, more generalist skills will be more valuable, while for other work at these places, the skill set required is likely similar to that for working at more traditional think tanks.

For work that’s more similar to that at traditional think tanks, personal fit could be high for someone to work at both a traditional think tank and an AI-risk focused org, and many people should be applying to jobs at both, though some people will still have higher personal fit at one or the other. AI-risk focused orgs are likely a better fit for people who want to focus more exclusively on catastrophic risk or who can’t/don’t want to move to DC, while traditional think tanks are likely a better fit for people who want to build career capital to later transition into government.

 

Non-USG policymaking pathways

The USG is not the only entity in the world that will craft policies relevant for AI risk. Working to craft or enact policies within other relevant institutions can also help reduce AI risk, and for many of these policies, a technical background is helpful.

 

Government policies in other countries

Explanation:

Countries besides the US are still relevant to AI policy, both because their policies may be directly relevant for reducing AI risk (in particular if their domestic AI industry is one of the best in the world, like the UK, or if they occupy a crucial node in the semiconductor supply chain, like Taiwan or the Netherlands) and because their policies may contribute to global norms on AI policy. For most countries, if you want to influence policy, you can do so within any of: the executive branch, the legislature, or in outside organizations that work closely with the government (like think tanks). Several countries have announced (plans for) the creation of a domestic AI Safety Institute, and working at one of these AISIs may be an impactful way to advance AI safety policy from these countries.

Who would be a good fit:

This will depend a lot on the specifics of the country and role, though people with a more technical background will generally have more of a leg up in roles that are more clearly meant for domain experts (e.g., generally more in executive branch positions than in legislature positions). As a rough approximation, the above section on the USG policymaking pipeline lists traits that would make someone a good fit for various roles in the US context, and roles in other countries will often require similar traits to analogous roles in the US, though this is not always the case. Of course, in most countries, being a citizen, speaking the language, and being familiar with the local culture are all important for jobs that influence policy. Note that in some countries, technical expertise is rare and tech policy jobs are uncompetitive, meaning someone from that country with technical expertise could potentially have a large influence on AI policy if they go that route.

 

International policymaking

Explanation:

International organizations, fora, and networks are likely to play a role in shaping global AI governance. These include bodies like the United Nations (particularly UNESCO and ITU), the EU, the G7, and the Global Partnership on AI (GPAI). It’s also plausible that the nascent network of AI Safety Institutes will wind up playing a large role in coordinating international efforts on AI governance.

Working within or advising these organizations can help establish international norms, standards, and agreements on AI development and use. This work is particularly relevant for addressing global coordination challenges in AI governance and for ensuring that AI safety measures are adopted widely.

Examples of plausible work:

  • Developing international AI guidelines or rules (such as with the EU AI Act)
  • Coordinating multilateral efforts on AI governance (e.g., through G7 or GPAI initiatives)
  • Advising on technical aspects of international AI agreements or treaties

How the work fits into the policymaking process:

While international organizations don't typically have direct regulatory power over individual countries (the EU being an exception), they significantly influence national policies and global norms. Their recommendations and frameworks often serve as blueprints for national AI strategies and regulations. Additionally, these organizations provide platforms for international dialogue and cooperation on AI governance issues.

Who would be a good fit:

People with a combination of technical AI expertise and diplomatic or international relations skills would be particularly well-suited for these roles. Specific traits and skills that would be beneficial include:

  • Technical understanding of AI and understanding of AI risk
  • Familiarity with international relations and diplomatic processes
  • Understanding of geopolitical dynamics related to AI development and deployment
  • Ability to communicate complex technical concepts to non-technical audiences
  • Cross-cultural competence and language skills
  • Experience in policy analysis or development
  • Patience, as international policymaking tends to be a slow process

Many of these positions would require working in locations where international organizations are headquartered, such as Paris, Geneva, or New York. However, there may also be opportunities for remote work or for serving as a technical advisor while based in your home country.

 

Corporate policymaking within AI companies

Explanation:

Major AI companies play a significant role in shaping the trajectory of AI, and their internal policies, guidelines, and other practices can have significant impacts on AI risk. Working within these companies to influence their policies and practices may allow for reducing risks. Note that, similar to running evals internally at major AI companies, there’s a possibility that working on corporate policymaking within these companies could be net negative by enabling the company to safety-wash dangerous behaviors.

Examples of relevant work:

  • Creating and enforcing responsible AI development frameworks
  • Shaping company policies on issues like model deployment and research publication
  • Advising leadership on potential risks and mitigation strategies
  • Collaborating with external stakeholders (e.g., policymakers, academics) on AI policies

How the work fits into the policymaking process:

While not "policymaking" in the traditional governmental sense, corporate policies can have immediate and direct effects on the most advanced AI systems being developed. These policies can also influence industry standards and public policy discussions. Moreover, as governments look to regulate AI, they are likely to consult with or draw inspiration from practices within leading AI companies.

Who would be a good fit:

People best suited for these roles would generally have a blend of strong technical AI expertise, an understanding of policy and business considerations, and a combination of principled behavior and interpersonal savvy. Specific traits and skills that would be helpful include:

  • Deep understanding of AI and familiarity with key concerns within AI risk
  • A track record in AI governance or in policy analysis or development
  • Good judgment about tradeoffs
  • Strong sense of personal conviction and ability to maintain independent judgment in a high-pressure environment where social pressures such as groupthink may be present, while still collaborating productively within teams
  • Good corporate social intelligence, including the ability to navigate complex organizational structures
  • Willingness to engage in potentially challenging discussions about company directions and practices in instances where doing so would be more likely beneficial than detrimental

These positions would typically require working at the headquarters of major AI companies, often located in tech hubs like the Bay Area, though some remote work options may be available.

 

Communication efforts

Communication efforts play a key role in advancing AI governance by bridging the gap between technical experts and policymakers, as well as informing the broader public about AI risks and potential interventions. Effective communication can help shape public opinion, influence decision-makers, and create a more informed discourse around AI safety. The following subsections explore various avenues through which technically skilled individuals can contribute to these communication efforts.

 

Tech(-adjacent) journalism

Examples:

Tech outlets like WIRED often cover AI, as do tech or AI verticals or columns within more traditional outlets such as Vox or the NYT. As AI becomes a more prominent issue, it’s likely we’ll see an increase in journalism roles that cover it.

Who would be a good fit:

Obviously good writing skills are important for journalism, and it’s particularly important to be able to write clearly and quickly. With that said, many techies overestimate how much of a wordsmith you need to be to become a journalist. For tech journalism in particular, while you do need to be able to explain technical concepts simply to a lay audience, you don’t necessarily need exquisite prose. And many media outlets are very starved for technically competent people, so if you are technically knowledgeable and your writing is decent, you may have a shot at having an impactful career as a tech journalist, even if you don’t consider your prose to be amazing. People interested in advancing AI safety by pursuing tech journalism should consider checking out the Tarbell Fellowship.

 

Other media engagement

Explanation:

Beyond traditional journalism, there are various other media platforms where techies can contribute to the discourse on AI governance and safety. In particular, these other platforms offer opportunities for experts to contribute in an impactful way in a one-off or periodic fashion instead of as a full-time job.

Examples:

  • Writing op-eds for major outlets
  • Providing expert quotes or interviews for news articles
  • Appearing on television news segments
  • Participating in podcasts or on radio shows

Who would be a good fit:

Individuals best suited for these roles typically possess a combination of deep technical knowledge, strong communication skills, and legible signals of expertise. Specific traits and abilities that would be beneficial include:

  • Expertise in AI and AI risk
  • Ability to explain complex technical concepts in simple, accessible terms
  • Capacity to distill nuanced ideas into concise, impactful statements
  • Comfort with public speaking and thinking on your feet, and ability to maintain composure under pressure (for live interviews; not necessary for writing op-eds)
  • Having undergone media training, and familiarity with the nuances of journalism[9] (for anything involving interactions with journalists)
  • Familiarity with contemporary discourse around AI, and how your position relates to the public discussion more generally
  • Traditional credentials on AI (such as being a CS professor), or other legible signals of prestige on the topic

 

More direct stakeholder engagement

Explanation:

Certain stakeholders hold particularly large influence on AI policy, and efforts to engage with these key stakeholders in a targeted manner can have outsized influence. For instance, directly briefing policymakers, advising industry leaders, or holding discussions with influential academics can shape important decisions and strategies related to AI governance. This form of engagement allows for more nuanced and in-depth discussions than broader communication efforts.

Examples of relevant work:

  • Providing technical briefings to legislators or their staff on AI capabilities and risks
  • Advising corporate boards or C-suite executives on responsible AI development
  • Participating in closed-door roundtables or workshops with key decision-makers
  • Offering expert testimony at legislative hearings
  • Engaging with standards-setting bodies to shape technical guidelines for AI

Who would be a good fit:

Individuals best suited for direct stakeholder engagement typically combine deep technical expertise with interpersonal and communication skills, and they further often have relevant social or professional networks that give them access to these stakeholders. Key traits and abilities include:

  • Strong understanding of AI and AI risk
  • Ability to communicate complex technical concepts to non-technical audiences
  • Access to a strong network in the relevant area
  • Diplomatic tact and the capacity to navigate sensitive political or corporate environments
  • Credibility within the field, often demonstrated through academic or other legible credentials
  • Ability to tailor messages to different audiences and to adjust based on the context

This sort of direct stakeholder engagement typically works best when tied to an intentional, larger effort, or when done individually by someone who has a strong personal relationship with the stakeholder. By contrast, “random” individual attempts at direct stakeholder engagement, such as simply writing a letter to your representative on your own, are less likely to be impactful.[10]

Note that, as a technical person, you may be able to help a larger effort considerably even if you lack some of the above traits (such as a network within the space), assuming that others in the effort are able to cover these areas. Some efforts may also allow for technical people to support the effort without engaging the stakeholder personally. For instance, creating a tech demo (e.g., of jailbreaking LLMs or of automated hacking) could be a useful demonstrative tool for those engaging key stakeholders. For technical people in this sort of role, diplomatic and communications skills would no longer be particularly important.

 

Other:

The categories we've discussed so far include many ways technical people can help with AI governance, but they don't cover everything. This cluster looks at other ways techies can help.

 

Support for any of the above (including earning to give)

What it is:

This category includes various supporting roles that enable and enhance the effectiveness of the work described in previous sections. These roles might involve project management, research assistance, data analysis, software development, or other specialized skills that contribute to the success of AI governance initiatives. Further, activities such as grantmaking, mentoring, advising, and so on enable more direct work to occur and increase its quality.

Another very important aspect of support for many of the above efforts is financial support (e.g., from people earning to give), as major philanthropists and grantmaking institutions are often poorly positioned to fund some of the above efforts, and people with tech backgrounds can often help a lot by stepping in, given they often have high earning potential.

Why it may help:

Supporting roles can significantly amplify the impact of core AI governance efforts. They help streamline processes, improve output quality, and allow specialists to focus more on their areas of expertise. Activities like grantmaking can direct resources to the most promising projects, while mentoring and advising can help develop new talent and refine strategies. And earning to give for the above efforts may be more helpful than often assumed, because many of the above areas are highly funding constrained.

Who would be a good fit:

This depends a lot on the specific supporting activity, but for many of these roles, strong organizational skills, attention to detail, and the ability to work well in interdisciplinary teams would be valuable. For mentoring and advising, individuals with significant experience in relevant fields and good communication skills are ideal, as is good judgment. For earning to give, high earning potential is a large positive.

Where you can work on it:

Many organizations mentioned in previous sections have openings for these supporting roles. Philanthropic organizations focused on AI safety often need people for grantmaking. Experienced professionals in the field may find opportunities for mentoring or advising through formal programs or informal networks. Additionally, there may be opportunities to provide freelance or contract-based support for various AI governance projects, or to work for an organization specializing in providing support.

 

Other things I haven’t considered

This category is a catch-all for approaches that either don’t fit nicely into any of the above categories or that I’m unaware of. Note that approaches in this category may be more neglected than approaches I am aware of, so (at least in certain circumstances) they may be more impactful. Further, note that both AI and AI governance are dynamic and fast-moving fields; the further from the time of writing (mid-2024 to early 2025) you are reading this piece, the more likely it is that other approaches have opened up.

 

Conclusion

This piece outlines a range of opportunities for technically skilled individuals to contribute to AI governance and policy. From developing crucial technical mechanisms and researching the AI landscape, to engaging in policymaking or communicating complex ideas, there are many ways to apply technical expertise to this field.

AI governance is complex and rapidly evolving, requiring interdisciplinary approaches that blend technical knowledge with policy understanding. As AI capabilities advance, the need for informed and effective governance becomes increasingly urgent. Technical experts are uniquely positioned to bridge the gap between technological realities and policy requirements, helping to craft more robust and effective governance strategies.

It's worth re-emphasizing that many of these roles do not require extensive political backgrounds, involvement in partisan politics, or the sort of charisma or other social skills typically associated with success in politics. Instead, they leverage technical skills and analytical thinking to address complex challenges in AI safety and policy.

For those interested in contributing, your next steps should involve identifying areas where your skills align with governance needs, researching relevant organizations, and potentially upskilling in complementary areas.

Acknowledgement:

I wrote this piece as a contractor for Open Philanthropy. They do not necessarily endorse everything in this piece (though they are excited about technical AI governance, generally speaking). I would like to thank Julian Hazell for supervising this project and providing helpful feedback.

  1. ^

     For instance, Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence; the EU AI Act; SB1047; the Romney, Reed, Moran, and King Framework for Mitigating Extreme AI Risks; etc.

  2. ^

     To be clear, my point isn’t to criticize the executive order for passing the buck in this manner; it’s perfectly reasonable for the President to delegate these specifics to the agencies. My point is just that someone will need to actually figure out the technical specifics at some point, and no one has.

  3. ^

     Again, I’m not criticizing the approach of SB1047; liability has an economics argument behind it, and there’s legal precedent in other areas to help inform what constitutes “reasonable care.” My point is just, again, that people other than those who drafted the bill language would have had to figure out the specifics of safety practices, and no one now has a crystal clear idea of what these practices should be.

  4. ^

     Meanwhile, governments debate what policies should apply to open source AI, and they’ve been debating everything from whether they should try to restrict open sourcing specifically to whether they should exclude open source AI from other restrictions. So it actually matters that those debating these rules understand what the term refers to.

  5. ^

     The paper in question discusses technical directions useful to AI governance writ large (i.e., including safety, but also issues such as fairness, privacy, environmental impacts, etc.), whereas I’m focusing this piece only on directions particularly relevant for mitigating catastrophic risks. Additionally, that paper includes a category of “operationalization” under TAIG, which they describe as translating principles and governance objectives into concrete procedures and standards; in this piece, meanwhile, I place somewhat similar ideas into the category of “career paths” (specifically, related to policymaking) instead of “technical research directions.”

  6. ^

     You can also find an associated living repository of open problems here, which seems to be maintained by the paper’s leading authors, and which is both searchable and states it will be updated over time as the field progresses.

  7. ^

     Some readers may notice this category of interventions has a parallel with technical research into reducing the alignment tax.

  8. ^

     Members of Congress actually do have staff that reside in their home district/state instead of DC, but these district/state staffers work on things like constituent services instead of legislation.

  9. ^

     E.g., what it means for something to be “on the record” vs “off the record”

  10. ^

     Not that I think randomly writing letters to your representative is generally counterproductive to your policy goals; I just don’t think it really moves the needle.

