Summary
AI safety is constrained on talent in many ways, but the reasons behind the constraints vary between types of talent. This post is based on all the posts and documents I could find from the past ~3 years related to hiring needs and talent pipelines, which I have listed in this document.
Technical research talent - we have strong talent pipelines delivering young researchers to the field, but we are constrained on more senior researchers who could mentor the juniors, e.g., as research managers. We might also need more talent to work on AI security.
Policy and governance talent - there is some infrastructure bringing talent to work on policy and governance, but it’s not as robust as for technical research. The information about what is needed in this sub-field is quite scarce, but some data points to the need for talent that combines technical and policy skills, and for experienced political operators.
Generalist talent - finding candidates for operational and fieldbuilding roles is challenging across the field. In particular, candidates with deep AI safety context and senior operators are scarce. There are few talent development programs bringing skilled generalists to the field.
Grantmaking talent - just a few dozen grantmakers are working on AI safety worldwide, and their capacity is currently not sufficient for the needs of the field. It is difficult to hire for grantmaking positions, and they are often filled with quite junior talent, bringing little professional experience. It’s not clear whether this is an optimal solution.
Leadership and founding talent - while anecdotal knowledge points to difficulties with finding leaders and founders, there is no systematic data to confirm or refute this. Given the scarcity of senior talent, we probably do need more leaders.
There are many gaps in published materials, leaving significant uncertainties about the current shape of the talent landscape. In many cases, the only available sources are outdated or unreliable.
Special thanks to Anna Karpierz, Enisa Ismaili, Gaurav Yadav, Joseph de Wolk, Julian Hazell, Neav Topaz, Ryan Kidd, Sam Smith, Trevor Levin, and Uladzislau Linnik for feedback on this post.
Introduction
In late April 2026, I collected all the posts and documents I could find about the talent landscape, hiring constraints, and talent pipelines in AI safety. Based on them, I summed up the hiring situation in the main categories of roles. This post is the outcome. I intend it to serve as a point of reference for people considering an AI safety career, and for organisations and university groups planning their strategy.
Not all of the posts I found are included in the final version: I skipped some if their content was already covered in other resources, if the information seemed unreliable or not relevant to the topic, or if it was likely outdated.
Some information was sourced in other ways than through reading. In particular:
- I recently conducted a series of five interviews with senior operations professionals (the results of which have not been published). I refer to some opinions from those interviews in this post.
- I talked to several people working in various positions in AI safety while writing the post to get their perspectives.
I did my best to make each section of the post comprehensive, but the extent of the available information varies significantly between categories of roles. As a result, some sections say more about talent pipelines, others more about hiring constraints or needs for specific skillsets, and all of them leave many open questions.
Technical positions
Talent pipelines in AI safety seem to be strongly oriented towards producing research talent, and there is a wide range of research fellowships available. These fellowships are extremely selective: many of them have acceptance rates below 3% (my own not-yet-published research; coming soon), which makes them more competitive than Ivy League schools. Despite that, 2000-2500 fellows are expected to graduate from these programs in 2026 alone. These are remarkably high numbers for such a young and small[1] field, even if research skills are very much in demand within it. Interestingly enough, many fellowships tend to accept fewer participants than they have capacity for.
While the high selectivity is partially by design, it poses a genuine risk of losing many promising people who could contribute to making AI go well. Recruitment processes in AI safety leave a lot of room for improvement, and even the best candidate assessment methods are only moderately correlated with job performance. As a result, many of the most promising candidates might be lost for fairly arbitrary reasons. This seems even more likely for fellowships that receive thousands of applications per cohort (there are a few such fellowships in the ecosystem), resulting in a huge workload for the programs’ teams[2]. This means that the people who review applications have very limited time to spend on each, which might lower the quality of selection.
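To make the point about arbitrary losses concrete, here is a toy simulation of my own (an illustrative sketch, not based on any cited data): it assumes an assessment score that correlates only moderately (~0.3) with true fit and a 3% acceptance rate, and estimates what share of the genuinely strongest 3% of applicants actually gets accepted.

```python
import numpy as np

rng = np.random.default_rng(0)
n_applicants, accept_rate, corr = 100_000, 0.03, 0.3  # assumed, illustrative values

# True fit, and an assessment score that correlates ~0.3 with it.
true_fit = rng.standard_normal(n_applicants)
score = corr * true_fit + np.sqrt(1 - corr**2) * rng.standard_normal(n_applicants)

n_accepted = int(n_applicants * accept_rate)
accepted = set(np.argsort(score)[-n_accepted:])       # top 3% by assessment score
truly_best = set(np.argsort(true_fit)[-n_accepted:])  # top 3% by true fit

overlap = len(accepted & truly_best) / n_accepted
print(f"Share of the truly best 3% that gets accepted: {overlap:.0%}")
```

Under these assumptions, only on the order of 10-15% of the genuinely strongest applicants make the cut. The exact number is not the point; it simply illustrates how much a noisy, highly selective process can depend on chance.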
Additionally, even if two programs seem to be quite similar to each other (e.g., Astra and MATS), they might have different admission criteria, and a profile that is a great match for one might not be a match for the other. While I am aware that in some cases such candidates can get a recommendation for another program after they are rejected from the one they applied for, there is probably a lot more that could be done to retain them. Some loose, low-cost ideas:
- the rejection emails could include a short list of similar programs that currently are or soon will be accepting applications,
- for candidates who met the bar but were still not accepted (e.g., due to the limited capacity of the program or of its specific mentors): let the rejected person know that this was the case, and highlight that they are likely a promising candidate for the next edition of this program and/or for a similar one,
- the candidates who almost made it to one of the programs could be invited to apply directly to the second round of the recruitment process for other programs (e.g., move straight to an interview, skipping the regular application process),
- organisations having a more regular exchange of applications or using a common database of very promising candidates who were nevertheless rejected (note that on many occasions, the application process already lets candidates tick a box agreeing to their data being shared with other potential employers; we could likely use that more than we do).
Even though fellowships seem to be the default way for researchers to enter AI safety, we have little information about the proportion of participants of technical research fellowships who go on to build their careers in the field. MATS reports that 80% of their alumni who participated in the program before 2025 went on to work on AI safety, and 10% founded their own AI safety organisations; their report from 2024 points to a significant counterfactual impact produced by the program. But MATS is quite prestigious and it might be more successful at placing their participants in jobs than most other fellowships. This post by Chris Clay sums up information about alumni of several technical and governance programs, and it also suggests a number around 80%; however, I would be quite cautious about interpreting that number, and the author himself points to several reasons why it might not be very reliable. I think it’s quite unlikely that the actual proportion of fellows from all late-stage fellowships who stay in AI safety is significantly higher than 80%, and I see quite a few reasons why it might be lower than that, in particular for short and/or remote programs. The admission criteria also matter: if a fellowship accepts many participants who are not strongly mission-aligned, they might be less likely to stay in AI safety afterwards (but there is also more space for counterfactual impact, if they decide to stick to the field).
While the fellowships deliver talent in huge numbers and many of the participants get hired, they also come with a specific difficulty: many of the fellows are quite junior, and as they enter the field, they need mentorship support from more senior researchers, who are quite difficult to find. This topic has been explored quite deeply in this fantastic post by John Teichman (MATS). Based on a series of interviews with hiring managers, John concluded that the scarcity of experienced researchers forces organisations to be hyper-selective in hiring, and specifically to select for the ability to work independently and provide meaningful contributions without much supervision. At the same time, recent developments in AI have been shifting talent needs towards more seniority, which contributes to the current constraints and suggests that the need for junior talent will likely keep decreasing. All these factors point towards a higher need for senior research talent who could conduct research independently and mentor more junior colleagues. This was also confirmed in research conducted by BlueDot.
Among these senior research-related positions, research managers seem to be particularly difficult to find, even though becoming one can help generate a lot of impact via the multiplier effect. Importantly, you don’t need to be a great researcher to be a great research manager, which means that some people who plan to work on research directly could actually produce more impact through managing other researchers. These people could also consider a wide range of other positions where their skills would be more useful.
As a separate, specific category of technical work, I think it’s worth mentioning AI security. In this survey from 2023, information security experts were often mentioned as a particularly needed professional group. This recent analysis showed that there is still significant demand for information security skills across the whole field of AI safety; most notably, nearly half of the AI lab positions posted on the 80,000 Hours job board required this skillset. This is not much of a surprise, given that the state of security at frontier labs leaves much to be desired. But some information security expertise is also needed outside of labs, e.g., in regulatory agencies and other governance-relevant institutions. And yet, there are very few programs that support talent development for AI security. Starting more such programs would likely be quite an impactful project, but the impact depends on how the cybersecurity work landscape will evolve in the near future, given Mythos’s capabilities (I would really appreciate someone making a good forecast on that).
Overall, my current impression about the research talent pipelines is that additional talent development programs would still be pretty impactful, as long as they target more senior talent pools or produce talent who could fill remaining hiring gaps.
Important caveat: in many posts related to hiring needs, what “research” means is left unspecified. While it typically seems to mean technical research, and that’s usually what I assumed, in some cases it might also include, e.g., legal or policy research. Please bear that in mind when interpreting the numbers.
Policy and governance
Policy and governance is a broad field encompassing quite diverse work, including but not limited to conducting legal or policy-relevant technical research, engaging in political advocacy and lobbying, working on compliance or writing internal policies at frontier labs, and supporting cooperation between stakeholders. Since it was not obvious to me what skills and characteristics might be required in governance-related roles, here’s a quick list based on five posts related to the topic:
Deep understanding of the relevant political system and its institutions: knowing how the government works, what agencies exist and what they do, familiarity with regulatory mechanisms, ability to navigate bureaucratic processes, and overall just knowing how the sausage is made.
Social skills: the ability to build relationships across the political spectrum and cooperate with disagreeing parties, managing stakeholders, and leveraging your network to reach the right people.
Communication skills: writing memos and reports, explaining complex technical issues to non-expert audiences, talking to various stakeholders, and clearly communicating ideas.
Strategic skills: a good feel for political timing, the ability to operate in complex political environments, accept partial wins, and deal with frustrating processes; understanding the net value of different policies, and “the skill of generating, structuring, and weighing considerations that matter for the usefulness and feasibility of some policy action.”
Combined technical and policy background: familiarity with control, evals, information security, forecasting AI development, developing technical standards, etc., and applying that expertise to policy work.
Domain expertise: deep knowledge of one of the critical areas of AI policy work, such as AI hardware, information security, Chinese AI industry, etc.
Good epistemics: the scout mindset, weighing evidence appropriately, understanding of statistics and quantitative analysis, strong focus on impact, attention to detail, analytical thinking, and the ability to deal with uncertainty.
Executive skills: ability to work autonomously in a highly stressful, demanding, and fast-paced environment; dealing with tight deadlines; planning and managing implementation of policy actions; agility.
Needless to say, most roles in governance or policy work will not require all of these characteristics. Nevertheless, the diversity of requirements might make it challenging to attract and develop promising talent for this sector of AI safety work. Since some of the most desired skills are quite difficult to acquire through reading, hands-on experience working with government institutions, e.g., through internships, is often expected.
There are several additional challenges in hiring for governance roles. First, the roles are often location-specific. Talent working on US governance will likely have to live in, or at least spend significant amounts of time in, Washington, DC; talent focused on EU governance work will probably have to be based in Brussels. Second, for some roles in public institutions, the relevant citizenship and/or a security clearance might be required. And third, some positions in governments might require specific credentials, such as a policy or law degree. This significantly narrows the pools from which candidates can be sourced.
What types of governance/policy talent are most desired by hiring teams? This post by Sam Clarke from 2023 suggests that researchers who could develop a valuable research agenda (including but not limited to policy development) were needed across NGOs, think tanks, and labs. Additionally, some lab teams were interested in hiring people who could support the implementation of policy actions. While this is by far the most comprehensive public post about the topic that I’ve managed to find, the situation might have changed significantly over the past three years. Some newer but unpublished data points to a high need for people who combine technical understanding with policy knowledge, as well as for experienced political operators. Having said that, there is surprisingly little public, up-to-date information about hiring needs in governance organisations, and it’s difficult to offer confident takes. Out of all sections of this post, I have the most uncertainty about the governance/policy work landscape.
I have managed to identify a total of roughly 15-20 fellowships within the AI safety x-risk community[3] that prepare talent for governance and policy roles. While most of them received an abundance of applications, at least a few accepted fewer candidates than they had capacity for, which might be a signal of difficulties with finding applicants who meet the basic requirements (based on my own, soon-to-be-published research). It’s quite difficult to estimate how many fellows finish these programs each year; my best guess would be in the mid-hundreds. The dynamic seems quite similar to the technical research category: the bar is high, but we might be losing many promising candidates who could meet it, and we can probably do something to reduce this loss.
Ops and fieldbuilding (generalist) constraint
In mid-April 2026 alone, at least three posts were published on the EA Forum (here, here, and here) about the difficulty of hiring fieldbuilders and standard operations staff. For clarity, here are my definitions of both:
Operations - the backend of every organisation. While the researchers do their thing, someone has to manage contracts, organise Notion guides, respond to queries in the joint mailbox, and make sure that you get your salary on time. If you are scaling your organisation, you likely need that one person who knows how to coordinate recruitment processes, what systems will help manage the growing team, and how to avoid common failure modes. In small organisations, operations mostly include generalist positions, but in some cases they might also include specialist positions, e.g., in finance or event management. The amount of AI safety context required for operations varies a lot. As a rule of thumb, in smaller organisations almost all operational positions require some familiarity with the field, while in bigger organisations there are many positions where it’s not needed (e.g., finance).
Fieldbuilding - exactly what the word suggests: building the field. It can include attracting more promising talent to the field, organising fellowships, matching promising candidates with potential employers, providing career advisory services, advising donors on where to donate, or improving coordination between organisations. Many fieldbuilding positions are operational, but they often require more strategic thinking and familiarity with the AI safety ecosystem than non-fieldbuilding operations. As a fieldbuilder, you will likely make important strategic decisions frequently, and for that you need a strong mental model of how the field works and what its priorities are. If you work on attracting and training talent, you should probably understand what profiles are and will be needed in the field, or how the talent can be deployed; if you advise donors on which organisations to support, you should probably have good takes on who is producing the most counterfactual impact. Each of these decisions requires a lot of tacit knowledge that can’t be transferred from any other field, or learned through reading groups.
Operations and fieldbuilding positions overlap with each other a lot, and both categories tend to require similar, broadly generalist skillsets.
Now that we’ve clarified that: are we constrained on fieldbuilders and operators?
Operational positions seem to be among the most difficult categories of roles to fill. There are two specific types of ops that have slightly different needs. For “hard ops”, like finance or legal, organisations can typically hire talent from outside of the AI safety community. For “soft ops” positions, like program management or talent management, context and understanding of the field is usually needed, and therefore hires often have to come from inside the community.
Unpublished data and interviews described in this post suggest that a particularly scarce sub-category of operators are senior professionals who could fill such positions as the Head of Operations, Chief of Operations, or Chief of Staff. These positions are critically important, since if they remain unfilled, they might block the organisation’s ability to scale. In other words, hiring a good Head of Operations might allow an organisation to hire several more researchers or other employees producing impact more directly. However, it’s also not shocking that we lack this type of talent, given that it might take insane amounts of effort to transition from business to AI safety as a senior operator. Anecdotally, it also seems that organisations face a difficult trade-off when hiring for senior operational positions: they usually need someone who is both experienced and mission-aligned, but the market mostly offers well-aligned juniors or unaligned seniors.
For fieldbuilding positions, all the operations-related difficulties apply, and there are some more. Most obviously, there are close to no pools of candidates outside of the AI safety community that have experience in fieldbuilding. The closest thing I can think of are EA community builders, but that’s also the pool that I came from, and let me tell you: the road from organising an EA community to meaningfully contributing to AI safety as a fieldbuilder is long and bumpy. Even a junior fieldbuilder needs to have a deep understanding of x-risks, familiarity with AI safety organisations and communities, a broad network of connections with other AI safety people, good takes on strategy, and a broad context of AI safety and how it works. And since the field is extremely dynamic, and every week brings new important developments, it’s genuinely challenging to follow what’s happening and how that shifts the current situation. I was lucky enough to be accepted to the fieldbuilding track of the Astra Fellowship, which brought me to the Bay Area, helped build connections, provided a fantastic mentor, and allowed me to work on my professional development full-time for 6 months. I can’t imagine coming from outside of the AI safety community and becoming a fieldbuilder without a similar opportunity (which makes me quite excited about the new Generator Residency).
You might argue that there is a group of people inside the community who are naturally positioned to become fieldbuilders, namely organisers of AI safety groups at universities. This is true in a way, but most of them still aim to become researchers, which is understandable, given that a) they usually study computer science or a related degree, and they can use that to become researchers, and b) the technical positions typically offer better salaries. However, with recent efforts by Kairos, there are now more organisers considering fieldbuilding careers, so perhaps we will see them fill this hiring gap soon.
Both fieldbuilding and operational positions usually require candidates with generalist skillsets, but the current talent pipelines do not bring generalists in sufficient numbers. We are training disproportionately few generalists compared to research specialists, with few organisations actively working on the bottleneck. The scarcity of dedicated programs makes it quite difficult for generalists to get their first AI safety job.
For most people entering AI safety, participation in fellowships helps improve one’s familiarity with the field, gain valuable context, prove mission alignment to potential employers, and overall become a more attractive candidate. But the fellowships tend to strongly select for research skills, and people without this background are unlikely to be admitted. And since it’s difficult to become a good hire without participating in research fellowships, it seems like the most promising way for a generalist to get a job leads through personal connections.
Based on anecdotal information, it seems that many generalist openings attract hundreds of applicants; the issue is that almost none of them meet the expectations of the relevant organisation. So perhaps there are many people out there who would be happy to fill these roles if we supported their career development similarly to how we support researchers.
Grantmakers
According to a recent post by Julian Hazell (Head of AI Strategy at Astralis Foundation, previously a grantmaker at Coefficient Giving), there are currently 30-60 grantmakers working on AI safety in the world, and they collectively distribute hundreds of millions of dollars each year. Julian mentions that the grantmaking teams are so capacity-constrained that promising opportunities are often left on the table. Jake Mendel from CG writes, “If there's something you wish we were doing, there's a good chance that the reason we're not doing it is that we don't have enough capacity to think about it much”.
With the growing number of people interested in working on AI safety, we can expect that the need for funding will be growing as well. And it seems quite likely that there will be more money flowing into the field: according to the current state of this Manifold market, there’s an 84% chance that Anthropic IPOs before the end of Q1, 2027. That would likely bring way more money into AI safety than we are reasonably ready to manage. To direct funding to the most promising projects, we will need way more grantmakers.
An additional issue in the funding landscape is that a significant share of philanthropic funds going into AI safety, likely well over 50%, comes from Coefficient Giving, previously known as Open Philanthropy. Due to various constraints on CG’s positioning, they don’t give grants to whole categories of projects, and therefore many promising initiatives might struggle to receive funding. Having more funders in the ecosystem could unlock these opportunities.
According to 80,000 Hours, “Because grantmakers need detailed knowledge of the area they’re working in, becoming a grantmaker often involves first becoming an excellent generalist in a specific field. For example, if you wanted to end up making grants in AI safety, you could work in technical AI safety, work your way to management at an AI lab, and then perhaps move into grantmaking if that seems like the highest-impact way to go”. This post from EA Forum agrees that you need quite some experience, but also argues that grantmaking is more like an advanced skill to acquire, or a career stage to reach within your focus area, and not a separate path in itself. Probably Good recommends it as “a potential opportunity branching out of other careers, rather than an early career trajectory in and of itself”. A good grantmaker should be broadly familiar with their relevant field of expertise, have a strong network and a good theory of change, and be able to understand various project proposals and quickly spot errors. These are all characteristics that are typically developed through many years of work, and that seem to be difficult to teach to junior staff.
This does not align with some posts and opinions coming directly from grantmakers. Julian Hazell says that CG was his “first real, full-time, big boy AI safety job after finishing grad school”, and Jake Mendel writes that “CG’s technical AI safety grantmaking strategy is currently underdeveloped, and even junior grantmakers can help develop it”. So maybe the bar is not as high as 80,000 Hours suggests.
On the other hand, some doubts have been expressed about the qualifications of current AI governance and policy grantmakers, in particular their limited hands-on experience in direct advocacy work. Since this seems to be quite a legitimate concern, I decided to talk to a grantmaker working on AI governance, who shared some reasons behind that situation:
- Opportunity costs of an impactful senior AI safety person moving to grantmaking are usually huge and may outweigh the benefits (as a reminder: AI safety is constrained on senior talent across many categories of roles).
- It’s difficult to hire grantmakers, because it requires the conjunction of many skills/traits, so the process usually involves a tradeoff between hiring an inexperienced but aligned person vs hiring an experienced person from outside of the AI safety / EA community. The latter category does not always turn out to be a good choice. Since many of the most important characteristics of a good grantmaker, such as value alignment or agency, are not inherently tied to experience, it’s usually a better bet to hire a young person and give them time to learn, although there are exceptions.
- Advocacy experience isn’t as valuable as you’d think. The feedback loops in policy work are weak and long, such that even working directly in political advocacy does not really deliver much useful learning about what actions were effective, or whether a given policy is actually impactful in reducing x-risks. Sure, grantmakers are a step further from the topic than advocates, but even advocates have little clarity about which of their actions were good for the cause (as evidenced by them often strongly disagreeing with each other).
For transparency, it’s important to add that the person who provided these arguments started working in grantmaking early in their career.
The difficulty with hiring grantmakers seems to be real: CG has been struggling to find good candidates to work on their programs. While I don’t have enough information to share about how challenging hiring has been for other foundations more recently, Asya Bergal’s post from 2023 pointed to a significant capacity constraint in the EA Long-Term Future Fund, and to structural issues contributing to it (e.g., fast turnover of fund managers).
Another potential bottleneck could be the foundations’ ability to scale: Longview Philanthropy grew the number of their grantmakers from 4 to 10 in a year, and CG’s technical AI safety team tripled their headcount in the same time. Growing faster than that without losing quality could be unsustainable, and this might be a limiting factor for the teams’ capacity. This is a guess that is not based on any foundation explicitly stating that this is an issue, and it should not be taken as fact.
Scaling
We might need to scale AI safety very fast if we want to resolve alignment in time, and we should probably be optimising for growth a lot. However, there are only two ways to do that: founding more organisations or scaling the existing ones. Both require leadership and/or operations talent, which we might not be delivering in sufficient numbers.
Leadership
AI safety seems to systematically reward researchers more than non-researchers, even those who work in leadership positions. This is the opposite of how most human systems outside of AI safety work. If you imagine a corporate company, a public office, or a political party, they are usually full of people competing for leadership positions. Becoming the CEO of a big company, the director of a public office, or the leader of a political party comes with higher social status, more visibility, a higher salary, and new affordances. It usually also comes with more risk and responsibility, but these are - at least for some people - worth taking for all the rewards. Meanwhile, founding or leading an impactful AI safety organisation often does not come with additional rewards compared to becoming a technical researcher. There is little reward to outweigh the risk and responsibility that leaders have to take, which might disincentivise this type of work.
An obvious objection here could be that not every potential leader chooses between a leadership role and a research role; many people might face the choice between a leadership position and a non-research position, e.g., in operations, and in this situation, the current reward system should work quite well (e.g., the salaries for leaders are higher than for operators). However, this is quite rarely the case, as our talent pipelines are primarily optimised for researchers, and people who enter the field are usually selected for research backgrounds. I am not keeping statistics on this, but I believe a significant share of the leaders of current AI safety NGOs either are or used to be researchers. While some of them do an amazing job leading their organisations, this might still not be the optimal setup: if someone is a great researcher, the chances that they are also a great leader are quite low, since management is probably not their primary skillset, and being a great researcher does not automatically mean being good at, e.g., strategy, which is often needed in leadership positions.
But are we actually constrained on leaders? Given the difficulty of finding senior staff across the field, my intuition is that we are; anecdotally, I have heard of a few organisations that have been struggling to hire a leader for a long time. However, it is difficult to verify the scale of the issue, since the scarcity of leaders might be quite hard to detect. If an organisation has been struggling to hire a new leader after the previous one left, a researcher might step into management out of duty rather than fit, which makes the problem invisible from the outside.
In the 2024 Meta Coordination Forum survey, leadership/strategy skills were expected to be the most valuable to recruit for across EA. However, it’s unclear to what degree this applies to AI safety specifically. When it comes to skillsets needed in leadership positions, it seems like the combination of management + operations/strategy/policy is the most needed. It's not clear whether the supply of talent with these skillsets is sufficient to meet the demand.
Founding new organisations
There are a few incubators in AI safety that support potential founders (e.g., Catalyze Impact, Seldon Lab, the Constellation Incubator, Halcyon Futures, BlueDot), and some new organisations arise from programs that are not primarily incubation-oriented (e.g., Apollo from MATS). This is great! But there is a hidden trap: getting accepted into an incubator or getting funding for your new organisation typically requires credibility. And the way to gain that credibility usually leads through the regular research career pipeline, which is very selective and quite effectively filters out non-research talent. This leads us back to the issue of turning researchers into leaders.
Founding an organisation means taking a lot of risk; there are numerous ways in which the organisation might fail to produce impact, and many organisations collapse quite quickly. And as described above, the reward structure of AI safety as a field does not really encourage taking the risk. On top of that, people who want to dedicate their careers to reducing risk, which means almost everyone in the whole AI safety community, are probably not very risk-tolerant, which might effectively reduce the number of potential founders in the community.
Having said all that: I have not found any solid evidence of AI safety having too few founders. I have no knowledge of how many organisations are founded in AI safety each year (I estimate low tens), how many of them survive (I estimate even lower tens), and how many are actually impactful (perhaps just a few). All the posts I’ve found on the topic are based on anecdotal experience and/or reasoning from vague information, not on solid data. I am also not aware of any incubator publishing information about a scarcity of promising founders. My guess is that we could use more of them, although it might well be that there are enough potential founders, but not nearly enough infrastructure to support them.
Scaling the existing organisations
As AI safety matures, its organisations grow. And as the organisations grow, we need to change the scrappy, start-up-style processes into ones that allow organisations to scale and work more effectively. For that, we need people who can bring professional experience from the external world and apply it to fulfilling the organisations’ missions. This, however, is quite a challenge, given the difficulty of hiring senior operations staff and the problematic tradeoff between experience and value alignment that seems to be present across most types of positions. There is a clear need for people who are both experienced and aligned, and the demand for them is far from satisfied.
There are several organisations working on bringing experienced talent to the field, including but not limited to Successif, Probably Good, Consultants For Impact and High Impact Professionals, which provide career advisory services, organise career-related events, etc. However, we lack more scalable solutions that would 1) reach big pools of experienced people who might be open to transitioning into AI safety, 2) provide them with opportunities to gain experience and credibility needed to succeed in the transition, and 3) deliver the talent to organisations at scale. As a point of reference, I think we’re doing quite well in attracting young people into AI safety through AI safety groups, which 1) reach huge numbers of promising talent, 2) allow the groups’ organisers to gain valuable experience and credibility long before getting employed at an AI safety organisation, therefore making them particularly promising hires, and 3) are quite a scalable solution that can be entirely managed by one small organisation (Kairos, which until last month had only 3 employees).
While it’s pretty clear that the university groups format would probably not work as well for senior professionals, there are likely untapped opportunities for building similar solutions adjusted to their needs. I am not super confident that we can create a system that is equally effective, since young people are typically easier to source, but there are probably quite a few ways to approach it that have not been tried before.
Limitations & caveats
I did my best to include all the most important information about the current shape of talent pipelines, and tried to present it as reliably as I can; however, my writing might still include some inaccuracies. I am quite new to AI safety (at the time of publishing, I have been around it for 4 months) and therefore I might lack a lot of context, which could lead to mistakes. To make up for that, I sought feedback from many people with more experience in AI safety or in its specific sub-fields.
I have made this list of all resources I’ve found about the hiring needs and talent pipelines. If you have the capacity to check the facts yourself, I highly encourage you to do that. If you spot any mistakes, please let me know. You can comment under this post or message me at weronikamzurek@gmail.com.
One more thing worth noting is that the whole post is heavily based on information from just a few people/organisations that publish a lot, in particular on posts from CG grantmakers (in the grantmaking section), as well as Ryan Kidd and MATS (all the other sections). This might make the data biased in ways that are difficult to predict. To combat that, I would encourage you to write more about your own organisation’s needs, challenges, and experience. Your post could become a valuable source of information for others in the future.
Some information that I think should be published, but isn’t
Unfortunately, there are many points that I wanted to include in the post, but for which I did not have sufficient data. The topics I would like to see covered most include:
- hiring needs at policy/governance organisations. There is currently close to zero information about that available publicly
- hiring needs in foundations other than Coefficient Giving
- forecasts and takes on how hiring needs might change, e.g., how the growing cybersecurity capabilities of models might impact the need for information security skillsets in x-risk-relevant roles
- incubator constraints: do incubators struggle to find good founders, or do they have an abundance of them? What makes people promising founders?
- data and/or opinions about the supply of leadership talent into the AI safety field: do we have enough leaders? How do non-researchers become leaders of NGOs?
- reasonably reliable estimates of the number of people working full-time on AI safety in 2026, across all types of institutions, including researchers, advocates, fieldbuilders, operators, leaders, etc.
If you are in a position to share data, knowledge, intuitions, or takes about one of the above topics, I encourage you to do so.
- ^
The latest high-quality estimate I have found comes from Benjamin Todd and was made in 2023: https://forum.effectivealtruism.org/posts/rZoRGxJzipcQoaPST/how-many-people-are-working-directly-on-reducing-existential. I would love to see similar estimates for 2026.
- ^
It might feel like an obvious solution here would be to use LLMs more for screening applications, but anecdotally, this does not seem to work very well, and people working on these programs are usually very sceptical of having AIs review CVs, sometimes with the exception of using them to reject the most obviously mismatched applications.
- ^
It’s difficult to provide an exact number, since some programs only have governance-specialised mentors or streams from time to time, or work on the border of governance and another subfield.
