
Thanks to @Mathieu Duteil for all of his insights in writing this post!

TL;DR

EA has a strong infrastructure for inspiring people to pursue impactful careers. The infrastructure for helping them actually get there, especially in operations, is far less developed. After applying to dozens of operations roles over the past 6 months, I observed patterns suggesting inefficiencies in how EA organizations hire: they run parallel processes for similar roles and screen for overlapping competencies. I propose a shared hiring infrastructure, including a common application layer, candidate process history, and coordination between organizations scoping similar roles.

I. The Gap Between Motivation and Opportunity

EA has built excellent infrastructure for inspiration. Career bootcamps, introductory fellowships, and governance courses help people identify cause areas, understand the landscape, and commit to a direction. For someone transitioning from a different field, these programs are genuinely valuable. I have experienced this firsthand. Since encountering Effective Altruism, I have taken courses on AI governance and biosecurity, participated in the ML4Good governance bootcamp and the CEA Operations Career Bootcamp, and attended EAG and EAGx events that reinforced my motivation to work on these problems. The inspiration infrastructure works. The issue is that there is no structured path from “I know what I want to do” to “I am doing it.”

A researcher can suggest a paper idea. A policy analyst can pitch a brief on an emerging regulation. Operations work does not offer the same entry points. It is defined by organizational needs rather than by portable expertise. You cannot easily say, “I have this project I want to build for you,” because the work is inherently shaped by the organization’s existing context and gaps. This makes the path from motivation to contribution harder to navigate, and it makes the hiring process the primary bottleneck for operations talent entering the ecosystem.

The hiring challenge extends beyond operations. Abraham Rowe, in his post last November, argued that recruitment is the most undervalued function in high-impact organizations. He noted that organizations routinely struggle to find recruiting talent, rarely backtest their hiring practices against later performance, and rely on rudimentary tools for candidate evaluation. He concluded that almost no one is appropriately obsessed with hiring. This post argues something complementary from the candidate side. What follows describes what I have observed over months of navigating that bottleneck, and what might reduce it.

The difficulty of entering EA operations work is not a new observation. A 2024 survey by Julia Michaels of 91 job seekers found that 89% reported negative feelings about their job search, with employer hiring practices, a lack of feedback, and opaque decision-making cited as the primary barriers. In interviews, candidates described the process as a “black box,” even when individual steps were clearly communicated, suggesting a gap between explicit and implicit processes. That research focused broadly on EA roles; the operations track, in particular, has received less attention, which is part of why this post focuses on it.

II. What Dozens of Applications Taught Me

Over the last six months, I have applied to dozens of operations roles, mostly at AI governance and biosecurity organizations. The process has been instructive, though not in the ways I had hoped.

At least twice, I completed full application cycles for roles that were later cut. In one case, I had finished a work trial. In another, I had completed a final interview with the COO. Both times, I was told the decision stemmed from organizational pivots. I understand that organizations in this space face genuine uncertainty. Funding landscapes shift, priorities change, and what seemed essential in October may no longer make sense by December. While work tests themselves have value (they offer payment, something for your portfolio, and a chance to test fit), the issue is investing that effort in a role that may no longer exist by the time the process concludes. The organization’s staff, too, invested time that they will not recover.

Moreover, I noticed a pattern in the application questions. In the last quarter of 2025, at least five AI-adjacent organizations were simultaneously hiring for similar operations roles. Across the five roles, I answered variations of the same questions: describe your most impressive project, describe a system you built or improved, explain your relevant operations experience, and explain your interest in AI safety. The phrasing differed, but the substance was the same. The obvious objection is that candidates could simply copy and paste between applications. In practice, they cannot. Each application uses different word limits and slightly different framing, so candidates spend time reformatting the same answer rather than demonstrating anything new. These organizations were running parallel processes, screening for similar competencies, and evaluating overlapping candidate pools.

Consider the numbers, using conservative assumptions. If 300 people apply to a single operations role and each spends 30 minutes on the initial application, that is 150 hours of candidate time for one position. If five organizations run similar processes in the same quarter, that is 750 hours of applicant time on initial applications alone, before accounting for screening calls, work tests, or interviews. Under less conservative assumptions (400+ applicants per role, 45 minutes per application given tailored cover letters or written responses, and additional hours for screening calls and work tests), the total across five roles could easily reach several thousand hours. This does not include the staff hours spent reviewing those applications, conducting screening calls, and evaluating work trials. These numbers are rough estimates, but they illustrate the scale.
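For readers who want to adjust the assumptions, the arithmetic above can be reproduced with a quick back-of-the-envelope calculation. All inputs are the illustrative figures from this post, not measured data:

```python
def applicant_hours(n_orgs, applicants_per_role, minutes_per_application):
    """Total candidate hours spent on initial applications across
    n_orgs parallel hiring rounds (illustrative estimate only)."""
    return n_orgs * applicants_per_role * minutes_per_application / 60

# Conservative scenario: 5 orgs, 300 applicants each, 30 minutes per application.
conservative = applicant_hours(5, 300, 30)        # 750 hours

# Less conservative: 400 applicants, 45 minutes (tailored written responses).
less_conservative = applicant_hours(5, 400, 45)   # 1,500 hours

print(conservative, less_conservative)
```

Adding screening calls and multi-hour work tests for even a fraction of these applicants is what pushes the less conservative total into the thousands of hours.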

Most of these applications included a checkbox asking applicants whether they consented to share their material with other organizations. I checked that box whenever it appeared. It never resulted in contact from any other organization, even ones that had shown sustained interest when I applied to their own roles. This is worth pausing on. Organizations have already shown willingness to share candidate information. What is probably missing is the follow-through and the capacity to scale it.

This is not a new concern. Over the years, there have been accounts of the same issues I describe here: rejected candidates ignored as a resource, consent to share applications leading nowhere, application processes that stretch on for weeks. These accounts cover early-career researchers, mid-career operations candidates, and everyone in between. The pattern is consistent.

III. The Cumulative Cost

Through dozens of applications, organizations have evaluated my writing, judgment, and ability to perform under time pressure. That information now exists in many places, but there is no mechanism to carry it forward. It would feel awkward to ask an organization that rejected me for a role to recommend me, even though the evaluators may genuinely have positive things to say about my work. The social dynamics of hiring create a situation where useful information is generated and then lost. 80,000 Hours’ research suggests that work-sample tests are among the best predictors of job performance, which makes it worth asking why their results are not shared more widely (with candidate consent). Organizations may keep this information or share it internally. As a candidate, I walk away from each process with nothing to show for what I demonstrated.

This compounds over time. The emotional toll of repeated applications is real. So is the structural cost. Talented people with financial constraints cannot wait indefinitely. The process also selects against candidates who are strong enough to be already employed elsewhere: they do not have the time to navigate lengthy, parallel application cycles. The ecosystem loses people on both ends: those without the runway to keep searching and those too busy doing good work to jump through repeated hoops.

If the path to impact requires months of unpaid searching, networking, and speculative project work, those without a runway will leave. The ecosystem loses them not because they lacked commitment, but because they lacked resources. Programs like Open Philanthropy’s Career Development and Transition Funding exist precisely to address this gap, supporting career exploration periods and professional development. This is valuable infrastructure, and I am glad it exists. However, it is focused on certain cause areas, and the number of people facing this situation likely exceeds the program’s capacity.

I am not alone in this experience. Through EA career programs, I have met other mid-career professionals with relevant skills and a genuine commitment to these cause areas, who have described similar patterns. These are exactly the people the ecosystem should be absorbing, not losing to unnecessary process friction: repeated applications, roles that vanish, uncertainty about what organizations actually want from operations candidates (which tools matter most, what level of technical fluency is expected, whether project management experience outweighs domain knowledge…), and no mechanism to carry forward the vetting they have already undergone. None of this is an indictment of individual organizations: they are resource-constrained and doing their best with limited capacity. But the cumulative effect is worth examining, and there may be ways to reduce it.

IV. What Could Help

Several organizations already work on adjacent problems. Pineapple Operations maintains a candidate database for operations roles. Impact Ops helps individual organizations with hiring. These are useful, but the gap I am pointing at is specifically about coordination between organizations running parallel processes, and about preserving information generated during hiring.

Some organizations are already experimenting with different approaches. There are roles where the organization hired a consultant for a few months while running a longer search in parallel, allowing both sides to test fit without prolonged uncertainty. Other applications have detailed, role-specific questions rather than generic prompts, which signal seriousness and precision. These examples stayed with me because they felt different. There may be more to learn from what is already being tried. A fuller analysis of what exists and where gaps remain would benefit from conversations with these organizations, which I have not yet done.


There are also instructive precedents outside EA. University admissions faced a similar problem of parallel processes. The Common Application did more than reduce paperwork: evaluations found that participating institutions enrolled a more geographically diverse student body, including higher-performing candidates they would not have otherwise reached. Lowering friction expanded the talent pool.

In the tech industry, Triplebyte offered a shared technical assessment that allowed companies to skip redundant screening for software engineers, although the company shut down in 2023. A retrospective by its former Head of Product, after the company’s shutdown, identified a lesson relevant here: changing established hiring behavior is extremely difficult, even when the existing process is widely disliked. That risk applies to any version of what I am proposing. Still, she has since founded a new company pursuing a similar model, which suggests the underlying idea retains value even if the first attempt failed.

EA needs a shared hiring infrastructure, something that sits between organizations and helps them coordinate. The obvious question is: why does this not already exist? I see three likely reasons. First, building and maintaining such infrastructure takes sustained effort, and no single organization has the capacity to take that on alongside its core work. Second, it is no one's job: there is no person or team whose mandate includes owning a system like this. Third, each organization may believe its operations needs are specific enough that shared screening has limited value. Others closer to organizational strategy may see additional factors.

The third objection deserves a direct response. Operations roles vary by context: the systems, tools, and team dynamics at an AI safety lab differ from those at a global development nonprofit. But the core competencies being screened for in initial rounds (clear writing, structured thinking, systems design instinct, project management ability) are remarkably similar across organizations. The point is not that organizations should skip bespoke evaluation entirely, but that the first layer of screening, which is where most of the duplicated effort occurs, could be shared. Organizations have already shown willingness to share candidate information: most applications include a consent checkbox for exactly this purpose. What is missing is not the intent but the infrastructure to act on it.

Concretely, what I am describing is a shared platform, maintained as a community resource, that sits between organizations and candidates:

Shared application infrastructure. The platform would have a private profile for each candidate. Common questions would be addressed once in the profile: background, motivation, and familiarity with a specific cause area. The key is that these would be the same questions across organizations, not merely similar ones that still require starting from scratch.

A shared layer would eliminate that rewriting while still allowing organizations to add genuinely role-specific questions. It could also include candidates’ reasoning for common operational scenarios, allowing organizations to see how they think through problems they would actually face rather than relying solely on descriptions of past experience. A given position may involve many specific responsibilities, such as organizing a workshop or managing visas, but asking about each of them would make the application process too long. So even though an organization might not ask about those specifics itself, it may appreciate seeing how an applicant answered a similar question for a different position.

Process history and work test evaluations. The platform could also track what hiring processes each candidate has completed and how far they progressed. Not “vetting” in the sense of endorsement, but transparency about what screening has already been done. Organizations could decide how much weight to give this information. When an organization finishes a hiring round with strong finalists it cannot hire, it could facilitate introductions to other organizations hiring for similar roles. This is where the consent checkboxes could finally lead somewhere.

Where both the applicant and the organization are willing, this record could also include brief evaluations from work tests. For confidentiality reasons, these would not necessarily contain the full work test. Instead, organizations could share something like “this person completed a task on systems building and performed well,” or “showed strengths in event logistics but less experience with financial administration.” Any such system would need strict access controls. Applicants would see only their own profiles, and organizations would need to be vetted before accessing the database.

Other things worth considering: 

Role scoping support. Is this role actually needed? Is another organization already hiring for the same function? Can the organization realistically complete this hiring process given current capacity and funding uncertainty? An outside perspective could help resource-constrained teams answer these questions before launching a process. This might reduce the number of roles that disappear mid-process.

Visa clarity. A smaller but real issue is the lack of clarity around visa sponsorship. Some organizations clearly state their position at the top of the posting, saving everyone time. Others leave it ambiguous until late in the process. For candidates outside the US or UK, a simple upfront disclosure would reduce unnecessary effort on both sides.

V. Similar Proposals

This topic has come up several times on this forum. In May 2022, Charles He published a detailed case for an EA Common Application, with founding team structures and funding estimates, and noted that a version had come close to receiving support from a major grantmaker before the prospective founder withdrew for personal reasons. In August 2022, Anya Hunt and Katie Glass published a systematic analysis of EA recruiting gaps that named the same three needs this post identifies: a shared candidate layer, a coordination mechanism for referrals between organizations, and a process for routing strong near-miss candidates to other open roles. They noted the consent checkbox and the same absence of infrastructure to act on it. Two years ago, Elizabeth Cooper mentioned that BERI had been looking into a Common Pre-Application for AI Safety. These posts are several years old, yet, as far as I know, none of what they proposed has been built. It would be extremely useful to have post-mortems on these projects to better understand their feasibility, the obstacles they encountered, what they achieved, and why these ideas were eventually dismissed.

One reason post-mortems would be particularly valuable is precisely that these examples are several years old. Back then, 80,000 Hours had not yet pivoted to prioritize AI safety careers, and the number of AI safety organizations was much smaller. If anything, the changed landscape strengthens the case for a common screening layer. And it is not the only reason this idea may be more urgent now.

Moreover, if rewriting slight variations of the same essays over and over was already a Sisyphean task in 2022, in practice it now just pushes applicants to let an LLM answer for them. This wastes recruiters’ time or, conversely, pushes recruiters to rely on LLMs to handle the workload, making the whole exercise even more absurd. A shared first layer, answered once and owned by the candidate, at least concentrates the effort where it can be genuine. In other words, this could be an idea whose time has come.

VI. Where This Analysis May Fall Short

It is possible that the bottleneck is not coordination but demand. If there are simply fewer operations roles than qualified candidates, better application infrastructure does not change the ratio. It is also possible that the current system, despite its costs, is rationally optimized for fit rather than efficiency. A bad operations hire at a small, high-impact organization can do real damage, and organizations may be willing to pay the cost of redundant screening to avoid it. One could also argue that lengthy, bespoke applications filter for motivation, though this argument has weakened considerably in the age of LLMs.

I cannot rule out any of these, but even if demand is the deeper constraint, reducing duplicated effort still frees capacity on both sides. If intensive screening is worth the cost, it is worth asking whether it needs to restart from zero every time. A shared system does not mean a lower bar. It means organizations get the same signal without requiring candidates to regenerate it from scratch, and they may also gain access to information they would not have thought to ask for. Finally, the morale cost is worth restating. A process that repeatedly discards demonstrated competence does not just waste time. It wears down the people the ecosystem most needs to retain.

I would be interested in working on a project like this. I have the operational experience and the candidate-side perspective, but I do not have the HR expertise to do it alone. If you do, and this problem resonates with you, I would like to hear from you. I am also aware that initial enthusiasm followed by a lack of follow-through is well documented in this idea's history, and that noting interest in a forum post is not the same as building something. What I hope this post contributes that the earlier ones did not is a clearer account of the cumulative cost, a candidate-side perspective on where the duplication actually occurs, and enough specificity about the platform components that anyone with the relationships and organizational capacity to act on this has a concrete starting point rather than a general direction.

Note: The recent AMA with recruiters at impact-focused orgs offers perspectives from the other side of these processes as well.

Comments

Thanks so much for sharing!

Last year, I explored building a common application for EA/AI organizations, in collaboration with a funder in the space.

Specifically, we explored a version that might work like:

  • Applicants submit one application, and indicate organizations they'd be happy for application materials to be shared with.
  • Applicants, based on role type or skills, might get screening interviews / work tests from the common app system.
  • When an organization is ready to hire, they can quickly pull from pre-vetted candidates, skipping initial screening (since they have materials from our assessments).
  • Applicants save time by going through a process once, orgs save time by getting to skip advertising and initial screening windows.
     

I surveyed several dozen organizations about this idea, and talked to a few organizations directly about it. Here's what I found:

  • Organizations wanted this to exist.
    • Organizations would be happy to recruit candidates out of a shared hiring pool.
  • Organizations wouldn't rely heavily on this for applicants.
    • Organizations were generally unlikely to want this to be the only source of candidates. This means that they'd still open their own applications anyway.
    • I think this is primarily due to wanting diverse candidate pools / seeing value in doing their own advertising.
    • Organizations also generally wanted candidates to go through their own application process separate from the common app - basically, organizations perceive themselves as having heterogeneous application processes.
  • On organizational self-reporting, no money would be saved.
    • While this process seems like it might produce savings, based on the time savings organizations reported this would generate for them, my estimate was that the cost-effectiveness of a funder paying for this service to exist was pretty low.
    • Basically, the issue is that without targeting a specific job, we end up vetting and screening a lot of people who might not be a good fit for roles in the ecosystem, and who might be quickly passed over by organizations.
    • My estimate of the time/cost it would take us to run the program, vs the self-reported time savings from organizations, was that it wasn't cost-effective / wouldn't really save the ecosystem money.

 

That being said, the program I explored was more comprehensive than just a common app. The issues I see with a pure common app are:

  • The organization running it would need to have sufficient credibility for the organizations using it to want to forego their own application processes. I think a random person starting it would have very low credibility. My company, which had run several dozen hiring rounds for many organizations, had maybe 50% of the credibility necessary. This seems like a hard bar.
  • Candidates have to trust the centralization — e.g. if the common app service also does vetting (which it doesn't have to do, but which has the most value for the organizations using it), then they have to do a good job, as the stakes are high!

 

That being said, @Nina Friedrich🔸 and High Impact Professionals is doing tons of amazing work here, including some partial implementations of some of these ideas — their talent database, with candidate consent, lists organizations that candidates were finalists with, which is really useful for hiring.

RE sharing candidate information: this practice is really widespread in the ecosystem. I get probably 3-5 emails a month asking for referrals for candidates for roles, and typically share silver medalists from our similar hiring rounds who consented to sharing. 

I think part of the disconnect is that organizations aren't really optimizing on candidate time — they are optimizing on their own time and needs (whether or not this is a mistake).

Thanks again for writing this up! I think there are huge gains to be made here, and hope my notes on my exploration of it are useful for anyone thinking about it!

The organization running it would need to have sufficient credibility for the organizations using it to want to forego their own application processes. I think a random person starting it would have very low credibility. My company, which had run several dozen hiring rounds for many organizations, had maybe 50% of the credibility necessary. This seems like a hard bar.

 

I feel like a service that aspires to eventually be a common app could shift towards that incrementally by offering partly-vetted candidates. It's not a fully centralised common app, but gets customers/sign-ups from orgs who just want access to another source of high-quality candidates

That might reduce some of the value prop to initial candidates at first, if the service doesn't have many confirmed clients yet, but I suspect that (1) quite a lot would apply anyway, even without confirmed buy in from orgs, if the pitch was done well, (2) there might be other ways to make it appealing, e.g. finding ways to offer some (automated?) feedback.

Thanks for weighing in, Jamie! This is the kind of insight I was hoping for.

I agree with your point about incremental change. A partly-vetted candidate pool as a first step seems like a more viable path to building credibility. I think this can be built on the current systems available in EA. Candidates are usually looking for new opportunities to reach organizations, so I think they would be interested if the time invested is reasonable. Feedback would certainly be a plus, though it could lose value if automated feedback is too generic.

Hi @abrahamrowe, would you be willing to share more information on this point? 



Organizations wanted this to exist.

  • Organizations would be happy to recruit candidates out of a shared hiring pool.

I'm preparing an article with @Anaeli V. 🔹 and others about this and would love some more evidence that organisations are looking for a simplified system. 

Could you also clarify this point? Why do you think it would generate no savings despite organisations reporting they would save a lot of time?
 

  • While this process seems like it might produce savings, based on the time savings organizations reported this would generate for them, my estimate was that the cost-effectiveness of a funder paying for this service to exist was pretty low.

 

I see and agree with your point regarding credibility. Would you mind sharing why you think your organisation didn't achieve the necessary credibility in the eyes of recruiters, and what you see as conducive to reaching it?
 

Thanks in advance for your help! :D 

RE Organizations want this to exist:
- I think that something like 20ish organizations reported that they would use a common app system, at least for operations roles (I think they were much less likely to use it for other kinds of roles, but it was dependent on seniority, etc).

RE it not creating savings:
- I asked organizations about various ways that this would save them time. In total, my estimate was a common application + pre-vetting would save organizations 500-1350 hours per year (based on their reports on how they'd use it and how much time they spend on hiring). 
- A common app alone might be half that? So 250-675 hours per year?
- My estimate is that it would have cost more hours than this to run well.

I think the primary reasons for this are:
- Organizations won't only rely on the common app - they'd like easy ways to get candidates, but also want to recruit on their own platforms. For many non-ops roles, they didn't really want to use it at all.
- The common app will get a lot more candidates than organizations get — it both makes it easier to apply to jobs, so will increase applications, and makes it more generic, so more people will feel qualified to apply.

Note that I looked at this from the perspective of "if we do this will we spend more time running it than the time savings for organizations" and I think the answer was yes.

RE credibility:
- A lot of organizations were worried about centralizing application processing / decision making because it creates a single point of failure.
- If you are also vetting applications, the above is worse + they have to trust you in the first place to do the vetting.
- The organizations who would have trusted us to do the vetting tended to be groups who had worked with us before on hiring and had a good experience.


Happy to have a call to talk about learnings from this, since as far as I know, my project was the closest the ecosystem has gotten to having a common app! Overall, I agree with the sense of there being lots of inefficiency in the hiring ecosystem — the complicated thing to me feels like candidates often want to solve for the problem of the candidate experience being bad, while the organizations want to solve for the problem of the organization experience being bad, and the causes of those problems are somewhat different. 

Hi @abrahamrowe

Thank you very much for your detailed response. Your November post was a great source of inspiration for this, and I believe the community would greatly benefit from a post-mortem of your attempt to build this platform. In the meantime, I would certainly love to have a chat with you about these questions. From what I have seen, you seem to be one of the people in EA who have thought most about the practicalities of a shared application platform. I have also seen mentions of attempts at similar projects in related discussions: have you spoken with those people?

Of course, the organizations would decide whether to work with such a platform, so it makes sense to optimize for them first. I still think there are ways to improve the process for applicants, at least at no cost to the organizations and, to some extent, to their advantage. For instance, it seems that organizations are independently arriving at very similar questions for every operations role, so the shared platform would not reduce the information they get on a candidate compared to the current system. The candidates' answers would also not be any more generic if the questions were the same. In fact, reviewers could rate an answer once and not have to reread the same essays the next time they post a different operations role to which the same people apply. Regardless, for EA as a whole, it would be valuable to recognise that not losing candidates to demoralization is also in the interest of organizations. This is especially relevant since a lot of resources are spent trying to attract people to EA.

Your point about how reputation would be essential for such an endeavor is an important one; I would really like to work on this, but you are right that I will never succeed without the backing of strongly established EA actors. Through discussions like these, I am hoping to get more people thinking about it until solutions start to emerge.

That said, an alternative I have in mind is something closer to a profile system than a traditional common application. Think of it as a private LinkedIn for operations roles (based on the existing HIP profiles, for instance): candidates fill out a set of standardized prompts, and that profile becomes a reusable asset. Organizations do not have to stop running their own hiring. They could simply include a line in their application that offers the option to link a profile on [platform], the same way candidates can often share their LinkedIn profile and have an application form automatically filled from it. Organizations could then complement the standardized prompts with any role-specific questions the profile does not cover. This could save candidates hours of reformatting the same text to slightly different word limits, without taking control of the selection process away from the organization.

I would be excited to see how HIP implements what you mentioned: listing organizations where candidates were finalists. If candidates who reached final rounds had even brief comments on their performance attached to their profile (with consent), that would make the informal referral network you describe (3 to 5 emails per month sharing silver medalists) visible and accessible to candidates, not just to hiring managers who already know each other. This could address many candidates' concerns about the lack of transparency.

@AïdaLahlou also shared with me a draft of her post, with some great ideas on how to share feedback with candidates and evaluate them in different ways. I also think HIP's talent database with finalist history would align with her ideas.

I will be in touch about that call. I think there is a lot to learn from your experience.

Cool post! I thought it was well-structured and evidenced, while also recognising limitations and counterarguments etc. 

Thank you so much for this. Commenting for reach and also because I want to re-read later in depth. Very much agree the system is broken, although the problem is more general and not EA focussed. However, I do agree with you that the EA ecosystem has huge potential for streamlining the process due to shared values and usually similar recruitment processes. 

I'm preparing a piece about it and will DM you the draft - would love to get your input on this

Glad you like the idea. Looking forward to reading your draft! 
