
TL;DR

Funding for talent-focused organisations (TFOs) in EA remains highly concentrated, with more than 80% coming from Open Philanthropy and EA Funds. To understand whether clearer impact reporting could unlock new funding for the space, this pre-study interviewed seven funders.

While historical impact data isn't the main decision criterion for most funders, an improved impact reporting infrastructure could potentially open up more funding to the talent space. In addition, a shared reporting framework could support funders in making more informed decisions and give TFOs a clearer picture of funder expectations.

We suggest three next steps: (1) develop standardised indicators with input from both funders and TFOs, (2) gather TFO perspectives to complement this funder-focused study, and (3) explore the value of an independent evaluation agency for the talent space.

We define a TFO as any organisation whose explicit goal is to help people increase the impact they have through their careers. Some examples are meta EA groups like EA Sweden or EA Netherlands, and coaching or training organisations like 80k, Successif, and BlueDot Impact.

Background

The 2024 EA Meta Funding Landscape Report found that 80% of funding in the talent field comes from Open Philanthropy and EA Funds, with the share being even higher when only talent-focused organisations (TFOs) are included. In a memo for the Meta Coordination Forum 2024, Patrick Gruban argues that the lack of legibility in TFOs' impact reporting might be a hurdle for new funders entering the field, and that some impactful TFOs are underfunded as a result.

In the fall of 2025, Emil Wasteson Wallén led a pre-study to investigate this assumption and whether a shared impact reporting framework for TFOs could increase the total funding to the talent space. 

Below, we summarise the findings and outline three potential interventions for taking this project forward. 

Methodology 

Seven "funders" (individual donors, grantmakers, and philanthropic advisors) were interviewed to understand their perspective. Two were established funders within the talent space. Three others had made, or seriously considered making, grants in the talent space at least once. The remaining two had no direct experience in the talent space but were active funders in other parts of the EA meta ecosystem (primarily effective giving), making them particularly interesting as potential future funders to the talent space.

The interviews aimed to explore: 

  1. What evaluation criteria the funders use
  2. Whether a lack of legible impact reporting is a funding bottleneck
  3. What value a shared impact reporting framework could provide
  4. What factors would be important in such a framework

Findings

1. Historical impact data isn't the main decision criterion 

Three funders reported using historical impact data as a key criterion in their decision-making, but even for them it wasn't the main one. One pattern stood out: larger and more established funders placed greater weight on historical data.

The criteria most funders highlighted as decisive were:

  • Strategy: Whether the project's approach seems sound and well-targeted
  • Impact potential: How big the impact could be if the project did really well
  • Team: The competence, motivation, and track record of the people involved

Naturally, the younger a project or organisation is, the less relevant historical impact data becomes relative to these other factors. 

Finally, four funders mentioned that Open Philanthropy’s funding recommendations and decisions significantly influenced their own. 

2. Impact reporting could potentially make the talent space more accessible for new funders 

Of the five funders who were not already established in the talent space, one noted that uncertainty about an organisation’s impact had delayed their decision by nearly ten months, and that clearer impact reporting could have accelerated the process. 

The remaining four did not view the lack of robust historical impact data as a major gap in their decision-making. However, three of them said they would likely take such data into account if it were easily accessible and commonly used by other funders in the field.

This suggests that a shared impact reporting framework could potentially increase the total funding for the talent space. 

3. Additional benefits of a shared impact reporting framework

In addition to potentially unlocking new funding for the talent space, the funders highlighted three other ways in which a shared impact reporting framework could add value:

  • Improved decision quality: Existing funders could make more informed and consistent grant decisions, resulting in better resource allocation and a higher overall impact within the talent space.
  • Greater clarity for organisations: TFOs would gain a clearer understanding of funder expectations and decision criteria.
  • Efficiency gains: Standardised reporting and other shared resources could reduce the need for each organisation to reinvent the wheel, saving time and administrative resources.

4. Factors important in a framework

Assuming a shared impact reporting framework is developed, two aspects were highlighted as especially important for it to be useful. The first concerns the framework's content, the second its adoption.

  • Standardised indicators: All but one funder emphasised that establishing standardised indicators would be essential, mainly to enable comparison between organisations in the talent space.
  • Adoption and distribution: Nearly all funders stressed that, regardless of the framework's quality, low adoption is a major risk. If only a small number of funders or TFOs use it, most of its potential value would be lost.

Recommendations

Based on the findings, we remain uncertain whether developing a full shared impact reporting framework would be valuable. However, we believe there is enough evidence to support continued work. Below, we outline three promising future interventions, presented in increasing order of complexity.

1. Standardise indicators

As a first, low-effort intervention, we recommend establishing a few small sets of standardised indicators for evaluating TFOs. This would help existing funders compare TFOs more accurately, give new funders a better understanding of the impact TFOs create, and give TFOs a clearer understanding of how they are evaluated.

We believe it’s essential that both funders and TFOs are actively involved in developing these indicators, to ensure diverse perspectives are represented and to build buy-in on both sides. More research and conversations are needed to determine the most useful indicators, but some examples mentioned by the funders include (a sketch of how they might combine follows the list):

  • Number of placements: Individuals transitioning into high-impact roles
  • Attribution: The organisation’s contribution to those transitions
  • Time to impact: The lag between intervention and observed career change
  • Impact per placement: A quantified estimate of the total impact created through each transition
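To make this concrete, here is a minimal sketch of what a standardised indicator record could look like. All field names and figures are hypothetical illustrations we made up for this post, not an agreed standard:

```python
# A minimal sketch of a standardised TFO indicator record. All field names
# and figures are hypothetical illustrations, not an agreed standard.
from dataclasses import dataclass

@dataclass
class TalentIndicators:
    org: str
    year: int
    placements: int                 # individuals transitioning into high-impact roles
    attribution_share: float        # the org's estimated contribution to those transitions (0-1)
    median_months_to_impact: float  # lag between intervention and observed career change
    impact_per_placement: float     # estimated impact per transition, in an agreed common unit

report = TalentIndicators(
    org="Example TFO", year=2025,
    placements=12, attribution_share=0.4,
    median_months_to_impact=9.0, impact_per_placement=1.5,
)

# One way funders could compare organisations on a single number:
# placements × attribution share × impact per placement.
adjusted_impact = report.placements * report.attribution_share * report.impact_per_placement
print(f"{report.org}: attribution-adjusted impact = {adjusted_impact:.1f} units")
```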

2. Better understand the needs of TFOs

This pre-study focused on the perspectives of funders. We think there is substantial value in complementing this with insights from TFOs themselves. They may, for example, identify bottlenecks, considerations, or practical challenges that funders have overlooked.

These findings could also help improve coordination and collaboration among TFOs, in addition to informing a funder-facing framework. This might take the form of a shared Monitoring, Evaluation & Learning (MEL) playbook or a resource bank with best practices and templates.

3. Explore the value of an independent evaluation agency in the talent space 

Just as GiveWell is a trusted evaluator in global health and Animal Charity Evaluators in animal welfare, an independent evaluator in the talent space (and potentially in the EA meta space at large) might be desirable. The recommendations from such an agency would not only guide how the more than USD 100 million spent annually in the space is allocated, but also strengthen the credibility of the talent ecosystem.

That said, it's not obvious that such an agency would be worthwhile. One argument against is that historical data isn't the most important decision criterion. Another is that TFOs' theories of change might vary too much in their approaches, making fair comparisons difficult. A third is that it wouldn't affect the funding allocation enough to justify its own costs.

Still, we believe this idea warrants deeper investigation.


We hope these findings spark further discussion among funders, TFOs, and other actors in the ecosystem. If you would be interested in contributing to the next phase of this work, please reach out to Emil or Patrick. While neither of us will have the capacity to continue leading the project, we’d be happy to share insights, material, and connections to help others take it forward.

 

A big thanks to David Moss for the help with the interview outline, and to Devon Fritz and Marieke de Visscher for the support with introductions to funders. 

Comments

Thanks for doing this! 

Could you define TFOs? Based on your backgrounds, I'm guessing you mean community building organisations like EA Sweden, EA Netherlands, etc., and coaching/training/advising organisations like Successif, 80k, Talos, AIM, Tarbell, etc.?

While both of these sets of organisations are ultimately about helping talent make a difference, I think they have quite different theories of change, and therefore require different M&E systems. 

See my proposal below for how I think community building organisations should do things. 

An M&E Framework for EA Community Building Based on Community Capital

The proximate objective: Community capital

EA community building organisations are ultimately aiming for impact. But measuring final impact directly is nearly impossible for community builders. How do you attribute a career transition into AI safety, a crucial research insight, or a newly founded organisation to your intro fellowship or community event?

Instead, the proximate objective should be increasing community capital, defined as:

Community Capital = (Sum of individual career capital) × (Coordination ability)

This formula captures something important about how EA communities actually create value. Career capital - the skills, knowledge, credentials, and connections that enable someone to have impact - matters enormously. But a collection of capable individuals who don't coordinate is far less valuable than a community that can pool knowledge, collaborate on projects, and leverage each other's expertise. The multiplication relationship reflects that coordination acts as a multiplier: high coordination ability means individual career capital gets leveraged far more effectively.

For an EA national group like EA Netherlands, this means success looks like: growing the number of people with relevant career capital, increasing the average career capital per person, and strengthening the community's ability to coordinate effectively. Do this well, and impact should follow (via EA's broader theory that career capital directed at priority problems matters).
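As a toy illustration of the multiplier (all numbers invented for this example): take 50 members with an average career capital of 4 on a 7-point scale. Doubling the coordination score doubles community capital without any growth in individual career capital:

Community Capital = (50 × 4) × 0.5 = 100
Community Capital = (50 × 4) × 1.0 = 200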

Measuring community capital: The annual survey approach

I think this can be best measured with an annual community survey that collects data on both components simultaneously.

Individual career capital can be measured through self-assessment questions:

  • Specific questions about skills, credentials, etc.
  • "How capable do you feel of doing high-impact work in your priority cause area?" (1-7 scale)
  • "How much have your skills/knowledge relevant to impact grown in the past year?"
  • "Are you on a career path you consider high-impact?"

Coordination ability is best measured through network questions borrowed from social capital research:

  • "List up to 10 EA NL members you've had meaningful interaction with in the past year"
  • "Of the people you listed, how many could you collaborate with on a project?"
  • "How many EA NL members do you trust to give you good advice on your work?"

These network questions serve multiple purposes. First, they give you objective data about who's connected to whom, rather than just subjective feelings about connectedness. You can map the actual network structure, identify clusters, and measure density. Second, they differentiate between mere acquaintance and genuine collaboration-readiness - knowing someone versus being able to work with them effectively.
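A minimal sketch of how these name-list answers could be turned into coordination metrics. The field names and toy responses are invented for illustration:

```python
# A minimal sketch of turning name-list survey answers into coordination
# metrics. Field names and the toy responses are invented for illustration.

# Each response: who answered, the people they named, and how many of the
# named people they feel they could collaborate with on a project.
responses = [
    {"respondent": "A", "named": ["B", "C", "D"], "collaborators": 2},
    {"respondent": "B", "named": ["A", "C"], "collaborators": 2},
    {"respondent": "C", "named": ["A"], "collaborators": 1},
]

# Collect observed people and undirected ties (a named tie counts either way).
people = set()
ties = set()
for r in responses:
    people.add(r["respondent"])
    for other in r["named"]:
        people.add(other)
        ties.add(frozenset((r["respondent"], other)))

# Network density: observed ties over possible ties among observed people.
n = len(people)
density = len(ties) / (n * (n - 1) / 2)

# Collaboration-readiness: share of named ties the respondent could work with.
collab_ready = sum(r["collaborators"] for r in responses) / sum(
    len(r["named"]) for r in responses
)

print(f"{n} people, density {density:.2f}, collaboration-ready share {collab_ready:.2f}")
```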

Estimating community size

Apparently, we could then estimate total community size using capture-recapture methods. If survey respondents collectively name 150 unique people, but only 60 of those actually took the survey, the overlap pattern tells you what proportion of the community you're reaching. This lets you estimate:

  • Total active community size (N)
  • The engaged core (people who both responded and got named multiple times)
  • The periphery (people named but who didn't respond)

Combined with career capital measures, you now have all three components of the formula:

  • Sum of career capital ≈ N × average career capital from survey
  • Coordination ability ≈ function of network density, trust levels, collaboration-readiness
  • Community capital ≈ the product of these

Programme-level indicators

The annual survey tells you whether you're winning overall, but you need more frequent feedback on whether specific programmes are working. Programme-level indicators provide this:

  • Fellowships: completion rates, participant satisfaction, post-programme surveys
  • Events: attendance, quality ratings, new connections formed
  • Organiser support: organiser activity levels, events run
  • Digital infrastructure: Chat engagement, information sharing
  • Marcom: brand awareness/recall/sentiment, conversion rates, etc

These don't measure community capital directly, but they're leading indicators that tell you if you're on track between annual surveys.

Attributing changes to your programmes

Measuring community capital is one thing; showing that your programmes actually contribute to it is another. Three complementary approaches:

1. Cohort tracking: Survey fellowship participants before and after the programme. Survey attendees of major events like EAGx before and after. Track how their career capital and network connections change. This gives you programme-specific deltas, though you can't fully prove causation without control groups.

2. Attribution questions in the annual survey: Simply ask people which EAN programmes most increased their career capital or helped them build connections. This relies on self-reported attribution, which isn't perfect, but people generally have decent intuitions about what helped them.

3. Qualitative contribution analysis: Interview a sample of community members annually and ask them to tell the story of how they became more connected or capable. Code their responses for whether EAN programmes feature in their causal narratives. This captures unexpected pathways and avoids leading them toward giving you credit. We're experimenting with the QUIP methodology at the moment.

Realistically, you'd use all three: cohort tracking for major programmes, attribution questions in the annual survey, and some qualitative interviews.
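For the cohort-tracking piece, a minimal sketch of the pre/post delta computation (participant IDs and scores are invented):

```python
# A minimal sketch of cohort tracking: self-assessed career capital (1-7)
# before and after a fellowship. Participant IDs and scores are invented.
before = {"p1": 3, "p2": 4, "p3": 2}
after = {"p1": 5, "p2": 4, "p3": 4}

deltas = {p: after[p] - before[p] for p in before}
mean_delta = sum(deltas.values()) / len(deltas)

print(f"per-participant deltas: {deltas}")
print(f"mean change in career capital: {mean_delta:+.2f}")  # -> +1.33
```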

Connecting community capital to impact

This is the hardest link in the chain. You can measure community capital and show your programmes contribute to it, but does community capital actually produce impact?

The honest answer: you can't measure final impact (lives saved, existential risk reduced) directly. You're relying on a theory of change with two key assumptions:

  1. EA's general theory that career capital directed at priority problems leads to impact
  2. Your theory that coordination multiplies individual effectiveness

What you can do is track intermediate outcomes that validate this theory:

  • Career transitions
  • Collaborative projects launched
  • Grants secured from EA funders
  • Research/writing produced
  • Organisations started

Then correlate these with community capital levels: do people with higher career capital and better networks achieve these outcomes more frequently? Do collaborative projects require the coordination infrastructure you've built?

What this system gives you

This M&E approach offers several advantages:

Practical: One annual survey gives you the core metrics, supplemented by programme-level data you're probably already collecting.

Actionable: The formula highlights where to invest. If career capital is low but coordination is high, focus on upskilling and recruitment. If career capital is high but coordination is low, invest in events and infrastructure.

Honest about limitations: It doesn't pretend you can measure final impact. Instead, it measures the proximate objective you actually control, while acknowledging the remaining uncertainty.

Theory-driven: It's based on an explicit model of how communities create value, not just a collection of metrics. This makes it easier to explain to funders and board members why you're measuring what you're measuring. 

  1. ^ Hot take: right now I think most regions have high coordination but low career capital, but unfortunately are spending waaaaaay more on coordination.

Thanks for your comment, James. I would define a TFO as any organisation whose explicit goal is to help people increase the impact they have through their careers. So yes, both meta EA groups and coaching and training organisations are included. I’ve now clarified this in the post too.

While I agree the theories of change between different interventions and organisations likely differ substantially, I think a set of standardised outcome-related indicators is still both relevant and necessary. Just as different organisations' interventions in global health differ significantly, their effects can still be estimated in comparable units like QALYs.

With that said, some organisations (e.g., national EA groups) will have additional outcome indicators that aren’t directly tied to talent pipelines or career transitions, just as your M&E framework illustrates. 

Thanks for clarifying! 

But at what level should that standardised set of outcome-related indicators operate? 

As you mention, we already have indicators for ultimate impact (QALYs, etc). And the indicators at the opposite end of the spectrum are pretty simple (completion rates, NPS, etc.). 

It feels like you're looking for indicators that occupy the space in between? Something like 80k's old DIPY metric or AAC's ICAP?

I thiiiiink both organisations tried these metrics and then discontinued them because they weren't so useful? 
