
Staff from CEA, GWWC, and EA Funds reviewed drafts of this report and provided helpful feedback. I’m particularly grateful for Max Dalton’s thoughtful engagement and openness to discussing CEA’s mistakes. HollyK provided extraordinarily beneficial editing and critiques; I highly recommend other EAs take advantage of her offer to provide editing and feedback. I also appreciate the contributions of other anonymous reviewers. In all cases, I remained anonymous to those providing feedback. Any mistakes are mine. 

 

Introduction

In this submission for the Red Teaming contest, I take a detailed look at CEA’s community building work over the years. One of my major findings is that program evaluations, especially public ones, have historically been rare. I offer evidence that this has resulted in problems persisting longer than they needed to and lessons not being learned as widely or quickly as they could have been.

By sharing my in-depth evaluation publicly, I hope to improve the quality of information underlying current and future community building projects. I also offer specific suggestions for CEA and other community members based on my findings. With a better understanding of problematic patterns that have repeated, our community can identify ways to break those patterns and ideally create new, positive patterns to take their place.

In assessing CEA’s work, I evaluated each of its community building projects individually. To the best of my knowledge this is a significantly more thorough evaluation than has been done to date. For each project, I’ve provided a brief background and a list of problems. In the spirit of Red Teaming, my focus is on the problems. CEA’s community building work has of course had many significant benefits as well. However, those benefits are generally better understood by the broader community and offer fewer learning opportunities.

While conducting this analysis, I’ve tried to follow a suggestion found on the “Mistakes” page of CEA’s website (which includes a non-comprehensive list of mistakes the organization has made with a focus on those that have impacted outside stakeholders): “When you evaluate us as an organization, we recommend using this page, but also looking directly at what we've produced, rather than just taking our word for things.”  Many, but far from all, of the problems I discuss are mentioned on that page; in those cases I often quote CEA’s characterization of the problems. Based on the evidence I’ve collected, my impression is that the Mistakes page (and other places where CEA discusses its mistakes) generally understates the number, degree, and duration of CEA’s mistakes (so much so that I suggest the Mistakes page be radically overhauled or eliminated completely).

I’ve listed CEA’s projects in chronological order, starting with the oldest projects. While this means that some projects that no longer exist (and were run by long-departed management teams) are discussed before projects that are currently impacting the EA community, this approach helps illustrate how CEA’s community building work has, and hasn’t, evolved over time. I argue there has been a pattern of lessons that could have been learned from early projects persisting longer than they should have (even if the lessons seem to have eventually been learned in many cases). The chronological structure helps illustrate this point. Readers are of course welcome to use the table of contents to skip to the projects they want to learn about (or to focus on the Synthesis of Findings and Suggestions sections rather than the lengthy program evaluations).

I found consistent patterns while conducting my “minimal trust investigations” of CEA’s community building work, which I elaborate on later. In short:

  • CEA has regularly lacked the staff needed to execute its ambitious goals, leading to missed deadlines and underdelivering on commitments.
  • Meaningful project evaluations (especially public ones) have been rare, due to lack of capacity, lack of prioritization, and often a lack of necessary data.
  • Without meaningful evaluation, mistakes have been repeated across time and projects.
  • When discussing its mistakes publicly, CEA has typically understated their frequency, degree, and duration; more generally CEA has lacked transparency around its community building programs.
  • In many cases, the EA community has borne negative externalities resulting from these mistakes.
  • CEA’s current management team (in place since 2019) has made significant progress in reducing problems. However, current management continues to deprioritize public program evaluations, raising questions of whether these improvements are sustainable and whether the lessons that led to the improvements have been learned beyond CEA.

 

In interpreting these findings, and my broader analysis, I hope readers will bear in mind the following caveats.

  • Many/most of the problems discussed did not take place on the watch of current management. CEA has undergone significant management turnover, with 5 Executive Directors/CEOs between 2016 and 2019. Over the last 3.5 years, CEA has had stable leadership.
  • It is natural for an organization, especially an ambitious and maturing one, to exhibit problems. I don’t mean to be critical by pointing out that problems existed, though I do think criticism is warranted around the fact that those problems weren’t learned from as much or as quickly as possible.
  • While I generally believe that CEA has underutilized public program evaluations (historically and currently), I commend CEA for its support for the Red Team contest and the open and critical discourse the contest has encouraged.
  • My analysis is largely limited to public information (typically the EA Forum and CEA’s website), which is a shortcoming. Valuable information that I did not have access to includes (but is not limited to) internal CEA data or program evaluations, private discussions, and Slack channels.
  • I had to make judgment calls on what to include in this analysis and which projects constitute “community building”. I tried to strike a balance between including projects that provide learning opportunities and not making this analysis longer than it already is.

My hope is that this analysis leads to stronger EA Community Building projects and a stronger EA community. As EA attracts more people, more funders, and more attention, synthesizing and implementing past lessons is more important than ever.

 

Synthesis of findings

In this section, I’ll summarize the evidence supporting my assertion that problems in CEA’s community building work (often caused by lack of capacity) persisted longer than they needed to, in large part due to insufficient program evaluation. To do so, I’ll provide a case study on how this pattern played out across several specific projects. Then, I’ll offer a more comprehensive table synthesizing how the problems manifested across all the projects I looked at, the extent to which the problems are ongoing, and suggestions for addressing problems.

Case Study

EA Ventures (2016), EA Grants (2017-2020), and Community Building Grants (2018-present) collectively provide an excellent demonstration of the patterns I’ve observed. Due to their similarities (each was meant to inject funding into the EA community and EA Grants was explicitly framed as a successor to EAV) and timing (roughly one year in between each project launch), one might have expected lessons to have been incorporated over time. Instead, problems persisted.

EAV failed to distribute meaningful sums into the community, with lack of capacity appearing to play a role. By the time EAV was shuttered in late 2016, CEA had identified spreading its staff too thin as one of its “more-significant mistakes.” Despite multiple community requests, CEA never conducted a post-mortem on EAV. As such, it is not clear whether CEA ever identified EAV’s transparency problems, such as not providing the community with accurate information about the project’s status. The efficacy of the few grants EAV did make was never evaluated.

These problems persisted into EA Grants and CBG.

  • Both projects were significantly impacted by a lack of capacity
  • Each project granted much less money into the community than intended (~$2 million granted in 2018-19 vs. ~$6 million intended). Aggressive targets were announced for 2019 despite 2018 grantmaking falling well short of targets.
  • Each provided a series of unrealistic timelines for when applications would be open (starting in early 2018 for EA Grants and extending through early 2021 for CBG).
  • Each program missed publicly communicated timelines about performing a program evaluation. Neither program has published a program evaluation capturing its largest problems, nor published an assessment of its grantmaking.
  • When publicly discussing the mistakes of both projects, CEA has omitted some of their largest problems and understated the mistakes that were mentioned.
  • The problems both projects experienced (particularly the shortfall in grantmaking and the missed timelines) negatively impacted the broader community.

CEA’s current management team has made changes to reduce problematic aspects of these projects. EA Grants was shuttered in 2020, and the CBG program’s scope was radically reduced in mid-2021, easing capacity issues and making it easier to issue accurate communications.

However, earlier improvements were clearly possible. If EA Grants and CBG had properly incorporated lessons from EAV, or even their own early years, significant problems could have been avoided.

 

Patterns observed across projects

The following table contains data on which of CEA’s programs fit the patterns I’ve described, the current status of those patterns, and suggestions for improvements. It shows that while early projects exhibited serious problems, in recent years CEA has apparently increased its staff capacity, focused more on fewer projects, and more regularly met its public commitments (though it’s still not perfect). Less progress has been made on public evaluations of its programs or full acknowledgement of its mistakes.

Readers can click links for more details on how these patterns manifested in specific projects and my suggestions for going forward. 

 

| Pattern | Evidence of issue | Current status | Related suggestions |
| --- | --- | --- | --- |
| CEA has regularly lacked the staff needed to execute its ambitious goals, leading to missed deadlines and underdelivering on commitments. | Lack of capacity led to problems in: GWWC, EA Grants, the online Groups platform, EA Ventures, EAGx organizer support, EA Funds, and the CBG program. | Meaningful progress has been made since 2019. Significant CEA staff growth, greater focus, and spinoffs of GWWC and EA Funds (and an upcoming Ops spinoff) seem to have helped. Lack of capacity still seems to affect group work, and EA Funds’ work post-spinoff from CEA. | Spin off operations; communicate details of CEA’s improvements; place more value on experience; embrace more redundancy in community building; use targeted pilot programs; engage with governance questions; other community builders should prioritize areas neglected by CEA. |
| Meaningful project evaluations (especially public ones) have been rare, due to lack of capacity, lack of prioritization, and often a lack of necessary data. | Program evaluations not published for: EA Ventures, the Pareto Fellowship, GWWC (last impact report published in 2014), the CBG program, and group support work. Lack of public grant evaluation for: EA Ventures, EA Grants, EA Funds, and CBG. | CEA appears to have made progress in conducting internal evaluations. Current management prioritizes accountability to its board and major funders, so public evaluations remain scarce. | Publish internal EA Grants evaluation; prioritize new GWWC impact report; run experimental group support evaluation; invest in information architecture (esp. a grant database); publish summary of group support learnings; hire dedicated evaluation staff and publish evaluations; invest in community-wide metrics. |
| Without meaningful evaluation, mistakes have been repeated across time and projects. | Illustrated by the breadth across projects and duration of the other high-level problems. | Under CEA’s current management, problems seem less persistent and a variety of positive steps have been taken to address longstanding issues. Some problems have recurred under current management, particularly around group support and cause representation (though improvements to the latter should be forthcoming). | Hire dedicated evaluation staff and publish evaluations; have a meaningful Mistakes page; engage with governance questions. |
| When discussing its mistakes publicly, CEA has typically understated their frequency, degree, and duration; more generally, CEA has lacked transparency around its community building programs. | Publicly understated problems: GWWC, EA Global marketing, the Pareto Fellowship, group support, EA Grants, and EA Funds. General lack of transparency: communication of CEA’s strategy re: cause representation, the Pareto Fellowship, EA Ventures, EA Grants, and EA Funds. | CEA’s Mistakes page, while not meant to be comprehensive, does not include some of CEA’s most significant mistakes and continues to understate some of the listed problems. CEA seems to have improved transparency to its board and major funders, but much less progress has been made on transparency to the community. | Engage with governance questions; create a public dashboard of external commitments; have a meaningful Mistakes page; explicitly and accurately communicate CEA’s strategy; invest in information infrastructure. |
| In many cases, the EA community has borne negative externalities resulting from these mistakes. | Negative impact on the community from: management of EA.org, cause representation in EA content, EA Ventures, the CBG program, the online group platform, EA Grants, and Community Health. | Past mistakes may still be having flow-through impacts on the community. While mistakes have been less frequent over the last few years, those that remain (most significantly around cause representation) may also negatively impact the community. | Place greater emphasis on experience; embrace redundancy in community building; other community builders should prioritize areas neglected by CEA; publish learnings from group support work; engage with governance questions; use targeted pilot programs. |
| CEA’s current management team (in place since 2019) has made significant progress in reducing problems. | Evidence of improvements: reduced frequency/recurrence of problems, significant CEA staff growth, and CEA freeing capacity/focus by spinning off projects. | Current management is not prioritizing public program evaluations, raising questions about whether the improvements of the last few years are sustainable and whether the lessons that led to them have been learned beyond CEA. | Invest in community-wide metrics; embrace redundancy in community building; invest in information infrastructure; have a meaningful Mistakes page; publish learnings from group support work; communicate details of CEA’s improvements. |

 

In the following section, I elaborate on my suggestions for addressing the problematic patterns I’ve observed. Some suggestions I propose are for CEA specifically, while others are intended for other parts of the EA community or even the community as a whole. In all cases, my intention is to promote a stronger and more effective EA community.

Suggestions        

Note

As its title indicates, this section contains suggestions. While for the sake of brevity I may write that certain parties “should” do something, it would be more accurate to preface each suggestion with “Based on my analysis, I believe…” My goal is to offer ideas, not imperatives.

 

Suggestions for CEA

CEA should hire dedicated evaluation staff and prioritize sharing evaluations publicly

CEA’s program evaluations could be significantly improved with dedicated staff and a commitment to sharing evaluations publicly.

CEA has routinely failed to evaluate its community building projects or significantly delayed evaluations relative to timelines that had been shared publicly (this applies to Pareto Fellowship, EA Ventures, EA Grants, CBG, GWWC, and EA Funds). When CEA has offered an explanation, it has typically been that the evaluation took longer than expected, there was no available staff to work on the evaluation, or that other work had been prioritized instead.

A simple way to facilitate internal evaluation would be to hire dedicated staff (ideally a team rather than an individual) to work on Metrics, Evaluation, and Learning (MEL). The MEL lead role should be framed as a long-term position to promote stability.

Having staff focused on MEL would break the pattern of overwhelmed people trying to juggle both program management and evaluation of that program. It would also allow for greater specialization, more continuity and consistency in evaluation techniques, and the development of new projects in ways that facilitate subsequent evaluation. A dedicated MEL function would also be consistent with the ethos of EA: one of the main ideas listed on effectivealtruism.org is the notion that “We should evaluate the work that charities do, and value transparency and good evidence.”

CEA’s leadership tells me that they’ve been doing some internal program evaluations, but prioritize transparency with funders and board members rather than sharing evaluations publicly.[1] This is one of my major cruxes with CEA. I’d hope that dedicated MEL staff would encourage CEA to share more evaluations publicly, which I view as critical for three reasons.

First, public evaluations promote learning. They provide community members with information to update their worldviews, which in turn allows them to operate more effectively. They can also help CEA learn, since community members may generate new insights if given access to data. (For example, CEA tells me that my analysis, despite using only public information, has uncovered issues they weren’t aware of.)

Second, public evaluations would promote better governance and accountability. As an organization that explicitly performs functions on behalf of the EA community, CEA’s accountability should not be limited to funders and board members. CEA should provide the broader community with the information needed to assess whether CEA is effectively executing the projects it runs on the community’s behalf.

Third, CEA is highly influential in the EA community. If CEA deprioritizes public evaluations, this behavior could become embedded in EA culture. That would remove valuable feedback loops from the community and raise concerns of hypocrisy since EAs encourage evaluations of other nonprofits.

 

CEA should publish a post describing the process and benefits of its expansion and professionalization

Other organizations and community members could learn valuable lessons from CEA’s progress in growing and professionalizing its staff.

My analysis shows that CEA has made significant progress in reducing the problems in its community building work. Much of this progress appears attributable to stability in the current management team and increased staff capacity and professionalization (including better board oversight). Many of CEA’s roles are now filled by staff with meaningful experience in similar roles outside of CEA, which has not always been the case.

While my report describes problems that have likely been more frequent and severe than commonly understood, the flip side of that coin is that alleviating those problems has been more beneficial than commonly understood. By publishing details of the benefits and process of its improvements, CEA could help other organizations avoid unnecessary pitfalls and leverage CEA’s experience. Helpful topics could include warning signs that professionalization is needed, tips for finding experienced candidates[2], advice on which aspects of professionalization are most important (e.g. leadership stability vs. experienced staff vs. engaged board), and which roles are most important to professionalize. The EA community has no shortage of young but growing organizations, and the whole community will benefit if they can develop by learning from, rather than repeating, the mistakes of others.

 

CEA should clearly and explicitly communicate its strategy

Someone engaging with CEA’s Strategy page (or other forums through which CEA communicates its strategy) should come away with a clear understanding of what CEA is, and isn’t, prioritizing.

CEA’s biggest historical problems in this area have arisen when CEA has managed community resources (e.g. effectivealtruism.org) and favored causes supported by CEA leadership at the expense of causes favored by the broader community. I’m optimistic CEA’s policies (and transparency around those policies) will improve going forward: CEA has shared a draft of a potential public post about this with me (which “was motivated (at least in timing)” by my comments on this topic). If CEA publishes that post and acts consistently with it, I would interpret that as a significant improvement on the status quo. (I have not listed this suggestion as “in progress” since it is unclear whether CEA will publish this post.)

 

CEA should publish what it has learned about group support work and invest in structured evaluation

Funders, group leaders, group members, and EA entrepreneurs would all benefit from learning from CEA’s extensive experience in group support, and would learn even more from a more rigorous assessment of group work going forward.

CEA should synthesize and publicly share the data it has collected and the lessons it has learned from this work. This should be done in one or more standalone posts (past data has been shared in posts covering a variety of subjects, making it hard to find). As part of this work, CEA should clarify what responsibilities it is taking on regarding group support (this list has undergone routine and significant changes over the years) and what opportunities it sees for others to perform valuable group work.

While CEA tells me it has shared its lessons with relevant stakeholders, I don’t believe these efforts have been sufficient. As a telling example, the head of One for the World (a group-based community building organization) has been quite vocal about wanting more information about CEA’s group support work. And if public data were easily accessible, the range of people who might engage in group work would likely be much larger than the people CEA currently shares data with.

Synthesizing and distributing lessons would be a good start, but more rigorous analysis going forward is sorely needed. Excellent ideas for how to conduct experimental or quasi-experimental evaluation have already been circulated and generated positive feedback. Now these ideas need buy-in from key stakeholders like CEA, and to be funded and executed.

 

CEA should have a meaningful mistakes page, or no mistakes page

CEA’s Mistakes page should give readers an accurate understanding of the nature, magnitude, and duration of its major mistakes.

When the Red Teaming contest was launched, Joshua Monrad noted:  

“In the absence of action, critiques don't do much. In fact, they can be worse than nothing, insofar as they create an appearance of receptiveness to criticism despite no actual action being taken. Indeed, when doing things like writing this post or hosting sessions on critiques at EA conferences, I am sometimes concerned that I could contribute to an impression that things are happening where they aren't.”

My impression of CEA’s Mistakes page, which I’ve referenced numerous times, is that it has been “worse than nothing.”[3] It has routinely omitted major problems (such as the failure of EA Grants, CBG, and EA Ventures to grant the amounts intended), significantly downplayed the problems that are discussed (such as the impact of under-resourcing GWWC and missed commitments around EA Grants and CBG), and regularly suggested problems have been resolved when that has not been the case (such as originally claiming that running too many projects was only a problem through 2016, and that EA Funds not providing regular updates and not providing accurate financial data were only problems through 2019). If CEA is going to have a Mistakes page, it should accurately reflect the organization’s mistakes. If CEA is unable or unwilling to do so, it would be better to remove the page entirely.

 

CEA should consider creating a public dashboard of its commitments to others

A public record of CEA’s external commitments would be a valuable accountability mechanism.

Missed deadlines and commitments have been a recurring problem in CEA’s community building work, often creating difficulties for people, organizations, and funders trying to make plans around CEA’s activities. The prevalence of these missed commitments suggests a lack of accountability. CEA’s understatement of those missed commitments (such as in the EA Grants and CBG programs) suggests it is sometimes unaware of its commitments.

A public dashboard listing CEA’s commitments to the community could help in both regards. It would help the community, and CEA’s management, hold CEA accountable. Simply creating a dashboard wouldn’t ensure that every commitment is kept, but it would encourage CEA to either keep its commitments or communicate as early as possible when they won’t be met.

 

CEA should consider using targeted pilot programs

Before running projects that serve the entire community, CEA should consider piloting them with narrow groups.

CEA faces a balancing act in its community building work. On one hand, it seems natural for that work to support the entire community, or at least for the entire community to be eligible to receive support. On the other hand, CEA may believe that certain subsets of the community would be particularly easy, or high value, to support.

Based on my analysis, I think CEA should strongly consider piloting new community building programs with narrow populations it expects to have the highest benefits and/or lowest costs. I’m ambivalent about this suggestion, as I think it's very valuable for CEA’s services to be widely accessible. However, the track records of the EA Grants and Community Building Grants programs show the merits of a narrow pilot approach.

In each case, the entire community was originally eligible for the program and CEA attempted to sustain this arrangement. But open eligibility strained CEA’s capacity, and both programs ended up narrowing their scopes significantly, via a referral round for EA Grants and a limited list of priority locations for the CBG program. If these programs had been piloted with the reduced scope they ultimately arrived at, applicant and staff time could have been used more efficiently. And if/when CEA subsequently decided to expand eligibility, it would have done so from a more informed place.

 

CEA should publish its internal evaluations of EA Grants

Other grantmakers (including individual donors) could learn from the assessments of EA Grants’ grantmaking that CEA internally produced but has not shared.

Nicole Ross (former head of EA Grants) conducted an initial grant review. Publishing a summary of her findings would dramatically improve the community’s knowledge about the program, and would likely provide valuable lessons for other grantmakers. The limited number of grants made by EA Grants relative to other grantmakers (and Ross’ prior work) should make this a tractable exercise, and one that could inform other efforts at retrospective grant evaluations (a topic with significant community interest and growing importance as EA has access to more funding).

While I would love to see an in-depth analysis published, even simple information would be quite informative. How many grants (in terms of number of grants and dollars granted) fell into Ross’ high-level categories (“quite exciting”, “promising”, “lacked the information I needed to make an impact judgment”, and “raised some concerns”)? How were grants split across cause areas? Was there any relationship between the initial assessment and cause area? Did grants made through the referral round seem more, less, or similarly promising compared to grants made through other rounds?

Ideally, a grant assessment would include an analysis of whether the operational mechanics of EA Grants impacted grant quality. Two areas that seem particularly important in this regard are:

 

Suggestions for the EA Community

The EA community should seriously engage with governance questions

The EA community should prioritize explicit conversations about how governance should work in EA.

First, an explicit conversation would be valuable about the roles CEA executes on behalf of the broader EA community (managing effectivealtruism.org seems like an obvious example) and about the responsibilities and accountability CEA should have in these cases (e.g. should the community have representation on CEA’s board?). This has been attempted in the past, but past attempts have not led to a lasting and transparent solution (in part due to turnover in CEA’s leadership).

Second, an explicit and transparent conversation about how governance should work in EA would be immensely valuable. What (if any) input should the community have on organizational priorities? What constitutes the community for these purposes? What mechanisms should be in place to promote good behavior amongst individuals and organizations? How should accountability be promoted? With its interest in “going meta”, the EA community should be well suited to engage with these questions.

The EA community would be well served by a governance model significantly better resourced and more transparent than the current one. Nonprofits are traditionally governed by boards, but boards might not be the best model for EA. As Holden Karnofsky observes, “boards are weird,” and the current board structures appear under-resourced relative to their oversight responsibilities.

For example, CEA UK’s board has six members, each with significant other responsibilities. The board’s responsibilities include not only overseeing CEA’s wide and growing portfolio of programs, but also overseeing the organizations that are legally a part of CEA (e.g. 80,000 Hours, Giving What We Can, EA Funds, and the Forethought Foundation). The planned spin-off of CEA’s operations department, which supports these disparate organizations, provides an excellent opportunity to rethink governance structures.

 

EAs should invest in community-wide metrics

The EA community, which places a high value on evidence and data, should invest more in self-measurement.

In my opinion, the best model would be for one organization to have explicit responsibility, and commensurate resources, for developing and measuring community-wide metrics. Ideally this effort would aggregate disparate data sources which have typically been owned by different parties, looked at individually[6], and which have their own weaknesses (e.g. the EA Survey is a rich and valuable data set but is prone to selection bias).

Community-wide metrics should seek to answer critical questions (and raise new ones) such as: How big is the EA community? What factors cause people to join the community? How many people drop out of EA? What drives that attrition? What causes people to stay engaged? What can we learn from people who are value-aligned with EA but are not actively involved in the community? Attempts have been made to answer some of these questions, but they have generally been limited by a lack of data sharing across organizations (and possibly by a lack of analytical capacity).

While CEA is obviously an important source of data (e.g. web analytics on effectivealtruism.org), I doubt it is the right organization to own these efforts. Rethink Priorities, which has experience investigating important community questions, would be a natural candidate; other teams (including newly formed ones) should also be considered.

 

The EA community should embrace humility

The prevalence of mistakes I’ve observed underscores the value of adopting a humble attitude.

My analysis clearly shows that EA projects can be difficult to execute well. Having a smart, dedicated, value-aligned team with impeccable academic credentials does not guarantee success. CEA’s track record of taking on too many projects at a time, with negative effects for the rest of the community, is an example of the problems that can arise from overconfidence.

Two specific areas where I think EAs could benefit from greater humility are: 

  • Greater willingness to acknowledge the scope and impact of mistakes the EA community has made. I found it both telling and worrisome that a discussion of a perceived slowdown in EA’s growth largely ignored the significant role mistakes played. This strikes me as a very dangerous blind spot.
  • Embracing public project evaluation and feedback loops more generally. Evaluating projects or grants comes with a cost, but I think the EA community is too quick to simply assume its own work is operating as intended. Without feedback loops, it’s hard to perform any kind of work well. I worry that many EAs undervalue and underutilize these feedback loops, and that as EA shifts towards longtermism those loops will become longer and rarer. I’m also concerned that EA will lose credibility if it promotes the evaluation of other charities while applying a different standard to EA organizations.

 

EA funders and entrepreneurs should prioritize areas unserved by CEA’s community building work

CEA has limited capacity and focus, making it imperative that other actors in the community complement the areas CEA is prioritizing.[7]

An example of this happening in practice is the EA Infrastructure Fund accepting applications for paid group leaders outside CEA’s narrow priority locations, though without up-to-date grant reports it isn’t clear to what extent the EAIF is truly filling this gap.

To my mind, the area most overlooked by existing community building work is mid/late/post-career EAs, i.e. everyone except for students and early career professionals. CEA is explicitly not focusing on this market, and 80k’s work is also oriented toward younger EAs. This isn’t the only area where the EA community is “thin” but I’d argue it is easily the most important: literally every EA already is, or will be in the future, older than the age groups currently being prioritized. Ignoring these demographics is a retention problem waiting to happen. 

I would love to see the EAIF and other funders circulate a request for proposals for community building projects serving older EAs (and/or other underserved areas) and dedicate a meaningful amount of funding for those areas. This would have an added benefit of increasing the number of EAs with significant work experience.

 

EAs should invest in publicly available information infrastructure/architecture

Information that would lead to a better understanding of the EA community is often difficult (or impossible) for community members to access.

A particularly valuable piece of infrastructure would be a grants database facilitating analysis across grantmakers and grantmaking programs. Various community efforts (e.g. here and here) have made some progress in aggregating grant data, aided by the excellent grant databases Open Phil and FTX already provide. Other grantmaking programs past and present (e.g. EA Funds,  EA Grants, and Community Building Grants) do not offer public data amenable to easy analysis. For example, EA Funds’ grantmaking is listed on the relevant webpages, but this information is both out of date and formatted in a way that makes analysis impossible without extensive data entry. (My understanding is that EA Funds will soon provide a grants database which should be a significant improvement on the status quo).

A unified grants database, or separate but compatible databases across grantmakers, would provide a better understanding of where resources are (and aren’t) being allocated. It would also make it easier to assess whether grants (or grantmaking programs[8]) are serving their intended purpose. In analyzing CEA’s work, I found little in the way of post-grant evaluation despite consistent community interest (“retrospective grant evaluations” was the most upvoted submission in the Future Fund’s project ideas competition).
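To make the idea of “separate but compatible databases” slightly more concrete, here is a minimal sketch (in Python, purely illustrative) of the kind of shared record format a grantmaker could publish alongside each grant round. Every field name and function below is my own assumption rather than a format any EA grantmaker has adopted or proposed.

```python
from dataclasses import dataclass, asdict, fields
from datetime import date
import csv

@dataclass
class GrantRecord:
    # One row in a hypothetical shared grants format; all field names are illustrative.
    grantmaker: str           # e.g. "EA Funds"
    program: str              # e.g. "EA Infrastructure Fund"
    recipient: str
    amount_usd: float
    grant_date: date
    cause_area: str           # e.g. "Global health", "Biosecurity"
    purpose: str              # short free-text description of the grant
    evaluation_url: str = ""  # link to any retrospective assessment, if one exists

def export_grants_csv(grants: list[GrantRecord], path: str) -> None:
    # Write grants to a CSV that any analyst could merge with other grantmakers' exports.
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(GrantRecord)])
        writer.writeheader()
        for grant in grants:
            row = asdict(grant)
            row["grant_date"] = grant.grant_date.isoformat()
            writer.writerow(row)
```

Even a lightweight convention along these lines, published as a downloadable file rather than a formatted webpage, would remove the need for the “extensive data entry” described above and make cross-grantmaker analysis and retrospective evaluation far easier.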

Another valuable data resource would be exportable and searchable databases with information on groups and individuals. This would make it easy to learn how many groups exist, where they are located, how many members they have, and which have paid organizers. It would also provide a better understanding of how many EAs are involved in the community, where they are located, and how they engage with EA.[9]

 

EA should embrace more redundancy in community building efforts

EA has developed to a point where having multiple people/organizations working in the same area is a feature, not a bug.

In this report I’ve identified numerous examples of community building projects failing to meet their goals. Just looking at CEA’s grantmaking projects, EA Ventures, EA Grants, EA Funds, and CBGs have all at times delivered less funding than expected to the EA community.

When multiple organizations provide similar functions, it’s not too problematic if one fails to deliver. Historically, though, the EA community has favored specialization over competition. Perhaps this was appropriate, given that EA was young (and often underfunded). But with a more mature EA community, more funding in place, and higher stakes if a critical function isn’t being met, EAs should on the margins be open to more competition/redundancy. Specialization and competition both have their merits, but EA is currently too reliant on the former.

A related concern is that when multiple candidates could work on a project, EA funders are too quick to assume that CEA is the best option; this seems to be one reason why the EA Hub was recently shuttered.[10] But a better understanding of CEA’s spotty track record in actually executing the community building projects it undertakes (which I hope this report encourages) would lead to different conclusions.

 

EA organizations and individuals should on the margins place greater value on experience and ability to execute

My research suggests EA organizations and individuals undervalue relevant job experience.

I’ve collected evidence of numerous problems, many of which persisted long after they were originally identified. These problems likely had a variety of causes, yet I’m confident none can be attributed to CEA staff being insufficiently value-aligned. In many cases, it seems quite reasonable to think that if CEA staff had more domain-specific expertise (such as project management experience to help keep the number and scope of projects realistic), many of these problems could have been avoided or mitigated. I’ve also observed major improvements when CEA landed on a stable management team and started to professionalize in earnest, particularly in its operations work.

This is a complex issue (this excellent piece, which I largely agree with, does a good job of exploring some of the nuances). But I think on the margins EA organizations generally undervalue domain-specific experience, and EA job applicants are generally too averse to acquiring such experience at jobs outside of EA (and overly incentivized to pursue direct work).

 

Suggestions currently being implemented

 

CEA should consider spinning off its operations team (in progress)

Spinning off its operations team (currently in progress) will allow CEA to focus on its core priorities.

Historically, one of CEA’s biggest problems has been trying to do too many things and spreading the organization too thin. This made me concerned that by taking on operations management for a variety of disparate organizations[11], CEA was repeating past mistakes. In early drafts, I suggested CEA spin off its sizable (20 FTE) operations team.

When CEA reviewed these drafts, I was very pleased to learn that this spin-off was already in progress. CEA will announce additional details in the future. I view this as a positive sign that past mistakes are being learned from, which I don’t believe has always been the case. As mentioned elsewhere, I hope CEA uses the spin-off as an opportunity to think about optimal governance models for the affected organizations. 

 

GWWC should prioritize publishing an updated impact report (in progress)

GWWC’s last impact report is extremely out of date, and producing a new one will provide useful information for understanding GWWC and the EA community as a whole.

GWWC has told me they are currently working on an updated report. This will provide helpful information on GWWC’s impact as an organization, on the value of a new pledger (a useful data point for assessing the value of a new EA), and on attrition rates among past pledgers.

 

Project Evaluations

 

Giving What We Can

Background

Giving What We Can is a worldwide community of effective givers founded in 2009. GWWC promotes a lifetime 10% giving pledge, as well as a “Try Giving” program for those who find the pledge too large a commitment. The GWWC community has over 7,000 members worldwide.

In July 2020, Luke Freeman was hired to lead GWWC, and in December 2020 CEA announced that GWWC would “operate independently of CEA” though “CEA provides operational support and they are legally part of the same entity.”

 

Understaffing and underinvesting in GWWC

CEA significantly under-invested in GWWC, causing the EA community to be smaller than it could have been and effective charities to receive less money and achieve less impact.

CEA’s Mistakes page acknowledges under-investing in GWWC, but suggests that only minor problems resulted from it. It ignores the more substantive problem: that a lack of capacity likely contributed to slower growth than GWWC would have otherwise experienced, with knock-on implications for the rest of the EA community.

 

 

Lack of transparency into cause of slowdown in pledge takers

CEA has offered conflicting explanations for a slowdown in GWWC pledge taking.

A September 2018 post by Rob Wiblin found CEA’s deprioritization of GWWC slowed GWWC’s membership growth by 25-70% depending on what one assumes about the previous trajectory. The head of GWWC at the time commented on that post, and did not object to blaming GWWC’s deprioritization in early 2017 for the slowdown.

However, CEA’s public communications throughout 2017 made no mention of deprioritizing GWWC and instead suggested it was actively being worked on.[12] And CEA’s 2017 Review explicitly attributed GWWC’s slowdown to “our change from emphasizing recruitment of new members to emphasizing the Pledge as a serious lifetime commitment to be thoroughly considered.”

Either deprioritizing GWWC or emphasizing the gravity of the pledge (or a combination of the two) could plausibly have caused the slowdown. However, these explanations have radically different implications for other community building efforts. CEA’s failure to be transparent about the primary cause of the slowdown meant that important lessons about community building were missed.

 

Lack of program evaluation

GWWC’s most recent impact report is extremely out of date, having been published in 2015[13] based on donations from 2009-2014.

However, the findings of that report inform much more recent decisions. For instance, the webpage for the EA Infrastructure Fund used to cite the report’s finding of a 6:1 leverage ratio, and in discussing CEA’s plans for 2021, CEO Max Dalton cited the report’s estimated lifetime value of a pledge of $73,000. Much has changed since the report was written,[14] making it disappointing that out-of-date data is still being relied on.

Another important data point with broad community relevance is the attrition rate of GWWC pledge takers, which I don’t believe CEA/GWWC has studied since 2015. Rethink Priorities, as part of its work on the EA Survey, examined the issue in 2019 and found “~40% of self-reported GWWC members are not reporting donation data that is consistent with keeping their pledge -- far more pledgers than GWWC originally reported based on data ending in 2014.” [NB: the original estimate was ~6%.] This analysis was repeated in 2021, with results that were “a bit more pessimistic.”

Given how relevant this data is to understanding EA retention, GWWC’s failure to conduct a more recent and more thorough analysis is a missed learning opportunity. Fortunately, GWWC is apparently currently working on updating its impact assessment.

 

Problematic marketing of the GWWC pledge

CEA has at times marketed the GWWC pledge in inappropriate ways.

CEA’s Mistakes page acknowledges that from 2014-2017 “We encouraged student groups to run pledge drives which sometimes resulted in people taking the Pledge at a young age without being encouraged to seriously think it through as a long-term commitment. Some of our communications also presented the Pledge as something to be taken quickly rather than carefully considered.”

 

Content Creation and Curation (2015-present)

Background

CEA is responsible for creating and curating EA content in a variety of contexts, including cases where CEA is effectively managing community resources. Examples include EA Global (which CEA has been running since 2015), the EA Handbook (CEA produced the 2nd and 3rd editions), and effectivealtruism.org (which CEA has been operating since 2015).  EA Global and effectivealtruism.org are arguably two of EA’s most prominent platforms.

These projects are operated across disparate teams at CEA, but I’m aggregating them for simplicity and brevity.

 

Problems

Lack of transparency around CEA’s strategy

CEA has lacked transparency around its strategy for representing different cause areas.

CEA’s staff and leadership have for quite some time favored longtermist causes, more so than the community at large. Content that CEA has created and curated has often skewed heavily toward longtermist causes (see the Problematic representation in EA content section for more details). This strategy has not always been made clear, and in a December 2019 comment thread Max Dalton (CEA’s Executive Director) acknowledged that “I think that CEA has a history of pushing longtermism in somewhat underhand ways… Given this background, I think it’s reasonable to be suspicious of CEA’s cause prioritisation.”

Dalton’s most detailed descriptions of his thinking on this topic have come in obscure comment threads (here and here) in which he describes a general desire to promote principles rather than causes, but “where we have to decide a content split (e.g. for EA Global or the Handbook), I want CEA to represent the range of expert views on cause prioritization. I still don't think we have amazing data on this, but my best guess is that this skews towards longtermist-motivated or X-risk work (like maybe 70-80%).” (He recently retracted this comment, noting that he now thinks the 70%-80% figure should be closer to 60%.)

As one EA noted at the time, the lack of transparency around this decision was incommensurate with its considerable importance.[15] This topic warrants an explicit and public discussion and explanation of CEA’s policy, and should not be relegated to comment threads on only marginally related posts. I find it notable that someone reading CEA’s strategy page at the time these comments were written would likely come away with a very different understanding of CEA’s approach.[16]

I’m happy to report that Dalton has shared with me a draft of a post he may publish on this topic. I hope he does choose to publish it, as I think it would represent a significant improvement in CEA’s transparency. While I disagree with some details of the draft (for instance, I share concerns others have previously voiced about various biases inherent in deferring to cause prioritization experts) I’m glad to see CEA listening to community concerns and considering more transparency about its strategy.

 

Problematic representation in EA content

CEA has repeatedly used community forums to promote its own views on cause prioritization rather than community views.

CEA’s Mistakes page notes “we’ve carried out projects that we presented as broadly EA that in fact overrepresented some views or cause areas that CEA favored. We should have either worked harder to make these projects genuinely representative, or have communicated that they were not representative.” The page provides several specific examples that I’ve listed below, along with additional context where relevant.

  • “EA Global is meant to cover a broad range of topics of interest to the effective altruism community, but in 2015 and 2016 we did not provide strong content at EA Global from the area of animal advocacy…This made some community members who focus on animal advocacy feel unvalued.”
    • NB: Another significant reason why members who value animal advocacy felt unvalued is because factory-farmed meat was served at EA Global 2015. This post describes the situation, which troubled many people as this Facebook discussion makes clear. Since then, CEA has only provided vegetarian (and mostly vegan) food at EA Global.
  • “In 2018, we published the second edition of the Effective Altruism Handbook, which emphasized our longtermist view of cause prioritization, contained little information about why many EAs prioritize global health and animal advocacy, and focused on risks from AI to a much greater extent than any other cause. This caused some community members to feel that CEA was dismissive of the causes they valued.”
    • NB: In response to negative feedback on the EA Forum (feedback was even more critical on Facebook), Max Dalton (author of the second edition handbook and current Executive Director of CEA) announced plans to add several articles in the short-term; these do not appear to have ever been added. The release of the 3rd Edition of the Handbook was then delayed due to CEA’s change in leadership.
  • “Since 2016, we have held the EA Leaders Forum, initially intended as a space for core community members to coordinate and discuss strategy. The format and the number of attendees have changed over time, and in recent years we have invited attendees disproportionately from organizations focused on global catastrophic risks or with a longtermist worldview. While the name is less significant than the structure of the event itself, we should not have continued calling it the “EA Leaders Forum'' after it no longer involved a representative group of EA community leaders.”

 

CEA’s Mistakes page omits several other (including more recent) problems with representativeness:

  • The In-Depth EA Program includes topics on biorisk and AI but nothing on animals or poverty in its topics for discussion. This content is framed as EA content (rather than CEA’s organizational views).
  • Starting in late 2017 and extending to March 2021 (when I called this issue to CEA’s attention), the Reading List on the effectivealtruism.org homepage included content on a variety of longtermist causes but not Global Poverty (which was the community’s most popular cause at the time per the EA Survey).
  • I’ve argued that the 3rd Edition of the EA Handbook has a skewed cause representation (though not as bad as the 2nd Edition). The 4th Edition, recently soft-launched, looks like a significant improvement on the 3rd Edition.
  • For several years, effectivealtruism.org/resources (one of the main pages on that site) heavily prioritized longtermist content relative to global health and animal welfare. For instance, the “Promising Causes” section listed two articles on AI and one on biosecurity before mentioning animal welfare or global health; moreover, that section came after a section dedicated to “The Long-Term Future”. This page was updated in early 2022, and is now more representative.

 

Inattention to, and lack of representation on, effectivealtruism.org

CEA manages effectivealtruism.org (the top search result for “effective altruism”) on behalf of the community, but has only recently made it a priority.

For example, from late 2017 to early 2022, the homepage had only very minimal changes.[17] In early 2022, CEA revamped the site, including a major redesign of the homepage and navigation, and published a new intro essay.

CEA also hasn’t shared information about effectivealtruism.org that could be helpful to the rest of the community. Examples include traffic to the site, which pages receive the most (and least) visitors, and donations made by visitors the site refers to various donation platforms (EA Funds, GiveWell, etc.).

 

Lack of EA Global admissions inbox monitoring

In 2021, CEA ignored admissions-related emails for five weeks. As described on CEA’s Mistakes page:

“In the lead-up to EA Global: Reconnect (March 21-22, 2021), we set up an inbox for admissions-related communications. However, the team member who was responsible for the inbox failed to check it. The mistake was discovered after 36 days, a week before the event. While we responded to every message, many people waited a long time before hearing back, some of them having sent follow-up emails when they didn’t receive a timely response.”

 

Poor communication with EAGx organizers

EAGx organizers have been hampered by CEA’s unresponsiveness and lack of professionalism.

CEA’s Mistakes page acknowledges: “At times, our communication with EAGx organizers has been slow or absent, sometimes impeding their work. For example, in 2016 EAGxBerkeley organizers described unresponsiveness from our staff as a low point in their experience as event organizers.”

Feedback from the 2016 EAGxBerkeley organizers indeed flagged unresponsiveness as a major problem (“There were multiple instances where Roxanne did not respond to our messages for days, and almost every time [she] said [EA Outreach, part of CEA] would do something for us, that thing would not be done by the time they said it would be done.”) However, they also described broader problems including CEA creating artificial constraints[18], a lack of CEA capacity[19], and a general lack of oversight.[20]

Given the scope of these communication issues with the 2016 EAGx organizers, it’s troubling that CEA describes these problems as persisting through 2019.[21] Since late 2021, CEA has had someone working full-time on the EAGx program, and CEA tells me that the degree of support and satisfaction is generally much higher than it was.

 

Problematic marketing of EAG 2016

CEA’s marketing of EAG in 2016 received substantial community criticism for violating community standards and values.

Some of that criticism related to the frequency of emails; one EA reported “during the final month or so I got up to three emails per day inviting me to EA Global.” Other criticism related to marketing that seemed “dishonest” and/or “dodgy”. Community comments include (all emphasis added):

  • “Dishonest elements in the marketing beforehand seemed destructive to long-term coordination… I switched from 'trust everyone at CEA except...' to 'distrust everyone at CEA except...', which is a wasteful position to have to take… dodgy emails convinced approximately -1 of the 12 people I nominated to attend, and now some of my friends who were interested in EA associate it with deception.” (source)
  • “I confess I find these practices pretty shady, and I am unpleasantly surprised that EAG made what I view to be a fairly large error of judgement on appropriate marketing tactics.” (source)
  • “I didn't end up nominating anybody because I'd rather reach out to people myself. The "via EAG" thing makes me really relieved that I made this choice and will prevent me from nominating people in the future. I'm actually a bit surprised at the strength of my reaction but this would've felt like a major violation to me. I really dislike the idea of feeling accountable for words that I didn't endorse..  After your explanation the practice still does seem (very) deceptive to me.” (source)
  • “I'd recommend all EAs avoid in the future:
    • Sending emails 'from' other people. Friends I recommended received emails with 'from' name 'Kit Surname via EAG'. Given that I did not create the content of these emails, this seemed somewhat creepy, and harmed outreach.
    • Untruths, e.g. fake deadlines, 'we trust Kit's judgement', 'I was looking through our attendee database', etc. (My vanity fooled me for a solid few seconds, by the way!)” (source)

 

Poor communication around EAG events

CEA has been unclear about EA Global admissions criteria and dates, leading to community frustration and missed attendance.

CEA’s Mistakes page acknowledges: “As EA Global admissions criteria have changed over time, we have not always communicated these changes adequately to the community, leading to confusion and disappointment.” This confusion seems to have started after EAG 2016 (which courted a large audience via aggressive marketing), as subsequent events were more selective. CEA also notes “In the years since, there has continued to be disagreement and confusion about the admissions process, some of it based on other mistakes we’ve made.”

Other confusion has been driven by a failure to announce conference dates in a timely fashion. When CEA announced high level plans (but not dates) for EAG 2017 in December 2016, one EA noted “the sooner you can nail specific dates, the better!” because “that has always been a huge hurdle for me in past years and why I've been unable to attend prior conferences.” In February 2017, two other EAs requested the dates and described how not knowing them was interfering with their plans and ability to attend. While a separate post announcing dates was released in early March 2017, the February comments weren’t responded to until late March.

 

Pareto Fellowship (2016)

 

Project background

The Pareto Fellowship took place in the summer of 2016. Per its website, it was meant to provide “training, room and board in the San Francisco Bay, project incubation, and career connections for Fellows to pursue initiatives that help others in a tremendous way.” While the Pareto Fellowship was “sponsored by CEA and run by two US-based CEA staff… the location and a significant amount of the programming were provided by Leverage Research / Paradigm Academy.”

In 2021, one of the Fellows described the program as follows:

There were ~20 Fellows, mostly undergrad-aged with one younger and a few older.

[Fellows] stayed in Leverage house for ~3 months in summer 2016 and did various trainings followed by doing a project with mentorship to apply things learnt from trainings.

Training was mostly based on Leverage ideas but also included fast-forward versions of CFAR workshop, 80k workshop. Some of the content was taught by Leverage staff and some by CEA staff who were very 'in Leverage's orbit'.

In December 2016, CEA announced the discontinuation of the Pareto Fellowship.

 

Problems

Severe lack of professionalism

Various aspects of the fellowship were disturbing to participants, including an interview process described as “extremely culty” and “supremely disconcerting”.

The fellowship attracted “nearly 500” applicants, and “several hundred” semi-finalists were interviewed; two years later, anonymous accounts described this process as problematic (emphasis added):

“I was interviewed by Peter Buckley and Tyler Alterman [NB: Pareto co-founders] when I applied for the Pareto fellowship. It was one of the strangest, most uncomfortable experiences I've had over several years of being involved in EA. I'm posting this from notes I took right after the call, so I am confident that I remember this accurately.

The first question asked about what I would do if Peter Singer presented me with a great argument for doing an effective thing that's socially unacceptable. The argument was left as an unspecified black box.

Next, for about 25 minutes, they taught me the technique of "belief reporting". (See some information here and here). They made me try it out live on the call, for example by making me do "sentence completion". This made me feel extremely uncomfortable. It seemed like unscientific, crackpot psychology. It was the sort of thing you'd expect from a New Age group or Scientology.

In the second part of the interview (30 minutes?), I was asked to verbalise what my system one believes will happen in the future of humanity. They asked me to just speak freely without thinking, even if it sounds incoherent. Again it felt extremely cultish. I expected this to last max 5 minutes and to form the basis for a subsequent discussion. But they let me ramble on for what felt like an eternity, and there were zero follow up questions. The interview ended immediately.

The experience left me feeling humiliated and manipulated.

        

Responding to this comment, another EA corroborated important elements (emphasis added):

I had an interview with them under the same circumstances and also had the belief reporting trial. (I forget if I had the Peter Singer question.) I can confirm that it was supremely disconcerting.

At the very least, it's insensitive - they were asking for a huge amount of vulnerability and trust in a situation where we both knew I was trying to impress them in a professional context. I sort of understand why that exercise might have seemed like a good idea, but I really hope nobody does this in interviews anymore.

 

Questionable aspects of Leverage/Paradigm’s norms and behavior extended beyond the interview process. In some circles it is “common knowledge” that in the Leverage/Paradigm community, “using psychological techniques to experiment on one another, and on the "sociology" of the group itself, was a main purpose of [Leverage]. It was understood among members that they were signing up to be guinea pigs for experiments in introspection, altering one's belief structure, and experimental group dynamics.”

Responding to that characterization, one fellow recounted that “The Pareto program felt like it had substantial components of this type of social/psychological experimentation, but participants were not aware of this in advance and did not give informed consent. Some (maybe most?) Pareto fellows, including me, were not even aware that Leverage was involved in any way in running the program until they arrived, and found out they were going to be staying in the Leverage house.” This apparently affected how fellows perceived the program.[22] 

The Pareto Fellowship was unprofessional in more mundane ways as well. CEA now reports that “A staff member leading the program appeared to plan a romantic relationship with a fellow during the program.” CEA is unaware of any harm that resulted, but acknowledges that “due to the power dynamics involved” this was “unwise” and “may have made some participants uncomfortable.” CEA’s characterization of the mistake understates its gravity, as it omits the context that the parties were living in the same house (which also served as the program location) and that the fellow was not aware of this arrangement beforehand (and presumably did not have alternative lodging options if they were uncomfortable).

 

Lack of program evaluation, and missed commitments to conduct one

CEA explicitly committed to publishing a program review, but did not deliver one.

Various EAs requested that CEA conduct a post-mortem of the Pareto Fellowship[23] to help the community learn from it. In its 2017 Fundraising Document CEA promised “a detailed review of the Pareto Fellowship is forthcoming.” In response to a comment that “multiple friends who applied to the Pareto Fellowship felt like it was quite unprofessionally run” CEA staff reiterated that an evaluation was “forthcoming”, but it was never published.

 

Failure to acknowledge degree or existence of problems

A recurring theme across the Pareto Fellowship’s problems has been CEA’s disinclination to acknowledge their extent (or even their existence).

CEA has never publicly acknowledged that it committed to, but did not deliver, a program evaluation. The problems with the interview process were posted on the Forum in summer of 2018, but CEA did not add them to its Mistakes page until 2020.

The observations I quoted earlier from a former fellow were comments on a September 2021 LessWrong post describing problematic “common knowledge” about Leverage. Julia Wise from CEA’s community health team responded that “CEA regards it as one of our mistakes that the Pareto Fellowship was a CEA program, but our senior management didn't provide enough oversight of how the program was being run” and apologized to “participants or applicants who found it misleading or harmful in some way.”

Roughly two weeks later, a former Leverage member published a disturbing account of her experiences during and after her time in that community (including highly problematic handling of power dynamics), prompting a question in the aforementioned thread about why CEA’s Mistakes page didn’t mention Leverage’s involvement in the Pareto Fellowship. Wise responded that “we're working on a couple of updates to the mistakes page, including about this.” In January 2022, the Mistakes page was updated to include references to Leverage’s role in the Pareto Fellowship, the attempted romantic relationship (the first public mention of said relationship), and a section providing “background on CEA’s relationship with Leverage Research.”

 

Lack of program transparency

Very little public information about the Pareto Fellowship exists.

CEA announced that there were 18 Pareto Fellows and that “the program created 14-15 significant plan changes”. But neither the identities of the fellows nor the nature of their projects (each intended as “far more than an internship”) was ever published.

 

EA Ventures (2015)

 

Project background

EA Ventures (EAV) was launched in February 2015 as “a project of CEA’s Effective Altruism Outreach initiative.” Its goal was “to test the theory that we can stimulate the creation of new high impact organizations by simply signaling that funding is available.”

Projects looking for funding were invited to “apply and go through a systematic evaluation process” after which the EAV team would “introduce projects that pass the evaluation to our network of individual and institutional funders and help find new funders if needed.” The EA Ventures homepage listed over 20 funders, including major funders like Jaan Tallinn and Luke Ding.

EAV went on to release a list of projects and project areas that they would “like to fund” and received over 70 applications before their first deadline. By the end of 2015, EAV had “received around 100 applications and had helped move around $100,000 to effective organizations”. However, as discussed below, minimal information is available about these grants.

The project was closed in 2016.

[Note: updated in July 2023 to reflect that EAV was launched in 2015, not 2016.]

 

Problems

 

Granting negligible funds relative to expectations and resources invested in project

EAV failed to meaningfully connect projects with funding.

The first project it funded was EA Policy Analytics, which received $19,000. A paper by Roman V. Yampolskiy thanks EA Ventures (among others) for “partially funding his work on AI Safety.” I’ve found no other records of specific grants that EAV facilitated, nor any post-grant evaluations of grants that were made.

The ~$100,000 distributed pales in comparison to the resources invested in EAV. Four staff members were involved in the project (though they had other responsibilities as well). These staff needed to spend significant time building relationships with funders, developing an advisory board, planning the project, and evaluating and assessing the ~100 applications that were submitted. Each of those applications also required time and energy from the applicants. The evaluation process itself seems to have been very time-consuming to develop and implement, especially relative to the amount of money ultimately distributed.[24]

 

Lack of program evaluation

Despite repeated requests from the EA community (e.g. here, here, and here), a proper post-mortem on EAV has never been published.

When piecemeal evaluations have surfaced, they’ve offered conflicting evidence as to why EAV failed. In a 2017 comment thread, EAV co-founder Kerry Vaughan wrote: “We shut down EA Ventures because 1) the number of exciting new projects was smaller than we expected; 2) funder interest in new projects was smaller than expected and 3) opportunity cost increased significantly as other projects at CEA started to show stronger results.”

Points 1 and 2 suggest that a lack of projects worth funding was the problem. However, in a June 2015 project update Vaughan wrote that both the number and quality of applications exceeded his expectations.[25] In late 2015, the EAV team again indicated that they thought the project was going well and warranted further investment.[26]

In June 2015, Vaughan also noted that the team was “mostly bottlenecked on time currently” due to competing projects (related to the third point he later raised) and expressed interest in finding someone to help with evaluations. Three people commented offering to help; there is no evidence the EAV team responded to those offers.

Vaughan also suggested in 2017 that “Part of the problem is that the best projects are often able to raise money on their own without an intermediary to help them. So, even if there are exciting projects in EA, they might not need our help.” That explanation seems quite different from the original three reasons he supplied; it also seems like it would be easy to substantiate by listing specific high-quality projects that applied to EAV but were funded by others instead.

Personally, I (and others) suspect the main reason EAV failed is that it did not actually have committed funding in place. At an EA Global 2017 panel called “Celebrating Failed Projects”, Vaughan confirmed that this played a major role, saying a “specific lesson” he’d learned from EAV was “if you’re doing a project that gives money to people, you need to have that money in your bank account first.”

 

Lack of transparency and communication regarding program status

Due to EAV’s lack of transparency, community members made decisions based on faulty information.

I know of one EAV applicant who was told in early 2016 that EAV would not be funding projects for the foreseeable future. Public messaging did not make this clear. In an August 2016 EA Global talk, Vaughan discussed EAV as an ongoing project. As late as October 2016, the EAV website did not indicate the project was stalled and CEA’s 2016 Annual Review made no mention of EAV. There were never any posts on the EA Forum or the EA Facebook group indicating that EAV had closed.

The first acknowledgement of the closure I’ve seen was the aforementioned February 2017 Forum comment. Since the EA community was not informed that EAV was not providing material funding to projects, nor when the project had shuttered, community members were left to operate with faulty information about the availability of funding for projects.[27] 

 

Group support (2016-present)

Background

CEA’s earliest group support came through GWWC chapters (though legally part of CEA, GWWC operated independently until 2016) and EA Build (a long-defunct CEA project that supported groups). Since GWWC is discussed separately and I found no meaningful information about EA Build, this section focuses on CEA’s group support after 2016.

Since then, CEA has provided support for local and university groups in a variety of ways. These include providing financial support for group expenses and (starting in 2018 through the Community Building Grant or CBG program) salaries for select group organizers.

CEA also helps produce the EA Group Organizers newsletter, has run retreats for group leaders, and provides a variety of online and human resources to help groups operate. In 2021 CEA started running a group accelerator and virtual programs, significantly narrowed which groups were eligible for the CBG program, and discontinued its Campus Specialist program.  

 

Problems

 

Poor communication and missed commitments (and minimizing these mistakes)

CEA has routinely missed deadlines and other commitments related to group support, making it hard for other community members to plan and operate effectively.

CEA’s Mistakes page acknowledges a single instance of missing a deadline for opening CBG applications[28]; this missed deadline was actually part of a pattern.

In November 2018, CEA announced plans to run a round of CBG applications in “summer 2019”. In October 2019, CEA acknowledged missing that deadline and announced plans for “rolling applications” rather than scheduled application rounds. This new process lasted less than a year: in July 2020 CEA provided an update that “We'll temporarily stop accepting new applications to EA Community Building Grants from the 28th of August” and announced plans to re-open applications “around January 2021.” This deadline wasn’t met, and when an update was finally provided in March 2021, the only guidance given was “we will give an update on our plans for opening applications by June 1st.” CEA met this deadline by announcing in May 2021 that “The Community Building Grants (CBG) programme will be narrowing its scope to support groups in certain key locations and universities.”

This poor communication made it hard for groups and group organizers to make meaningful plans related to the CBG program, especially for groups outside the “key locations and universities” that CEA ultimately decided to support. Even other funders were apparently confused: the EAIF at one point referred groups seeking funding to the CBG program until I pointed out that CBG applications had been closed for months and did not appear likely to reopen soon.

CEA’s Mistakes page does not discuss other aspects of its group support work that also experienced “poor communication and missed deadlines”. For instance, evaluation of the CBG program was routinely delayed relative to timelines that were publicly communicated. Also, CEA’s efforts to deliver a Groups Platform were delayed numerous times and commitments to deliver updates on the status of that platform were not met.

 

Missed commitments around group platform

CEA’s repeated missed commitments and poor communication around delivering an online platform for groups interfered with other community builders’ efforts.

CEA’s 2017 review acknowledged that “the launch of the EA Groups platform has been delayed multiple times while we have been building the capacity in our Tech Team.” That post also discussed plans to roll the platform out in January 2018.

However, in late March 2018 the leader of that project posted that the project would be delayed at least another six weeks, and laid out the reasons and implications of that decision:

I (CEA) won't be developing the EA Groups Platform over the next six weeks. After 6 weeks it's likely (eg. 75%) that we'll resume working on this, but we'll reassess the priority of this relative to other EA group support projects at the time.

Currently the groups list is incomplete, and the majority of groups don't have management permissions for their groups, and are not notified of people signing up to their group mailing list via the platform. Because the half-finished state seems to be worse than nothing, I'll be taking the list down until work on this is resumed.

The primary reasons for delaying work on the platform are a) other EA group support projects (most notably the EA Community Grants process) taking priority and b) changing views of the value of the platform (I now think the platform as a stand-alone piece of infrastructure will be less valuable, than previously, and that a large part of the value will be having this integrated with other group support infrastructure such as funding applications, affiliation etc.).

I made a few mistakes in working on this project:

1) Consistently underestimating the time required to take the groups platform to a usable state.

2) Failing to communicate the progress and status of the project.

The combination of the above has meant that:

1) People signing up to groups members lists haven't been put into contact with the respective groups.

2) People have been consistently waiting for the platform's functionality, which hasn't been forthcoming. Plausibly this has also negatively interfered with LEANs efforts with managing the effective altruism hub.

I apologise for the above. I'm hesitant to promise better calibration on time estimates for completion of group support projects in future, I'll make sure to communicate with group leaders about the status of similar projects in future, so that if a decision is made to deprioritise a particular project, group leaders will know as soon as possible.

I'll post an overview of CEA's current EA group support priorities within the next two weeks.

 

No overview of CEA’s group priorities was published in the following two weeks (or anytime soon thereafter). To the best of my knowledge, CEA did not launch an online Groups platform until several years later.[29] It seems more than “plausible” that this “negatively interfered with LEANs efforts” as there is clear evidence that LEAN’s strategy assumed CEA would produce the platform.

 

Granting significantly less money than planned through CBG program

CEA’s grantmaking through the CBG program fell well short of plans.

CEA’s 2018 review announced 2019 plans for "a regranting budget of $3.7M for EA Grants and EA Community Building Grants.” While there was no specific budget provided for the CBG program, in January 2020 CEA acknowledged “we spent significantly less on EA Grants and CBGs in 2018 than the $3.7M announced in the 2018 fundraising post.” CEA later reported spending ~$875,000 on CBGs in 2019 (and just ~$200,000 on EA Grants).

 

Understaffing/Underestimating necessary work

CEA’s staffing of group work has not been commensurate with its goals, leading to missed commitments and problems for other community members.

Capacity problems include:

  • Lack of capacity was cited as the reason why CBG applications remained closed longer than expected in early 2021.
  • “Consistently underestimating the time required to take the groups platform to a usable state” led to the Groups Platform being put on hold for years, even after its launch had already “been delayed multiple times while we have been building the capacity in our Tech Team.”
  • In 2017 CEA “began a beta version of an EA Conversations platform to facilitate conversations between EAs but discontinued work on it despite initial success, largely because of competing time demands.”
  • Underestimating the required work was among the reasons why a timeline for an impact evaluation of the CBG program was not met.
  • CEA’s Mistakes page notes “Many CBG recipients expected to receive more non-monetary support (e.g. coaching or professional development) than we were able to provide with our limited staff capacity. We think this is because we were too optimistic about our capacity in private communication with recipients.”
  • In spring of 2022, the leader of CEA’s groups work reflected “[Over the last year] I think I tried to provide services for too many different types of group leaders at the same time: focus university groups, early stage university groups, and city/national groups… This meant that I didn’t spend as much on-the-ground time at focus universities as I think was needed to develop excellent products… I didn’t generate enough slack for our team for experimentation. Demand for basic support services at focus universities more than tripled… This meant that our team was growing just to keep up with services that group leaders expected to receive from us, stretching our team to capacity. This left little time for reflection, experimentation, and pivoting.”

 

CEA’s capacity constraints have in many cases not been clearly communicated to the rest of the EA community, making it hard for others to make informed decisions.

 

Lack of transparent program evaluation for CBG program

CEA has not published an impact review of the CBG program, despite discussing plans and timelines to do so on multiple occasions.

In November 2018, CEA announced plans to “complete an impact review” for the CBG program “in the first half of 2019”. In response to a late-January 2019 question about when the evaluation would be conducted, CEA pushed this deadline back modestly, writing: “the impact evaluation will take place in the summer of 2019, likely around August.”

In May 2019, the November announcement was updated to note “We now expect that the impact review will not be completed in the first half of 2019”; I don’t believe this update was communicated elsewhere. A July 2019 post described an intention to “complete a deeper review of the programme’s progress by the end of this year.” In October 2019, a second response to the question posed in late January indicated that the evaluation remained incomplete and that publishing impact information had been deprioritized.[30]

The most relevant data I’ve seen for evaluating the CBG program came in CEA’s 2020 review (after nearly three years of operation); even with that data I find it hard to assess how well the CBG program is performing in an absolute sense and relative to other group support work CEA could prioritize instead. Other community builders doing group support work also appear uninformed about the CBG program’s impact.

 

Poor metric collection and program evaluation

CEA has invested significant amounts of time, money, and energy into group support but has published little in the way of actionable insights to inform other community builders.

Despite having extensively researched the publicly available information about CEA’s group support work, I find it very difficult to gauge the effectiveness of the work, and especially difficult to know which of CEA’s various programs have been most impactful.[31] I’d feel largely at a loss if I were allocating human or financial resources.

A recent comment from Rossa O'Keeffe-O'Donovan (echoing longstanding community concerns[32]) provides an excellent summary of the situation and is consistent with views I’ve heard from other community members:

It's bugged me for a while that EA has ~13 years of community building efforts but (AFAIK) not much by way of "strong" evidence of the impact of various types of community building / outreach, in particular local/student groups. I'd like to see more by way of baking self-evaluation into the design of community building efforts, and think we'd be in a much better epistemic place if this was at the forefront of efforts to professionalise community building efforts 5+ years ago.

By "strong" I mean a serious attempt at causal evaluation using experimental or quasi-experimental methods - i.e. not necessarily RCTs where these aren't practical (though it would be great to see some of these where they are!), but some sort of "difference in difference" style analysis, or before-after comparisons. For example, how do groups' key performance stats (e.g. EA's 'produced', donors, money moved, people going on to EA jobs) compare in the year(s) before vs after getting a full/part time salaried group organiser?

 

CEA has been aware of these problems for a long time, having acknowledged in 2017: “Our work on local groups was at times insufficiently focused. In some cases, we tried several approaches, but not long enough to properly assess whether they had succeeded or not.”

Five years later, CEA still struggles to design programs in a way that is conducive to evaluation. In April 2022, the leader of CEA’s group team reflected:

I think we tried to build services for group leaders that had long feedback loops (e.g. hiring for Campus Centres is a 6 month process, developing and designing metrics for groups involves at least a semester to see if the metric is helpful + lots of time communicating). We could have tested these services faster, conducted group leader interviews to shorten these feedback loops, and potentially even chosen to provide services that even had quicker feedback.

 

EA Grants (2017-2020)

 

Project background

EA Grants was a “successor” to EA Ventures. The program was launched in June 2017 with a goal of providing funding “to help individuals work on promising projects.” The initial grant round had a budget of £500,000 (~$650,000); unlike EAV this funding was secured ahead of time. The launch post indicated that “if we feel that we have been able to use money well through this project, we will allocate new funds to it in 2018.”

The initial grant round attracted 722 applicants, and ended up providing ~$480,000 to 21 grantees[33]. More details about that round can be found in this writeup, including a list of grantees.

CEA also ran a referral round in early 2018, and another application-based round starting in October 2018. The referral round distributed ~$850,000 and EA Grants distributed another $200,000 in 2019 and early 2020. Public information about where this money went is extremely limited; I’ve summarized the information I’ve seen here.

In November 2019, CEA published a report detailing some of the numerous problems the program had experienced, and noted that “we don’t think it’s likely that we’ll open a new round for EA Grants.” In April 2020, another post confirmed that “EA Grants is no longer considering new grantmaking.”

 

Problems

 

Chronic operational problems

Poor record keeping and organization led to missed commitments and bad experiences for grantees and applicants.

After joining CEA in December 2018 to run EA Grants, Nicole Ross published an update in November of 2019 describing serious and widespread operational shortcomings:

We did not maintain well-organized  records of individuals applying for grants, grant applications under evaluation, and records of approved or rejected applications. We sometimes verbally promised grants without full documentation in our system. As a result, it was difficult for us to keep track of outstanding commitments, and of which individuals were waiting to hear back from CEA…

A lack of appropriate operational infrastructure and processes resulted in some grant payments taking longer than expected. This lack of grantmaking operational systems, combined with the lack of consolidated records, led to delays of around a year between an individual being promised a grant and receiving their payment in at least one case.[34] We are aware of cases where this contributed to difficult financial or career situations for recipients.

The post did not discuss why these operational problems were not observed during the initial grant round, or why subsequent rounds were launched if they were observed.

 

Granting significantly less money than planned

EA Grants distributed much less money than intended, falling short of CEA’s grantmaking targets by millions of dollars.

In failing to grant as much as intended (an issue CEA never publicly discussed until I raised it), EA Grants bore an unfortunate resemblance to its predecessor, EA Ventures. In December 2017, CEA announced plans to reopen EA Grants in 2018 with rolling applications and a significantly increased budget of £2,000,000 (~$2.6 million) or more. However, the 2018 referral round only granted ~$850,000.[35]

This problem extended into 2019, when CEA’s plans included a combined budget for EA Grants and the Community Building Grants program of $3.7 million. While a planned split between those programs was not given, this figure clearly implied a substantial budget for EA Grants. Yet the program granted less than $200,000 in 2019 and early 2020 (while the CBG program granted ~$875,000 in 2019).

 

Repeated inaccurate communications about timelines and scope

Throughout 2018, the community was repeatedly, but incorrectly, led to believe that EA Grants would re-open soon, with rolling applications and very large grant capacity.

CEA’s original description of these mistakes significantly understated their severity (emphasis added):

In February 2018 we stated in a comment on the EA Forum that we planned to re-open the EA Grants program by the end of that month. Shortly afterwards, we realized that we had underestimated how much work would be involved in running the open round of Grants. We did not issue a public update to our timeline. We opened public applications for EA Grants in September 2018.

It’s true CEA offered an extremely over-optimistic timeline for EA Grants in February 2018; however, the assertion that CEA issued no further public updates to its timeline is demonstrably false. CEA provided at least four other public updates[36], which were generally overly optimistic about when applications would open and/or how much money would be granted. (After I pointed this out, CEA updated the copy on its Mistakes page.)

 

Unrealistic assumptions about staff capacity

CEA repeatedly made unrealistic commitments about EA Grants despite having minimal staff working on the project.

In December 2017, CEA announced plans to scale the EA Grants program significantly in 2018, with a budget of $2.6 million (up from ~$480,000 in 2017)[37] and a “plan to accept applications year-round with quick reviews and responses for urgent applications and quarterly reviews for less urgent applications.” Despite these ambitious goals, the project didn’t have anyone working on it full time until December 2018, and the search for this employee didn’t even begin until July 2018.

EA Grants did receive some part-time resources. But those seem to have gone to the referral round that started in January 2018, leaving even less capacity to work on the larger program that had been promised. Compounding capacity problems, CEA “elected to devote staff time to launching EA Community Building Grants and to launching our individual outreach retreats (such as our Operations Forum) instead of devoting that time to reopening EA Grants for public applications.”

Given that EA Grants lacked dedicated capacity, that CEA supposedly identified this issue quickly, and that CEA knew dedicated capacity was not expected soon, it is rather baffling that throughout 2018 CEA continued to issue optimistic timelines for re-opening the project. For example:

  • On February 11, CEA discussed plans to open the program by the end of the month with rolling applications; in actuality, staff were working on the referral round that had started in January and on the CBG program, which would launch less than two weeks later.
  • In April, CEA announced plans to reopen EA Grants by the end of June, but didn’t start looking for full-time staff until July.
  • Just one month before finally opening a round of EA Grants in September 2018 (albeit a round with a cap on the number of applications it could process and that granted less than $200,000), CEA was describing plans for “Re-launching EA Grant applications to the public with a £2,000,000 budget and a rolling application.”

At no time did CEA have adequate staff to execute these plans. The significant operational problems EA Grants exhibited suggest that there was not even sufficient capacity to properly execute the referral round, launched in January 2018 as a “stop-gap… so [CEA] could run the project with less staff capacity.”

Indeed, when Nicole Ross was finally hired to work on the project full-time, her diagnosis of the “issues of the program” pointed directly to capacity issues.[38]

 

Lack of transparency leading to faulty community assumptions

CEA’s faulty communications and missed commitments around EA Grants led other actors to operate on false assumptions.

Given CEA’s frequent communications that a large round of EA Grants was around the corner, it’s not surprising that many people in the EA community operated under that distorted assumption. References to a multimillion-dollar EA Grants round can be found in lengthy (>100 comments) Facebook discussions about EA funding mechanisms and LessWrong discussions about funding opportunities. New projects seeking to change the funding landscape seemed to operate under the assumption that EA Grants was a going concern (e.g. here and here). Other EAs simply expressed confusion about whether and how EA Grants was operating. There are also accounts of donors viewing projects not funded by EA Grants as not worth funding based on that signal, when in reality EA Grants was only open via referrals at the time.

Even Nick Beckstead[39] was unaware of what was going on with EA Grants. When stepping down from managing two EA Funds in spring of 2018, he encouraged “those seeking funding for small projects to seek support via EA Grants... EA Grants will have more time to vet such projects.” At the time, EA Grants was only open via the referral round, did not accept open applications for another five months, and was eight months away from hiring full-time dedicated staff.

The community’s confusion was presumably exacerbated because the EA Grants website was never updated after the first round of grants closed. Perhaps it was for the best that various unexecuted plans to reopen the program in 2018 were not listed on the site. However, it seems unambiguously problematic that even when applications finally re-opened in September 2018 the site was not updated so visitors saw a message saying applications were closed. Likewise, the site was never updated to reflect that CEA was unable to make grants for educational purposes, and therefore the site provided examples of “welcome applications” that included unfundable projects (even after a community member pointed this error out).

 

Lack of post-grant assessment

Despite EA Grants distributing roughly $1.5 million, there has been no public assessment of the efficacy of those grants.

When community members have inquired about evaluating these grants, CEA has replied that evaluation would happen in the future but that it was not clear which individual would be responsible for it. (Examples of these exchanges can be found here and here.) As a result, little is known about how many grants achieved their intended purpose.

In November 2019, CEA’s Nicole Ross flagged a “lack of post-grant assessment” as one of the program’s problems. She indicated that improvements had been made: “Since joining, I have developed a consistent process for evaluating grants upon completion and a process for periodically monitoring progress on grants. CEA is planning further improvements to this process next year.” She also committed to a specific deliverable: “I’m working on a writeup of the grants I’ve evaluated since I joined in December. Once I’ve finished the writeup, I will post it to the Forum and CEA’s blog, and link to it in this post.”

Unfortunately, that writeup was never published. Thus the closest we have to an evaluation is Ross’ extremely brief summary: “upon my initial review, it had a mixed track record. Some grants seemed quite exciting, some seemed promising, others lacked the information I needed to make an impact judgment, and others raised some concerns.”

Since only grantees from the initial 2017 round have been published, it’s hard for community members to conduct their own evaluations. A quick look at the 2017 grantees seems consistent with Ross’ assessment of a “mixed track record.” Some grants, including the largest grant to LessWrong 2.0, clearly seem to have achieved their goals. But for other grants, like “The development of a rationality, agency, and EA-values education program for promising 12-14 year olds”, it’s not obvious that the goal was met.

The lack of grant assessment, or even public information about which grants were made, is especially disappointing given that an express goal of EA Grants was to produce “information value” that the rest of the community could learn from.[40] Ideally, the community could learn about the efficacy of different grants, as well as the process used to make the grants in the first place. The EA Grants process had potential weaknesses (e.g. “Many applicants had proposals for studies and charities [CEA] felt under-qualified to assess”) so understanding whether those weaknesses impacted the quality of grants seems particularly important.

Even descriptive (rather than evaluative) information about the grants has been scarce. Most grantees and grant sizes have not been shared publicly. And even when grants from the first round were published, CEA’s discussion of that grant round neglected to mention those grants were extraordinarily concentrated in certain cause areas.[41] 

 

 

Insufficient program evaluation

In public discussions of EA Grants’ mistakes, CEA has failed to notice and/or mention some of the program’s most severe problems.

CEA has occasionally discussed EA Grants’ problems. While these efforts provided some valuable lessons about the program (including some cited in this analysis), they missed some of the program’s biggest problems including:

 

 

EA Funds (2017-present)

 

Background

EA Funds is a platform that allows donors to delegate their giving decisions to experts. Funds operate in four cause areas: Global Health and Development, Animal Welfare, EA Infrastructure (EAIF), and the Long-Term Future (LTFF). EA Funds has raised roughly $50 million for those four funds since launching in 2017. EA Funds also facilitates tax-deductible giving for US and UK donors to various organizations.

Originally, the funds were managed by individuals, but in 2018 CEA adopted Fund Management Teams. In July 2020, Jonas Vollmer was hired to lead EA Funds, and in December 2020 CEA announced that EA Funds would “operate independently of CEA” though “CEA provides operational support and they are legally part of the same entity.”

 

Caveats

 

This section describes problems both before and after EA Funds’ 2020 spin-off from CEA. While CEA bears responsibility for problems before the spin-off, the new management team is responsible for subsequent problems (except to the extent that CEA was responsible for developing the spin-off plans).

It should also be noted that EA Funds is in the midst of some material changes. In May 2022, GWWC announced that “the donation specific functionality of funds.effectivealtruism.org will be retired and redirected to GWWC's version of the donation platform” and “EA Funds will continue to manage the grantmaking activities of their four Funds and will at some point post an update about their plans moving forward and this includes some of the reasoning for this restructure decision.” This restructuring will likely impact, and hopefully resolve, some of the ongoing problems I discuss below.

 

Problems

 

Failure to provide regular updates

EA Funds has struggled to provide updates on grant activity and other developments; despite CEA’s claims to have resolved these problems, they continue as of this writing.

When EA Funds went live in February 2017, CEA announced plans “to send quarterly updates to all EA Funds donors detailing the total size of the fund and details of any grants made in the period. We will also publish grant reports on the EA Funds website.”

However, these plans weren’t executed. CEA’s Mistakes page acknowledges that “we have not always provided regular updates to donors informing them about how their donations have been used.” That page indicates that this problem only lasted through 2019, and was addressed because “We now send email updates to a fund’s donors each time that fund disburses grants.” That has not been my experience as a donor to EA Funds, as the emails I’ve received about where my donations have gone have been sporadic at best. And EA Funds’ “failure to provide regular updates” seems ongoing.

There was a period during which Fund management teams published detailed grant reports on the EA Forum and the webpages for each Fund; this practice added valuable transparency into the grant process.[42] But these reports have become much less frequent, and transparency around this has been problematic.[43] The Global Health and Development Fund is the only fund that has posted a grant report describing grants made in 2022 (a January grant).

Besides grant reports, other communication one would expect from a donation platform has also been absent. While EA Funds emailed out solicitations in late December 2019 and late November 2020, no such email was sent during the 2021 giving season. EA Funds has also passed up other opportunities to email donors, such as announcing changes to the fund management teams or the spin-off of EA Funds, or simply sending occasional solicitations. These practices could easily have suppressed donations. My understanding is that in light of the recent restructuring, these sorts of communications will be GWWC’s responsibility going forward.

 

Slow grant disbursement

The lack of grant activity during EA Funds’ early operations raised concerns from the community. In December 2017 (EA Funds’ first giving season), one EA asked if the platform was still operating. A month later, Henry Stanley wrote a post titled “EA Funds hands out money very infrequently - should we be worried?” which highlighted large pools of ungranted money and the infrequency of grants, then in April published another post with suggestions for improving EA Funds.

CEA staff responded to the April post, suggesting improvements would be forthcoming shortly:

Many of the things Henry points out seem valid, and we are working on addressing these and improving the Funds in a number ways. We are building a Funds ‘dashboard’ to show balances in near real time, looking into the best ways of not holding the balances in cash, and thinking about other ways to get more value out of the platform.

We expect to publish a post with more detail on our approach in the next couple of weeks. Feel free to reach out to me personally if you wish to discuss or provide input on the process.

 

However, as described in a July 2018 post titled “The EA Community and Long-Term Future Funds Lack Transparency and Accountability”, CEA never published this update. This post observed:

Whether it's been privately or on the Effective Altruism Forum, ranging from a couple weeks to a few months, estimates from the CEA regarding updates from the EA Funds cannot be relied upon. According to data publicly available on the EA Funds website, each of the Long-Term Future and EA Community Funds have made a single grant: ~$14k to the Berkeley Existential Risk Initiative, and ~$83k to EA Sweden, respectively. As of April 2018, over $1 million total is available for grants from the Long-Term Future Fund, and almost $600k from the EA Community Fund.

 

In October 2018, CEA announced new fund management teams and introduced a three-time-per-year granting schedule. These changes seem to have addressed the community’s concerns about slow grant disbursement, and CEA’s Mistakes page indicates that this problem only existed from 2017-18.[44] From my correspondence with EA Funds staff, my understanding is that funds are maintaining large but reasonable balances given their grantmaking activity; however, without up-to-date and reliable data on grantmaking and cash balances this is difficult to verify.

 

Failure to provide accurate data about fund financials

Various attempts to provide the public with information about the financial situation of EA Funds have been plagued by data quality issues.

In August 2018, after EA Funds had been operating for a year and a half, CEA noted that one of the “main things that community members have asked for” was “easy visibility of current Fund balances.” However, an attempt to remedy that in October did not work as planned, meaning fund balances were often outdated from 2016-2019.

The dashboards EA Funds currently provides also contain bad data. The “payout amounts by fund” chart is obviously out of date, with multiple funds showing no payouts for over a year. I’m not clear on whether the fund balance dashboard on the same page is accurate, as some of the data seems suspicious, or at least inconsistent with the donations dashboard.[45] 

 

Operational problems

EA Funds exhibited a variety of operational problems, particularly prior to being spun out of CEA.

These missteps include:

  • “A bug in our message queue system meant that some payment instructions were processed twice. Due to poor timing (an audit, followed by a team retreat), the bug was not discovered for several days, leading to around 20 donors being charged for their donations twice.” (source)
  • “We failed to keep the EA Funds website up to date, meaning that many users were unsure how their money was being used.” (source)
  • “A delay in implementing some of the recurring payment processing logic in EA Funds meant that users who created recurring payments before May did not have their subscriptions processed.” (source)
  • Per a late 2019 update: “Most of the content on EA Funds (especially that describing individual Funds) hadn’t been substantially updated since its inception in early 2017. The structure was somewhat difficult to follow, and wasn’t particularly friendly to donors who were sympathetic to the aims of EA Funds, but had less familiarity with the effective altruism community (and the assumed knowledge that entails). At the beginning of December we conducted a major restructure/rewrite of the Funds pages...”
  • The original user experience on EA Funds contradicted normal fundraising practices and likely suppressed donations as a result. If someone (e.g. a new prospective donor coming from effectivealtruism.org) landed on the main page and clicked the prominent “donate now” button, they were asked to enter their email to create an account before learning about any of the funds or entering a donation amount.

 

Inadequate staffing

EA Funds has historically had very little staff capacity relative to the scope of the project and the plans communicated to the EA community and public at large.

While CEA’s Mistakes page does not mention understaffing as an issue for EA Funds, CEA has previously acknowledged this problem. In August 2018, Marek Duda wrote: “We are now dedicating the resources required to improve and build on the early success of the platform (though we recognize this has not been the case throughout the timeline of the project).”

In December 2017, it was unclear whether EA Funds was even operational. Yet in early 2018, CEA “made the choice to deprioritize active work on EA Funds in favour of devoting more staff resources to other projects (where active work includes technical work to improve the user experience, operations work to e.g. bring on new grantee organizations or to check in on a regular basis with Fund managers).” This decision to deprioritize EA Funds likely contributed to community dissatisfaction with how the funds were managed (particularly around grant frequency and transparency), which was voiced with increasing frequency starting in January 2018. Notably, when CEA laid out its plans for 2018 in December of the previous year, there was no mention of deprioritizing EA Funds in any way; instead, CEA communicated plans for many substantive improvements to the platform.

The staff working on EA Funds has also turned over considerably, which has presumably exacerbated capacity issues. EA Funds was originally run by Kerry Vaughan when it launched in 2017. Marek Duda (2018) and Sam Deere (2019) then took over responsibility, followed by Jonas Vollmer, who was hired in 2020 when EA Funds started operating independently of CEA. In June 2022, Caleb Parikh became the “Interim Project Lead.”

Not only did CEA dedicate minimal staff resources to EA Funds while it was in charge of the platform[46], but Fund Managers have also been capacity-constrained. When Nick Beckstead stepped down as manager of the EAIF and LTFF (prompted by community complaints about the infrequency of his grantmaking), he noted time constraints had contributed to this issue: “The original premise of me as a fund manager was… that this wouldn’t require a substantial additional time investment on my part.” Beckstead explained that for him “additional time would be better spent on grantmaking at Open Phil or other Open Phil work”, and that “I believe it will be more impactful and more satisfying to our community to have people managing the EA Community and Long-term Future Funds who can’t make most of their grants through other means, and for whom it would make more sense to devote more substantial attention to the management of these funds.” At least one other fund manager has also stepped down due to limited capacity.

The Fund Management Team model, announced in August 2018, has added Fund Manager capacity. But a lack of Manager capacity is still causing problems. In December 2020 one manager reported finding it problematic that “we currently lack the capacity to be more proactively engaged with our grantees.” And in April 2022, another manager explained that grant reports were significantly delayed “because the number of grants we're making has increased substantially so we're pretty limited on grantmaker capacity right now.”[47] 

 

Lack of post-grant assessment

EA Funds has received roughly $50 million in donations and has made hundreds of grants, but has never published any post-grant assessments.

EA Funds has also failed to publish any descriptive analysis, such as a meaningful categorization of its grants.

The infrastructure around grant data likely contributes to this lack of post-grant assessment. While Open Philanthropy provides a searchable database of grants it has made, it is difficult and time-consuming to collect data on EA Funds’ grant history. Records of grants are available, but each grant round for each fund is a separate document. Anyone wishing to analyze grants would need to hand enter data about each grant into a spreadsheet. I believe EA Funds plans to release a grant database in the future, which would significantly facilitate analysis of their grantmaking.

 

Overly positive descriptions of project

While responsible for EA Funds, CEA regularly portrayed the platform in an overly positive light in public communications. 

Some of these issues have been mentioned in other sections of this analysis; however, I think aggregating these examples provides a helpful perspective.

  • Mischaracterizing community feedback, like in an update CEA published a few months after launch, which originally stated “Where we’ve received criticism it has mostly been around how we can improve the website and our communication about EA Funds as opposed to criticism about the core concept.” The post was later updated to remove this statement after receiving strong pushback from several EAs who observed that CEA had received substantial criticism in other areas (examples: here and here).
  • Regularly setting expectations of up-to-date grant reporting (e.g. here and here), which has not materialized.
  • Marketing EA Funds more aggressively than had been communicated to others, e.g. by quickly making EA Funds the main donation option on effectivealtruism.org and GWWC’s “recommendation for most donors”. As one EA put it “I definitely perceived the sort of strong exclusive endorsement and pushing EA Funds got as a direct contradiction of what I'd been told earlier, privately and publicly - that this was an MVP experiment to gauge interest and feasibility, to be reevaluated after three months.”
  • Changing the evaluation bar for money moved through EA Funds to frame this metric as a success. In August 2018, CEA described the project-to-date as “To a large extent… very successful. To date we have regranted (or are currently in the process of regranting) more than $5 million of EA-aligned money to around 50 organizations.” Yet in February 2017 when the project was launched, CEA stated “We will consider this project a success if… the amount we have raised for the funds in the first quarter exceeds $1M.” If CEA had raised its target of $1 million in the initial quarter of EA Funds and then experienced zero growth (which would presumably be very disappointing), by August 2018 (roughly six quarters after launch) that would have led to donations of $6 million, or more than $7 million if one accounts for Giving Season.
  • Ignoring counterfactuals in discussions of money moved. There are many reasons to think that much of the money moved through EA Funds would still have been donated to effective charities if EA Funds didn’t exist. These reasons include: the “vast majority” of early donations coming from people already familiar with EA[48], EA Funds’ displacement of the GWWC Trust which was moving significant amounts of money and growing extremely quickly when it was shuttered[49], and EA Funds replacing other effective options on sites like GWWC and effectivealtruism.org.
  • Publicly stating plans for integrating a workplace giving option into EA Funds, which has not happened. “Automation of payroll giving” was mentioned in plans for 2018 and again in August of that year, but has not been implemented.

 

Community Health (2017-present)

 

Background

CEA has taken an active role in the health of the broader EA community in a variety of ways over the years. CEA’s Guiding Principles were published in 2017, and since early 2018 Julia Wise has served as a contact person whom people can reach out to. The “community health” team was built out over the years and includes five people at the time of writing.

Historically, much of CEA’s community health work has been reactive, responding to concerns raised by individuals or groups in areas such as interpersonal conflicts, online conflicts, personal or mental health problems, diversity, equity, and inclusion, and community health practices. Since late 2021, CEA has been shifting toward more proactive work (e.g. writing content, anticipating future problems or risks, launching the EA Librarian project).

 

Caveats

Public information about CEA’s community health work is often unavailable, as many concerns are raised and addressed confidentially. It is also difficult to determine counterfactuals around what would have happened if CEA had not been active in this area. These factors make evaluating CEA’s community health work difficult, and I encourage readers to bear in mind that my analysis may have suffered as a result.

 

Problems

 

Deprioritizing mid-career (and older) EAs

Community building efforts focus on young EAs, leaving other age groups neglected.

At EA Global 2017, Julia Wise reported: 

“Two years ago I interviewed all the EA Global attendees over the age of 40. There were not many of them. I think age is an area where we’re definitely missing opportunities to get more experience and knowledge. One theme I heard from these folks was that they weren’t too keen on what they saw as a know-it-all attitude, especially from people who were actually a lot less experienced and knowledgeable than them in many ways.”[50]

I have not seen any evidence that older attendees' concerns were prioritized in subsequent conferences. And CEA’s other community building work has prioritized younger EAs in implicit ways (e.g. the original metric for evaluating the CBG program was the number of career changes a group produced) and explicit ways (e.g. CEA is not focusing on “Reaching new mid- or late-career professionals”).

The community health team’s “strategy is to identify ‘thin spots’ in the EA community, and to coordinate with others to direct additional resources to those areas.” But after CEA announced that reaching older EAs was not part of its strategy, I don’t believe any effort was made to have other community builders fill this gap.

 

Confidentiality Mistakes

The EA community’s contact person, whose responsibilities include fielding confidential requests, has accidentally broken confidentiality in two instances she is aware of.

 

Missed opportunity to distance (C)EA from Leverage Research

CEA has missed opportunities to distance itself and the EA community from Leverage Research and its sister organization Paradigm Academy, creating reputational risks.

Leverage Research ran the original EA Summits, so some connection to EA was inevitable. However, CEA had plenty of signs that minimizing that connection would be wise. CEA’s 2016 Pareto Fellowship, run by employees closely tied to Leverage, exhibited numerous problems including a very disturbing interview process. And Leverage has had minimal output (and even less transparency) despite investing significant financial and human capital.

Yet in 2018, CEA supported and participated in an EA Summit incubated by employees of Paradigm Academy (a sister organization to Leverage), including a leader of the Pareto Fellowship.[51] A former CEA CEO (who had stepped down from that role less than a year earlier) personally provided some funding.

After the Summit, CEA noted community concerns: “the fact that Paradigm incubated the Summit and Paradigm is connected to Leverage led some members of the community to express concern or confusion about the relationship between Leverage and the EA community. We will address this in a separate post in the near future.” The post was later edited to note “We decided not to work on this post at this time.” CEA’s CEO at the time of the summit (Larissa Hesketh-Rowe) now works for Leverage (as does Kerry Vaughan, the former Executive Director of CEA USA).

Since then, there have been more negative revelations about Leverage.

  • An article in Splinter News was released [in September 2019], showing leaked emails in which Jonah Bennett, a former Leverage employee who is now editor-in-chief of Palladium Magazine (LinkedIn), was involved with a white nationalist email list, where he among other things made anti-Semitic jokes about a Holocaust survivor, said he “always has illuminating conversations with Richard Spencer”, and complained about someone being “pro-West before being pro-white/super far-right”. Geoff Anders, Leverage’s founder, defended Bennett, writing “I’m happy to count him as a friend.”

  • In October 2021, a former member of the Leverage ecosystem wrote a disturbing account of her experiences. In addition to revealing troubling personal experiences (“I experienced every single symptom on this list of Post-Cult After-Effects except for a handful (I did not experience paradoxical idealization of the leader, self-injury or sexual changes)”), she described worrisome aspects of Leverage as an organization/community: “People (not everyone, but definitely a lot of us) genuinely thought we were going to take over the US government… One of my supervisors would regularly talk about this as a daunting but inevitable strategic reality (“obviously we’ll do it, and succeed, but seems hard”)... The main mechanism through which we’d save the world was that Geoff would come up with a fool-proof theory of every domain of reality.”

 

While CEA’s Mistakes page now includes content minimizing the relationship between CEA and Leverage, this content was not added until early 2022, well after the revelations above had surfaced. If CEA had released that content in 2018 (when it had originally planned to describe the CEA/Leverage relationship), it would be more credible that CEA had recognized the problems with Leverage and taken proactive steps to address them.

 

Poor public communication and missed commitments around EA Librarian project

The EA Librarian Project failed to meet public commitments.

The EA Librarian Project, launched in January 2022, was meant to answer questions about EA, especially “‘Dumb’ questions or questions that you would usually be embarrassed to ask.” The launch post noted “We will aim to publish a thread every 2 weeks with questions and answers that we thought were particularly interesting or useful (to either the community or the question asker). We hope that this will encourage more people to make use of the service.”

These regular updates were not provided. CEA published just a single update with several questions on March 10. On April 21 an EA posted a question about whether the program was still operating, as they had “submitted a question on March 31 and have not heard anything for 3 weeks now.” The person in charge of the EA Librarian project responded, noting that they had been ill and were unable to “indicate turnaround time right now due to having some of the librarians leave recently. We will certainly aim to answer all submitted questions but I expect that I will close the form this/next week, at least until I work out a more sustainable model.”

The EA Librarian Project never made any subsequent public updates, though people actively using the program were notified it was inactive.[52] The broadest notification that the project had been shut down came in a footnote of a general update from the Community Health Team reading “Since launching the EA Librarian Project, Caleb has become the Interim Project Lead for EA Funds. As a result, the EA Librarian service is no longer accepting new questions.”

This is a disappointing lack of communication regarding a project that was billed as an experiment (which others could presumably learn from if they had relevant information), generated interest at launch, received at least some positive feedback, and was well suited to address worrisome reports about group leaders not answering basic questions they are asked.

 

Lack of guidance on romantic relationships after problems with Pareto Fellowship

CEA did not offer guidance on power dynamics in romantic relationships for many years, despite evidence of problematic behavior.

In June 2022, Julia Wise published a post on “Power Dynamics Between People in EA.” While I think this post was excellent, I find it problematic that it was only recently published.[53] EA is a community where social, professional, domestic and romantic lives are often enmeshed, creating significant potential for inappropriate behavior. This concern is more than theoretical, as a staff member leading CEA’s 2016 Pareto Fellowship “appeared to plan a romantic relationship with a fellow during the program” in a situation with troubling power dynamics. That experience could have been a learning opportunity (and may well have been, had CEA fulfilled its commitment to publish an evaluation of the Pareto Fellowship), but instead it was a missed opportunity.

Conclusion

As Santayana famously wrote, “Those who cannot remember the past are condemned to repeat it.” Throughout this report I’ve demonstrated ways in which this idea applies to EA community building efforts. Past problems have persisted when ignored and eased when they’ve been used to inform new strategies.

I hope this report helps the EA community understand its past better, and in doing so, also helps it build its future better.

 

 

  1. ^

     CEA’s concerns about public evaluations include the staff time required to produce them, and the fact that many of the most important findings involve assessments of individuals or sensitive situations that would be inappropriate to share. Dedicated MEL staff would certainly help with the first concern. While I recognize that some information couldn’t be shared publicly, I still believe it would be valuable to share the information that could be.

  2. ^

     As one data point, in a recent hiring round, “literally zero of [CEA’s] product manager finalist candidates had ever had the title "product manager" before” and people with experience did not seem to find the job attractive. I’ve been told that CEA made adjustments and was able to find significantly more experienced candidates in a subsequent round; other organizations would presumably benefit from learning how CEA achieved this.

  3. ^

     To be more precise, I think it is valuable in communicating how CEA publicly describes its mistakes, but very bad in terms of giving readers an accurate description of those mistakes.

  4. ^

     “We found it hard to make decisions on first-round applications that looked potentially promising but were outside of our in-house expertise. Many applicants had proposals for studies and charities we felt under-qualified to assess. Most of those applicants we turned down.”

  5. ^

     “The referral system has the significant downside of making it less likely that we encounter projects from people outside of our networks. It also meant that some potentially promising applicants may have failed to develop projects that would have been good candidates for EA Grants funding, because they didn’t know that EA Grants funding was still available.”

  6. ^

     See, for example, attempts to measure GWWC attrition via data from the EA Survey but without the benefit of donations reported to GWWC by its members.

  7. ^

     CEA has written about this dynamic here.

  8. ^

     For example, better grantmaking data could have prevented, or sped the discovery of, EA Grants’ operational problems.

  9. ^

     The Forum’s community page has some of this data, but not in a way that lends itself to analysis. For instance, there is a map of groups, but that data can’t be exported, requiring manual counting and data entry to determine how many groups are in each country.

  10. ^

     CEA has also expressed more general concerns about crowding out other group support projects: “A final concern about CEA trying to cover the entire groups space is that we think this makes it seem like we “own” the space – a perception that might discourage others from taking experimental approaches. We think there’s some evidence that we crowded out others from experimenting in the focus uni space this year [2022].”

  11. ^
  12. ^

     One CEA supporter update described Q2 plans such as “sharing more of our thinking on [the] GWWC blog” and “improving the online signup process for GWWC members”; another described “promoting GWWC” among “the different activities we pursue” and discussed GWWC outreach as an active priority.

  13. ^

     The exact date of the report is unclear. It is cited in a fundraising report describing GWWC’s plans for 2015, suggesting it was written early that year or late in 2014. GWWC has also told me about “a 2014 analysis… done in 2016 because it is an analysis of 2014 pledges who donated in 2015”. This might have been an update of the original analysis; otherwise I’m not sure how to reconcile it with the impact report in the 2015 fundraising document. Regardless of the exact publication date, the report was conducted a long time ago, using donation data from even longer ago.

  14. ^

     As Will MacAskill has observed, simply including Sam Bankman-Fried’s impact would radically increase estimates of the value of a pledge.

  15. ^

     Comment: “I don’t recall seeing the ~70-80% number mentioned before in previous posts but I may have missed it. I’m curious to know what the numbers are for the other cause areas and to see the reasoning for each laid out transparently in a separate post. I think that CEA’s cause prioritisation is the closest thing the community has to an EA ‘parliament’ and for that process to have legitimacy it should be presented openly and be subject to critique.”

  16. ^

     At the time, the opening sentence read “CEA's overall aim is to do the most we can to solve pressing global problems — like global poverty, factory farming, and existential risk — and prepare to face the challenges of tomorrow.” I doubt many would read this sentence and assume CEA leadership thought existential risk should receive “70-80%” of resources when competing with other cause areas.

  17. ^

     I believe only two changes were made: In early 2021, Global Health and Development was added to the homepage’s reading list (after I observed that it was problematic to omit this highly popular and accessible cause). And the introductory definition of EA was slightly tweaked.

  18. ^

     “These are things like “you must have an application”, “we will give the intro talk, or at least have input into it”, and so on.) It was apparent to us well before the conference date that Roxanne/EAO was overburdened, and yet these constraints were created that made the burden even larger.”

  19. ^

     “Roxanne asked other EAO staff to help with the grant application, but they were not able to finish it either… After our trial assignment for EAGx, it sounded to us that Roxanne was on board but needed to make a final determination with the rest of the team. That took a week to come, which was hard for us since we already had a very compressed timeline.”

  20. ^

     “Regardless of everything else, there should have been someone at EAO who was checking in on Roxanne, especially since she is only working part-time.”

  21. ^

      There is some record of a lack of responsiveness to EAGx organizers in 2017.

  22. ^

     “I think most fellows felt that it was really useful in various ways but also weird and sketchy and maybe harmful in various other ways. Several fellows ended up working for Leverage afterwards; the whole thing felt like a bit of a recruiting drive.”

  23. ^

     E.g. here, here, and here

  24. ^

     “We merge expert judgment with statistical models of project success. We used our expertise and the expertise of our advisers to determine a set of variables that is likely to be positively correlated with project success. We then utilize a multi-criteria decision analysis framework which provides context-sensitive weightings to several predictive variables. Our framework adjusts the weighting of variables to fit the context of the projects and adjusts the importance of feedback from different evaluators to fit their expertise…. We evaluate three criteria and 21 sub criteria to determine an overall impact score.”

  25. ^

     “We received over 70 applications before our first official deadline which exceeded our expectations. The quality of the projects was also higher than I expected.”

  26. ^

     “This project has shown promise… We plan to devote additional person-hours to the project to improve our evaluation abilities and to ensure that we evaluate projects more swiftly than we do currently.”

  27. ^

     For example, this post from Charity Entrepreneurship lists possible funding from EAV as a reason why they believed “adequate support exists for this project in its earliest stages.”

  28. ^

     “In July 2020, we shared in a public post that we expected to open applications for our Community Building Grants program "around January 2021". We eventually decided to deprioritize this and push back the date. However, we didn't communicate any information about our timeline until March 2021. Several group leaders expressed their disappointment in our communication around this. While we believe we made the right decision in not reopening the program, we should have shared that decision with group leaders much earlier than we did.”

  29. ^

     The forum.effectivealtruism.org/community page appears to have been soft-launched (prior to being fully populated) in late 2021 and fully launched in roughly April 2022.

  30. ^

     “We've conducted initial parts of the programme evaluation, though haven't yet done this comprehensively, and we're not at the moment planning on publishing a public impact evaluation for EA Community Building Grants before the end of 2020. This is mainly because we've decided to prioritise other projects (fundraising, grant evaluation, developing programme strategy) above a public impact review. Also, we've found both doing the impact evaluation and communicating this externally to be larger projects than we previously thought. In retrospect, I think it was a mistake for me to expect that we'd be able to get this done by August.”

  31. ^

     While CEA has provided some relevant data (e.g. survey data), it usually comes without much historical context, without guidance as to whether the data provided represents all the data that was collected or was cherry-picked, and without any control group to assess counterfactual impact.

  32. ^

     For example, in late 2016, a former CEA employee observed (emphasis added):

    "I find it difficult to evaluate CEA especially after the reorganization, but I did as well beforehand. The most significant reason is that I feel CEA has been exceedingly slow to embrace metrics regarding many of its activities, as an example, I'll speak to outreach.

    Big picture metrics: I would have expected one of CEA's very first activities, years ago when EA Outreach was established, to begin trying to measure subscription to the EA community. Gathering statistics on number of people donating, sizes of donations, number that self-identify as EAs, percentage that become EAs after exposure to different organizations/media, number of chapters, size of chapters, number that leave EA, etc. … So a few years in, I find it a bit mindblowing that I'm unaware of an attempt to do this by the only organization that has had teams dedicated specifically to the improvement and growth of the movement. Were these statistics gathered, we'd be much better able to evaluate outreach activities of CEA, which are now central to its purpose as an organization."

  33. ^

     CEA notes “we selected 22 candidates to fund” but the spreadsheet only lists 21 grantees.

  34. ^

     “Correction: We originally stated that grant recipients had experienced payment delays of “up to six months.” After posting this, we learned of one case where payment was delayed for around a year. It’s plausible that this occurred in other cases as well. We deeply apologize for this payment delay and the harm it caused.”

  35. ^

     For context, EA Grants’ 2018 shortfall of $1.75 million was the same amount granted by the LTFF and EAIF combined in that year.

  36. ^
  37. ^

     The $480,000 granted was ~¾ of the original grant budget. CEA described “withholding the remainder… to further fund some of the current recipients, contingent on performance.” It is unclear whether any such regrants were made.

  38. ^

     “From June 2017 to December 2018 (when I joined CEA), grant management was a part-time responsibility of various staff members who also had other roles. As a result, the program did not get as much strategic and evaluative attention as it needed. Additionally, CEA did not appropriately anticipate the operational systems and capacity needed to run a grantmaking operation, and we did not have the full infrastructure and capacity in place to run the program.”

  39. ^

     As a CEA trustee and lead investigator on Open Philanthropy’s grant to CEA, Beckstead presumably had an unusually good window into CEA’s activity.

  40. ^

     For example: “we may sometimes choose to fund projects where we are unsure of the object-level value of the project, if we think the project will produce useful knowledge for the community” and “We believe that untested strategies could yield significant information value for the effective altruism community, and will fund projects accordingly.”

  41. ^

     CEA classified grants into four categories, but a post meant “to give people a better sense of what kinds of projects we look for, should we run EA Grants rounds in the future” did not provide subtotals for those categories and did not mention that the EA Community Building and Long Term Future categories received 65% and 33% of funds respectively, while Animal Welfare and Global Health and Development each got only 1%.  

  42. ^

    Unfortunately this transparency was only for those who sought out or stumbled upon the reports (as opposed to an email going to all donors).

  43. ^

     In April 2022, a question was posted on the EA Forum asking when/whether the EAIF and LTFF would publish grant reports. A representative of the EAIF responded that the fund “is going to publish its next batch of payout reports before the end of June” and a representative of the LTFF said they thought the fund “will publish a payout report for grants through ~December in the next few weeks” i.e. by mid-May. Both those deadlines passed without new reports being published. The EAIF published a payout report in mid-July (covering grants made between September and December 2021). The LTFF published a payout report in mid-August (covering grants paid out between August and December 2021).

  44. ^

     This page also lumped in slow “communication with grantees” along with “slow grant disbursement”, though it provides no additional information on the former.

  45. ^

     For example, the fund balances for the Animal Welfare Fund seem quite high relative to donations to that fund as reported by another dashboard (which could also be wrong). Fund balances also look quite high for the EAIF and LTFF, but I’ve been told current balances are reasonable given large recent grantmaking. The donations dashboard does not show sufficient donations to accommodate such large grantmaking, possibly because that dashboard omits gifts from large institutional donors (alternatively, the donations dashboard could itself have bad data). If the fund balances and donation dashboards both report accurate data for the Animal Welfare Fund, that would suggest that this fund has probably not been distributing money as quickly as donors would like.

  46. ^

     Since EA Funds was spun out of CEA, the platform appears to have more staff capacity.

  47. ^

     Given historical problems related to Fund Manager capacity, it seems worrisome that the LTFF page currently lists only two managers. 

  48. ^

     Per Kerry Vaughan: “My overall guess is that the vast majority of money donated so far has been from people who were already familiar with EA.” This strongly suggests that much of the money donated to EA Funds simply displaced other effective donations.

  49. ^

     In its 2016 Annual Report, CEA noted that the Trust had received donations of £1.3 million in Q1-Q3 of 2016 and that “The amounts donated to the Trust have grown substantially”. In three quarters of 2016, not including giving season, the Trust raised more than in all of 2015 (£1.2 million) and approximately triple its donations from all of 2014. The 2016 figure is particularly notable relative to the ~£2 million moved by EA Funds in its first ~9.5 months, because the Trust was primarily for UK donors and only funded a select group of poverty charities.

  50. ^
  51. ^

    This decision was made by CEA leadership at the time, rather than the community health team specifically. I include it in this section because the decision had implications for community health.

  52. ^

     Users were notified by email that the project was behind schedule, and at some point the “EA Librarian” tag on the EA Forum was changed to “EA Librarian (project inactive)”.

  53. ^

     CEA has had internal guidance on this topic for much longer, possibly introducing it after Pareto.

Comments (68)

Thank you so much for the effort that you put into this review. Your work is incredibly thorough, and I think it’s a public service to hold key organizations accountable.

I also appreciate that you make clear that many of these issues occurred before 2019, and that CEA has been on a positive trajectory over the last few years.

We had a chance to fact-check this post before it was published, and I haven’t re-read it since then, so I don’t have any substantive comments to add on what happened.

Instead, I’ll focus on our actions and attitudes towards the suggestions you make. I generally won’t say too much about our future plans (we try to avoid making public commitments about our work, partly for reasons you explain in your post!).

CEA should hire dedicated Metrics, Evaluation, and Learning staff

This was already on a list of potential future hires. We may also shift one of our current staff (who could be a good fit for this area) into this sort of role (this move would be prompted by other factors, not this post in particular). 

However, overall I endorse that we focused on filling operational roles on our programs over this hire, because:

  • I feel like we have enough capacity within each team to reflect on that team’s work and make improvements: for example, the online team do a lot of data analysis and user interviews; and the events team do an in-depth analysis of EA Global survey responses after each event, making recommendations for what to change.
  • I feel like others (particularly Open Philanthropy) have provided broader-scope analysis that suggests that our programs are highly impactful.
  • I think that we’ve managed to make some exceptionally useful hires to our programs: e.g. events staff have helped us to ~quadruple our capacity at events this year. Given the last two points, I think that the marginal impact of these hires is greater than a Metrics/Evaluation/Learning hire.

CEA should prioritize sharing evaluations publicly

I think that you’re right that doing more of this would help others to learn from our experience, and allow others to more easily provide feedback on our work. These are benefits.

I still feel pretty unsure whether they outweigh the costs of producing public reports, especially because I think much of our work relies on data that it’s hard to communicate about publicly. I discuss a couple of specific examples below. But thanks for this feedback - we’ll bear it in mind for the future.

CEA should publish a post describing the process and benefits of its expansion and professionalization

Thanks for this suggestion and the implicit positive feedback behind it - I think that you’re right that this could be helpful for some people, and I might write more on our experiences here in the future (though I’m also aware that we’re just one case study, and we don’t have all the answers).

In the meantime, if people feel like they would benefit from hearing about CEA’s experience here, I would probably be happy to chat - feel free to message me on the Forum.

CEA should clearly and explicitly communicate its strategy

As you noted, I have a draft post on this which I may share at some point. For reference, our current strategy page is here.

CEA should publish what it has learned about group support work and invest in structured evaluation

As you acknowledge, we have shared some of this publicly (and, for instance, Jack later edited his post to clarify that he had missed a recent public update where we share some reflections).

Again, sharing more information publicly would be useful, but I’m not convinced that sharing things publicly vs. more privately with people actively working in the space is the right call.

On quasi-experiments: my feeling is that overall these wouldn’t be cruxy for deciding whether this sort of work is worth doing at all (because I think that we have strong enough evidence, e.g. from OP’s survey, that this is likely the case). 

So then they’d be focused on finding the best way to achieve our goals. As background, we’re operating in quite a complex and judgement-driven domain (there aren’t nice outcome measures like there would be for studies of e.g. malaria). For this sort of setup, I think that we’re better off using a looser form of iteration, feedback, case studies, and user interviews. (This is analogous to the pre-product-market-fit stage of a company where you’re not doing tonnes of A/B testing or profit maximization, but are instead trying to get a richer sense of what products would be useful via user interviews etc.) I think that experiments/quasi-experiments are much more useful for situations where there are clear outcome measures and the overall complexity of the environment is somewhat lower.

Additionally, we have contracted someone (contract confirmed, however they are yet to start) who will be centralising and sharing tools that will be useful - potentially including an intro training for new group organisers.

CEA should have a meaningful mistakes page, or no mistakes page

As you mentioned, we tried to flag that the page was not going to be complete.

We have updated our mistakes page to include many of the mistakes that you mention, particularly historic ones that we weren’t previously aware of. We also plan to link to this post from our mistakes page.

I expect that you will think that this new page misses or underplays some important mistakes, and indeed there were some issues that you list that we deliberately decided not to include.

On that page we say “We don’t list all the ways our projects were inefficient or suboptimal.” I think that we need to do this to limit the scope of the page and make it easier to decide what to include. I think that you may categorize some things as “mistakes” where I would say they’re “suboptimal”.

Re the Funds issues: We think that these issues were resolved around the time that this project spun out of CEA, and then were reintroduced when we no longer controlled the project. We therefore view them as out of scope of the mistakes page, and we have tried to make this clearer in the introduction text.

Overall, I believe that it’s better to have this page, which is at least relatively complete, than nothing at all. I’d be interested in others’ views here.

CEA should consider creating a public dashboard of its commitments to others

As mentioned to you, we maintained an internal dashboard of commitments in 2019, to address these issues. My impression (including from your report) is that these issues were mostly (but not totally) resolved by this and other processes.

It’s not currently a priority to maintain a public or private dashboard of such commitments, partly because we generally try to avoid making public commitments about our plans.

CEA should consider using targeted pilot programs

This is something that we have done recently, for instance with the UGAP program, the biosecurity advisors pilot or the EA Librarian project.

CEA should publish its internal evaluations of EA Grants

I think that sharing these internal evaluations would not be particularly informative for other grantmakers or the community: the program clearly had serious issues, and I don’t think that it was sufficiently close to optimal that other grantmakers would learn much from it. In any case, the board and key grantmakers were informed of the status of the program and had the chance to ask more questions. Grantmakers have since been conducting better-run experiments of this sort (e.g. EA Funds and FTX’s recent experiments).

Writing this up for the public would take valuable staff time, so I don’t think that this is worth the cost. The basic version would be easier to produce but seems even lower value, particularly given that the program has disbursed nothing in the last 3 years or so.

The EA community should seriously engage with governance questions

This is in the “EA community” section, but seems to be mostly directed at CEA, so I'll respond to it too.

CEA currently receives oversight and scrutiny from its board and its funders. They regularly (~weekly) share critical feedback with us, and annually appraise my performance. I’ve also recently been spending more time talking to community members (from different sections of the community) to ask for their input and feedback on our work. We have recently asked for (anonymous) feedback from community members. Additionally, we regularly receive public feedback (positive and negative) via surveys and public Forum posts/comments. I don’t think that this is a setup with few opportunities for giving feedback, or little accountability. 

I think it’s pretty unclear whether there should be more, fewer or different people providing that accountability. I think that the current balance is OK all things considered. That being said, as we go through the operations spinoff, and since the organization is growing, the board is planning to think more about the right governance setup for CEA (and other organizations in the legal entity), and they may make some changes here. I expect these conversations to be mostly between CEA staff and the board, though they will probably consult other community members too.

Implicitly, you’re suggesting that the appropriate place for discussion of CEA’s governance is in public. In contrast, I think CEA’s board holds that responsibility. Overall, I think that discussion is going to be more productive with a few highly-skilled and high-context individuals rather than a broad discussion. While I agree that getting broad community input into our work is important, I also think that it’s critical that we are held accountable for doing impactful work, which will not always be the same as what pleases community members.
 

Re: CEA should consider creating a public dashboard of its commitments to others

It’s not currently a priority to maintain a public or private dashboard of such commitments, partly because we generally try to avoid making public commitments about our plans.

As I demonstrated in several places in my analysis, the main problem with CEA missing public commitments is that it makes it difficult for other community members to make good plans. CEA avoiding making public commitments doesn’t really solve this problem, and could make it worse. Similarly, it doesn’t help much if CEA says “we hope to do X by Y date, but don’t consider this a commitment” because people are still likely to use that info in their plans for lack of a better option.

A far better outcome would be for CEA to make more accurate public commitments (by adding conservatism and/or providing wide ranges around dates/deliverables to incorporate uncertainty) and then providing timely updates when not on track to meet those commitments. CEA is too important an organization for other EAs not to be able to plan around.

I personally don't think we can expect orgs to "make accurate predictions"; it's just too hard.

I'd instead aim to have the org share their best guess often, including "here is our uncertainty" (not as a number, but as something they know better, like "we don't know if our previous product, X, will be adopted quickly or need a lot of changes").

Or some other method that you, as a manager, would want to use with an employee.

 

+1 to "not giving any estimates doesn't solve the problem", just like it wouldn't if you were a manager and your employee stopped giving estimates

Maybe I was a bit casual saying that "we try not to announce plans publicly". 

We've definitely updated in this direction since 2019, but I think that our current communications probably allow people to coordinate relatively well with us.

Let's look program-by-program:

  • We plan and announce events well ahead of time, at the point where we confirm venues (arguably we could give even more notice, this is something that we're working on).
  • The online team plans major goals on a monthly cycle and then does weekly sprints towards those goals, so there would be at most a 1 month delay between a particular plan being made and it being public.
  • The groups team is mostly doing repeatable work (basic groups funding, monthly VP rounds, etc). We iteratively make small improvements to those programs, so again there shouldn't be big gaps between changes being planned and being public.
  • In terms of less "routine" groups work:
    • For UGAP, as with events, we announce rounds ahead of time.
    • The CBG program has mostly been operating via hiring rounds recently, which again are announced/publicised to the appropriate people once we have firm plans. We work with local stakeholders on these rounds.
  • The Community health team does some "routine" work, which we maintain and iteratively improve (for instance our work on interpersonal harm). For non-routine work that we can discuss publicly, I think that we've tended to also announce it publicly.

If we were to stop or majorly deviate from routine work, we'd let people know about that.

So when you look at the program-by-program specifics, I think that people would at least know about our plans shortly after we've made them. I think that the key thing that we've stopped doing is to commit to timescales for specific improvements to our programs, but I don't think that this is likely to be causing significant negative externalities (let me know if it is).

I also want to say that if people are worried about stepping on our toes:

  • Competition can be good! Feel free to just go ahead, and if we end up trying a similar project, then may the best one (or both) succeed.
  • Please feel free to reach out to us (see my profile for various ways you can do this) to ask whether we have any half-implemented work here, and to see if we can share any advice. (Again, ignoring staff time, ideally people wouldn't have to ask, but I hope that this setup is reasonably accessible and more time efficient.)

I think that our current communications probably allow people to coordinate relatively well with us.

Yeah, I think you’re generally doing an improved job in this area and that people can currently coordinate fairly well, particularly around the “routine” work you describe. I guess I see part of the benefit of a public dashboard as making sure that routine commitments continue to be met (e.g. timely announcement of event dates and timely delivery of grant money). I’d also expect it to be helpful for monitoring how things are going with new projects that come up (the EA Librarian is a relatively recent example of a new project where commitments weren’t met, albeit one with pretty minor knock-on implications for the community).

I also want to say that if people are worried about stepping on our toes:

  • Competition can be good! Feel free to just go ahead, and if we end up trying a similar project, then may the best one (or both) succeed.
  • Please feel free to reach out to us (see my profile for various ways you can do this) to ask whether we have any half-implemented work here, and to see if we can share any advice. (Again, ignoring staff time, ideally people wouldn't have to ask, but I hope that this setup is reasonably accessible and more time efficient.)

I think it’s great you’re open to people reaching out (though I’m somewhat concerned people will be reluctant to for fear of wasting your time). I also think it was a very positive step for CEA to publish a list of areas where you’re not focusing. 

However, I get the sense (especially from your first bullet point) that you’re significantly underestimating how much people will want to avoid competing with CEA. It’s a huge hurdle to compete against a better funded, better connected, and better known organization. I’d guess that if someone inquired about CEA’s plans in an area and were told “we’re not currently working on that but might want to do something in a couple of years” that would still constitute a major deterrent. 

I also think there’s an important historical context here, which Peter Wildeford described in late 2019:

I think CEA has frequently tried to "acquire" core activities from other organizations, sometimes using fairly overt pressure. In many cases this has turned out well, but in many cases this has pushed out another group that may have done a good job only for the newly acquired activity to end up "under delivered" by CEA. 

While CEA has improved in a lot of areas since 2019, I’m not sure how much progress has been made in this area (which, quite understandably, people are generally reluctant to discuss publicly). I can think of at least one post-2019 instance where, while not exactly matching the pattern Peter describes, I think CEA did much more gate-keeping of an area than was warranted. 

Oh I should have said, I'm on holiday for the next week, so I won't be responding to replies in these threads for that period, hope that's ok!

No problem, have a great holiday :)

Coming back to this, I'm not sure that I have tonnes to add here: I think you're right that saying that would probably deter people.  I think generally in such cases we'd drop the second clause (just say "we're not currently working on that", without the "but we might in the future"), to decrease this effect.

I am also aware of some post-2019 instances where we put off people from working in an area. I think that this was mostly inadvertent, but still a significant mistake.  If you're open to DMing me about the instance you're thinking of, I'd be interested in that. One of our core values is alliance mentality - we want to work with others to improve the world rather than trying to grab territory.  So I think we're trying to do this well. If we're ever deterring people from doing work, I'm keen to hear this (including anonymously), and I'll try to make sure that we get out of the way as much as possible.

I strongly encourage people to compete with CEA and ask us about our plans.

Re: Governance…

As we go through the operations spinoff, and since the organization is growing, the board is planning to think more about the right governance setup for CEA (and other organizations in the legal entity), and they may make some changes here. I expect these conversations to be mostly between CEA staff and the board, though they will probably consult other community members too.

Glad to hear these conversations are taking place. Even if most of the conversations take place between CEA and the board, I think there’d be value in publicly soliciting thoughts on the matter (even if that didn’t involve a deep public discussion); people outside your direct networks may have some good ideas. FWIW, I’m deeply skeptical that a board anywhere near the size and composition of CEA’s current board could provide sufficient oversight for the numerous organizations that will be part of the larger legal entity. To the extent some or all of those organizations minimize the use of public program evaluations, that raises my skepticism considerably, as that model requires much more board time and attention.

Implicitly, you’re suggesting that the appropriate place for discussion of CEA’s governance is in public. In contrast, I think CEA’s board holds that responsibility. Overall, I think that discussion is going to be more productive with a few highly-skilled and high-context individuals rather than a broad discussion. While I agree that getting broad community input into our work is important, I also think that it’s critical that we are held accountable for doing impactful work, which will not always be the same as what pleases community members.
 

To clarify my position, I think there should be public meta-level discussion about CEA’s governance, at least as it relates to work CEA is doing on behalf of the community. My sense is there’s very little clarity about a) in which areas CEA is managing community resources (e.g. as I raised in our private correspondence, the @effect_altruism twitter account seems to have been framed as a community account but operated more like CEA’s), b) what CEA’s responsibilities are in those areas, and c) what accountability mechanisms should be in place for those areas.

Once there is some clarity on those issues (which I think requires some public discussion, though CEA laying out a proposal would be a reasonable way to kick that off), object-level governance discussions needn’t involve much public discussion. One plausible model (which I’m not endorsing as the “right” governance model) would be to have the community elect a sort of ombudsperson who would serve on CEA’s board with a role of monitoring (and reporting on?) CEA’s responsibilities to the community. In that model, CEA would still be mainly accountable to the board (and could have more efficient private discussions with the board), but there would be a better mechanism for ensuring accountability to the community.

I also want to point out that in areas where CEA’s board has different beliefs than the broader community, the board is a poor accountability mechanism for ensuring that CEA manages community resources in a way that reflects community values. To state an obvious example, CEA’s board favors longtermism more than the community at large. CEA has inappropriately favored longtermism in community resources for many years (and this problem is ongoing). I struggle to see why relying on accountability to the board would be expected to resolve that. 

Thanks! I think that a lot of this is an area for the board more than for me (I'll flag this thread to them for input, but obviously they might not reply). I and the board are tracking how we can best scale governance (and are aware that it might be hard to do this just with the current board), and we've also considered the ombudsman model (and not yet rejected it, though I think that many versions of it might not really change things too much - I think the board do care about CEA following through on its responsibilities to the community).

Re the EA twitter account: CEA does operate that account, and I think that we inappropriately used it for sharing CEA job ads. We're changing this now. Thanks for pointing it out. I think that we run some other EA social media accounts, but I'm not aware of any other projects that we do where it's not clear that CEA runs them.

I think the board do care about CEA following through on its responsibilities to the community

I’m glad this is something the board cares about. That said, I think the board will have difficulty keeping CEA accountable for those responsibilities without 1) a specific board member being explicitly assigned this and 2) an explicit list of what those responsibilities are, so that CEA, its board, and the community all have the same understanding (and so non-obvious things, like the Twitter account, don’t get missed).

Related to CEA’s board: does CEA have any policies around term-limits for board members? This is a fairly common practice for nonprofits and I’m curious about how CEA thinks about the pros and cons.

On 1), there is a specific board member assigned to assessing CEA's performance (which would include this). I agree that 2) is somewhat missing.

I'm not aware of a policy on term limits for the Effective Ventures board, and can't speak for them. 

Re: 1, can you share which board member is responsible for this?

Re: 2, is this something CEA plans to work on in say the next 3 months? If not, would it help if a volunteer did an initial draft?

  1. Sure, it's currently Claire Zabel, but it was Nick Beckstead until July.
  2. We don't plan to do this in the next 3 months. If a volunteer did a good initial draft, I think there's an 80% chance that we use that in some way.

I think the board do care about CEA following through on its responsibilities to the community

I hope that's true, but there are at least two problems with that:

  1. It's impossible for the community to verify
  2. It can very easily change as:
  • Board members leave and new ones join
  • Board members' opinions on this change
  • Most importantly, the community itself changes in ways not reflected by the board

As far as I can see, only democratic mechanisms guarantee accountability that stays stable over time.

My sense is that the board is likely to remain fairly stable, and fairly consistently interested in this. 

I also don't really see why democracy is better on the front of "checking that an org consistently follows through on what it says it's going to do": all of your arguments about board members would also seem like they could apply to any electorate. There might be other benefits of a democracy, of course (though I personally think that community democracy would be the wrong governance structure for CEA, for reasons stated elsewhere).

I'm not sure I follow.

My sense is that the board is likely to remain fairly stable, and fairly consistently interested in this.

Would you trust a governing body on the basis of someone you don't even personally know saying that their sense is that it's alright?

all of your arguments about board members would also seem like they could apply to any electorate.

Only for a limited time period - elected officials have to stand for re-election, and separation and balance of powers help keep them in check in the meantime. Changes in the community are also reflected by new elections.

I personally think that community democracy would be the wrong governance structure for CEA, for reasons stated elsewhere

Could you please point to that 'elsewhere'? I don't think I've encountered your views on the matter.

Would you trust a governing body on the basis of someone you don't even personally know saying that their sense is that it's alright?
 

 Probably not - I understand if this doesn't update you much. I would suggest that you look at public records on what our board members do/have done, and see if you think that suggests that they would hold us accountable for this sort of thing. I admit that's a costly thing to do. I would also suggest that you look at what CEA has done, especially during the most recent (most relevant) periods - this post highlights most of our key mistakes, and this sequence might give you a sense of positive things we achieved. You could also look at comments/posts I've written in order to get a sense of whether you can trust me. 

I hope that helps a bit!

Only for a limited time period - elected officials have to stand for re-election, and separation and balance of powers help keep them in check in the meantime. Changes in the community are also reflected by new elections.

My point is that the electorate  (not the elected representatives) can leave/new people can join the community. Also their opinions can change. So I don't think it's a very robust mechanism for the specific thing of making sure an organization follows through on things it said it would do. I think you're right that your third point does apply though.

Could you please point to that 'elsewhere'? I don't think I've encountered your views on the matter.

I don't literally argue for that position, but I think that the last section of this comment touches on my views.


 

Ok, I now get what you mean about the electorate. But I think (it's been some time) my point was about responsibilities to the community rather than about following through.

Regarding the last point, I'm a bit confused because in parallel to this thread we're discussing another one where I quoted this specific bit exactly, and you replied that it's not about who should govern CEA, but one meta-level up from that (who decides on the governance structure).

Ah cool, yeah agree that democracy is pretty strongly designed around responsibilities to the community, so it's probably better than an unelected board on that dimension.

The final paragraph in the comment I just linked to is about one-meta-level-up. The penultimate and antepenultimate paragraphs are just about the ideal governance structure. Sorry, that's maybe a bit unclear.

Re: CEA should prioritize sharing evaluations publicly

I think that you’re right that doing more of this would help others to learn from our experience, and allow others to more easily provide feedback on our work. These are benefits.

I still feel pretty unsure whether they outweigh the costs of producing public reports, especially because I think much of our work relies on data that it’s hard to communicate about publicly. I discuss a couple of specific examples below. But thanks for this feedback - we’ll bear it in mind for the future.

To clarify, I think CEA itself would also learn a lot from this practice. I’ve raised a number of points that CEA was unaware of, including in areas where CEA had attempted to examine the program in question, and including occasions under current management. If one person using public data can produce helpful information, I’d expect the EA hive mind with access to data that’s currently private to produce many more valuable lessons.

I’d also like to emphasize that one big reason I think the benefits of public evaluations are worth the cost is for the signal they send to both outside parties and other EA organizations. As I wrote:

If CEA deprioritizes public evaluations, this behavior could become embedded in EA culture. That would remove valuable feedback loops from the community and raise concerns of hypocrisy since EAs encourage evaluations of other nonprofits.

I’m curious if you have a ballpark estimate of what percentage of EA organizations should publish evaluations. Some of the objections to public evaluations you raise are relevant to most EA orgs, some are specific to CEA, and I’d like to get a better sense of how you think  this should play out community-wide.

Thanks - I think you're right that the EA hive mind would also find some interesting things!

Re the % that should produce public evaluations: I feel pretty unsure. I think it's important that organizations that are 1) trying to demonstrate with a lot of rigor that they're extremely cost-effective, and 2) asking for lots of public donations should probably do public evaluations. Maybe my best guess is that most other orgs shouldn't do this, but should have other governance and feedback mechanisms? And then maybe the first type of organizations are like 20% of total EA orgs, and ~50% of current donations (numbers totally made up).

Thanks for sharing your thinking on this. 

FWIW, I think about this quite differently. My mental model is more along the lines of “EAs should hold EA charities to the same or higher standards of public evaluation (in terms of frequency and quality) as comparable (in terms of size and type of work) charities outside of EA.” I think the effective altruism homepage does a pretty good job of encapsulating those standards (“We should evaluate the work that charities do, and value transparency and good evidence”). The fact that this statement links to GiveWell (along with lots of other EA discourse) implies that we generally think that evaluation should be public. 

Re: CEA should publish what it has learned about group support work and invest in structured evaluation

On quasi-experiments: my feeling is that overall these wouldn’t be cruxy for deciding whether this sort of work is worth doing at all (because I think that we have strong enough evidence, e.g. from OP’s survey, that this is likely the case). 

I think it’s fair to say OP’s survey indicates that groups are valuable (at least for longtermism, which is where the survey focused). I think it provides very little information as to why some groups are more valuable than others (groups at top universities seem particularly valuable, but we don’t know if that’s because of their prestige, the age of the groups, paid organizers, or other factors) or which programs from CEA (or others) have the biggest (or smallest) impact on group success. So even if we assume that groups are valuable, and that CEA does group support work, I don’t think those assumptions imply that CEA’s group support work is valuable. My best guess is that CEA’s group support is valuable, but that we don’t know much about which work (e.g. paid organizers vs. online resources) has the most impact on the outcomes we care about. I find it quite plausible that some of the work could actually be counterproductive (e.g. this discussion).

Greater (and more rigorous) experimentation would help sort these details out, especially if it were built into new programs at the outset.

For this sort of setup, I think that we’re better off using a looser form of iteration and feedback, along with case studies and user interviews. (This is analogous to the pre-product-market-fit stage of a company where you’re not doing tonnes of A/B testing or profit maximization, but are instead trying to get a richer sense of what products would be useful via user interviews etc.) I think that experiments/quasi-experiments are much more useful for situations where there are clear outcome measures and the overall complexity of the environment is somewhat lower.

I feel like this looser approach has been going on for many years, without a lot of concrete lessons to show for it. Years ago, and also more recently, CEA has discussed feedback loops being too long to learn much, and capacity being too tight to experiment as much as desired.

I agree that we care about multiple outcomes and that this adds some complexity. But we can still do our best to measure those different outcomes and go from there. Six years (more if you count early GWWC groups or EA Build) into CEA's group support work, we should be well beyond the point of trying to establish product-market-fit.

This comment from Peter Wildeford's recently published criticisms of EA seems relevant to this topic:

EA movement building needs more measurement. I'm not privy to all the details of how EA movement building works but it comes across to me as more of a "spray and pray" strategy than I'd like. While we have done some work I think we've still really underinvested in market research to test how our movement appeals to the public before running the movement out into the wild big-time. I also think we should do more to track how our current outreach efforts are working, measuring conversion rates, etc. It's weird that EA has a reputation of being so evidence-based but doesn't really take much of an evidence-based orientation to its own growth as far as I can tell.

Also worth noting: Peter is a manager of the EAIF, the main funding option for national/city-based groups. Max has mentioned that one reason he thinks public and/or (quasi-)experimental evaluation of group work is relatively low priority is that CEA is already sharing information with other funders and key stakeholders (including, I assume, EAIF). Peter’s comment suggests that he doesn’t view whatever information he’s received as constituting a firm base of evidence to guide future decision making.

Max’s comments from our private correspondence (which he’s given me permission to share):

 

I think that we've shared this [i.e. learnings re: group support] with people who are actively trying to do similar things, and we're happy to continue to do this. I'm not sure I see doing a full public writeup being competitive with other things we could focus on… it's not that we have a great writeup that we could share but are hoarding: it would take a lot of staff time to communicate it all publicly, and we also couldn't say some of the most important things. It's easier to have conversations with people who are interested (where you can focus on the most relevant bits, say things that are hard to say publicly).

Hey, thanks for this. I work on CEA's groups team. When you say "we don’t know much about which work ... has the most impact on the outcomes we care about" - I think I would rather say

a) We have a reasonable, yet incomplete, view on how many people different groups cause to engage in EA, and some measure of the depth of that engagement

b) We are unsure how many of those people would have become engaged in EA anyway

c) We do not have a good mapping from "people engaging with EA" to the things that we actually want in the world

I think we should be sharing more of the data we have on what types of community building have, so far, seemed to generate more engagement. To this end we have a contractor who will be providing a centralized service for some community building tasks, to help spread what is working. I also think groups that seem to be performing well should be running experiments where other groups adopt their model. I have proposed this to several groups, and will continue to do so.

However trying to predict the mapping from engagement to good things happening in the world is (a) sufficiently difficult that I don't think anyone can do it reliably (b) deeply unpleasant to a lot of communities. In trying to measure this we could decrease the amount of good that is happening in the world - and also probably wouldn't succeed in taking the measurement accurately.

Thanks Rob, this is helpful!

I think we should be sharing more of the data we have on what types of community building have, so far, seemed to generate more engagement. To this end we have a contractor who will be providing a centralized service for some community building tasks, to help spread what is working.

I’d love to see more sharing of data and what types of community building seem most effective. But I guess I’m confused as to how you’re assessing the latter. To what extent does this assessment incorporate control groups, even if imperfect (e.g. by comparing the number of engaged EAs a group generates before and after getting a paid organizer, or by comparing the trajectory of EAs generated by groups with paid organizers to that of groups without them)?

 

trying to predict the mapping from engagement to good things happening in the world is (a) sufficiently difficult that I don't think anyone can do it reliably (b) deeply unpleasant to a lot of communities.

Yes, totally agree that trying to map from engagement to final outcomes is overkill. Thanks for clarifying this point. FWIW, the difficulty issue is the key factor for me. I was surprised by your “unpleasant to a lot of communities” comment. By that, are you referring to the dynamic where if you have to place value on outcomes, some people/orgs will be disappointed with the value you place on their work?

 

I also think groups that seem to be performing well should be running experiments where other groups adopt their model. I have proposed this to several groups, and will continue to do so.

This seems like another area where control groups would be helpful in making the exercise an actual experiment. Seems like a fairly easy place to introduce at least some randomization into: designate a pool of groups that could potentially benefit from adopting another group’s practices, and randomly select which of those groups actually do so. Presumably there would be some selection biases since some groups in the “adopt another group’s model” condition may decline to do so, but it would still be a step forward in measuring causality.
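To make the voluntary version of this concrete, here is a minimal sketch (in Python, with made-up group names and outcome numbers; nothing here reflects real CEA or group data) of the kind of randomized comparison I have in mind. Groups that decline after being assigned to the “adopt the model” arm would still be counted in that arm (an intent-to-treat comparison), which preserves most of the benefit of randomization even with imperfect compliance:

```python
# Illustrative sketch only: group names, outcomes, and effect sizes are hypothetical.
import random
import statistics

random.seed(42)

# Pool of groups that volunteered to participate in the experiment.
pool = [f"group_{i}" for i in range(20)]
random.shuffle(pool)
adopt_model, business_as_usual = pool[:10], pool[10:]  # random assignment

# Hypothetical outcome: newly engaged EAs per group over the following year.
# (In reality this would come from the groups' own reporting.)
new_engaged = {g: random.randint(0, 8) for g in pool}

# Intent-to-treat comparison: groups that declined to adopt the model still
# count in the "adopt_model" arm, so the estimate isn't biased by self-selection.
treated_mean = statistics.mean(new_engaged[g] for g in adopt_model)
control_mean = statistics.mean(new_engaged[g] for g in business_as_usual)
print(f"adopt-model mean: {treated_mean:.1f}")
print(f"business-as-usual mean: {control_mean:.1f}")
print(f"estimated effect: {treated_mean - control_mean:+.1f} engaged EAs per group")
```

With a pool of only 20 groups the estimate would be noisy, but even a noisy, roughly unbiased estimate seems like an improvement over comparing self-selected adopters to everyone else.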

I was surprised by your “unpleasant to a lot of communities” comment. By that, are you referring to the dynamic where if you have to place value on outcomes, some people/orgs will be disappointed with the value you place on their work?

 

Not really. I was referring more to the fact that any attempt to quantify the likely impact someone will have is (a) inaccurate and (b) likely to create some sort of hierarchy and unhealthy community dynamics.

 

This seems like another area where control groups would be helpful in making the exercise an actual experiment. Seems like a fairly easy place to introduce at least some randomization into

I agree with this; I like the idea of successful groups joining existing mentorship programs such that there is a natural control group of "average of all the other mentors." (There are many ways this experiment would be imperfect, as I'm sure you can imagine.) I think the main implementation challenge here so far has been "getting groups to actually want to do this." We are very careful to preserve the groups' autonomy, and I think this acts as a check on our behaviour. If groups engage in programs with us voluntarily, and we don't make that engagement a condition of funding, it demonstrates that our programs are at least delivering value in the eyes of the organizers. If we started trying to claim more autonomy and started designating groups into experiments, we'd lose one of our few feedback measures. On balance I think I would prefer to have the feedback mechanism rather than the experiment. (The previous paragraph does contain some simplifications; it would certainly be possible to find examples of where we haven't optimised purely for group autonomy.)

Thanks for clarifying these points Rob. Agree that group autonomy is an important feedback loop, and that this feedback is more important than the experiment I suggested. But to the extent it's possible to do experimentation on a voluntary basis, I do think that'd be valuable.

I agree with this statement entirely.

 

Go team!

Sometime in the last few days, CEA updated its Mistakes page to address a number of concerns raised in my analysis. I think it is great that CEA did this, and consider it part of the constructive interaction I’ve had with CEA in the leadup to publishing this report. However, these changes create some instances where I describe an issue as missing from the Mistakes page when it is now present. For reference, here is a version of the Mistakes page as it looked during the creation of my report.

Ah sorry for this! 

No need to apologize, I'd much rather have the more accurate information posted on your page!

Very quick note:

I agree that more public evaluations of things like CEA programs and programs that they fund would be really valuable. I'm sure people at CEA would agree too. In my experience at QURI, funders are pretty positive about this sort of work.

One of the biggest challenges is in finding strong people to do it. Generally the people qualified to do strong evaluation work are also qualified to do grant funding directly, so they just go and do that. It's hard to do evaluation well, and public writeups present a bunch of extra challenges, many of which aren't very fun.

If people here have thoughts on how we can scale public evaluation, I'd be very curious. 

I agree that more public evaluations of things like CEA programs and programs that they fund would be really valuable. I'm sure people at CEA would agree too.

I don’t think CEA staff, or at least CEA leadership, actually agrees that public evaluations are “really valuable”. CEA has repeatedly deprioritized public evaluations, even after public commitments to conduct them (e.g. CBG, EA Grants, Pareto Fellowship). And Max has been pretty clear that he prioritizes accountability to board members and funders for CEA rather than to the public or EA community, and that he thinks public evaluations generally aren’t worth the cost to produce them (especially because CEA is hesitant to publicly criticize people/orgs, and that is often where the most useful information can be gleaned). So my sense is that CEA thinks public evaluations would be valuable in some abstract sense where there aren’t any costs to conducting them, but not in a practical sense that incorporates the tradeoffs that always exist in the real world.

We might be quibbling a bit over what "really valuable" means. I agree that CEA definitely could have prioritized these higher, and likely would have if they cared about it much more. 

I think they would be happy to have evaluations done if they were very inexpensive or free, for whatever that's worth. This is much better than with many orgs, who would try to oppose evaluations even if they are free; but perhaps it is suboptimal. 

I think only doing something if it's free or inexpensive is almost the opposite of thinking that thing is 'really valuable', so that's far from a quibble (only almost the opposite because, as you point out, actively opposing something would be the actual opposite).

Ozzie, I think you and I agree on CEA’s stance about public evaluations (not actively opposed to them, but mainly interested only if they are free or very inexpensive to execute). My interpretation of that position is largely in line with Rebecca’s though.

I might be interested in working on this, maybe to run a short POC. I've started to write some thoughts on a public doc :)

Thanks for starting that doc! I added some comments from my response to Ozzie.

One of the biggest challenges is in finding strong people to do it. Generally the people qualified to do strong evaluation work are also qualified to do grant funding directly, so they just go and do that. It's hard to do evaluation well, and public writeups present a bunch of extra challenges, many of which aren't very fun.

If people here have thoughts on how we can scale public evaluation, I'd be very curious. 

 

Some miscellaneous thoughts:

  • EA jobs are (still) very hard to get. So if evaluation jobs were available, I’d expect them to attract very talented applicants. 
  • If evaluation work was seen as a stepping stone to grantmaking work, that would make evaluation jobs even more desirable.
  • It isn’t obvious to me where these jobs should live. They could live with the organizations running the programs, with grantmakers, or in a separate organization (a la GiveWell). I have some concerns about organizations evaluating their own programs, as the incentives aren’t great (lots of reasons to say “we evaluated our work and it looks great!”) and it’s very hard to have common standards/methodologies across orgs. Grantmakers may be hesitant to do public evaluations as it could undermine grantee relationships. I’d lean toward a dedicated evaluation organization, though that has its own problems (need a way to fund it, orgs would need to provide it with program data, etc.) 
  • I don’t know a ton about impact certificates, but I wonder if they could be a useful funding mechanism where funders and/or orgs being evaluated would pay for evaluations they find useful after the fact. 
  • It’s definitely easier to come up with an evaluation model when the org/project involved wants to be evaluated (if not, there are a lot of added complications). I wonder if it would be worthwhile for an org that wants to be evaluated to contract with a university or local group (which I think are often looking for tangible ways to get involved) to execute an evaluation, and see if that evaluation proved valuable.
  • My sense is that evaluating individual orgs/projects would be a lot easier if we had better high level community metrics in place (e.g. better understanding of EA growth rate, value of a new EA, value of a university group, etc.)

I think that the greatest value that this post serves is in giving young people some pause when they are relying on CEA-governed resources to try to determine how to live their lives. Thank you.

Thanks Grace! I do think there’s a problematic degree of hero-worship (of both individuals and organizations) in the community, and would be very pleased if this post helps reduce that dynamic in any way.

(Minor, but their first name is Ruth).

My mistake, sorry about that Ruth! Thanks for flagging Linch!

No worries, I don't think inferring names from usernames is trivial! :) 

hehe no worries at all. it's confusing, but ruth grace has better SEO than just ruth :D

[anonymous]

I wanted to add a brief comment about EA Ventures.

I think this piece does a fair job of presenting the relevant facts about the project and why it did not ultimately succeed. However, the tone of the piece seems to suggest that something untoward was happening with the project in a way that seems quite unfair to me.

For example, you say:

Personally, I (and others) suspect the main reason EAV failed is that it did not actually have committed funding in place.

That this was a big part of the issue with the project is correct, but also, the lack of committed funding was no secret!

The launch announcement included this line about the role of funders:

For funders Effective Altruism Ventures is a risk-free way of gaining access to higher quality projects. We will learn about your funding priorities and then introduce you to vetted projects that meet your priorities. If you don’t like a project you are free to decline to fund it. We simply ask that you provide us with your reasons so we can improve our evaluation procedure.

Additionally, the tagline under "funders" on the website included the following:

Impact-focused backers who review proposals vetted by our partners and experts

Similarly, you attempt to show an inconsistency in the evaluation of EA Ventures by contrasting the following paragraphs:

When piecemeal evaluations have surfaced, they’ve offered conflicting evidence as to why EAV failed. In a 2017 comment thread, EAV co-founder Kerry Vaughn wrote: “We shut down EA Ventures because 1) the number of exciting new projects was smaller than we expected; 2) funder interest in new projects was smaller than expected and 3) opportunity cost increased significantly as other projects at CEA started to show stronger results.”

Vaughan has also suggested in 2017 that “Part of the problem is that the best projects are often able to raise money on their own without an intermediary to help them. So, even if there are exciting projects in EA, they might not need our help.” That explanation seems quite different from the original three reasons he supplied; it also seems easy to prove by listing specific high quality projects that applied to EAV but were instead funded by others.

But you fail to note that the comment cited in the second paragraph was in reply to a comment from the first paragraph!

I was merely responding to a question about how it can be the case that the project received fewer exciting projects than expected while also having a harder time funding those projects than expected. There's nothing inconsistent about holding that while also holding that the three reasons I cited are why the project did not succeed.

Overall, I think EA Ventures was probably a worthwhile experiment (although it's hard to be certain), but it was certainly a failure. I think I erred in not more cleanly shutting down the project with a write-up to explain why. Thanks for your assistance in making the relevant facts of the situation clear.

the lack of committed funding was no secret!

FWIW, while EAV was running I assumed there was at least some funding committed. I knew funders could decline to fund individual projects, but my impression was that at least some funders had committed at least some money to EAV. I agree EAV didn’t say this explicitly, but I don’t think my understanding was inconsistent with the quotes you cite or other EAV communications. I’m almost positive other people I talked to about EAV had the same impression I did, although this was admittedly a long time ago and I could be misremembering.

 

you attempt to show an inconsistency in the evaluation of EA Ventures by contrasting the following paragraphs…

I don’t think there’s a logical inconsistency between the views in those two paragraphs. But I do think that if “the best projects are often able to raise money on their own” was a significant factor, then you should have mentioned that in your original list of reasons why the project closed. Similarly, if a lack of committed funding was “a big part of the issue with the project”, that should have been mentioned too. Taking a step back, this all points to the benefit of doing a proper post-mortem: it’s a chance to collect all your thoughts in one place and explicitly communicate what the most important factors were.

 

the tone of the piece seems to suggest that something untoward was happening with the project in a way that seems quite unfair to me.

My overall take on EAV is that it was ill-conceived (i.e. running a grantmaking project without committed funds is a mistake) and poorly executed (e.g. the overly elaborate evaluation process, lack of transparency, and lack of post-mortem). I think these problems fall under the umbrella of “sometimes stuff just goes wrong and/or people make mistakes” (though I do believe the failure to do a post-mortem had problematic repercussions). To the extent I implied these issues resulted from “something untoward”, I apologize.

That said, the shifting narratives about the project definitely rub me the wrong way, and I think it’s legitimate to express frustration around that (e.g. if during the project you say the quality and quantity of projects exceeded expectations, and after the project you say “the number of exciting new projects was smaller than we expected”, I really think that warrants an explanation).

 

I think EA Ventures was probably a worthwhile experiment (although it's hard to be certain)

In my opinion, EAV could have been a worthwhile experiment, but since lessons from EAV weren't properly identified and incorporated into future projects, it is better characterized as a missed opportunity.

We've now released a page on our website setting out our approach to moderation and content curation, which partly addresses one of the points raised in this post. Please feel free to share any feedback in comments or anonymously.

Thanks for publishing that Max, and for linking to it from CEA's strategy page. I think that's an important improvement in CEA's transparency around these issues.

If you have information or anecdotes that relate to this analysis, you can share them anonymously via this form. Thanks to an anonymous EA who DM’d me with the suggestion to set up this form.

A couple of additional thoughts on CEA problems that aren’t specifically related to community building work, but that presumably negatively impacted that work.

  • An EA who would like to be anonymous reached out to tell me that “CEA was notorious for failing to pay contractors or promised reimbursements in a timely manner” and that “my last confirmed data point for massive neglect was 2018… hopefully they're doing better now.” Based on this person’s familiarity with the EA community, I’m very inclined to trust this information.
  • I think it’s something of an open secret that for years CEA either responded very slowly or not at all to many emails or other inquiries. I experienced this personally and know multiple other people who experienced it too. As one relevant data point, a comment on CEA’s year-end “CEA is fundraising for 2019” post that in part read “I think I will be most impressed if 2019 is a year of doing the basics well… For example, staff respond to emails within 48 hours…” received more karma than the post itself (46 vs. 29 at time of writing). My sense is that this problem has probably gotten better since current management took over in 2019.

Was and is there going to be any accountability for the cultish aspects of things like the Pareto fellowship? That sounds absolutely bizarre. 

Re: "was there any accountability", I’m not aware of any other than CEA now acknowledging this project on their Mistakes page. That page makes it pretty clear that the Pareto Fellowship was not the main reason why the staff who ran it no longer works at CEA (“For a variety of reasons mostly unrelated to this program, neither the staff who directly ran the program nor the management staff who oversaw it still work at CEA.”) 

I’m pretty sure the “management staff who oversaw it” was Kerry Vaughan, who was in charge of EA Outreach. Assuming that’s correct, it doesn’t seem like there was much accountability there either. Despite Pareto’s problems, he continued to be given important responsibilities like EA Funds (launched a few months after Pareto’s closing was announced).

As another data point suggesting there hasn't been accountability, CEA promoted and funded the 2018 EA Summit which was run by one of the Pareto founders.

Re: “is there going to be any accountability” I’d encourage you (and anyone else who cares about this) to support and participate in meaningful governance conversations. The best way to ensure there’s accountability for future projects that go off the rails like this is to have clear standards and expectations about what should happen in those cases. You can comment on this thread and/or communicate your concerns directly to CEA leadership (which can be done anonymously). I'm not sure it's practical or desirable to have any accountability for the Pareto Fellowship at this point, but we can make sure there's accountability if/when future projects have similar problems.

[anonymous]

Can you please elaborate (or link to somewhere with info)?

[This comment is no longer endorsed by its author]

It's discussed in the OP. You'll find further links there.

"In response to a comment that “multiple friends who applied to the Pareto Fellowship felt like it was quite unprofessionally run” CEA staff reiterated that an evaluation was “forthcoming”, but it was never published."

 

...

Oops, sorry. OP is long and I only glossed over it.
