This is our review of the Centre for Effective Altruism's progress in 2020.[1] We've also posted our plans for 2021.


I think that CEA has made good progress this year: we improved our programs while narrowing our scope. Nevertheless, I would have liked to see even more progress on groups.


Core programs:

  • Groups: Some significant improvements in the second half of the year.
    • Grantmaking: We funded 20 groups, with new grants to NYC, Cambridge, Harvard, Brown, Yale, and MIT (amongst others). We assessed 86 case studies from 2019.
    • Fellowships: Fellowships (~8-week courses) are becoming a high-quality, scalable onboarding mechanism. We improved curricula and trained organizers. 184 people completed summer fellowships through groups (with ~430 more anticipated by the end of December), including >85 group organizers.[2]
    • Support: Support quality improved. We held >140 support calls, with an average likelihood to recommend (LTR) a call to a friend in a similar position of 9.4/10 (compared to an LTR of 7.3/10 in 2019).[3]
  • Forum: Increased pageviews by 90%, with more high-quality posts[4] than last year.
  • Events: We learned how to run high-quality, targeted online events. We doubled the number of attendees while keeping satisfaction steady, although the average number of hours of engagement and the number of new connections fell significantly compared to our numbers from in-person events.[5]
  • Community health: Overall similar to last year. We handled cases involving conflicts between community members, other community problems, and PR cases (including shaping four major news pieces). We trained media spokespeople and monitored risk in geographic areas and research fields where EA is getting newly established.

Spin-off programs (these will be covered in more detail in an upcoming document):

  • Giving What We Can: Luke Freeman took over running the organization. In his first three months, he made GWWC’s website more cause-neutral and up to date, and saw new pledges triple (compared to the same time period in 2019).
  • EA Funds: We improved communication with donors, as well as the grantmaking process. We increased total donations by 49% to $8.4m (January-November).


Suboptimal progress on groups: Improving groups was a major focus of the year. Progress was slow in the first half of the year, partly due to strategic uncertainty. Most of the progress from groups has come in the second half of the year, and I think that our recent trajectory is solid. However, progress over the year was less than I wanted.

Failure to produce high-quality intro content: The introductory sequence we produced for the EA Forum wasn’t high-quality enough to promote widely.

Scope and strategy

We narrowed our scope and developed our long-term plans:

  • Increased our focus on recruiting students from university groups. We developed more internal clarity/discussion about this and other goals.
  • Closed EA Grants, directing applicants to EA Funds instead.
  • Successfully spun GWWC out of CEA, led by Luke Freeman. Luke reports to our board, with operational support from CEA (like 80,000 Hours).
  • Made progress towards determining the future of EA Funds: We hired Jonas Vollmer to lead the project, and we’re exploring whether and how to spin it out of CEA (again, with Jonas reporting to the board and receiving operational support).


Staff retention was 94%, compared to 83% last year and 53% in 2018. Staff are generally performing well and report high satisfaction.

We made improvements to our financial reporting, HR admin systems, governance, and grantmaking system. These improvements will also benefit 80,000 Hours, the Forethought Foundation, GWWC, and the Longtermist Entrepreneurship Project.

The following sections provide supporting evidence for this executive summary.


Community Building Grants

Community Building Grants (CBGs) allow group organizers to professionally engage in local community building activities.

Cost: $1,320,000

FTEs: 1.0 (+ 0.5, Groups Manager)


  • Funded 20 FTEs' worth of grants across 20 groups, including 9 grants to first-time groups. We expect to fund an additional 1-3 FTEs by the end of the year.
  • Ran successful application rounds for NYC and Cambridge. We think that these grants likely wouldn’t have been made if we hadn’t helped the groups find organizers.
  • Case studies: We think that 10 grants in 2019 produced 64 weak cases of impact, 19 moderate cases, and 3 strong cases. We expect 2020 and 2021 grants to be better than this.
  • Support: We created a new collection of online resources for grantees.


We actively ran application rounds for EA NYC and EA Cambridge, which produced three full-time organizers (Arushi Gupta, Aaron Mayer, and Dewi Erwan). We expect location-specific application rounds to become an important source of high-value grants in the future.

New and active Community Building Grants

CBG recipient table

Organizer case studies

We think that CBGs have contributed to notable improvements in some groups (though CBGs weren’t the only factor).

For instance, before receiving a full-time CBG, Stanford EA was run by dedicated leaders who had somewhat limited capacity. Two years ago, the group had a small executive team, and one of its main activities was running a Doing Good Better discussion group. They now have a 13-person exec team and run cause-specific programming on AI safety, governance, and biosecurity. This year, they had 195 people in their intro fellowships, including members of other groups and leaders at Berkeley and UCLA. Stanford EA members supported the development of the Stanford Existential Risk Initiative (SERI) Summer Research Fellowship, to which over 250 Stanford students applied to conduct summer research related to existential risk (20 were accepted).

Kuhan (the CBG recipient) appears to be directly responsible for a lot of the improvements above, and is referenced in some of the case studies from Stanford. Kuhan seems to have been helped by two organizer retreats we ran in 2019, as well as by the funding to focus full-time on the group. However, we also think that this improvement was partly caused by support from other core EAs, a number of core EA graduate students joining, and the fact that SERI started around the time that Kuhan’s grant started.

We also think the grant provided to Emma Abele at Brown allowed her to build the group, for example by running an intro fellowship, an in-depth fellowship, and 1:1 career calls. According to the local groups survey, the group went from having no “highly engaged”[6] members (as identified by the organizer) to 37 within a year.

Member case studies

We surveyed 145 members from 10 major 2019 grants.[7] We think that these grants were a bit less promising on average than our 2020/21 grants (but we haven't done a full evaluation of our 2020 grants yet).

We judged 86 of the 145 group members to have taken significant action based on a good understanding of EA ideas, and we categorised these cases as strong, moderate, or weak based on our expectations about the counterfactual impact the group had on the individual.

We think there were 64 weak cases of impact, 19 moderate cases of impact, and 3 strong cases of impact.

What follows are some anonymized examples:

Strong impact: X

X is a technical researcher at a longtermist organization. They performed exceptionally well in a top technical undergraduate degree, and in technical competitions. They received internship offers from top companies in their priority cause area and from other prestigious technical companies.

X estimates that they attended 20+ of their group’s events in the last 12 months. They report that a group retreat led them to make new friends, become more involved with their local community, and then go on to relevant technical workshops and 80,000 Hours coaching. They think that this exposure to the EA community made them think more about their career and made it more likely that they would work in this field.

Moderate impact: Y

Y is planning on a two-year master’s in a technical subject at a top university. They were recently offered a grant to work on community building at that university. After their master’s, Y plans to enter a PhD program and work as a researcher on a priority area. They estimate that they have spent 40 hours per month engaging with their group over the last year.

Y is a member of a top university group. Y said that they would probably have pursued a lower-impact path if they hadn’t engaged with the group.

Weak impact: Z

Z works at an EA-adjacent organization. They report selecting their current job on the basis of EA principles and ideas, and think they wouldn’t have considered this option if they hadn’t engaged with their group. They have taken the GWWC pledge. They estimate that they spend 4 hours per month interacting with the group.

Improved onboarding

Some grant recipients said they would have liked a more thorough onboarding process at the start of their grant period. We worked with Gabriella Overroder (EA Sweden) and Eirin Evjen (EA Norway) to develop a collection of online resources for grant recipients (including strategy and operational advice and useful contact information) and set up fortnightly calls for new grantees with an experienced grantee.


We didn’t set clear enough expectations about what support we can provide. Some grantees expected and desired more post-grant support than they received. This may have prevented grantees from exploring further ways to find support or support each other.

We spent longer than we anticipated redesigning our impact survey, and we're not sure that it was a big improvement on what we had before. Fewer people filled out the survey than we had expected; we will send it to a significantly greater number of group members next year.

We raised our bar for making new grants in 2020, and we’re likely to increase it further in 2021.

Group Support

We help local group organizers by advising them, providing resources they can use, and creating online spaces where they can share resources and support each other.

Cost: $200,000

FTEs: 2.0 (we also often work with volunteers or CBG recipients)

Progress was relatively slow in the first half of the year, but has accelerated since then:

  • Fellowships: At least 184 people completed summer fellowships through groups (with ~430 more anticipated by the end of December). We provided training and advice for people running fellowships, updated a version of the introductory fellowship, and are working on updates to the advanced fellowship.
  • Satisfaction: 63% of organizers rated themselves as somewhere between ‘somewhat satisfied’ and ‘very satisfied’ with CEA support. 2% were dissatisfied, and the remainder were neutral or reported not having received support. We think this is a signal that we should invest more in support.
  • Events: Co-hosted 4 online events for organizers, almost all of whom thought they were useful, and 33% of whom thought they were very useful.
  • Support calls: Held >140 calls with >100 organizers (calls received an average recommendation score of 9.4/10) and responded to >300 written enquiries from group organizers. 99% of organizers found this service useful (30% found it very useful).

Group size and engagement statistics

From our 2020 EA groups survey:

  • We tracked ~250 total active groups across 42 different countries in the EA groups survey, compared to 176 groups in 2019.[8]
  • Group leaders reported ~12,300 people who came to at least one group event in 2020, compared to ~14,500 people who came to at least one group event in 2019.
    • In contrast, these numbers grew from ~2,000 to ~2,500 for the university groups we’re focusing on.
    • We expect these numbers have dropped due to COVID.
  • Group leaders reported 892 people who engaged with 30-100 hours of content through group activities in 2020, and 341 people who engaged with 100+ hours of content. (We didn’t ask a comparable question in 2019.)


Fellowships (typically reading/discussion groups with an application process, lasting a few months) are a common group activity, and one with the potential to provide a detailed and accurate overview of key ideas. We thought that there were some opportunities to improve the fellowship (e.g. by updating the readings and adding more exercises).

Improved curriculum: Joan worked with Huw Thomas, Alex Holness-Tofts, and James Aung to make improvements to the issues identified above. Most of the (external) reviewers thought that the updated fellowship was an improvement from the previous version. We are now testing this fellowship with 6 focus university groups (~200 students), and we hope to make further improvements based on organizer feedback before deciding whether to share it with all groups.

Facilitator training: In August we ran a training session, in conjunction with experienced fellowship facilitators, for 36 organizers who were planning to run a fellowship (likelihood to recommend 8.5/10). There have been 19 introductory fellowships in the second half of the year.

In total, 184 people participated in five introductory fellowships over the summer, and we anticipate that around 430 people will attend an introductory fellowship this fall (~200 of them using the new fellowship curriculum at focus universities like Yale and MIT).[9]

Additionally, 179 people participated in EA Oxford’s In Depth Fellowship over the year. We are contracting part-time with Will Payne (EA Oxford organizer) to develop it further.

Organizer training

We improved our onboarding process: all prospective organizers receive a personal email, are asked to schedule a call with Catherine Low, and are sent key resources.

At least 85 organizers attended a high-quality EA fellowship to improve their understanding of EA ideas, 60 of whom attributed their attendance to our promotion of the fellowships.[10] We think this is important because it means that they will have a good understanding of EA ideas to share with their groups.

We began a small trial of a mentorship program which matched experienced organizers with core EAs who used to run groups, as well as professional coaches. If these matches prove valuable, we plan to expand this service.

We had ~340 attendees across online training events/meetups that we held over the course of the year,[11] compared to ~90 who attended our (more intensive) retreats in 2019. Average likelihood to recommend the training was 8.3/10. 48% of organizers attended an event, almost all of whom thought it was useful, and 33% of whom thought it was very useful.

Minor grants

We provided $30,784 for non-salary expenses across 38 groups. This was much lower than usual due to COVID. Projects funded include a forecasting tournament run by UChicago Effective Altruism, and retreats in Zurich, Australia, New Zealand, Yale, and Denmark.

We also revamped our funding systems, so that applicants have more information about what types of activities we are interested in funding, and so that evaluation and payment are faster.


We spent more time updating and creating resources for organizers. We created five new pages on the EA Resource Hub, and updated 15 key pages (with help from volunteers).

Particularly important were:

  • The Virtual Events Guide, which helped organizers transition to online events
  • A guide to introductory presentations, with links to videos and scripts
  • A publicity guide which answers some of the most common questions we had, and highlights the importance of accuracy and the risks of publicity

We had 14,000 page views on the EA Resource Hub, which includes pages written by CEA staff and pages written by volunteers, and the number of active users tripled from January to November (we started tracking this in January, so some of this might be a seasonal effect). 61% of organizers used it, and 97% of these organizers found it useful.

We use the monthly EA Groups newsletter to increase organizers’ knowledge of EA ideas and EA community building strategy. 97% of recipients find this content useful and 15% find it very useful. We increased subscriptions to the EA Groups newsletter by 15% over the year, but the click-through rate fell by around 4% relative to 2019. Most organizers found the newsletter moderately useful.

62% of organizers use our Slack channel. We have 942 total members and 170 weekly active members.


We believe we provided faster and higher-quality advice for groups via message and support calls, compared to previous years.[12]

We received and responded to 310 messages that required significant time/effort to address (plus many minor messages) from Q1-Q3. Nearly all responses were within 2 working days. 41% of organizers used this service, and 99% of those that did found it useful (30% very useful).

We had ~140 calls with 97 different organizers, weighted towards organizers we thought were particularly important to support. We had 18 calls with people planning to start new groups. When the organizer was new or the call was instigated by the organizer, we sent a feedback form afterwards. Organizers gave the calls a recommendation score of 9.4/10. However, only about 32% of organizers have used this service: we think that scaling up calls might improve organizer satisfaction and help organizers to have more impact.

Examples of support:

  • We’re providing an orientation and regular support calls for leaders piloting the new intro fellowship at Oxford, Cambridge, Stanford, Yale, MIT and Brown.
  • Matt Reardon (CEA contractor and previous Harvard Law School EA organizer) is supporting existing law school EA groups and exploring the viability of starting new law school groups.
  • We had onboarding calls with new organizers at UC Berkeley, and provided advice and funding to support succession planning at MIT and Yale.
  • We’re reaching out to potential organizers to start a group at Georgetown.
  • Case study: Until this year, the University of Melbourne group (no. 1 university in Australia, 32nd globally) hadn’t had any interaction with CEA nor much interaction with EA Australia/New Zealand. After interacting with Catherine, the organizers participated in an introductory fellowship and focused their strategy on higher-impact activities.


Groups capacity:

  • Catherine Low, who has a lot of experience supporting groups and has reliably provided high-quality advice, joined as a contractor.
  • As noted above, we brought on several CBG recipients as contractors, to provide resources and support for other groups. We plan to continue to do this.

A few organizers were dissatisfied with the level of support we gave them, and in some cases we think we should have offered more support than we did. We should also have done more to make sure that organizers knew what we offer (e.g. support calls), since organizers weren’t always aware of the resources they could access. We are investigating what other improvements we should make in 2021.

Effective Altruism Forum and online content

The Effective Altruism Forum (EA Forum) aims to be the central place for collaborative discussion about how to do the most good.

Cost: $340,000

FTEs: 2.5

The Forum’s key metrics grew markedly, and we think post and discussion quality improved.

  • We think the Forum is getting more high-quality posts than last year, but around the same number of excellent posts.
  • Total pageviews[13] (our main metric) grew by 90%. We estimate that we facilitated 80,000 hours of engagement with high-quality EA content.[14]
  • Mean user satisfaction score of 7.4/10

Forum content

The number of high-quality posts (those the team considers especially likely to spark action or impart useful knowledge) doubled compared to 2019.[15] However, we think the Forum is getting around the same number of excellent posts.

We created a sequence of introductory articles. It was a mistake to try to generate original content as part of this sequence: we spent a lot of time on edits and the final material was less compelling than the other material cited. We have not felt confident enough to share the introductory sequence widely, and it has not had as many views as we hoped.

We promoted "Ask Me Anything" sessions with top thinkers such as Toby Ord, Owen Cotton-Barratt, and Elie Hassenfeld, and it seems that more top thinkers (e.g. Carl Shulman and Will MacAskill) are occasionally commenting on the Forum.

In our judgement, some of the most important posts of the year were:

We think that many of these posts would have been created without the Forum, but the Forum helped them to get a wider audience: many saw thousands of pageviews. You can see the top posts by karma for this year and the previous year here.


Monthly active users increased by 30%, pageviews are up by 90%, and daily votes grew by ~65%. The number of comments has been increasing faster than the number of posts, indicating that the average post gets more engagement than it used to.

Forum users gave a mean satisfaction score of 7.4/10. Lower-scoring users reported a mix of technical complaints (e.g. about the previous text editor, which we’ve now replaced) and concerns about the quality/subject matter of some posts and comments. We had several complaints that Forum users and moderators were too politically progressive (especially around issues of diversity and inclusion), and a similar number of complaints that they were insufficiently progressive.[17]

Individual case studies

At least 13 people report that the Forum led them to apply to new jobs or research positions (at least three of these were hired); at least eight people applied to EA Global, EAGx, or another EA event; and at least 50 people changed their minds on a subject that they consider important. These are self-reports from a series of surveys and user interviews covering ~200 unique users. This represents ~10% of the Forum’s readership, but a higher percentage of our most active users.

Michael Aird has been a very active user over the past year. Michael credits the Forum with helping him become more engaged and says that he would advise someone like his earlier self to focus on the Forum today (rather than learning about EA from many different sources), thanks to the growing number of tags and collections. He says that writing on the Forum has substantially increased his visibility within EA. In 2019, he received 2 job offers from EA organizations out of 30 applications, and in 2020 he received 4 offers from 8 applications; he thinks this may have been in part because writing on the Forum made employers more aware of his work.

Zach Groff was already engaged with animal welfare work, but has pivoted towards global priorities research:

I may have been ambivalent in the past, but as time has gone on (and social isolation has set in), I've realized [the Forum] plays a key role in the EA community and has remarkably high-quality dialogue for an online forum.

The Forum’s first EAGx talk seems to have brought in many new authors. Angela Aristizabal learned about the Forum by watching the talk:

Thanks to [the EA Forum EAGx talk], I did a post about geographic diversity in EA. Thanks to the comments, I ended up meeting some people from Brazil and then started working with them in a project for an organization called Generation Pledge.

In the initial application round, 7 of the top 21 Charity Entrepreneurship (CE) applicants came from the EA Forum; the Forum was also the most frequent source of applicants who made it to the second round. Although (due to multiple factors) none of the eventual attendees of the incubation program reported hearing about it through the EA Forum, CE considers the strength of the initial application pool a strong signal, and believes that the EA Forum was an excellent source of applicants.


New features developed by, or in collaboration with, LessWrong:

  • We tweaked our algorithm so that the homepage places more weight on karma and less on recency, which improves the quality of posts on the frontpage.[18]
  • Improved editor with support for tables, better image support, etc. (in the last few months, this has saved us a lot of time responding to queries from authors).
  • There are now tags, which help users find content on particular topics and will soon serve as an early-stage “EA wiki”. (In 2021, we plan to update many tags to include detailed content on their respective topics.)
  • We built a feature to hide karma scores in response to feedback from a single (VIP) user. When we asked them to trial the feature, they decided that they didn’t want to use it. We will conduct outreach with a wider range of users before building features in the future.

Ongoing experiments to increase engagement:

  • Talks: Aaron gave talks at EAGxVirtual and the EA Student Summit to encourage people to post to the Forum. We estimate that at least a dozen people chose to write their first Forum posts as a result.
  • Increasing posts: Aaron has given feedback on more than 80 draft Forum posts, per his standing offer. We awarded 25 Forum Prizes to incentivise the production of high-quality content.
  • Promotion: We saw a sharp increase in people coming to the Forum through social media early in the year, but we failed to consistently get social media posts out, and that statistic remained flat for the last few months. This was a mistake.[19]
  • SEO: We put a lot of effort into improving the Forum’s rankings in search engines (which were unusually low). We got some improvements from this, but in retrospect, the improvements weren’t worth the effort we put in.

EA Newsletter

We have focused most of our effort on the Forum, and we have delegated more work on the EA Newsletter, social media content, and EAG transcripts to a contractor. The newsletter’s engagement has held relatively steady as we’ve devoted fewer total staff hours to it.

As a direct result of receiving the newsletter, six people took the Giving What We Can pledge or signed up for Try Giving, and nine people applied to an EA-related job or research position.


Events enable attendees to make new connections, learn about core concepts, share and discuss new research, and coordinate on projects.

Cost: $560,000

FTEs: 3.5

In total, we hosted 4,252 attendees across six online events. Their average LTR (likelihood to recommend the event to a friend with similar interests) was 8.2/10, and attendees made an average of 4.6 new connections.[20] We estimate a total of ~14,500 hours of engagement (excluding EAGxAsia-Pacific).

Throughout the year, we learned how to run high-quality online events and how to make our events more goal- and user-focused.

We also ran the Virtual Coordination Forum for 30 attendees.[21] 85% said it was a better use of time than what they would have done if they hadn't attended.

Event statistics

Events table

Notes on metrics

LTR: Average likelihood to recommend the event to a friend, out of 10

Average new connections: “As a result of [event], roughly how many new people in the EA community do you feel able to reach out to (e.g. to ask a favour)?”

Hours of content: A conservative estimate, based on data from our event platform, of how many total hours attendees spent viewing content. We used different platforms for different events, so we aren't confident in these numbers.

Estimated total new connections: New connections multiplied by attendees.

Total hours of engagement: For 2019, we multiplied the number of total attendees by the total length of all event-related activities. (EAGx events are generally shorter.)
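The two "total" metrics above are simple products. As an illustrative sketch (using EAGxVirtual's reported figures of 1,389 attendees and 4.9 average new connections; the variable names are ours):

```python
# Illustration of the "estimated total new connections" metric above:
# average self-reported new connections multiplied by attendee count.
# Figures are EAGxVirtual's, as reported later in this review.
attendees = 1389
avg_new_connections = 4.9

estimated_total_connections = attendees * avg_new_connections
print(f"{estimated_total_connections:,.0f}")  # roughly 6,800
```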

EA Global: Virtual

We switched to an online event with less than a month’s notice. We learned a lot about the differences between live and virtual events (e.g. we needed more interactive elements).

  • 456 attendees activated their profile on the networking app.
  • Attendees reported 4.4 new connections on average, compared to 7.2 at EA Global: San Francisco 2019.
  • LTR was 7.8: the norm for in-person EA Global conferences is 8.2-8.5.
  • 89% of respondents reported that they felt welcome, in line with previous results.

We learned this year that 7 new Charity Entrepreneurship founders/staff were referred to the organization through this event or through EA Global events in 2019.


EAGxVirtual

Collaborating with EAGx organizers from around the world, we ran the largest-ever EA conference.

  • 1,389 attendees from over 60 countries. For many, this was their first chance to attend an EA event.
  • Attendees reported 4.9 new connections on average (self-reported in our post-event survey). Our networking app showed over 10,000 connections and nearly 2,500 meetings.
  • LTR (likelihood to recommend the event to a friend with similar interests) was 8.4 (in line with in-person events)
  • Attendees viewed ~5,000 hours of content. One third of attendees reported changing their mind about something, and half reported that they changed their plans.

We had some minor issues with our application systems, which we fixed.[22]

Quotes from attendees:

  • "I matched with Kuhan Jeyapragasan and he is the president of Stanford EA. We will probably collaborate together with UC Berkeley EA in the future and either share speaker events or host an online fellowship."
  • "The most valuable thing for me as someone new to the EA was to see the diversity of people and ideas and experiences in EA. Before, I had doubts if it's for me, if I fit here, but now I feel much more confident and comfortable about identifying as a part of this community and being more active in it."

Introductory events

We collaborated with university group organizers in Europe and the USA to host two high-quality EA introductory events, featuring a talk and Q&A with Will MacAskill, Habiba Islam, and Joan Gass. The introductory talks were followed by 44 different discussion groups so that attendees (very often new to the EA community) could immediately meet with local community members.

  • 867 attendees
  • LTR for the events: 8.4
  • LTR for effective altruism:
    • 8.8 before the event
    • 9.0 after the event
  • 48% (144) of attendees who filled out the survey said this was their first EA event

The most common words they used to describe EA after the event were: rational, thoughtful, compassionate, effective, and analytical.

EA Student Summit

The EA Student Summit aimed to onboard students to EA and increase their engagement.

  • We accepted 987 students, of whom 759 registered for the event.
    • We also accepted 193 “EA professionals” who spoke at the event and/or were available to meet with and mentor student attendees.
  • Feedback from survey respondents (N = 192):
    • LTR: 8.0
    • The average respondent had 6 one-on-one meetings (much better than our goal, which was 2), and reported knowing 4.4 new people in the EA community as a result of attending (our goal was 3).
    • 12 respondents reported making a major plan change, and 78 reported a minor change; 102 reported making an important connection (related to a future career opportunity, funding opportunity, or collaboration).
  • Of registered students who shared demographic information:
    • 61% reported their gender as male.
    • 55% reported their ethnicity as white or European origin.
    • We did targeted outreach and advertising to underrepresented groups, which seemed to lead to a more diverse attendee base.

Quotes from attendees:

  • “I found out there were 6 others also from the University of Edinburgh at the summit, and I connected with all of them. We're now in the process of setting up an EA student society here.”
  • “[At a meeting with an ambassador] I had a misconception that a PhD in ML was needed to get into AI policy, and he informed me that a Master's is probably the ideal level of education. This has made me consider AI policy as a serious candidate for which career I will take up.”
  • “I met with [name withheld] for a general conversation about my potential career plans. He had some good suggestions about what I should do on my placement year, and invited me to the Cambridge applied maths EA conference which I think will give me even more ideas.”


EAGxAsia-Pacific

EAGxAsia-Pacific, held virtually in November, evolved out of in-person conferences planned by organizers in Singapore and Australia. The Asia-Pacific region includes both established EA communities and burgeoning new communities and organizations, and we hoped to spotlight EA work in the region. Since time zones and long travel times pose a major barrier to intermingling between EAs on different sides of the world, we also aimed to promote connections between community members in the Asia-Pacific region and community members elsewhere. Unlike our previous virtual events, this one was scheduled for the convenience of Asia-Pacific time zones. 526 people attended the event.

As the event was held recently, we do not yet have an analysis of the outcomes.


EAGx organizers retreat

EAGx organizers felt that the retreat helped them build relationships with other organizers and learn best practices for event planning. One said: “I have learned so much and I feel fully equipped and I know I can ask if I have a question without feeling stupid.” Attendees' average LTR score was 9.6/10.

This led to better collaboration between EAGx teams through the EAGx Slack workspace, and allowed us to work together effectively on EAGxVirtual and EAGxAsia-Pacific.

Virtual Coordination Forum

30 attendees (mostly leaders and experienced staff from established EA organizations) came together to coordinate around two topics: EA’s relationship to longtermism and EA’s target audience. 85% of survey respondents said the Virtual Coordination Forum was “a better use of time, compared to what they would have done if they hadn't attended” (slightly less than the 93% last year). They also reported increased knowledge and understanding of other attendees’ views about the two focus topics.

Next time, we’ll circulate documents further in advance. We also hope to hold the next event in person, which will allow for more casual small group discussions.

Community health

The Community Health team aims to reduce key risks to the EA community’s future. Their work includes fostering a good culture, improving diversity, mitigating harm done by risky actors, reducing the harm of negative PR, and identifying risks to early field-building.

Cost: $300,000

FTEs: 2.5

The team’s activity level remained relatively stable in 2020, despite our plans to increase capacity.

  • Risky actors: Handled 21 new risky actor cases, 8 follow-ups on known risky actors, and 21 other cases (mental health, organizational health, community conflicts).
  • PR:
    • Damage control on 4 major cases and 75 minor cases
    • Media training for four potential “EA spokespeople”
  • Early field building: Monitored risks in a few key geographies and cause areas, intervened in a PR-sensitive early field building issue.
  • Diversity, equity, and inclusion: Gathered data about the experiences of EAs from underrepresented groups, advised other EA organizations about DEI, and supported the groups and events teams on better service to community members from underrepresented groups.

While we have included summaries of all the types of work the community health team does, we’re limited in the specifics we can give because the cases often involve information that is sensitive or personal.


PR

Proactive: We provided media training to several community members (Cassidy Nelson, Will Bradshaw, Luke Freeman, and Jade Leung) working in newer areas of EA, or areas where we especially value clear communication about complex subjects. This involved training sessions, advice on assessing media requests, and referrals of some low-risk requests for practice. Luke is now providing spokesperson support and training for GWWC members.

Reactive: We monitored 137 potential PR cases. One of the major cases from the year:

  • In October, a Washington Post magazine piece about EA that has been in development since 2018 was published. Based on experience with past coverage of EA, we were concerned that the piece would include inaccurate or sensationalized depictions of EA. Before publication, interviewees contacted us because the journalist planned to quote them in ways that didn’t reflect their current views on sensitive topics. We advised interviewees on communicating with the journalist. The journalist cut or modified the quotes to make them more accurate, and the resulting piece was mostly positive. CEA, 80,000 Hours, and the interviewees used the Post’s comment section to share high-quality resources for readers who wanted to learn more about EA.

PR metric: We learned about 78% of interviews before they took place.[23] The earlier we learn of an interview, the more proactive help we can give on mitigating risks.

Risky actors

The goals of this work are to reduce the ability of individuals to cause harm to others or to EA’s reputation, and to support individuals who have been harmed. We used to call this area “bad actors”, but moved to using “risky actors” to acknowledge that the harm is sometimes unintentional.

Of the 25 most significant risky actor cases:

  • In 8 cases, we think we substantially reduced a risky actor’s ability to cause harm.
  • In 8 cases, we took action or advised someone else, but believe there was no reduction in risk or weren’t able to tell whether we reduced risk (for example, we’re unsure whether the people we advised will act differently based on the advice).
  • In 9 cases, we did not take significant action (because we decided the case was outside our scope, because we decided a legal risk outweighed the benefit of acting, or because there was no obvious next step to take).

Examples of cases:

  • Advised a local EA group on handling a concern that a relationship between two members included abuse.
  • Advised an organization on how to handle a case where a volunteer was accused of a crime.

Early field-building

  • Biorisk: An EA planned to do a Twitch stream interview about GCBRs on a channel with 500,000 viewers. We referred them to someone with experience communicating about biorisk for advice.
  • Nicole conducted calls and risk consultations with organizers in several countries where EA is developing.

Diversity, equity, and inclusion

  • The goal of DEI work is to build a healthy community where people from different backgrounds feel supported, respected, and valued as they pursue altruistic goals. Much of CEA's work in this area happens via the groups and events teams, with advice from Community Health.
  • Event attendees this year, particularly at the Student Summit, were significantly more demographically diverse than the overall EA community, and online events were accessible to more geographic areas.
  • Overall, attendees from underrepresented groups report having at least as good an experience at events as other attendees. We compared LTR scores for attendees of different demographics. Attendees of color gave more positive responses than white attendees at the Student Summit and EAGxVirtual, and less positive responses at EA Global Virtual. Women and nonbinary attendees gave more positive responses than men at EAGxVirtual, very similar responses at EA Global Virtual, and less positive responses at the Student Summit. We haven't yet processed data for EAGxAsia-Pacific.
  • We had 30+ calls with underrepresented group members about EA experiences and interests. Calls reflected largely positive experiences in EA alongside a desire for more connections, a stronger sense of belonging, and greater representation in EA for people from a wider variety of backgrounds.
  • On some surveys, we began collecting data about language background, first generation university student status, financial stressors, and other factors someone may find meaningful to their experience in EA. All questions are optional.
  • We've advised the Operations team on integrating DEI best practices into CEA's future hiring processes, and advised several other EA organizations on DEI questions.
  • CEA's statement on diversity, equity, and belonging includes more of our work and views on this area.

Examples of other work

  • Conflict (10 cases): Advised a local group organizer about handling conflict between organizers and about giving feedback to another organizer.
  • Organizational health (5 cases): A startup in the EA space asked us for advice on handling an accusation that one of their staff harmed another community member in the past. We advised them to talk to an employment lawyer and advised on gathering information about what happened while respecting the confidentiality of the victim.
  • Mental health (9 cases): Several people contacted us concerned about an EA who seemed possibly suicidal, unsure whether or how they should intervene. We talked them through what support we thought it made sense for friends to provide, and advised on when to seek professional help/call 911.
  • Culture: Advised moderators of EA Facebook groups on setting clear expectations for their groups to reduce online conflict.
  • Examples of public resources written:


We added Nicole Ross to this team, but some team members had reduced capacity for much of the year, so effective capacity did not increase despite our plans. This contributed to the area remaining relatively stable, despite our hopes to improve it.

We wish we had drawn a clearer distinction earlier between DEI (diversity, equity, and inclusion) work, PR work, and work on conflicts about social justice and free speech. For example, we categorized some online conflicts or PR cases as DEI issues. In hindsight, we underestimated how much staff time and attention some cases would take. This diverted staff from more proactive work that we think would have better served the community overall, especially EAs and potential EAs from underrepresented groups.

During a year with much tension around social justice and free speech, we think we sometimes misjudged the balance. For example, in advising the Munich group about hosting Robin Hanson as a speaker, we should have highlighted the costs of cancellation in our advice and we didn't correctly anticipate the amount of alarm/backlash the cancellation would cause. See criticism here and our response.

We have recently shifted to focus more on epistemics and culture, and we're considering additional focus on some tractable parts of PR, such as research on effective branding for EA. We also developed theories of change for our program work.


We aim to set and track clear goals and to recruit, support, and retain staff.

Cost: $930,000 (this also includes general expenses like office costs, office food, and online services)

FTEs: 3.25


  • Clarified CEA’s 2021 goals and improved our ability to track and gather data.
  • Hired and onboarded Jonas Vollmer and Luke Freeman to run GWWC and EA Funds. One further hiring round is in progress, and we streamlined and documented our hiring process.
  • Drafted new team values. Implemented two experiments for improved professional development. Updated compensation policy.
  • Streamlined processes like performance reviews, quarterly planning, and all-staff meetings.
  • Enhanced support for staff during the pandemic, including extra online content and more frequent check-ins.

Executive priorities

Max focused on hiring for and spinning out EA Funds and GWWC, and on developing a clearer scope and goals for CEA (as well as lots of management/reactive work). Joan split her time evenly between managing community health, managing groups work, and developing metrics and surveys to assess our impact. The results of this work are mostly covered elsewhere in this report.

Major reflections from Max:

  • I think that we made two very promising hires for EA Funds/GWWC, and that we were somewhat lucky that we did so quickly.
  • I should have been much clearer about what people’s goals and remits are. I think this would have allowed us to operate more efficiently. For 2021, each of our managers has a clear goal, and each goal has a single person responsible for it.



This year, we hired the following people full time:

  • Jonas Vollmer (Head of EA Funds, performance discussed below)
  • Luke Freeman (Executive Director of GWWC, performance discussed below)

New contractors include Helena Dias (grant administrator), Kashif Ahmed (tech support), and Catherine Low, Huw Thomas, James Aung, Alex Holness-Tofts, and Matt Reardon (groups).

Alex Barry (group support associate) left CEA. Our retention rate was 94%.

Job satisfaction

  • Average morale over the year was 6.7/10 (question: “How are you feeling about work this week?”; the 2019 average was 6.1). We consider this good given the impact of COVID.
  • 78% of staff felt that their role was a good fit for their skills and interests, while 22% felt that their role was a medium level of fit, or that there were aspects of strong fit and aspects of weaker fit.
  • Average satisfaction with managers was 6.0 out of 7, and alignment with managers 6.1 out of 7.
  • 78% of staff unreservedly recommend CEA as a place to work (and 22% would likely recommend, but with some caveats about the role or the job seeker’s preferences).

People operations


Hiring:

  • Drafted/updated templates and advice for hiring rounds:
    • More of our understanding of best practice is now documented.
    • We’ve made the steps and accountable parties clearer.
    • We’re testing the new process now, and we’ll make further updates.
  • We streamlined, clarified, and tested our onboarding processes.

Staff support:

  • Drafted new team values mid year. Staff seemed pleased with the new values, and they seem to have shifted behaviour, in particular causing more hypothesis testing.
  • Implemented two experiments for improved professional development.
  • Tested changes to our performance review process.
    • Reduced time spent on reviews, whilst still giving staff useful feedback and catching major issues.
  • Enhanced support to staff during the pandemic (e.g. extra online content, more frequent check-ins).
    • Online team events didn’t go as well as in-person retreats. We plan to reduce frequency, make them more optional, and run a greater variety of activities.
    • We should have used the same questions and scales when surveying staff: the inconsistency made it harder to compare the different events/activities we ran.
  • Streamlined quarterly planning and staff meetings.
    • These both take much less time with similar results.

Public communications

We shared a series of posts on our 2019 work and our plans for 2020, as well as a mid-year update. We also rewrote our 'Mistakes' page.


Operations

The Operations team aims to provide the finance, legal, HR administration, grantmaking, office management and fundraising support that enable CEA, 80,000 Hours, Forethought, and GWWC to run efficiently.

Cost: $620,000

FTEs: 5.25 (3 employees, 2.25 contractors)

Key metrics:

  • Estimated 2020 turnover managed: $21.3M ($10.3M in operational spend, $11M in grants)
  • Total balance sheet managed:[24] $39M
  • Staff supported: 85 (41 on payroll, 44 contractors)
  • Average staff satisfaction with ops support: 8.7/10
  • 273 grants made, with average grantee satisfaction of 4.3/5

We made improvements to our financial, grantmaking, and fundraising systems. We improved our international governance structure, and we’re on track to move into a new Oxford office at the end of the year.


Finance

We overhauled financial systems to increase efficiency and clarity, and revised several of CEA's financial and accounting policies.

We invested funds to generate a return of $126,000 in interest and $1.8M in investment returns (for the Carl Shulman discretionary fund), while freeing up money held in old restrictions.


Grantmaking

We paid out over $11M of grants in 2020, on behalf of EA Funds, Community Building Grants, and the Forethought Foundation.

We implemented a new grants management system, which has streamlined our processes, increased compliance, and reduced turnaround time. However, the first release of the system had an overly complicated grantee user interface that harmed their experience. We’ve since made the system easier to use by cutting down the number of questions grantees need to answer, improving the interface, and fixing an issue where an application form was timing out on users.

Key stats:

  • Time from recommendation to payment: 16 days on average (some of which is when we’re waiting on information from the grantee)
  • Ops time for each additional grant: 13 minutes
  • Average grantee satisfaction score: 4.6/5


Fundraising

We implemented a donor management system to improve our understanding of our donors. We also managed end-of-year reporting and outreach to previous donors and promising leads.

Office management

As discussed in our last report, we closed our Berkeley office.

We are on track to move to an improved Oxford headquarters in December. This is expected to improve wellbeing, productivity, and recruitment for staff at CEA, Forethought, EA Funds, FHI, and the Global Priorities Institute (GPI).


  • HR administration: We systematized our onboarding system, which improved the experience for people joining CEA and saved us time.
  • Governance: We reached a decision on a new international structure, and we are beginning to implement it.


Overall, we expect to fall around 15% under our $6.02M budget. This is mainly due to the impacts of COVID on travel, events, and groups. Another contributing factor was that we made fewer hires than expected.

Finance table (fixed)

Spend varies across projects, and the biggest differences were as follows:

  • Groups Support: $320K underspend due to decreased hiring and grantmaking
  • Events: $640K underspend as a result of COVID-19 and moving events online
  • Community Health: $180K underspend as a result of reduced hiring
  • Leadership and general costs: $100K underspend due to delay in moving to new Oxford office and closing US office

  1. Note that this document mostly covers work from January to October. ↩︎

  2. We weren’t tracking or providing training for this before the summer, so this doesn’t include some spring fellowships. ↩︎

  3. These figures include only calls that Alex, Catherine, and Katie had. We believe we reached more organizers this year, but had fewer calls with each organizer on average. The samples are slightly different (2019 was weighted towards CBG recipients, whereas 2020 was only for groups that requested a call and for new groups). ↩︎

  4. Those that the team considers especially likely to spark action or impart useful knowledge. ↩︎

  5. See events section for more details.

    • We had 4,252 attendees (compared to ~1,555 last year)
    • Our events had an average LTR of 8.2/10 (compared to 8.5/10 for in-person events last year)
    • Attendees reported an average of 4.6 new connections per virtual event (compared to 8.4 for in-person events last year). These new connections are relatively significant: we ask people “As a result of [event], roughly how many new people in the EA community do you feel able to reach out to (e.g. to ask a favour)?” ↩︎
  6. They have spent 50 hours or more engaging with effective altruism content. For example, someone who has read 50 posts or articles related to EA; listened to 10 episodes of the 80,000 Hours podcast; and participated in an introductory effective altruism fellowship.

    In addition, effective altruism ideas and principles played a major role in them doing at least one of the following:

    • Choosing where to donate
    • Developing their career plans
    • Volunteering for 2 or more hours a week on effective altruism-related projects. ↩︎
  7. Grants of 0.5 FTE or more for 12 months or more. ↩︎

  8. We promoted the EA Groups survey more this year, but we also think there has been an increase in the number of groups. ↩︎

  9. We think that there were also fellowships we weren’t aware of in the first half of the year, but we weren’t tracking them properly then. ↩︎

  10. 48 organizers participated in introductory fellowships (by Yale and Stanford). 85 organizers participated in an advanced fellowship run by EA Oxford, 60 of which could be attributed to our promotion of the fellowship. We’re not sure how much these groups overlap. ↩︎

  11. These figures don’t include talks or sessions we ran at conferences. ↩︎

  12. Improved response times and reliability for messages, and higher LTR for calls (9.4/10 vs 7.3/10). ↩︎

  13. This metric refers to pageviews from logged-in users, rather than all pageviews. We use the former because it is less sensitive to shallow engagement (e.g. an article trends briefly on Reddit, generating a lot of views from people who aren’t very interested in EA and won’t stick around). ↩︎

  14. We recorded 25,000 hours of engagement from logged-in users. Another 15,000 comes from projecting engagement rates back to before we began to measure engagement time, and 40,000 comes from us projecting engagement time to non-logged-in users. ↩︎

  15. Weekly average of 11.3 vs 4.7 in 2019. The average also nearly doubled from January to August 2020 (when we stopped counting).

    We identified 9 posts that we thought were “borderline good" and sent the list to several advisors and heavy Forum users. None of them strongly thought that we should have included as "good" a post that we excluded or vice versa. They also substantially disagreed with each other about the ordering of the posts, which is some evidence that the posts we considered borderline were of similar quality.

    We found that the number of views of high-quality posts was highly correlated with total views, and marking posts (as high-quality or not) was relatively time intensive, so we stopped marking posts in August, and are instead focusing on total views. ↩︎

  16. See also their posts on promising career ideas outside their priority paths and why to consider a wider range of options ↩︎

  17. Sample quotes from user surveys:

    “The speed and aggressiveness with which people get downvoted for thinking along the lines of "diversity/inclusion are important" is really worrying, even though this usually rebounds a bit later.”

    “I have seen comments with polite but blunt disagreement called out by moderators when they oppose [points associated with the social justice movement], where much more extreme comments or posts on the opposite side go unchallenged.” ↩︎

  18. Visitors can customize the homepage by weighting how much they want to see different topics. ↩︎

  19. We also maintain weekly digest emails, as well as monthly open & welcome and progress threads. ↩︎

  20. “As a result of [event], roughly how many new people in the EA community do you feel able to reach out to (e.g. to ask a favour)?” ↩︎

  21. This replaces the Leaders Forum that we ran in previous years. ↩︎

  22. For instance, several people were frustrated by the number of questions on the application form (we’ve now removed some), and the requirement to pay by card (which is less common in some countries — we have now enabled PayPal). We also brought on a contractor for additional technical support. ↩︎

  23. We weren’t consistently tracking this last year. ↩︎

  24. Across all restrictions in the US and UK. ↩︎


Thanks to everyone at CEA for all the hard work you are putting in to improve our community! 🙂

I’m curious to learn more about the following:

We invested funds to generate a return of $126,000 in interest and $1.8M in investment returns (for the Carl Shulman discretionary fund), while freeing up money held in old restrictions.

$1.8M in investment returns for a fund that initially started at $5M is quite high - that's roughly a 36% return in a year. How was that investment return achieved? Also, this is the first time I’m hearing about this discretionary fund. Are there any reports of payouts made by this fund, or what the plans are for it for the future?
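[Editorial aside: the "roughly 36%" figure above follows directly from the numbers quoted in this thread, assuming the fund's starting principal was $5M as stated. A minimal sketch of the arithmetic:]

```python
# Rough sanity check of the quoted return figure.
# Assumes a $5M starting principal, as stated in the comment above.
principal = 5_000_000          # initial fund size (USD)
investment_return = 1_800_000  # reported 2020 investment returns (USD)

annual_return_pct = investment_return / principal * 100
print(f"{annual_return_pct:.0f}%")  # → 36%
```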

It's invested in unleveraged index funds, but was out of the market for the pandemic crash and bought in at the bottom. Because it's held with Vanguard as a charity account it's not easy to invest as aggressively as I do my personal funds for donation, in light of lower risk-aversion for altruistic investors than those investing for personal consumption, although I am exploring options in that area.

The fund has been used to finance the CEA donor lottery, and to make grants to ALLFED and Rethink Charity (for nuclear war research). However, it should be noted that I only recommend grants for the fund that I think aren't a better fit for other funding sources I can make recommendations to, and often with special circumstances or restricted funding, and grants it has made should not be taken as recommendations from me to other donors to donate to the same things at the margin. [For the object-level grants, although using donor lotteries is generally sensible for a wide variety of donation views.] 

Got it, thanks for the context!

I'm curious if you have a target % return for this fund per year with your investing, and what your target % return is for your personal funds for donation? I also wonder if you think EAs you know perform better with their investment returns than the average investor.

Hi Brian, Thanks for your question! I’m not sure how much we can comment on the investment strategy or grantmaking of this fund, but I’ll flag your questions to Carl.

Thanks for publishing this very thorough review Max! I read most of it and the community building grants and group support sections were particularly important and useful for me to know, though the other parts were also useful to read.

I have a few questions which one of you at CEA may want to answer, and I’ll split these into different comments, so that people can reply separately to each question:

1. Regarding this line in your section on community building grants:

We judged 86 of the 145 group members to have taken significant action based on a good understanding of EA ideas, and we categorised these cases as strong, moderate, or weak based on our expectations about the counterfactual impact the group had on the individual.

I'd like to learn more about how CEA or the CB grants programme categorizes these cases as strong, moderate, or weak impact. I think there is a lot of value in community builders, especially CB grantees, having a better understanding of what CEA considers to be impactful (and how you measure it). This would prevent CB grantees from being very positive about what CEA thinks is just a weak case of impact, or from thinking that something is moderate impact when CEA thinks it's strong impact. This would then allow community builders to focus on generating more moderate or strong cases of impact, although of course they should not Goodhart (i.e. optimize too hard in a way that hampers the group).

I also understand that examples of impact (and therefore evaluating these examples) can vary widely across different group types (national, city, or university) and those in different countries  (i.e. EA Philippines vs. EA London), but I'd still like to hear more about it.

 In my head, I think that CEA should be measuring two things when trying to measure the impact of a group on its members: 
a) How high is the expected value of the action or career plan change that the person has taken

b) How counterfactual the impact of the group is on the person

The two things above can then be combined so that a case can be classified as strong, moderate, or weak impact. I'd like to know if what I wrote above on high expected value + the degree of it being counterfactual is aligned with CEA and/or the CBG programme's thinking on evaluating these cases of impact. If you think, though, that this information is too sensitive to share on the forum, you could just send it to me and/or other CB grantees privately (or let me know if Harri will release a writeup on this for community builders in 2021). Thanks!

Hi Brian, thanks for your question, and I’m glad the update was useful!

You’re correct about the overall approach we’re using (multiplying the expected value of the change by how much of that change is attributable to the group). I’ll flag this comment to Harri and he might follow up with some more details, publicly or privately.

Got it, thanks Max!

3 months late, but better than never: it's incredibly inspiring to see how the community has grown over the past decade.
