
tl;dr: Many folks have recently pointed out that diversity is an issue in the EA community. I agree. I think the EA community could benefit from the creation of subgroups based on identity and affinity, to make a wider variety of humans feel welcome, heard, and safe, and to reduce ideological bias.

Note: If you agree that there is a diversity issue in EA, the top half of this post may not contain new information for you. Feel free to jump down to the "Proposal to help mitigate diversity issues" section below.

Background

As someone new to being involved with EA, I find myself asking this question:

EA’s ideas about how to do good seem unique and unconventional (e.g. longtermism). Are they this way as a result of uniquely clear and farsighted thinking, or a result of being out of touch with reality?

It’s hard for me to answer this question with confidence. A lack of ideological (and demographic) diversity is one way the latter possibility (that EAs have an unusually biased view of reality) could arise.

Signs of a diversity problem

Demographic uniformity

  • communities that are whiter and more male than the broader population
  • lack of older members
  • members seem to be disproportionately from wealthy backgrounds
  • members are disproportionately college grads/PhDs from elite universities

Ideological uniformity

  • bias toward solving problems via debate (in which the winner is often viewed as more right than the loser, even if the loser has valuable information to add)
  • wide variation in upvotes/response rates on the EA Forum; unpopular topics/views sometimes get little attention and receive little feedback
  • a natural tendency to favor ideas presented by those who have high status within the group over ideas from lower-status members
  • use of quantitative estimation to create a false sense of precision when discussing uncertain events
  • bias towards ideas that are largely unpopular in society, and even among many altruists, like utilitarianism, longtermism, and technocratic utopianism
  • some topics like AI alignment are frequently featured in EA forums, whereas similar topics like AI ethics are rarely seen (possibly due more to ideological divides than to pure utility)
  • bias towards problems with technology-oriented solutions (AI alignment) and against problems with human-oriented solutions (politics, racism)

Proposal to help mitigate diversity issues

  • Encourage and facilitate the creation of subgroups based on affinity/identity, where populations in the minority can have a majority voice
  • Conferences where speakers from these subgroups are featured
  • Add info to the subforums page about these different groups, for visibility
  • Encourage people to tag forum posts with all relevant affinity/identity groups, so it’s possible to view breakdowns by group. That said, encourage posting in the same top-level forum so these ideas are discoverable

Example groups

Identity

  • women in EA
  • LGBTQ in EA
  • PoC in EA
  • EA Buddhists (or other religion)

Affinity

  • EA longtermists
  • EAs for short-term causes
  • EAs for AI ethics
  • EAs at startups
  • EAs for social justice
  • EA Democrats (or other political group)

Shared interests or life situations

  • new to EA
  • EAs with kids
  • EA singles
  • EAs in software (or other occupation); I believe this example already exists as a forum topic
  • EAs who are middle aged or older
  • EAs from (part of world)
  • EAs who speak X first language

Proposal details

Implementation

I’d like to see a subgroup founder/facilitator toolkit that provides the following info for anyone who wants to form a new group:

  • a guide to help inexperienced EAs through the process of becoming organizers
  • tips for how to recruit members to your subgroup
  • tips for discoverability (e.g. how to explain your group to new members and how to become visible to all EAs who may be interested)
  • advice on how to facilitate (virtual & in-person) meetings in a way that is safe for members and encourages effective communication
  • advice on getting funding for events (or how to raise money internally to the group)
  • advice on measuring the effectiveness of your group
  • guidelines for being an “official” part of EA; some amount of alignment with the larger goals of EA should be required

Benefits

  • making it easier to recruit and attract non-EAs from different demographic groups
  • giving folks with similar views/backgrounds/identities a community and a shared voice
  • helping EA interface in a healthy way with EA-adjacent folks who belong to various other groups with overlapping missions
  • making it possible (eg via subforums) to easily discover what issues are most important to various subgroups, to help counteract blind spots

Final note

This is my first post on the EA Forum (for Draft Amnesty Day). If you have criticism or meta-feedback about how to make a good post, that is more than welcome! I want to know how to communicate as effectively as possible on this forum. Thanks!


Comments (7)



There is a directory on the EA UK website with different identity/affinity groups.

Thanks for sharing these, David! I’m finding them personally helpful. (It appears at least some of these groups are open to people in the US as well.)

I’d be curious if there’s a person to talk to about consolidating this list from the UK site with what’s available on the community page (https://forum.effectivealtruism.org/community#online) mentioned below.

I also notice that, of those groups mentioned on the UK site, there’s only one umbrella group for all underrepresented racial and ethnic groups, and it only has 165 members. Underrepresented populations have some things in common, but there’s a lot that they don’t, so I hope that as years go by those groups grow and differentiate.

In addition to the groups listed on the forum that Holly mentioned, here is a long list of EA Facebook Groups you could check out (this list is quite dated, so many groups might be inactive).

Unrelated to the broader issue of EA's lack of demographic diversity, there are several groups for various religions in EA (and other demographic groups / coalitions, like parents). Not sure where to find a centralized list off the top of my head.

Thanks for this info, Pete! If anyone does know of a list, or about the process by which such groups got created, I would be curious.

[anonymous]

See 'Community' in the sidebar then 'Online groups.'

Thank you Holly!
