
tl;dr: Many folks have recently pointed out that diversity is an issue in the EA community. I agree. I think the EA community could benefit from the creation of subgroups based on identity and affinity, to make a wider variety of humans feel welcome, heard, and safe, and to reduce ideological bias.

Note: If you agree that there is a diversity issue in EA, the top half of this post may not contain new information for you. Feel free to jump down to the Proposal to help mitigate diversity issues section below.

Background

As someone new to being involved with EA, I find myself asking this question:

EA’s ideas about how to do good seem unique and unconventional (e.g. longtermism). Are they this way as a result of uniquely clear and farsighted thinking, or a result of being out of touch with reality?

It’s hard for me to answer this question with confidence. Lack of ideological (and demographic) diversity is a means by which the latter possibility (that EAs have an unusually biased view of reality) could manifest.

Signs of a diversity problem

Demographic uniformity

  • communities that are whiter and more male than the broader population
  • a lack of older members
  • members who seem to be disproportionately from wealthy backgrounds
  • members who are disproportionately college grads/PhDs from elite universities
     

Ideological uniformity

  • bias toward solving problems via debate (in which the winner is often viewed as more right than the loser, even if loser has valuable information to add)
  • wide variation in upvotes/response rates on the EA Forum; unpopular topics/views sometimes get little attention and receive little feedback
  • a natural tendency to favor ideas presented by those who have high status within the group over ideas from lower-status members
  • use of quantitative estimation to create a false sense of precision when discussing uncertain events
  • bias towards ideas that are largely unpopular in society, and even among many altruists, like utilitarianism, longtermism, and technocratic utopianism
  • some topics like AI alignment are frequently featured on EA forums, whereas similar topics like AI ethics are rarely seen (possibly due more to ideological divides than to pure utility)
  • bias towards problems with technology-oriented solutions (AI alignment) and against problems with human-oriented solutions (politics, racism)

Proposal to help mitigate diversity issues

  • Encourage and facilitate the creation of subgroups based on affinity/identity, where populations in the minority can have a majority voice
  • Conferences where speakers from these subgroups are featured
  • Add info to subforums page about these different groups, for observability
  • Encourage people to tag forum posts with all relevant affinity/identity groups, so it’s possible to view breakdowns by group. That said, encourage posting in the same top-level forum so these ideas are discoverable

Example groups

Identity

  • women in EA
  • lgbtq in EA
  • PoC in EA
  • EA Buddhists (or other religion)

Affinity

  • EA longtermists
  • EAs for short-term causes
  • EAs for AI ethics
  • EAs at startups
  • EAs for social justice
  • EA Democrats (or other political group)

Shared interests or life situations

  • new to EA
  • EAs with kids
  • EA singles
  • EAs in software (or other occupation). (I believe this example already exists as a forum topic)
  • EAs who are middle aged or older
  • EAs from (part of world)
  • EAs who speak X first language

Proposal details

Implementation

I’d like to see a subgroup founder/facilitator toolkit that provides the following info for anyone who wants to form a new group:

  • a guide to help inexperienced EAs through the process of becoming organizers
  • tips for how to recruit members to your subgroup
  • tips for discoverability (e.g. how to explain your group to new members and how to become visible to all EAs who may be interested)
  • advice on how to facilitate (virtual & in-person) meetings in a way that is safe for members and encourages effective communication
  • advice on getting funding for events (or how to raise money internally to the group)
  • advice on measuring the effectiveness of your group
  • guidelines for being an “official” part of EA; some amount of alignment with the larger goals of EA should be required

Benefits

  • making it easier to recruit and attract non-EAs from different demographic groups
  • giving folks with similar views/backgrounds/identities a community and a shared voice
  • helping EA interface in a healthy way with EA-adjacent folks who belong to various other groups with overlapping missions
  • making it possible (e.g. via subforums) to easily discover which issues are most important to various subgroups, to help counteract blind spots
     

Final note

This is my first post on the EA Forum (for Draft Amnesty Day). If you have criticism or meta-feedback about how to make a good post, that is more than welcome! I want to know how to communicate as effectively as possible on this forum. Thanks!

Comments (7)



There is a directory on the EA UK website with different identity/affinity groups.

Thanks for sharing these, David! I’m finding them personally helpful. (It appears at least some of these groups are open to people in the US as well.)

I’d be curious if there’s a person to talk to about consolidating this list from the UK site with what’s available on the community page (https://forum.effectivealtruism.org/community#online) mentioned below.

I also notice that, of the groups mentioned on the UK site, there’s only one umbrella group for all underrepresented racial and ethnic groups, and it has only 165 members. Underrepresented populations have some things in common, but there’s a lot that they don’t, so I hope that as the years go by those groups grow and differentiate.

In addition to the groups listed on the forum that Holly mentioned, here is a long list of EA Facebook Groups you could check out (this list is quite dated so many groups might be inactive). 

Unrelated to the broader issue of EA's lack of demographic diversity, there are several groups for various religions in EA (and other demographic groups / coalitions, like parents). Not sure where to find a centralized list off the top of my head.

Thanks for this info, Pete! If anyone knows of a list, or about the process by which such groups got created, I would be curious.

[anonymous]

See 'Community' in the sidebar then 'Online groups.'

Thank you Holly!
