
 

By: Katie Gertsch

 

The annual EA Survey is a volunteer-led project of Rethink Charity that has become a benchmark for better understanding the EA community. This post is the second in a multi-part series intended to provide the survey results in a more digestible and engaging format. Important to bear in mind is the potential for sampling bias and other considerations outlined in the methodology post published here. You can find key supporting documents, including prior EA surveys and an up-to-date list of articles in the EA Survey 2017 Series, at the bottom of this post. Get notified of the latest posts in this series by signing up here.

 

Summary

  • EAs remain predominantly young and male, though there has been a small increase in female representation since the 2015 survey.

  • The five cities with the highest concentration of EAs are the San Francisco Bay Area, London, New York, Boston/Cambridge, and Oxford.

  • The proportion of EAs who identify as atheist, agnostic, or non-religious fell from 87% in the 2014 and 2015 surveys to 80.6% in the 2017 survey.

  • The number who saw EA as a moral duty or opportunity increased, and the number who saw it only as an obligation decreased.

 

Age


The EA community is still predominantly represented by a young adult demographic, with 81% of those giving their age in the EA Survey falling between 20 and 35 years of age[1]. This year, ages ranged from 15 to 77, with a mean age of 29, a median age of 27, and a standard deviation of 10 years. The histogram below shows the distribution of ages.

[1] Ages were calculated by subtracting the self-reported birth year from 2017.
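For readers who want to reproduce these figures from the anonymized raw data linked at the end of this post, a minimal sketch of the calculation is below. This is illustrative only, not the analysis code used for the survey; the file name and the birth-year column name are assumptions.

```python
# Minimal sketch, assuming a CSV export of the anonymized survey data
# with a "birth_year" column; both names are hypothetical.
import pandas as pd

responses = pd.read_csv("ea_survey_2017_anonymized.csv")

# Ages were derived by subtracting the self-reported birth year from 2017.
ages = (2017 - responses["birth_year"]).dropna()

print(ages.mean())                  # reported mean: 29
print(ages.median())                # reported median: 27
print(ages.std())                   # reported standard deviation: ~10 years
print(ages.between(20, 35).mean())  # reported share aged 20-35: ~81%
```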

 

Gender

A wide majority of survey respondents were male. Of the 1,080 who answered the question on gender self-identification, 757 (70.1%) identified as male, 281 (26.0%) identified as female, 21 (1.9%) identified as “other”, and another 21 preferred not to answer. This is similar to the 2015 survey, in which 73% of respondents identified as male.
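As a quick consistency check, the percentages above follow directly from the reported counts. The short sketch below is just that arithmetic; none of it comes from the raw data.

```python
# Recompute the gender breakdown from the counts reported in this post.
counts = {"male": 757, "female": 281, "other": 21, "prefer not to answer": 21}
answered = sum(counts.values())  # 1,080 respondents answered the question

for label, n in counts.items():
    print(f"{label}: {n / answered:.1%}")
# male: 70.1%, female: 26.0%, other: 1.9%, prefer not to answer: 1.9%
```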



Location

Consistent with the results of the previous survey, the US and UK remain the main hubs for EA, home to the majority (63.4%) of this year’s surveyed EAs. Additionally, the top five countries by number of EA respondents in the 2015 survey (US, UK, Germany, Canada, and Australia) remain the top five countries in 2017. Australia and New Zealand both dropped slightly in ranking, and we saw a small increase in EAs living in Northern European countries, such as Germany, Denmark, Sweden, the Netherlands, and the Czech Republic. Representation from Continental Europe overall rose from 14% to 18%.

The San Francisco Bay Area (which includes Berkeley, San Francisco, Oakland, Mountain View, Menlo Park, and other areas) remains home to the most EAs in our survey, but it outnumbers London by only a very small margin. The gap between London and the Bay Area has shrunk substantially since 2015.

 

Oxford, Boston/Cambridge (US), and Cambridge (UK) all show consistently high numbers of EAs. Washington, D.C. dropped from fifth to eleventh among cities with the most EAs. Newly reported additions include Berlin, Sydney, Madison, Oslo, Toronto, Zürich, Munich, Philadelphia, and Bristol.

Religion

The proportion of atheist, agnostic, or non-religious respondents is lower than in the 2015 survey: 80.6% this year, compared to 87% in 2015. That figure had not changed between the previous two surveys, so this could be an indicator that inclusion of people of faith in the EA community is increasing.


As noted in 2015, it has been suggested that EA should make greater efforts to be more inclusive of religious groups. The numbers still show considerable room for growth among religious communities.


Moral Philosophy

The distribution of responses on moral philosophy is very similar to the last survey. In 2015, 56% selected Consequentialism (Utilitarian), 22% No opinion or not familiar with these terms, 13% Non-utilitarian consequentialism, 5% Virtue Ethics, and 3% Deontology; the distribution among this year’s respondents has not noticeably changed.

 

Do they see EA as an opportunity or an obligation?

 

This question was inspired by Peter Singer’s classic essay on whether doing a tremendous amount of good is an obligation or an opportunity, which prompted commentary by Luke Muehlhauser (see this post) and Holden Karnofsky (see this post), among others. Perhaps even more than a preferred moral philosophical stance, this gives us a window into participants’ motivation to be effective altruists.

 

The 2015 survey posed this question a little differently, presenting the choices as ‘Opportunity,’ ‘Obligation,’ or ‘Both’, rather than offering a ‘Moral Duty’ option. Both surveys also included ‘Other’ as a choice. About the same proportion chose ‘Both’ in 2015 as selected ‘Moral Duty’ this year. One possible explanation is that ‘Moral Duty’ carries a richer connotation than the narrower, somewhat negatively tinged ‘Obligation’.

 

From 2015 to this year, those who saw EA as only an opportunity stayed the same, while those seeing it only as an obligation decreased significantly.

 

By offering ‘Moral Duty’ as a response option, we may have given those who see participating in EA as primarily a dutiful action a more neutral (less negative) and more principled (less self-focused) match for their personal interpretation.

 

Credits

Post written by Katie Gertsch, with edits from Tee Barnett and analysis from Peter Hurford.

 

A special thanks to Ellen McGeoch, Peter Hurford, and Tom Ash for leading and coordinating the 2017 EA Survey. Additional acknowledgements include: Michael Sadowsky and Gina Stuessy for their contribution to the construction and distribution of the survey, Peter Hurford and Michael Sadowsky for conducting the data analysis, and our volunteers who assisted with beta testing and reporting: Heather Adams, Mario Beraha, Jackie Burhans, and Nick Yeretsian.

 

Thanks once again to Ellen McGeoch for her presentation of the 2017 EA Survey results at EA Global San Francisco.

 

We would also like to express our appreciation to the Centre for Effective Altruism, Scott Alexander via Slate Star Codex, 80,000 Hours, EA London, and Animal Charity Evaluators for their assistance in distributing the survey. Thanks also to everyone who took and shared the survey.

 

Supporting Documents

EA Survey 2017 Series Articles

I - Distribution and Analysis Methodology

II - Community Demographics & Beliefs

III - Cause Area Preferences

IV - Donation Data

V - Demographics II

VI - Qualitative Comments Summary

VII - Have EA Priorities Changed Over Time?

VIII - How do People Get Into EA?

 

Please note: this section will be continually updated as new posts are published. All 2017 EA Survey posts will be compiled into a single report at the end of this publishing cycle. Get notified of the latest posts in this series by signing up here.

 

Prior EA Surveys conducted by Rethink Charity (formerly .impact)

 

The 2015 Survey of Effective Altruists: Results and Analysis

The 2014 Survey of Effective Altruists: Results and Analysis

 

Raw Data

 

Anonymized raw data for the entire EA Survey can be found here.

Comments



I think it would be useful to frontload info like 1) the number of people who took this vs. previous surveys, 2) links to previous surveys.

I think I would also mildly prefer it if all of the survey results were in one blog post (to make them easier to find), and strongly prefer to have all the results for the demographic info in the demographics post. But it seems like this post doesn't include information that was requested on the survey and that seems interesting, like race/ethnicity and political views.

The proportion of atheist, agnostic or non-religious people is less than the 2015 survey. Last year that number was 87% compared to 80.6% this year. That metric hadn’t changed over the last two surveys, so this could be an indicator that inclusion of people of faith in the EA community is improving. (bolding mine)

I would recommend changing "improving" to "increasing", since I don't think the opinion that it's good to increase the proportion of people in EA who are religious is universal.

I think it would be useful to frontload info like 1) the number of people who took this vs. previous surveys, 2) links to previous surveys.

Thanks for bringing these to our attention, Claire. I like both of these ideas. This post will be updated to include the former, and the latter will be included in all subsequent posts for ease of navigation.

I think I would also mildly prefer it if all of the survey results were in one blog post

We decided to go with a multi-part series because the prior survey ended up being an unwieldy 30+ page PDF, which likely resulted in far less engagement. As I said above, in all subsequent survey posts we'll link to the previous articles for ease of navigation.

But it seems like this post doesn't include information that was requested on the survey and that seems interesting, like race/ethnicity and political views.

This is probably an oversight on our part. It's likely we will revise the article to include some or all of this information very soon.

I would recommend changing "improving" to "increasing", since I don't think the opinion that it's good to increase the proportion of people in EA who are religious is universal.

+1, will edit that. The first handful of posts will be more descriptive, but you can expect future ones to inject a bit more commentary.

I don't think there is a difference between a moral duty and an obligation.

In 2015, there were more than 2000 respondents, right? Does this mean EA is getting smaller??

I don't think there is a difference between a moral duty and an obligation.

I admit that I'm personally confused about this too.

-

In 2015, there were more than 2000 respondents, right? Does this mean EA is getting smaller?

Yes, the 2015 survey had 2352 EAs, whereas we had only 1837 responses this year. Keen eye for catching that! It's something we've been thinking a lot about. I do not yet know whether this is a significant finding or indicative of EA getting smaller. I think we intend to write more about this soon.

The difference is that the term obligation has a more negative valence than duty.

I don't think there is a difference between a moral duty and an obligation.

I'm not entirely sure that I would agree with this. I'm supposed to be publishing more survey content on the Forum at the moment, so parsing this out may have to wait, but obligation to me feels relatively more guilt-driven, and being duty-bound seems to invoke a more diverse set of internal and external pressures.

At any rate, if it's not clear here, it's certainly not good as a survey question.

In 2015, there were more than 2000 respondents, right? Does this mean EA is getting smaller??

Could be! May also be indicative of year-on-year survey fatigue though. We'll be revamping the survey for 2018 to make it a better experience in general.

As a non-native speaker, I find it particularly difficult to understand the difference between "moral duty" and "obligation". And I've travelled in the US for half a year and have taken some extra English classes, so I'd expect that many/most other non-native speakers won't see any difference between the terms.

In addition to how people "think" about EA as an "opportunity" or "obligation" (and FYI I for one would have been unclear if I saw both "moral duty" and "obligation"), I'd be interested to see how many people "feel" like EA/A is an obligation as opposed to an opportunity.

Interesting results. I personally do like the moral duty option - I think it does have a pretty different connotation than an obligation. Obligation suggests something forced upon you by outside forces, while moral duty suggests something done out of a sense of responsibility, but more joyfully and consciously chosen.

I'm just wondering why Muslim is not an option for the religious beliefs question? This seems like a silly oversight since it is a major religion.

It actually was an option (see the survey here). I suspect they left it out of the results because nobody chose it.
