David_Moss

Principal Research Director @ Rethink Priorities
9677 karma · Joined · Working (6-15 years)

Bio

I am the Principal Research Director at Rethink Priorities. I lead our Surveys and Data Analysis department and our Worldview Investigation Team. 

The Worldview Investigation Team previously completed the Moral Weight Project and the CURVE Sequence / Cross-Cause Model. We're currently working on tools to help EAs decide how to allocate resources across portfolios of different causes, and on how to use a moral parliament approach to allocate resources under metanormative uncertainty.

The Surveys and Data Analysis Team primarily works on private commissions for core EA movement and longtermist orgs, where we provide:

  • Private polling to assess public attitudes
  • Message testing / framing experiments, testing online ads
  • Expert surveys
  • Private data analyses and survey / analysis consultation
  • Impact assessments of orgs/programs

I formerly also managed our Wild Animal Welfare department. I've previously worked for Charity Science and served as a trustee of Charity Entrepreneurship and EA London.

My academic interests are in moral psychology and methodology at the intersection of psychology and philosophy.

How I can help others

Survey methodology and data analysis.

Sequences (4)

EA Survey 2024
RP US Public AI Attitudes Surveys
EA Survey 2022
EA Survey 2020

Comments (638)

Agreed with Jamie's points above.

A couple of additional points:

  • We have estimates of the numbers of people who have heard of EA on specific campuses from 2022 (from the study referenced here).
    • These suggest that the percentages who have encountered EA at elite universities are considerably higher.
    • Even so, many of the people who've encountered EA clearly barely know what it is (they've heard about it from someone else on campus or some such). As such, they definitely shouldn't be counted as people who have been reached by outreach and failed to engage. Very few people have ever encountered EA outreach, and the probability of engaging, conditional on encountering outreach, may be much higher.

  • We have data comparing where people have heard of effective altruism for the general population and for the EA population (from 2020-2021).
    • This suggests that people actually in EA are much more likely to have encountered EA through a personal contact, 80,000 Hours, an EA group, etc., whereas people who have heard about EA in the general population are much more likely to have encountered it through the media or in the course of their education.
    • It seems very plausible to me that merely reading about EA in the news, or hearing about it in one's classroom, is much less likely to lead to someone joining EA than encountering actual outreach that encourages them to join EA and presents them with a way in (though this likely also reflects differences between those who do and do not end up in EA). We've observed before that in-person routes into EA (personal contacts and EA groups) seem particularly important for some demographic groups.[1]
Source                        EA Survey   General population   Gap
Personal contact              16%         4%                   -12%
80,000 Hours                  13%         0%                   -13%
Book, article or blog post    9%          22%                  13%
LessWrong                     8%          0%                   -8%
I don't remember              8%          35%                  27%
EA group                      8%          0%                   -7%
Podcast                       7%          7%                   0%
SSC                           5%          1%                   -5%
TED Talk                      5%          1%                   -4%
GiveWell                      3%          0%                   -2%
Education                     2%          30%                  28%
GWWC                          2%          0%                   -2%
Facebook                      1%          0%                   -1%
Search engine                 1%          0%                   -1%
Vox                           1%          0%                   -1%
OFTW                          1%          0%                   -1%
Swiss group                   1%          0%                   -1%
TLYCS                         0%          0%                   0%
EAG/EAGx                      0%          0%                   0%
ACE                           0%          0%                   0%
Other                         9%          0%                   -9%
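For concreteness, the "Gap" column is just the general population percentage minus the EA Survey percentage for each source. A minimal sketch of that computation, with hypothetical variable names and figures copied from a few rows of the table above:

```python
# Sketch: compute the "Gap" column (general population % minus EA Survey %)
# for each "where did you first hear of EA" source. Figures are a subset of
# the table above; the variable names are hypothetical.
ea_survey = {"Personal contact": 16, "80,000 Hours": 13, "Book, article or blog post": 9,
             "LessWrong": 8, "I don't remember": 8, "Education": 2}
general_population = {"Personal contact": 4, "80,000 Hours": 0, "Book, article or blog post": 22,
                      "LessWrong": 0, "I don't remember": 35, "Education": 30}

gaps = {source: general_population[source] - ea_survey[source] for source in ea_survey}

# Sorting by the gap shows which routes are over-represented among EAs (negative gaps)
# versus the general population (positive gaps).
for source, gap in sorted(gaps.items(), key=lambda kv: kv[1]):
    print(f"{source}: EA {ea_survey[source]}%, general {general_population[source]}%, gap {gap:+d}pp")
```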

We'd be happy to repeat either of these surveys if people are interested.

  1. ^

    The results are directionally similar if you exclude those who "Don't remember", which is much more common for the general population.

Thanks Ben!

I think we should be able to do this[1] and I agree it seems interesting. It looks like Google has quite high favourability, and Bill Gates personally (I did not see recent numbers for the Gates Foundation) has reasonably high, but lower, favourability.

  1. ^

    Though there is some complication around whether we're looking at the sentiment of (i) people who've been assessed as truly being aware (most comparable to our 'attitudes among those who have already heard' question, though this would need an additional test for those respondents), or (ii) people who've not claimed awareness (most comparable to our 'attitudes among those who have not heard' question, but less relevant).

Nice idea! 

I tried this in a recent conversation of mine and it opted to write about data analysis instead. It's interesting, but not surprising, that what it was interested in writing about varied so much based on the preceding conversation.


Your data reveals something fascinating: "find the most effective ways" outperforms "ways to maximize your impact" by about 0.35 points, despite both essentially describing optimization. This gap hints at something deeper about how we process language and ideas.

I think "most effective" works better because it's inherently comparative and concrete. It implies there's a landscape of options that can be ranked, studied, and chosen between. It suggests a search process, almost scientific in nature. "Maximize your impact," while mathematically similar, feels more abstract - it's what you do after you've already chosen your path, not the exciting process of discovery itself.

There's also something about the word "find" that matters. It frames the reader as an explorer or researcher, not just an executor. Compare "find the most effective ways" to a hypothetical "use the most effective ways" - I suspect the former would still win. The discovery framing makes people feel like investigators rather than followers.

This pattern appears everywhere. Consider how "evidence-based medicine" caught on while "scientific medicine" didn't, or how "growth mindset" spread faster than "belief in malleable intelligence." The winning framings tend to be those that make abstract concepts feel tangible and position the audience as active participants rather than passive recipients.

The most successful framings seem to offer both intellectual clarity and emotional resonance - they make you feel smart for understanding them while also making you feel capable of action. "Most effective" does both: it's precise enough to feel rigorous but accessible enough to feel achievable.

Thanks Micaella!

We'll give some thought to the best way to approach this. One complicating factor is that the LMICs are very heterogeneous (e.g. we might expect substantial differences between the results for Brazil, China, and Afghanistan). 

Overall, before looking into this more, I can note that respondents from LMICs make up about 7% of the total sample, and about half of these are from Brazil, India, China, and the Philippines.
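If it's useful, here is a rough sketch of how such shares could be computed from respondent-level data; the file name, column names, and income-group mapping below are all hypothetical (a real mapping would come from something like the World Bank classification):

```python
import pandas as pd

# Sketch: LMIC share of the sample and the composition within LMICs.
# "responses.csv", the column names, and the income-group mapping are hypothetical.
df = pd.read_csv("responses.csv")  # one row per respondent, with a "country" column
income_group = {"Brazil": "LMIC", "India": "LMIC", "China": "LMIC",
                "Philippines": "LMIC", "United States": "High income"}  # extend to all countries
df["income_group"] = df["country"].map(income_group)

lmic = df[df["income_group"] == "LMIC"]
print(f"LMIC share of sample: {len(lmic) / len(df):.1%}")

# Within-LMIC composition, e.g. how much Brazil, India, China and the Philippines account for.
print(lmic["country"].value_counts(normalize=True).head(10))
```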

Thanks Oscar!

Unfortunately, I don't think that simply excluding smaller countries would be a valid approach. It would lose important data, potentially unfairly exclude smaller countries that are consistently over- (or under-) performing, and potentially distort the true relationship between different predictors and EAs per capita when we're trying to interpret the pattern of results.

Below, I've shown the rank-order correlations between EAs per capita across years. These are reasonably strong (there is some real consistency, as I noted above) but not amazing, as we see in my plot above.
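For those curious about the mechanics, this is a minimal sketch of how such rank-order correlations could be computed, assuming a hypothetical table with countries as rows and EAs per capita for each survey year as columns (the file name and layout are assumptions, not our actual pipeline):

```python
import pandas as pd

# Sketch: rank-order (Spearman) correlations of EAs per capita across survey years.
# "eas_per_capita.csv" (countries as rows, one column per survey year) is hypothetical.
eas_per_capita = pd.read_csv("eas_per_capita.csv", index_col="country")

# pandas computes pairwise Spearman correlations directly; countries missing
# in a given year are dropped pairwise.
spearman_matrix = eas_per_capita.corr(method="spearman")
print(spearman_matrix.round(2))
```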

I think, on the whole, simply looking at the highest EAs per capita countries within a given year is a risky endeavour for the reasons above, and it's better to look at patterns across years and across the full range of countries. 

For example, here are the countries in the top 10 across years, excluding the most recent year (this is from an earlier private report we did).

As you can see, there's some consistency, but also a lot that would likely be misleading if you only looked within a given year's data. Many of these countries go from literally 'top 10 EAs per capita' one year to 0 EAs the next.

Thanks!

Yes, we might do a separate post about EAs per capita across years. But, as we've commented previously, the metric risks being very noisy. So, when you look at the countries with the highest EAs per capita within any single year, you will often see some smaller countries appear to be enormous over-performers one year and not the next, when there's only a 1 or 2 respondent difference either way.

Thanks for your question. Yes, these show the distribution for E2G people only (otherwise these plots could not inform us about the E2G question).

"Donation_w shows something like 60% (no y scale so not sure) of population don't give at all, is that right?"

Only 12.8% are literally donating $0, but a larger percentage are donating close to $0 (31% donating <$500, 38.3% donating <$1000). You can tell from the median of $2000 that 60% of people are not donating $0; the 60th percentile is around $4000.

"And around 10% give $20K or more?"

20.7% were giving $20,000 or more.
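To make the distributional reasoning here concrete, this is a minimal sketch of how these summaries (share at $0, shares below thresholds, median, 60th percentile, share at $20K or more) can be read off a vector of reported donations; the data below are hypothetical, not survey values:

```python
import numpy as np

# Sketch: distributional summaries of reported donations.
# "donations" stands in for the vector of reported amounts (hypothetical data here).
donations = np.array([0, 0, 100, 250, 400, 800, 1500, 2000, 3000, 5000, 12000, 25000, 60000])

print(f"Share donating exactly $0:   {np.mean(donations == 0):.1%}")
print(f"Share donating under $500:   {np.mean(donations < 500):.1%}")
print(f"Share donating under $1000:  {np.mean(donations < 1000):.1%}")
print(f"Median donation:             ${np.median(donations):,.0f}")
print(f"60th percentile:             ${np.percentile(donations, 60):,.0f}")
print(f"Share donating $20K or more: {np.mean(donations >= 20000):.1%}")

# For plots like Donation_w, amounts can additionally be winsorized (capped),
# e.g. at $50,000, to make the lower end of the scale more visible.
winsorized = np.minimum(donations, 50000)
```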

"I am trying to figure out how Christians can be reached in terms of eval, especially because EA messaging really isn't landing too well in my conversations with Christians."


This sounds like a question that might benefit from testing how Christians (though, of course, this is a heterogeneous group) respond to different messages about charity or about evaluation specifically.

I would also add that I think a lot of what is distinctive about EA is not to do with charity evaluation specifically, but with wanting to maximize impact impartially. Many, many groups want to know whether their charities of interest are pursuing their (narrow) goals effectively. Many fewer want to know whether their favoured narrow goals (e.g. help the whales, help some specific village) are helping impartially (considered across all domains) to the greatest extent possible.

"We can treat the 15% figure from the 2024 Rethink Priorities EA Survey results as an approximation of the current %. However, I'm sure this number varies a lot based on your definition of 'earning to give'. I.e. It's likely that some of these people are 10% pledgers who aren't actively maximising their earning potential/exposure to upside."

I agree with both parts of this. 

We haven't asked directly about donation levels since EAS 2020[1]. But looking at that data (which didn't seem to vary substantially year-on-year), the median person reporting earning to give was donating 4.58% of their income and $2000 in total.[2]

That probably seems strikingly (and perhaps dispiritingly) low: only ~30% of the E2Gers are even donating 10%. This is partly explained by a lot of EAs being new to the movement (the median percentage donated among E2Gers who joined the movement between 2009 and 2014 is around 20%, and their median donation is more like $20,000). But it does still mean that a large number of the recorded E2Gers may not yet actually be earning to give in any significant amount.[3]

  1. ^

    Due to a combination of (i) extreme pressure on space, with other questions seeming to have the highest decision-relevance to decision-makers, (ii) donation/income data being relatively sensitive, and (iii) the community having other sources of aggregate-level data on donations, even if we lose a lot of individual-level data by cutting these questions.

  2. ^

    Donation amounts are winsorized at $50,000 to make the lower end of the scale more visible, and donation percentages are winsorized at 100% (note that donations might be from accumulated wealth, not annual income).

  3. ^

    This is one reason why the best time to dramatically grow the movement might have been several years ago, even if the second best time is today.
