DM

David_Moss

Principal Research Director @ Rethink Priorities
6992 karma · Joined Aug 2014 · Working (6-15 years)

Bio

I am the Principal Research Director at Rethink Priorities and currently lead our Surveys and Data Analysis department. Most of our projects involve private commissions for core EA movement and longtermist orgs, where we provide:

  • Private polling to assess public attitudes
  • Message testing / framing experiments, testing online ads
  • Expert surveys
  • Private data analyses and survey / analysis consultation
  • Impact assessments of orgs/programs

Formerly, I also managed our Wild Animal Welfare department and I've previously worked for Charity Science, and been a trustee at Charity Entrepreneurship and EA London.

My academic interests are in moral psychology and methodology at the intersection of psychology and philosophy.

How I can help others

Survey methodology and data analysis.

Sequences
3

RP US Public AI Attitudes Surveys
EA Survey 2022
EA Survey 2020

Comments
488

Our data suggests that the highest impact scandals are several times more impactful than other scandals (bear in mind that this data is probably not capturing the large number of smaller scandals). 

If so, it seems plausible we should optimise for the very largest scandals, rather than simply producing a large volume of less impactful scandals.

The relatively high frequency of people with high satisfaction temporarily stopping promoting EA (and the general flatness of this curve)


Agreed. I think that people temporarily stopping promoting EA is compatible with people who are still completely on board with EA deciding that it's strategically unwise to publicly promote it at a time when there's lots of negative discussion of it in the media. Likewise with still promoting EA but stopping referring to it as "EA", which also showed high levels across the board.

I think the prevalence of these behaviours points to the importance of more empirical research on the EA brand and how it compares to alternative brands, or to just referring to individual causes or projects (see our proposal here). I think it's entirely possible that the term "EA" itself has been tarnished and that people do better to promote ideas and projects without explicitly branding them as EA. But there's a real cost to just promoting things piecemeal or using alternative terms (e.g. "have you heard of 'high impact careers' / 'existential security'?"), rather than referring to a unified, established brand. So it's not clear a priori whether this is a positive move.

I was surprised that for the cohort that changed their behavior, “scandal” was just one of many reasons for dissatisfaction and didn’t really stand out. The data you provide looks quite consistent with Luke Freeman’s observation: “My impression is that there was a confluence of things peaking around the FTX-collapse..."

Agreed. One possible explanation, other than it just being a coincidence of factors, is that the FTX crisis and subsequent revelations dented faith in EA leadership, and made people more receptive to other concerns. (I think historically, much of the community has been extremely deferential to core EA orgs and has more or less assumed they know what they're doing come what may.)

Certainly it's true that many of the other factors e.g. dissatisfaction with cause prioritisation, diversity, and elitism had been cited for a while. It's also true that even before FTX (though it still holds for 2022), people who had been in the community longer tended to be less satisfied with the community, even though higher engagement was associated with higher satisfaction.[1] While the implications of this for the average satisfaction level of the community depend on how many newer vs older EAs we have at a given time, this is compatible with a story where EAs generally become less satisfied with the community over time.

  1. ^

Note that this is the opposite direction to what you'd see if less satisfied people drop out, leaving more satisfied people remaining in earlier cohorts. That said, the linked analyses (for individual years) can't rule out the possibility that earlier cohorts have just always been distinctively less satisfied, which would require a comparison across years.

This is a neat idea, but I think that's probably putting more weight on the (absence of) small differences at particular levels of the response scale than the smaller sample size of the Extra EA Survey will support. If we look at the CIs for any individual response level, they are relatively wide for the EEAS, and the numbers selecting the lowest response levels were very low anyway.
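For intuition on how wide those per-level CIs get at small sample sizes, here is a minimal sketch using the Wilson score interval. The counts are made up for illustration (they are not the actual EEAS or EA Survey numbers); the point is just that the same proportion yields a much wider interval at a tenth of the sample size:

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score interval for a binomial proportion (95% by default)."""
    if n == 0:
        return (0.0, 1.0)
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - half, centre + half)

# Hypothetical counts: 4 of 120 respondents in a small supplementary
# survey vs 40 of 1200 in a larger one selecting the lowest response level.
print(wilson_ci(4, 120))    # wide interval around 3.3%
print(wilson_ci(40, 1200))  # much narrower interval around 3.3%
```

With these illustrative numbers the smaller sample's interval is over three times as wide, which is why the absence of a small difference at one response level is weak evidence.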

Many thanks!

I think it would be very valuable to closely examine the cohort of people who report having changed their behavior...

All behaviour changes were correlated with each other (except for stopping referring to EA, while still promoting it, which was associated with temporarily stopping promoting EA, but somewhat negatively associated with other changes).

All behaviour changes were associated with lower satisfaction, with most behavioural changes common only among people with satisfaction below the midpoint, and quite rare with satisfaction above the midpoint (again, with the exception of stopping referring to EA, while still promoting it, which was more common across levels).

People who reported a behavioural change were more likely, on the whole, to mention factors as reasons for dissatisfaction. (When interpreting these it's important to account for the fact that people being more/less likely to mention a factor at a particular importance level might be explained by them being less/more likely to mention it at a different importance level, with less difference in terms of their overall propensity to mention it).
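The pairwise associations between binary behaviour-change indicators described above can be computed as phi coefficients (Pearson correlation applied to 0/1 variables). A minimal sketch with made-up indicator vectors, one entry per respondent (these are not the survey data):

```python
import math

def phi(x, y):
    """Phi coefficient: Pearson r for two binary (0/1) variables."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x) / n)
    sy = math.sqrt(sum((b - my) ** 2 for b in y) / n)
    return cov / (sx * sy)

# Hypothetical 0/1 indicators (illustrative only):
stopped_promoting = [1, 1, 0, 0, 1, 0, 0, 1]
stopped_referring = [1, 0, 0, 0, 1, 0, 0, 1]

print(phi(stopped_promoting, stopped_referring))
```

In practice one would compute this for every pair of behaviour-change columns; a positive phi between two indicators is what "correlated with each other" means here.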

Similarly, there was no obvious pattern of particular factors being associated with lower satisfaction. In general, people who mentioned any given factor were less satisfied.

In principle, we could do more to assess whether any particular factors predict particular behavioural changes, controlling for relevant factors, but it might make more sense to wait for the next full iteration of the EA Survey, when we'll have a larger sample size, and can ask people explicitly whether each of these things are factors (rather than relying on people spontaneously mentioning them).

For the other measures, differences are largely as expected, i.e. people who made a behaviour change are more likely to desire more community change, more likely to strongly agree there's a leadership vacuum,[1] and trust was higher among people who had not made a behaviour change.

Updating analyses of community growth seems like it should be a high priority... I’ve been a longstanding proponent of conducting regular analyses of community growth..

I still agree with this; unfortunately, we've been unsuccessful in securing any funding for more analysis of community growth metrics.

  1. ^

    I personally don't put too much weight on this question. I worry that it's somewhat leading, and that people who are generally more dissatisfied are more likely to agree with it, but it's unclear that leadership vacuum is really an active concern for people or that it's what's driving people's dissatisfaction.

Thanks!

For satisfaction, we see the following patterns.

  • Looking at actual satisfaction scores post-FTX, we see more engaged people were more highly satisfied than less engaged people. In comparison, for current satisfaction, this is no longer the case or is only minimally so (setting aside the least engaged, who remain less satisfied than the moderately to highly engaged). Every group's satisfaction has decreased, with moderately to highly engaged EAs' satisfaction declining to similar levels (implying a larger decrease among the more highly engaged).
    • The pattern is roughly similar, but less clear, looking only at changes within matched subjects (smaller sample size).
  • Looking at people's recalled post-FTX satisfaction, there is no significant difference between the moderately to highly engaged (though they weakly lean in the opposite direction). So the recalled vs current comparison implies a slightly bigger positive gap for more highly engaged EAs (though we did not formally test this comparison).

For reasons for dissatisfaction, there are a few systematic differences across engagement levels:

  • More highly engaged respondents are more likely to mention Leadership
  • More highly engaged respondents were more likely to mention scandals
  • More highly engaged respondents were more likely to mention JEID at the lower importance levels (Important or Slightly important vs Very important), but it's a less clear pattern at the Very important level
  • More highly engaged respondents were more likely to mention Epistemics as Very important
  • The most highly engaged respondents were much more likely to mention Funding (though still less than the top factors)
  • Looking within the most highly engaged only, we see that Leadership and Scandals are at the top, followed by Cause prioritization and JEID, which receive similar levels of mentions.

Does "mainly a subset" mean that a significant majority of responses coded this way were also coded as cause prio? 


That's right, as we note here:

The Cause Prioritization and Focus on AI categories were largely, but not entirely, overlapping. The responses within the Cause Prioritization category which did not explicitly refer to too much focus on AI were focused on insufficient attention being paid to other causes, primarily animals and GHD.

Specifically, of those who mention Cause Prioritization, around 68% were also coded as part of the AI/x-risk/longtermism category. That said, a large portion of the remainder mentioned "insufficient attention being paid to other causes, primarily animals and GHD" (which one may or may not think is just another side of the same coin). Conversely, around 8% of comments in the AI/x-risk/longtermism category were not also classified as Cause Prioritization (for example, just expressing annoyance about supporters of certain causes wouldn't count as about Cause Prioritization per se). 

So over 2/3rds of Cause Prioritization was explicitly about too much AI/x-risk/longtermism. A large part of the remainder is probably connected, as part of a 'too much x-risk / too little non-x-risk' category. The overlap between categories is probably larger than implied by the raw numbers, but we had to rely on what people actually wrote in their comments, without making too many suppositions.

We did note this explicitly:

As we noted in our earlier report, individuals who are particularly dissatisfied with EA may be less likely to complete the survey (whether they have completely dropped out of the community or not), although the opposite effect (more dissatisfied respondents are more motivated to complete the survey to express their dissatisfaction) is also plausible.

I don't think there's any feasible way to address this within this smaller, supplementary survey. Within the main EA Survey we do look for signs of differential attrition.

Thanks Ulrik!

We can provide the percentages broken down by different groups. I would advise against thinking about this in terms of 'what would the results be if weighted to match non-actual equal demographics' though: (i) if the demographics were different (equal) then presumably concern about demographics would be different [fewer people would be worried about demographic diversity if we had perfect demographic diversity], and (ii) if the demographics were different (equal) then the composition of the different demographic groups within the community would likely be different [if we had a large increase in the proportion of women / decrease in the proportion of men, the people making up those groups would plausibly differ from the current groups].

That said, people who identified as a woman or anything other than a man were more likely to mention JEID as at least of somewhat importance, and they were also more likely to mention cause prioritization and excessive focus on AI/x-risk/longtermism as a concern. Conversely, men were more likely to refer to scandals, leadership and epistemics.

I would be even more cautious about interpreting the differences based on race due to the low sample size (the total number would be much larger in the full EA Survey), and the fact that the composition of non-white respondents as a group differs from what you would see in a 'perfectly equal demographics' scenario (i.e. more Asian, unequal representation across countries).

Hopefully in the next couple/few weeks, though we're prioritising the broader community health related questions from that followup survey linked above.

I can confirm that there's not been so dramatic a shift since the 2020 cause results (below for reference), i.e. global poverty and AI are still very similarly ranked. The new allocation-of-resources data should hopefully give an even clearer sense of 'how much of EA' people want to be this or that.

We did gather cause prioritization data in the most recent EA Survey, we just delayed publishing that report because we gathered additional cause prioritization data in this followup survey, which we ran in December. This was looking at what share of resources EAs would allocate to different causes, rather than just their rating of different causes, which I think adds an important new angle.

We stopped gathering information about donations to individual charities in 2020 as part of a drive to make the EA Survey shorter to increase participation. However, that does mean that you can't access donation data more recent than 2019: see our 2020 report (and the accompanying bookdown), which reports a breakdown by charity cause area.
