David_Moss

Principal Research Director @ Rethink Priorities
8415 karma · Working (6–15 years)

Bio

I am the Principal Research Director at Rethink Priorities. I lead our Surveys and Data Analysis department and our Worldview Investigation Team. 

The Worldview Investigation Team previously completed the Moral Weight Project and the CURVE Sequence / Cross-Cause Model. We're currently working on tools to help EAs decide how to allocate resources within portfolios of different causes, and on how to use a moral parliament approach to allocate resources under metanormative uncertainty.

The Surveys and Data Analysis Team primarily works on private commissions for core EA movement and longtermist orgs, where we provide:

  • Private polling to assess public attitudes
  • Message testing / framing experiments, testing online ads
  • Expert surveys
  • Private data analyses and survey / analysis consultation
  • Impact assessments of orgs/programs

I formerly managed our Wild Animal Welfare department, previously worked for Charity Science, and have been a trustee at Charity Entrepreneurship and EA London.

My academic interests are in moral psychology and methodology at the intersection of psychology and philosophy.

How I can help others

Survey methodology and data analysis.

Sequences
3

RP US Public AI Attitudes Surveys
EA Survey 2022
EA Survey 2020

Comments
574

I think these questions are relevant in a variety of ways:

  • Whether overall public awareness is high or low seems relevant to outreach in various ways, in different scenarios.
    • For example, this came up just a few days ago here in a discussion of outreach. In addition to overall sentiment, knowing the overall level of awareness of EA is important, since it informs us about the importance of, and potential for, changes in sentiment (e.g., in this case, very few people seem to be aware of EA at all, so even if negative sentiment had increased, its scope would be limited).
    • In general, after major public events pertaining to EA (like FTX), we might want to know whether these have affected awareness of EA (for good or ill), so we can respond accordingly.
    • Knowing the overall level of awareness of EA in the population (the 'top of the funnel') also informs us about the shape of the funnel, and how many people drop out after the first exposure stage, which is relevant to assessing how many people are interested in EA (as it is currently presented).
    • Still more generally, if we have any sense of what the ideal growth rate or size of EA should be (decision-makers' views on this are explored in the forthcoming results from the Meta Coordination Forum Survey), then we presumably want to know where the actual growth rate or size falls relative to that.
  • Knowing about how awareness of EA varies across different groups is also relevant to our outreach.
    • For example, it could inform us about which groups we should be targeting more heavily to ensure we reach those groups.
    • It could also help identify which groups we are trying to reach but failing to make aware of EA (for whatever reason).
    • Moreover, if we know that some groups are more heavily represented in the EA community, then knowing how many people from those groups have heard of EA in the first place informs us about at what point in the funnel the problem lies (people not hearing about EA; hearing about it but not liking it; hearing about it, joining the community, and then dropping out; etc.). Our data does suggest some such disparities at the level of first awareness for both race and gender.
  • Knowing about public sentiment towards EA seems directly relevant for outreach.
    • For example, post-FTX there was much discussion about whether the EA brand had become so toxic that we should simply abandon it (which would have entailed huge costs, even if it had been the right thing to do on balance). I won't elaborate too much on this since it seems relatively straightforward.
  • Knowing about differences in sentiment across groups is also relevant.
    • For example, if sentiment dramatically differed between men and women, or other demographics, this would potentially suggest the need for change (whether in terms of our messaging or features of the community, etc.).

One move sometimes made to suggest that these things aren't relevant is to say that we only need to be concerned about awareness and attitudes among certain specific groups (e.g. policymakers or elite students). But even if knowing about awareness of and attitudes towards EA among certain groups is highly important, it doesn't follow that broader public attitudes are unimportant.

  • For example, even in cases where EA was supported by elites (of whatever kind), action may be difficult in the face of broad public opposition.
  • The attitudes of elites (or whatever other specific, narrow group we think is of interest) and broader public opinion are not completely autonomous, so broader awareness and attitudes are likely to penetrate whatever other group we're interested in.
  • I think we actually are interested in the awareness, attitudes and involvement of a broader public, not just specific narrow groups, particularly in the long-term. At the least, some subsets of EA are interested in this, even if other subsets of EA actors might be focused more narrowly on particular groups.[1]
  1. ^

    As a practical matter, it's also worth bearing in mind that large representative surveys like this can generate estimates for some niche subgroups (just not extremely niche ones like elite policymakers), particularly with larger sample sizes.

We didn't directly examine why worry is increasing across these surveys. I agree that would be an interesting thing to examine in additional work.

That said, when we asked people why they agreed or disagreed with the CAIS statement, those who agreed mentioned a variety of factors, including "tech experts" expressing concerns, having seen Terminator etc., and directly observing characteristics of AI (e.g. that it seemed to be learning faster than we would be able to handle). In the CAIS statement writeup, we only examined the reasons why people disagreed (the agreeing responses tended to be more homogeneous, with many people simply saying it's a serious threat), but we could potentially do further analysis of why they agreed. We'd also be interested to explore this in future work.

It's also perhaps worth noting that we originally wanted to run Pulse monthly, which would have allowed us to track changes in response to specific events (e.g. the releases of new LLM versions). Now that we're running it quarterly (due to changes in the funding situation), that will be less feasible.

Addressing only the results reported in this post, rather than the survey as a whole:

  • How many people in the US public are aware of effective altruism and other key EA-related orgs, public figures, etc.
  • What people's attitudes towards effective altruism are, among those who have encountered it
  • What people's attitudes are towards effective altruism (when described) among those who have not encountered it
  • How these differ across different subgroups
  • And, in the future, we will also be assessing whether these are changing across time (we have reported the results of some surveys on these questions previously, but this is the first formal wave of the Pulse iteration)

I kind of feel like the most important version of a survey like this would be certain subsets of people (eg, tech, policy, animal welfare).

We agree these would be valuable surveys to conduct (and we'd be happy to conduct them if someone wants to fund us to do so). But they'd be very different kinds of surveys. Large representative surveys like this do allow us to generate estimates for relatively niche subsets of the population, but if you are interested in a very small subset of people (e.g. those working in animal welfare), it would be better to run a separate targeted survey.
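To give a rough sense of why large representative surveys can cover some niche subgroups but not extremely niche ones, here is a back-of-the-envelope sketch (all sample sizes and prevalences hypothetical, chosen purely for illustration):

```python
import math

def subgroup_estimate(total_n: int, prevalence: float) -> tuple[float, float]:
    """Expected subgroup sample size in a representative survey, and the
    worst-case (p=0.5) 95% margin of error for a proportion in that subgroup."""
    n_sub = total_n * prevalence
    moe = 1.96 * math.sqrt(0.25 / n_sub)  # Wald margin of error at p=0.5
    return n_sub, moe

# Hypothetical survey of 5,000 respondents:
# a 2% subgroup yields ~100 respondents, enough for a ~±10pp estimate...
n, moe = subgroup_estimate(5000, 0.02)
print(f"2% subgroup: n≈{n:.0f}, MoE≈±{moe:.1%}")
# ...whereas a 0.05% subgroup (e.g. elite policymakers) yields only ~2-3
# respondents, far too few to estimate anything; that group needs its own
# targeted survey instead.
n, moe = subgroup_estimate(5000, 0.0005)
print(f"0.05% subgroup: n≈{n:.1f}")
```

The crossover point depends on the total sample size and how precise an estimate you need, but the basic logic is why a very small population like animal-welfare professionals is better served by a separate targeted survey.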

Also why didn't you call out that the more people know what EA is, the less they seem to like it? Or was that difference not statistically significant?

("Sentiment towards EA among those who had heard of it was positive (51% positive vs. 38% negative among those stringently aware, and 70% positive vs. 22% negative among those permissively aware).")

This comparison wouldn't strictly make sense for a few reasons:

  • The permissive vs stringent classifications are not about whether people know more about EA, but about our confidence, based on their response, that the person has encountered EA. So a very specific response, which reveals clear awareness of EA, but which was overtly factually mistaken could count as stringent, whereas a less specific response which leaves it less clear that the person has encountered EA might only reach the bar for permissive.
  • The two categories are not independent. Every stringent response also passes the bar for the permissive categorisation.
  • A response which referred to a connection between FTX/SBF and EA would be sufficient to meet our stringent classification, because if the person knows about such a (putative) connection, then they have clearly encountered EA (even if their overall conception might be very limited or mistaken). This means that the stringent category is particularly likely to contain people aware of FTX and more than half of the stringently classified respondents who expressed a negative sentiment about EA mentioned FTX.
  • If we instead treat the two groups as mutually exclusive, there are only 34 exclusively permissive and 39 stringent respondents, meaning small sample sizes for any comparison of the two groups.

I do think it is notable that sentiment is more positive among those who did not report awareness of EA and responded to a particular presentation of it, compared to sentiment among those who were classified as having encountered EA. However, this is also not a straightforward comparison: the composition of these groups is different, and the people who did not claim awareness were responding only to one particular presentation of EA. More research would be required to assess whether learning more about EA leads people to have more negative opinions of it.
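To illustrate the small-sample point: a minimal two-proportion z-test sketch (all numbers illustrative — it plugs in the headline 51% vs. 70% positive figures, which strictly apply to the full stringent/permissive groups rather than the exclusive subgroups) shows that with only ~39 and ~34 respondents, even a 19-point difference is statistically indistinguishable from noise:

```python
import math

def two_prop_ztest(p1, n1, p2, n2):
    """Wald z-test and 95% CI for the difference between two proportions."""
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    diff = p2 - p1
    z = diff / se
    # Two-sided p-value from the normal CDF (via math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    ci = (diff - 1.96 * se, diff + 1.96 * se)
    return z, p_value, ci

# Illustrative: 51% positive among 39 "exclusively stringent" respondents
# vs. 70% positive among 34 "exclusively permissive" respondents.
z, p, (lo, hi) = two_prop_ztest(0.51, 39, 0.70, 34)
print(f"z={z:.2f}, p={p:.3f}, 95% CI for difference: ({lo:+.2f}, {hi:+.2f})")
# The CI spans roughly -0.03 to +0.41, i.e. it includes zero:
# too wide to conclude anything about a real difference in sentiment.
```

With samples this small, the confidence interval on the difference covers everything from "no difference" to a 40-point gap, which is why the comparison isn't reported as meaningful.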

I believe all of that is true, but at the same time, I’m almost certain we’ve lost significant credibility with key stakeholders... Friendly organisations have explicitly stated they do not want to publicly associate with us due to our EA branding, as the EA brand has become a major drawback among their key stakeholders

 

I definitely agree this is true, just not sufficient in itself to mean that movement building for EA is impossible or less viable than promoting other ideas (for that we'd need to assess alternative brands/framings).

Agreed that this is likely explained by people thinking they recognise the familiar terms and conflating it with the Humane Society or other local Humane Societies. We didn't include specific checks of real awareness for The Humane League or the other orgs and figures on our list, because they weren't key outcomes we were interested in verifying awareness of per se, and survey length is limited. They were included primarily to provide a point of comparison (alongside a mixture of fake items, real but very low-incidence items, and real and very common items), and to allow us another check by assessing whether responses were associated with each other in ways that made sense (i.e. we would expect EA-related terms to show sensible associations with each other, charities in general to be associated with each other, and tech-related items to be associated with each other).

Based on Google Trends, I'd expect The Humane League to be somewhat less well known than GiveWell, and the Humane Society to be much better known.

Great talk, thanks!

The thing is, broad awareness of EA is still really low—around 2%.  This is from research that was done last summer between Rethink Priorities and CEA, and Breakwater. They found even though in specific groups that we care about, like some elite circles, it might be higher on the whole awareness of EA, it’s just still very low.

Agreed with this. 

That said, I'd also add that sentiment is still positive even among those who have heard of EA.

Our research on elite university students (unpublished but referenced by CEA here), also found that among those who were familiar with EA, only a small number mentioned FTX.

I was indeed trying to say option a - that There's a "bias towards animals relative to other cause areas," . Yes I agree it would be ideal to have people on different sides of debates in these kind of teams but that's often impractical and not my point here.

 

Thanks for clarifying!

  • Re. being biased in favour of animal welfare relative to other causes: I feel at least moderately confident that this is not the case. As the person overseeing the team, I would be very concerned if I thought it was. But it doesn't match my experience: the team is equally happy to work on other cause areas, which is why we spent significant time proposing work across cause areas, and is primarily interested in addressing fundamental questions about how we can best allocate resources.[1] 
  • I am much more sympathetic to the second concern I outlined (which you say is not your concern): we might not be biased in favour of one cause area against another, but we still might lack people on both extremes of all key debates. Both of us seem to agree this is probably inevitable (one reason: EA is heavily skewed towards people who endorse certain positions, as we have argued here, which is a reason to be sceptical of our conclusions and probe the implications of different assumptions).[2] 

Some broader points:

  • I think that it's more productive to focus on evaluating our substantive arguments (to see if they are correct or incorrect) than trying to identify markers of potential latent bias.
  • Our resource allocation work is deliberately framed in terms of open frameworks which allow people to explore the implications of their own assumptions.
     
  1. ^

    And if the members of the team wanted to work solely on animal causes (in a different position), I think they'd all be well-placed to do so. 

  2. ^

    That said, I don't think we do too badly here, even in the context of AW specifically: e.g. Bob Fischer has previously published on hierarchicalism (the view that humans matter more than other animals).

One possible way of thinking about this, which might tie your work in smaller battles into a 'big picture', is if you believe that your work on the smaller battles is indirectly helping the wider project: e.g. by working to solve one altruistic cause, you spare other altruistic individuals and resources from being spent on that cause, increasing the resources available for wider altruistic projects, and potentially increasing the altruistic resources available in the future.[1]

Note that I'm only saying this is a possible way of thinking about it, not that you necessarily should think this (for one thing, the extent to which it is true probably varies across areas, depending on how interconnected different cause areas are and on their varying flow-through effects).

  1. ^

    As in this passage from one of Yudkowsky's short stories:

    "But time passed," the Confessor said, "time moved forward, and things changed."  The eyes were no longer focused on Akon, looking now at something far away.  "There was an old saying, to the effect that while someone with a single bee sting will pay much for a remedy, to someone with five bee stings, removing just one sting seems less attractive.  That was humanity in the ancient days.  There was so much wrong with the world that the small resources of altruism were splintered among ten thousand urgent charities, and none of it ever seemed to go anywhere.  And yet... and yet..."

    "There was a threshold crossed somewhere," said the Confessor, "without a single apocalypse to mark it.  Fewer wars.  Less starvation.  Better technology.  The economy kept growing.  People had more resource to spare for charity, and the altruists had fewer and fewer causes to choose from.  They came even to me, in my time, and rescued me.  Earth cleaned itself up, and whenever something threatened to go drastically wrong again, the whole attention of the planet turned in that direction and took care of it.  Humanity finally got its act together."

4 out of 5 of the team members worked publically (googlably) to a greater or lesser extent on animal welfare issues even before joining RP


I think this risks being misleading, because the team have also worked on many non-animal related topics. And it's not surprising that they have, because AW is one of the key cause areas of EA, just as it's not surprising they've worked on other core EA areas. So pointing out that the team have worked on animal-related topics seems like cherry-picking, when you could equally well point to work in other areas as evidence of bias in those directions.

For example, Derek has worked on animal topics, but also digital consciousness, with philosophy of mind being a unifying theme.

I can give a more detailed response regarding my own work specifically, since I track all my projects directly. In the last 3 years, 112/124 (90.3%)[1] of the projects I've worked on personally have been EA Meta / Longtermist related, with <10% animal related. But I think it would be a mistake to conclude from this that I'm longtermist-biased, even though that constitutes a larger proportion of my work.

Edit: I realise an alternative way to cash out your concern might not be in terms of bias towards animals relative to other cause areas, but rather that we should have people on both sides of all the key cause areas or key debates (e.g. people at both extremes of being pro- and anti-animal, pro- and anti-AI, pro- and anti-GHD, and presumably also on other key questions like suffering focus, etc.).

If so, then I agree this would be desirable as an ideal, but (as you suggest) impractical (and perhaps undesirable) to achieve in a small team.

  1. ^

    This is within RP projects; if we included non-RP academic projects, the proportion of animal projects would be even lower.
