David_Moss

Principal Research Director @ Rethink Priorities
7361 karma · Working (6-15 years)

Bio

I am the Principal Research Director at Rethink Priorities. I lead our Surveys and Data Analysis department and our Worldview Investigation Team. 

The Worldview Investigation Team previously completed the Moral Weight Project and CURVE Sequence / Cross-Cause Model. We're currently working on tools to help EAs decide how they should allocate resources within portfolios of different causes, and on how to use a moral parliament approach to allocate resources given metanormative uncertainty.

The Surveys and Data Analysis Team primarily works on private commissions for core EA movement and longtermist orgs, where we provide:

  • Private polling to assess public attitudes
  • Message testing / framing experiments, testing online ads
  • Expert surveys
  • Private data analyses and survey / analysis consultation
  • Impact assessments of orgs/programs

Formerly, I also managed our Wild Animal Welfare department. I've previously worked for Charity Science and been a trustee at Charity Entrepreneurship and EA London.

My academic interests are in moral psychology and methodology at the intersection of psychology and philosophy.

How I can help others

Survey methodology and data analysis.

Sequences
3

RP US Public AI Attitudes Surveys
EA Survey 2022
EA Survey 2020

Comments
515

The only statistically significant results are that people who posted or commented on the Forum are more Center-left (41.2% vs 34.9% for non-Forumites), but less Left (27.8% vs 37.8%).
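
For anyone who wants to sanity-check comparisons like this, here is a minimal sketch of a two-proportion z-test; the group sizes are hypothetical placeholders rather than the actual survey Ns, and proportions_ztest is just one reasonable choice of test:

```python
# Minimal sketch of a two-proportion z-test for the Center-left comparison.
# Group sizes here are HYPOTHETICAL placeholders, not the actual survey Ns.
from statsmodels.stats.proportion import proportions_ztest

n_forum, n_non_forum = 500, 1500          # hypothetical sample sizes
counts = [round(0.412 * n_forum),         # 41.2% of Forum participants
          round(0.349 * n_non_forum)]     # 34.9% of non-Forumites

stat, p_value = proportions_ztest(count=counts, nobs=[n_forum, n_non_forum])
print(f"z = {stat:.2f}, p = {p_value:.3f}")   # conventionally significant if p < 0.05
```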

Thanks JWS, it certainly sounds like we agree more than we disagree.

even if it has been established I want to push back and un-establish 

That's definitely fair! 

For what it's worth, I think that the explanation for differences in support for these two different clusters of causes is more epistemic than it is to do with attitudes towards the long term or near term per se.[1] Ideally, I'd like the terms we use to not (be seen to) refer to the explanation for supporting the causes at all, since I think the reasons are heterogeneous.

In any case, we definitely agree that none of these terms are perfect, and I suspect no terms are going to be completely satisfactory, but I'm open to continued discussion about what better terms would be.

 

  1. ^

    Although, in terms of predicting "LT minus NT" cause prioritisation from our cause-related idea items, the "long term future" item and "low probability, high impact" items were about equally predictive (see the sketch after this footnote).

    Interestingly, this also holds true in unpublished work we have looking at the general public, for whom objections that influencing the far future is impractical or impossible are more consequential than their lack of concern for future generations.
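
To illustrate what "about equally predictive" means operationally, here is a minimal sketch comparing standardised regression coefficients on simulated data; the item names and numbers are hypothetical stand-ins, not the actual survey items or estimates:

```python
# Sketch: comparing how predictive two idea items are of an "LT minus NT"
# score via standardised OLS coefficients. Data are SIMULATED and the
# variable names are hypothetical, not the actual survey items or results.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
long_term_future = rng.normal(size=n)                                # item 1
low_prob_high_impact = 0.5 * long_term_future + rng.normal(size=n)   # item 2
lt_minus_nt = (0.4 * long_term_future
               + 0.4 * low_prob_high_impact
               + rng.normal(size=n))                                 # outcome

def z(x):  # z-score so coefficients are comparable across items
    return (x - x.mean()) / x.std()

X = sm.add_constant(np.column_stack([z(long_term_future), z(low_prob_high_impact)]))
fit = sm.OLS(z(lt_minus_nt), X).fit()
print(fit.params[1:])  # roughly equal betas -> "about equally predictive"
```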

Many thanks Devon!

I agree that if you adjusted most of these results by a factor of 3-4x (for the LTF vs GHD/NT ratios above), you'd see GHD/NT ahead pretty much across the board. The biggest ratios in favour of longtermism in the results above are ~2x (though closer to 4x among the highly engaged specifically).
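
To make that adjustment concrete, here is a toy sketch of the arithmetic; both numbers are illustrative stand-ins rather than precise survey or funding figures:

```python
# Toy sketch of adjusting a survey result ratio by a funding ratio.
# Both numbers below are ILLUSTRATIVE stand-ins, not precise figures.
lt_vs_nt_result_ratio = 2.0   # ~2x in favour of longtermism in the results above
funding_ratio = 3.5           # assumed LTF vs GHD/NT funding ratio (3-4x)

adjusted = lt_vs_nt_result_ratio / funding_ratio
print(f"Adjusted LT:NT ratio = {adjusted:.2f}")  # < 1 means GHD/NT ahead
```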

That said, I think the relationship between funding and intra-EA influence is unclear. I would expect large diminishing returns, and for a lot of Meta resources to not be spent on intra-EA influence. My guess is also that a lot of the influence driving people from neartermist to longtermist causes comes from the support of EA elites in a way that is partially separable from funding.[1] So adjusting by the funding ratio would not be straightforward.

  1. ^

    Obviously funding levels and influence are causally related in both directions. That said, I imagine OP/80K/CEA advocating for cause areas would have significant influence independent of their funding levels.

This is not a bad summary overall, but has some errors/confusions:

Longtermist causes are prioritized by 63.6% of respondents vs. 46.8% for neartermist causes, a gap that widens among highly engaged EAs.

Both parts of this are technically true, but the statistic referred to in the first half of the sentence is different from the one that we reported to show the gap between low/high engagement EAs.

Comparable statistics would be:

  • Overall: 37.6% of respondents most prioritised only a longtermist cause, 21.0% most prioritised only a neartermist cause.
  • Among the highly engaged, 47% most prioritised only a longtermist cause and only 13% most prioritised a neartermist cause.
  • Among the less engaged, 26% most prioritised only a longtermist cause and 31% most prioritised only a neartermist cause.

In an allocation task, respondents assign the most resources to Global health/poverty, then AI risk, then animal welfare

True looking at farmed animal welfare, but a combined animal welfare category would be neck and neck with AI (slightly ahead, but not significantly so).

with actual allocations lower on Global poverty and animal welfare than the survey or an earlier survey of EA leaders.

Actual allocations to Global Poverty are higher than our survey allocations and actual allocations to FAW are lower. I don't have statistics for actual allocations to WAW, but they are likely dramatically lower.

Thanks for the detailed comment!

I have to object to this. I don't think longtermism is best understood as a cause, or set of causes, but more as a justification for working on certain causes over others. e.g.:

Working on Nuclear Risk could be seen as near-termist.

We agree and say as much:

As we have noted previously, people may of course prioritise these causes for reasons other than longtermism or neartermism per se. Likewise, people might support the ‘Other’ causes here for neartermist or longtermist reasons. 

And here's what we say at the linked previous post:

For simplicity, we label support for these non-longtermist causes "neartermist", as we have in previous years. However, it's worth noting explicitly that there is little reason to suppose that neartermism specifically (e.g., attitudes or beliefs related to helping present vs. future generations, or different time preference) explains support for these causes, rather than different epistemic beliefs (e.g., about appropriate kinds of evidence) or support for more traditional causes etc.

We also explicitly discuss your example of Climate Change making much the same point:

The classifications we used here were informed by our prior analyses of the factor structure of the cause prioritisation items and a priori theoretical considerations (namely Climate Change is clearly associated with the neartermist causes not the longtermist causes, but theoretically some might think of it as a longtermist cause, and we count it as ‘Other’ in this analysis)

I acknowledge though that your concern is that "this categorisation helps to reify and entrench those divisions". I think this is possible, but I think that:

  • There is something[1] unifying these causes
  • It is important to talk about that (I think we'd be more confused/less informed if we just considered all the causes as separate/independent). 
  • Referring to these clusters of causes and ideas in terms of "longtermism" and "neartermism" is established terminology. Crucially, I don't think there's an obviously better set of terms, because "existential risk" wouldn't capture some causes within this bucket (e.g. in previous years we had a "broad longtermism" item which was also part of this cluster)

I think it's important enough not to hide the details in footnotes

I think it's an entirely reasonable view to think discussion of this should be in the text, not a footnote. Though we had a lot of information in both the footnotes and appendix, so it's tricky.

 

  1. ^

    Though I don't claim it's a single thing, rather than a cluster of correlated things. And empirically, our longtermist-neartermist cause score measure is strongly correlated with people's stated abstract beliefs. The single abstract item explicitly about longtermism is correlated with LT-NT at r=0.457, which is appreciably strong for a necessarily noisy cause prioritisation score and a single item, in a social science context.
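
For intuition about what an r = 0.457 association looks like, here is a minimal simulation sketch; the variable names and data are hypothetical rather than the actual survey measures:

```python
# Sketch: a single abstract longtermism item correlated with an LT-NT
# cause-prioritisation score at roughly r = 0.457. Data are SIMULATED.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 1000
abstract_item = rng.normal(size=n)            # hypothetical single-item measure
noise = rng.normal(size=n)
lt_minus_nt = 0.457 * abstract_item + np.sqrt(1 - 0.457 ** 2) * noise

r, p = pearsonr(abstract_item, lt_minus_nt)
print(f"r = {r:.3f}, p = {p:.3g}")            # r should come out near 0.457
```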

Thanks Grayden!

  • I strongly agree that engagement =/= commitment or impact. 
  • That said, I'd note that the trend for higher engagement to be associated with stronger support for longtermist over neartermist causes is also observed across other proxies for engagement. For example, perhaps most surprisingly, having taken the GWWC pledge is (within our sample) significantly associated with stronger support for LT over NT.

Thanks for the question Huw. This is in reference to the EA cause of mental health, rather than mental health for EAs (which would fall under movement building, assuming the purpose was building the movement rather than just narrowly helping EAs). This was a requested addition from 2018.

Thanks!

do you see that any changes over time in cause prioritization could be explained by changing demographics? You mentioned e.g. engagement level predicting cause prioritization. I am then thinking that if the % of low to high engagement EAs have changed over time, perhaps that partially drives the trends in your 4th chart?

Yes, that's definitely right, and one of the reasons why we plotted the interaction between engagement and time in the appendix. 

The percentage of high engagement EAs (using the simple binary measure) has increased over time, from 44% in 2019 to 49% in 2020 and 55% in 2022. So you would expect this to increase support for causes which are shown to be more strongly supported by high engagement EAs across those years. That said, looking at the interaction plots, you can also see that support for Biosecurity and Nuclear, and for AI, has increased among both high and low engagement EAs over that time period.
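
A back-of-the-envelope illustration of this composition effect is below; the engagement shares (44%, 49%, 55%) come from the surveys, but the within-group support levels are purely hypothetical:

```python
# Composition effect: overall support can rise purely because the share of
# highly engaged respondents grew, even with fixed within-group support.
# Within-group support levels below are HYPOTHETICAL.
support_high, support_low = 0.60, 0.40        # hypothetical support levels

for year, share_high in [(2019, 0.44), (2020, 0.49), (2022, 0.55)]:
    overall = share_high * support_high + (1 - share_high) * support_low
    print(f"{year}: overall support = {overall:.1%}")
```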

It seems possible that both of these are neglected for similar reasons. 

It seems surprising that funding would be the bottleneck (which means you can't just have more of both). But that has been my experience surprisingly often, i.e. core orgs are willing to devote many highly valuable staff hours to collaborating on survey projects, but balk at ~$10,000 survey costs.

Thanks again for the detailed reply Cameron!

It seems like you find the descriptor 'lukewarm' to be specifically problematic—I am considering changing the word choice of the 'headline result' accordingly given this exchange. (I originally chose to use the word 'lukewarm' to reflect the normal-but-slightly-negative skew of the results I've highlighted previously. I probably would have used 'divided' if our results looked bimodal, but they do not.) 

I don't think our disagreement is to do with the word "lukewarm". I'd be happy for the word "lukewarm" to be replaced with "normal but slightly negative skew" or "roughly neutral, but slightly negative" and our disagreement would remain. I'll explain where I think the disagreement is below.

Here's the core statement which I disagreed with:


EAs have lukewarm [normal but slightly-negative skew] views about longtermism

  1. Result: EAs (actively involved across 10+ cause areas) generally seem to think that AI risk and x-risk are less promising cause areas than ones like global health and development and animal welfare

The first point of disagreement concerned this claim: 

  • "EAs (actively involved across 10+ cause areas) generally seem to think that AI risk and x-risk are less promising cause areas than ones like global health and development and animal welfare"

If we take "promising" to mean anything like prioritise / support / believe should receive a larger amount of resources / believe is more impactful etc., then I think this is a straightforward substantive disagreement: I think whatever way we slice 'active involvement', we'll find more actively involved EAs prioritise X-risk more.

As we discussed above, it's possible that "promising" means something else. But I personally do not have a good sense of in what way actively involved EAs think AI and x-risk are less promising than GHD and animal welfare.[1]

 

EAs have lukewarm [normal but slightly-negative skew] views about longtermism

Concerning this claim, I think we need to distinguish (as I did above), between: 

  • What do people think of 'longtermism'? / What do people think about allocations to or prioritisation of longtermist causes?
  • What do people think of EA's shift more towards longtermist causes?

Regarding the first of these questions, your second result shows slight disagreement with the claim "I think longtermist causes should be the primary focus in effective altruism". I agree that a reasonable interpretation of this result, taken in isolation, is that the actively involved EA community is slightly negative regarding longtermism. But taking into account other data, like our cause prioritisation data which shows actively engaged EAs strongly prioritise x-risk causes, or our result suggesting slight agreement with an abstract statement of longtermism, I'm more sceptical. I wonder if what explains the difference is people's response to the notion of these causes being the "primary focus", rather than their attitudes towards longtermist causes per se.[2] If so, these responses need not indicate that the actively involved community leans slightly negative towards longtermism.

In any case, this question largely seems to me to reduce to the question of what people's actual cause prioritisation is + what their beliefs are about abstract longtermism, discussed above.

Regarding the question of EAs' attitudes towards the "overall shift towards longtermist causes", I would also say that, taken in isolation, it's reasonable to interpret your result as showing that actively involved EAs lean slightly negative towards EA's shift towards longtermism. Again, our cause prioritisation results suggesting strong and increasing prioritisation of longtermist causes by more engaged EAs across multiple surveys gives me pause. But the main point I'll make (which suggests a potential conciliatory way to reconcile these results) is to observe that attitudes towards the "overall shift towards longtermist causes" may not reflect attitudes towards longtermism per se. Perhaps people are Neutral/Agnostic regarding the "overall shift", despite personally prioritising longtermist causes, because they are Agnostic about what people in the rest of the community should do. Or perhaps people think that the shift overall has been mishandled (whatever their cause prioritisation). If so, the results may be interesting regarding EAs' attitudes towards this "shift", but not regarding their overall attitudes towards longtermism and longtermist causes.

Thanks again for your work producing these results and responding to these comments! 

 

  1. ^

    As I noted, I could imagine "promising" connoting something like new, young, scrappy cause areas (such that an area could be more "promising" even if people support it less than a larger established cause area). I could sort of see this fitting Animal Welfare (though it's not really a new cause area), but it's hard for me to see this applying to Global Health/Global Poverty which is a very old, established and large cause area.

  2. ^

     For example, people might think EA should not have a "primary focus", but remain a 'cause-neutral' movement (even though they prioritise longtermist causes most strongly and think they should get most resources). Or people might think we should split resources across causes for some other reason, despite favouring longtermism.

