I am the Principal Research Director at Rethink Priorities. I lead our Surveys and Data Analysis department and our Worldview Investigation Team.
The Worldview Investigation Team previously completed the Moral Weight Project and the CURVE Sequence / Cross-Cause Model. We're currently working on tools to help EAs decide how to allocate resources within portfolios of different causes, and on how to use a moral parliament approach to allocate resources given metanormative uncertainty.
The Surveys and Data Analysis Team primarily works on private commissions for core EA movement and longtermist orgs, where we provide survey methodology and data analysis.
Formerly, I also managed our Wild Animal Welfare department, and I've previously worked for Charity Science and been a trustee at Charity Entrepreneurship and EA London.
My academic interests are in moral psychology and methodology at the intersection of psychology and philosophy.
We didn't directly examine why worry is increasing across these surveys. I agree that would be an interesting thing to examine in additional work.
That said, when we asked people why they agreed or disagreed with the CAIS statement, those who agreed mentioned a variety of factors, including "tech experts" expressing concerns, having seen Terminator and the like, and directly observing characteristics of AI (e.g. that it seemed to be learning faster than we would be able to handle). In the CAIS statement writeup, we only examined the reasons why people disagreed (responses from those who agreed tended to be more homogeneous, with many people just saying something like "it's a serious threat"), but we could potentially do further analysis of why they agreed. We'd also be interested to explore this in future work.
It's also perhaps worth noting that we originally wanted to run Pulse monthly, which would allow us to track changes in response to specific events (e.g. the releases of new LLM versions). Now that we're running it quarterly (due to changes in the funding situation), that will be less feasible.
Addressing only the results reported in this post, rather than the survey as a whole:
I kind of feel like the most important version of a survey like this would be certain subsets of people (eg, tech, policy, animal welfare).
We agree these would be valuable surveys to conduct (and we'd be happy to conduct them if someone wants to fund us to do so). But they'd be very different kinds of surveys. Large representative surveys like this do allow us to generate estimates for relatively niche subsets of the population, but if you are interested in a very small subset of people (e.g. those working in animal welfare), it would be better to run a separate targeted survey.
Also why didn't you call out that the more people know what EA is, the less they seem to like it? Or was that difference not statistically significant?
("Sentiment towards EA among those who had heard of it was positive (51% positive vs. 38% negative among those stringently aware, and 70% positive, vs. 22% negative among those permissively aware)."
This comparison wouldn't strictly make sense for a few reasons:
I do think it is notable that sentiment is more positive among those who did not report awareness of EA and responded to a particular presentation of it, compared to sentiment among those who were classified as having encountered EA. However, this is also not a straightforward comparison: the composition of these groups differs, and the people who did not claim awareness were responding only to one particular presentation of EA. More research would be required to assess whether learning more about EA leads people to have more negative opinions of it.
I believe all of that is true, but at the same time, I’m almost certain we’ve lost significant credibility with key stakeholders... Friendly organisations have explicitly stated they do not want to publicly associate with us due to our EA branding, as the EA brand has become a major drawback among their key stakeholders
I definitely agree this is true; it's just not sufficient in itself to mean that movement building for EA is impossible or less viable than promoting other ideas (for that, we'd need to assess alternative brands/framings).
Agree that this is likely explained by people thinking they recognise the familiar terms and conflating it with the Humane Society or other local humane societies. We didn't include specific checks of real awareness for The Humane League or the other orgs and figures on our list, because they weren't key outcomes we were interested in verifying awareness of per se, and survey length is limited. They were included primarily to provide a point of comparison (alongside a mixture of fake items, real but very low-incidence items, and real and very common items), and to allow us another check: assessing whether responses were associated with each other in ways that made sense (i.e. we would expect EA-related terms to show sensible associations with each other, charities in general to be associated with each other, and tech-related items to be associated with each other). A rough sketch of this kind of check is below.
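For illustration, here's a minimal sketch of that kind of association check, assuming one binary awareness column per item (the data and column names are made up for the example, not the actual survey variables):

```python
import pandas as pd

# Toy data: one row per respondent, 1 = claimed awareness of the item.
# Values and column names are purely illustrative.
df = pd.DataFrame({
    "effective_altruism": [1, 0, 1, 1, 0, 0, 1, 0],
    "givewell":           [1, 0, 1, 1, 0, 0, 1, 0],
    "the_humane_league":  [0, 0, 1, 0, 1, 0, 1, 0],
    "fake_charity_item":  [0, 1, 0, 0, 1, 0, 0, 1],
})

# Pairwise correlations between the binary items (phi coefficients).
# Sensible responses should show EA-related terms clustering together,
# charities clustering together, and fake items patterning with nothing.
print(df.corr().round(2))
```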
Based on Google Trends, I'd expect The Humane League to be a bit less well known than GiveWell, and the Humane Society to be much better known.
Great talk, thanks!
The thing is, broad awareness of EA is still really low, at around 2%. This is from research that was done last summer by Rethink Priorities, CEA, and Breakwater. They found that even though awareness of EA might be higher in specific groups that we care about, like some elite circles, on the whole it's still very low.
Agreed with this.
That said, I'd also add that sentiment is still positive even among those who have heard of EA.
Our research on elite university students (unpublished, but referenced by CEA here) also found that, among those who were familiar with EA, only a small number mentioned FTX.
I was indeed trying to say option (a): that there's a "bias towards animals relative to other cause areas". Yes, I agree it would be ideal to have people on different sides of debates in these kinds of teams, but that's often impractical and not my point here.
Thanks for clarifying!
Some broader points:
And if the members of the team wanted to work solely on animal causes (in a different position), I think they'd all be well-placed to do so.
That said, I don't think we do too badly here, even in the context of AW specifically (e.g. Bob Fischer has previously published on hierarchicalism, the view that humans matter more than other animals).
One possible way of thinking about this, which might tie your work on smaller battles into a 'big picture', is to believe that your work on the smaller battles is indirectly helping the wider project: e.g. by working on one altruistic cause, you spare other altruistic individuals and resources from being spent on that cause, increasing the resources available for wider altruistic projects, and potentially increasing the altruistic resources available in the future.[1]
Note that I'm only saying this is a possible way of thinking about this, not necessarily that you should think this (for one thing, the extent to which this is true probably varies across areas, depending on how interconnected different cause areas are and on their varying flow-through effects).
As in this passage from one of Yudkowsky's short stories:
"But time passed," the Confessor said, "time moved forward, and things changed." The eyes were no longer focused on Akon, looking now at something far away. "There was an old saying, to the effect that while someone with a single bee sting will pay much for a remedy, to someone with five bee stings, removing just one sting seems less attractive. That was humanity in the ancient days. There was so much wrong with the world that the small resources of altruism were splintered among ten thousand urgent charities, and none of it ever seemed to go anywhere. And yet... and yet..."
"There was a threshold crossed somewhere," said the Confessor, "without a single apocalypse to mark it. Fewer wars. Less starvation. Better technology. The economy kept growing. People had more resource to spare for charity, and the altruists had fewer and fewer causes to choose from. They came even to me, in my time, and rescued me. Earth cleaned itself up, and whenever something threatened to go drastically wrong again, the whole attention of the planet turned in that direction and took care of it. Humanity finally got its act together."
4 out of 5 of the team members worked publically (googlably) to a greater or lesser extent on animal welfare issues even before joining RP
I think this risks being misleading, because the team have also worked on many non-animal-related topics. And it's not surprising that they've worked on animal topics, because AW is one of the key cause areas of EA, just as it's not surprising that they've worked on other core EA areas. So pointing out that the team have worked on animal-related topics seems like cherry-picking, when you could equally well point to their work in other areas as evidence of bias in those directions.
For example, Derek has worked on animal topics, but also digital consciousness, with philosophy of mind being a unifying theme.
I can give a more detailed response regarding my own work specifically, since I track all my projects directly. In the last 3 years, 112/124 (90.3%)[1] of the projects I've worked on personally have been EA Meta / Longtermist related, with <10% animal related. But I think it would be a mistake to conclude from this that I'm longtermist-biased, even though that constitutes a larger proportion of my work.
Edit: I realise an alternative way to cash out your concern might not be in terms of bias towards animals relative to other cause areas, but rather that we should have people on both sides of all the key cause areas or key debates (e.g. we should have people at both extremes of being pro- and anti-animal, pro- and anti-AI, and pro- and anti-GHD, and presumably also on other key questions like suffering focus etc.).
If so, then I agree this would be desirable as an ideal, but (as you suggest) impractical (and perhaps undesirable) to achieve in a small team.
This is within RP projects; if we included non-RP academic projects, the proportion of animal projects would be even lower.
I think these questions are relevant in a variety of ways:
One move which is sometimes made to suggest that these things aren't relevant is to say that we only need to be concerned about awareness and attitudes among certain specific groups (e.g. policymakers or elite students). But even if we think that knowing about awareness of and attitudes towards EA among certain groups is highly important, that doesn't mean that broader public attitudes are unimportant.
As a practical matter, it's also worth bearing in mind that large representative surveys like this can generate estimates for some niche subgroups (just not really niche ones, like elite policymakers), particularly with larger sample sizes. A rough sketch of the arithmetic is below.
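As a back-of-the-envelope illustration (the numbers here are made up, not figures from this survey), the subgroup's share of the population is what drives feasibility:

```python
import math

def subgroup_precision(total_n: int, subgroup_share: float):
    """Expected subgroup n and an approximate 95% margin of error
    (worst case p = 0.5, assuming simple random sampling)."""
    n = total_n * subgroup_share
    moe = 1.96 * math.sqrt(0.25 / n)
    return n, moe

# A subgroup that's ~2% of adults, in a 10,000-person survey:
n, moe = subgroup_precision(10_000, 0.02)
print(f"n = {n:.0f}, 95% MoE ~ +/-{moe:.1%}")  # n = 200, MoE ~ +/-6.9%

# A really niche group, say 0.05% of adults (e.g. elite policymakers):
n, _ = subgroup_precision(10_000, 0.0005)
print(f"n = {n:.0f}")  # n = 5: far too few respondents to estimate anything
```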