I am the Principal Research Director at Rethink Priorities. I lead our Surveys and Data Analysis department and our Worldview Investigation Team.
The Worldview Investigation Team previously completed the Moral Weight Project and the CURVE Sequence / Cross-Cause Model. We're currently working on tools to help EAs decide how they should allocate resources within portfolios of different causes, and how to use a moral parliament approach to allocate resources given metanormative uncertainty.
The Surveys and Data Analysis Team primarily works on private commissions for core EA movement and longtermist orgs, where we provide:
- Survey methodology and data analysis.
Formerly, I also managed our Wild Animal Welfare department, and I've previously worked for Charity Science and been a trustee at Charity Entrepreneurship and EA London.
My academic interests are in moral psychology and methodology at the intersection of psychology and philosophy.
@titotal I'm curious whether, or to what extent, we substantively disagree, so I'd be interested in what specific numbers you'd anticipate, if you're willing to share them.
I don't say strictly 0% only because I think there's always the possibility for a few unusual cases, e.g. someone is googling how to do good and happens across an old post about EAG or their inactive local group.
I'm imagining someone googling "ethical career" 2 years from now and finding 80k, noticing that almost every recent article, podcast, and promoted job is based around AI, and concluding that EA is just an AI thing now.
I definitely agree that would eventually become the case (eventually all the older non-AI articles will become out of date). I'm less sure it will be a big factor 2 years from now (though it depends on exactly how articles are arranged on the website and so how salient it is that the non-AI articles are old).
It could also be bad even for AI safety: There are plenty of people here who were initially skeptical of AI x-risk, but joined the movement because they liked the malaria nets stuff. Then over time and exposure they decided that the AI risk arguments made more sense than they initially thought, and started switching over.
I also think this is true in general (I don't have a strong view about the net balance in the case of 80K's outreach specifically).
Previous analyses we conducted suggested that over half of Longtermists (~60%) previously prioritised a different cause and that this is consistent across time.
You can see the overall self-reported flows (in 2019) here.
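For anyone wanting to reproduce this kind of flow analysis on their own data, here's a minimal sketch in pandas; the column names and the toy responses are hypothetical illustrations, not the EA Survey's actual variables:

```python
import pandas as pd

# Hypothetical extract: one row per respondent, with the cause they first
# prioritised and the cause they prioritise now (toy data, not real responses).
df = pd.DataFrame({
    "cause_first": ["Global health", "Longtermism", "Animal welfare",
                    "Global health", "Animal welfare", "Longtermism"],
    "cause_now":   ["Longtermism", "Longtermism", "Longtermism",
                    "Global health", "Animal welfare", "Longtermism"],
})

# Cross-tabulate first vs. current cause to get the self-reported flows.
print(pd.crosstab(df["cause_first"], df["cause_now"]))

# Share of current longtermists who previously prioritised a different cause
# (the analogue of the ~60% figure above).
lt = df[df["cause_now"] == "Longtermism"]
print((lt["cause_first"] != "Longtermism").mean())
```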
Thanks Arden!
I also agree that prima facie this strategic shift might seem worrying given that 80K has been the powerhouse of EA movement growth for many years.
That said, I share your view that growth via 80K might reduce less than one would naively expect. In addition to the reasons you give above, another consideration is our finding that a large percentage of people get into EA via 'passive' outreach (e.g. someone googles "ethical career" and finds the 80K website), rather than active outreach; for 80K specifically, about 50% of recruitment was 'passive'. It seems plausible that much of that could continue even after 80K's strategic shift.
Our framings will probably change. It's possible that the framings we use more going forward will emphasise EA style thinking a little less than our current ones, though this is something we're actively unsure of.
As noted elsewhere, we plan to research this empirically. Fwiw, my guess is that broader EA messaging would be better (on average and when comparing the best messaging from each) at recruiting people to high levels of engagement in EA (this might differ when looking to recruit people directly into AI related roles), though with a lot of variance within both classes of message.
A broader coalition of actors will be motivated to pursue extinction prevention than longtermist trajectory changes... For instance, see Scott Alexander on the benefits of extinction risk as a popular meme compared to longtermism.
This might vary between:
Though our initial work does not suggest this.
I agree this is a potential concern.
As it happens, since 2020 the community has continued to age. As of the end of last year, the median age is 31 and the mean is 32.4, and we can see that it has steadily aged across years.
It's clear that a key contributor to our age distribution is the age at which people first get involved with EA (median 24, mean 26.7), though that age has also increased over time.
I think people sometimes point to our outreach focusing on things like university groups to explain this pattern. But I think this is likely over-stated: such outreach accounts for only a small minority of our recruiting, and most of the ways people first hear about EA seem to be more passive mechanisms, not tied to direct outreach, which would be accessible to people at older ages (we'll discuss this in more detail in the 2024 iteration of this post).
That said, different age ranges do appear to have different levels of awareness of EA, with the highest awareness in the 25-34 and 35-44 age ranges. (Though our sample size is large, the number of people we count as aware of EA is very low, so you can see these estimates are quite uncertain. Our confidence in them will increase as we run more surveys.) This suggests that awareness of EA may be reaching different groups unevenly, which could partly contribute to lower engagement from older age groups. But this need not be the result of differences in our outreach; it could result from different levels of interest across groups.
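To illustrate why those awareness estimates stay uncertain even with a large overall sample, here's a minimal sketch of the arithmetic (the bracket sizes and 'aware' counts are invented for illustration, not our actual data): with only a handful of 'aware' respondents per bracket, the binomial confidence interval around the rate is wide.

```python
from statsmodels.stats.proportion import proportion_confint

# Invented counts per age bracket: (respondents, number counted as aware of EA).
# Even with hundreds of respondents per bracket, a handful of "aware" cases
# leaves the estimated awareness rate quite uncertain.
brackets = {"25-34": (1200, 21), "35-44": (900, 14), "55+": (600, 3)}

for bracket, (n, aware) in brackets.items():
    low, high = proportion_confint(aware, n, alpha=0.05, method="wilson")
    print(f"{bracket}: {aware/n:.1%} aware, 95% CI [{low:.1%}, {high:.1%}]")
```

As more survey waves accumulate, the "aware" counts grow and these intervals tighten, which is why confidence in the estimates should increase over time.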
Matthew Yglesias has written more critically about this tendency (which he thinks is widely followed in activist circles, but is often detrimental). For example, here he describes what he refers to as "activist chum", which is good for motivating and fundraising (very important for the self-interest of (those leading) the movement), but can lead to focusing on "wins" that aren't meaningful and may be unhelpful.
The chum comes from the following political organizing playbook that is widely followed in progressive circles:
- Always be asking for something.
- Ask for something of someone empowered to give it to you.
- Ask for something from someone who cares what you think.
That's interesting, but seems to be addressing a somewhat separate claim to mine.
My claim was that broad heuristics are more often necessary and appropriate when engaged in abstract evaluation of broad cause areas, where you can't directly assess how promising concrete opportunities/interventions are, and less so when you can directly assess concrete interventions.
If I understand your claims correctly, they are that:
I generally agree that applying broad heuristics to broad cause areas is more likely to be misleading than when you can assess specific opportunities directly. Implicit in my claim is that where you don't have to rely on broad heuristics, but can assess specific opportunities directly, then this is preferable. I agree that considering whether a specific intervention has been tried before is useful and relevant information, but don't consider that an application of the Neglectedness/Crowdedness heuristic.
I think this depends crucially on how, and to what object, you are applying the ITN framework:
On the whole, it seems to me that the further you move away from abstract evaluations of broad cause areas, and towards concrete interventions, the less it's necessary or appropriate to depend on broad heuristics, and the more you can simply attempt to estimate expected impact directly.
I definitely agree that this seems unusual (and our results show that it is!). Still, it seems reasonable that a very small number of people might stumble across an EAGx first, as in Arden's anecdote below.