I am the Principal Research Director at Rethink Priorities. I lead our Surveys and Data Analysis department and our Worldview Investigation Team.
The Worldview Investigation Team previously completed the Moral Weight Project and the CURVE Sequence / Cross-Cause Model. We're currently working on tools to help EAs decide how they should allocate resources within portfolios of different causes, and on how to use a moral parliament approach to allocate resources given metanormative uncertainty.
The Surveys and Data Analysis Team primarily works on private commissions for core EA movement and longtermist orgs, where we provide survey methodology and data analysis.
Formerly, I also managed our Wild Animal Welfare department. I've previously worked for Charity Science and been a trustee at Charity Entrepreneurship and EA London.
My academic interests are in moral psychology and methodology at the intersection of psychology and philosophy.
I don't think that Caplan's test is a good one, for a couple of reasons that commenters on his original post pointed out:
Thanks Ivan!
What stood out most was the idea that prioritization functions not only as analysis but also as a form of governance.
I agree with this. And I think this framing makes clear why how we allocate the community's prioritisation efforts is such an important question.
How much of the prioritization in EA is a design question about institutional learning, decision rights, and oversight?
From that angle, the current emphasis on within-cause work doesn’t just feel like a strategic imbalance; it may also reflect what’s easier to operationalize within existing organizational structures.
I also agree with this. As we allude to in the piece, building institutional infrastructure for (and, in general, doing) within-cause prioritisation is easier: you can build on, or more easily develop, networks of domain specialists and specialist institutions. And I think various factors (e.g. network effects) push the community towards more siloed, within-cause structures.
So I think it's both the case that within-cause infrastructure is easier to set up and that, as you say, our current (heavily cause-specific) infrastructure makes within-cause prioritisation easier and cross-cause prioritisation harder (e.g. there are few institutions that are well-placed or have the remit to do cross-cause work).
I agree that we would need more structured systems (or more support for the existing systems) in order to do more cross-cause prioritisation. I don't want to communicate fatalism about this though: I think existing organizations and individuals could start doing significantly more cross-cause prioritisation if they decided it were valuable, and that this would itself make it easier to build the relevant infrastructure.[1]
Though, of course, it would take further work for EA's actual allocations of resources to be influenced by this prioritisation work.
Thank you for the reply, Nick.
I agree there's the ethical commensurability factor, but I'm not talking about moral intuitions, or domain-specific knowledge, but about concrete reality - even if that reality is unknown. I'm saying that cross-cause comparison increases objective (not subjective) uncertainty by orders of magnitude.
I think my remarks above apply across different kinds of uncertainty (we discuss ethical, empirical, and other kinds of uncertainty above). That said, I'm not sure I follow your intended point about objective uncertainty (the example you give seems to be about subjective uncertainty about moral weight), but it seems to me that my remarks would apply just the same to objective uncertainty.
To put it another way, in many cases moral weight doesn't matter for within-cause comparison, but it becomes critically important between causes... this huge objective increase in uncertainty is an important, if fairly basic, point to recognise.
We make the point (using the same examples) that comparisons across causes introduce many huge uncertainties that do not apply within causes, at multiple points in the passages quoted above and elsewhere. So I fear we may be talking past each other if you see this point as missing from the article.
Thanks for the comment, Nick.
The article discusses these difficulties, including the same specific example of cross-species comparisons here:
With that broad a mandate, however, come significant challenges. One of the biggest issues is ethical commensurability – essentially, how do you compare ‘good done’ across wildly different spheres? Each cause tends to have its own metrics and moral values, and these don’t easily line up. Saving a child’s life can be measured in DALYs or QALYs, but how do we directly compare that to reducing the probability of human extinction, or to sparing chickens from factory farms? Cross-cause analysis must somehow weigh very different outcomes against each other, forcing thorny value judgments. One concrete example is comparing global health vs. existential risk. Global health interventions are often evaluated by cost per DALY or life saved, whereas existential risk reduction is about lowering a tiny probability of a huge future catastrophe. A cross-cause perspective has to decide how many present-day lives saved is “equivalent” to a 0.01% reduction in extinction risk – a deeply fraught question. Likewise, comparing human-centric causes to animal-focused causes requires assumptions about the relative moral weight of animal suffering vs. human suffering. If there’s no agreed-upon exchange rate (and people’s intuitions differ), the comparisons can feel too disparate. Researchers have attempted to resolve this by creating unified metrics or moral weight estimates (for instance, projects to estimate how many shrimp-life improvements rival a human-life improvement), but there’s often no escaping the subjective choices involved. This means cross-cause prioritization can be especially contentious and uncertain: small changes in moral assumptions or estimates can flip the ranking of causes, leading to debate.
...
Aggregating evidence across causes is very hard – the data and methodology you use to assess a poverty program vs. an AI research project are entirely different. Having worked on broad cross-domain analyses of this kind, we have previously noted how difficult it is to incorporate "the vast number of relevant considerations and the full breadth of our uncertainties within a single model" when comparing across domains.
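To make concrete the quoted point that small changes in moral assumptions can flip the ranking of causes, here is a minimal sketch in Python. All of the numbers (the costs and the candidate moral weights) are hypothetical, chosen only to illustrate the mechanism; they are not drawn from the article or from any actual estimates:

```python
# Toy cross-cause comparison with entirely hypothetical numbers.
# 'moral_weight' is the assumed value of averting one chicken-year of
# suffering relative to averting one human DALY.

human_cost_per_daly = 50.0     # $ per human DALY averted (hypothetical)
chicken_cost_per_year = 0.25   # $ per chicken-year of suffering averted (hypothetical)

for moral_weight in (0.001, 0.01, 0.1):
    human_value = 1 / human_cost_per_daly             # DALY-equivalents per $
    chicken_value = moral_weight / chicken_cost_per_year
    better = "animal" if chicken_value > human_value else "human"
    print(f"moral_weight={moral_weight}: human={human_value:.3f}, "
          f"chicken={chicken_value:.3f} DALY-eq/$ -> {better} intervention ranks higher")
```

Here a tenfold change in a single, deeply uncertain parameter reverses which cause 'wins', which is exactly why these comparisons can be so contentious.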
We also discuss this in the context of cause prioritisation here. I think it's important to note that these difficulties apply to any comparison across causes (not just intervention-level cross-cause prioritisation), and so can't be dodged if you are interested in cause-neutrally identifying the best interventions.
However, in other ways, comparing the value of different causes can be especially challenging. Researchers must consider ethical trade-offs, uncertainty, and the potential for model errors. At its best, this means that cause prioritization can lead to the beneficial development of frameworks, metrics, and criteria that improve prioritization methods overall. At its worst, and sometimes more commonly, it just leads to lots of intuition-jousting between vague qualitative heuristics.
That said, I would encourage people to reflect carefully on their attitudes towards uncertainty before concluding that:
If we are uncertainty averse, then cross-cause prioritisation becomes much less attractive.
Whether this makes sense will depend on your specific attitudes towards uncertainty, and the specific circumstances of the case.
Specific kinds of uncertainty aversion might lead a person to favour focusing their resource allocation only on interventions where they are highly certain about the effect of the intervention. If these interventions are concentrated within a single cause, this might lead them to focus their prioritisation within that cause. Or, they might focus their prioritisation within a given cause because this will allow them to maximise their certainty about outcomes (due to domain-specific knowledge, as we discuss elsewhere).
But it's not clear that such a person should focus their prioritisation within a single cause. It may be that the interventions about which they can be most certain are not concentrated within one cause, but rather spread across different causes. If so, their search for highly certain interventions should potentially spread across causes.
It's also worth noting explicitly the difference between uncertainty about interventions and uncertainty about comparisons between interventions. Our observations above show that comparisons between interventions in different causes may often be particularly uncertain (e.g. because of uncertainty about the relative weight of human and chicken suffering). But it seems very unclear what normatively follows from this. Note that if you are uncertain about how to compare A and B, just deciding to focus your efforts on one doesn't reduce your uncertainty about the comparison at all. And deciding to focus your prioritisation effort on one just doubles down on your ignorance, by electing not to conduct the prioritisation research that would resolve your uncertainty.
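One way to see why resolving comparative uncertainty can be worth doing, rather than simply committing to one option: a toy value-of-information sketch in Python. The setup and all numbers are hypothetical (option A's value is known; option B's is drawn from an assumed lognormal prior):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical: option A's value is known (1.0); option B's is uncertain,
# with an assumed lognormal prior over its true value.
a_value = 1.0
b_draws = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)

# Commit now: pick whichever option looks better in expectation.
commit_now = max(a_value, b_draws.mean())
# Research first: learn B's value, then pick the better option each time.
research_first = np.maximum(a_value, b_draws).mean()

print(f"expected value, commit now:        {commit_now:.3f}")
print(f"expected value, research first:    {research_first:.3f}")
print(f"value of resolving the comparison: {research_first - commit_now:.3f}")
```

Under these assumptions, learning how A and B compare before allocating has positive expected value; declining to do the comparison simply forgoes that value.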
In addition, as we note elsewhere, concern about uncertainty could also push towards diversification, which likely recommends prioritisation across cause areas in order to identify less correlated interventions:
Concern over avoiding wasted efforts calls for diversifying resources across multiple causes to reduce the risk of correlated outcomes from overfocusing on one area. For instance, an organization might allocate funding across global health, AI, and biosecurity projects to ensure that a setback in one field does not derail all progress. Intervention and cause diversification, made possible through a blend of cause-level and cross-cause prioritization work, builds resilience and increases the probability of achieving impact.
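As a rough numerical sketch of the correlation point in that passage (a hypothetical setup, not anything from the piece): three equally sized bets with identical per-bet risk, differing only in how correlated their outcomes are, as when interventions sit within one cause versus across several:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sims, sigma = 100_000, 1.0

def portfolio_sd(correlation):
    # 3x3 covariance matrix: unit variances, common pairwise correlation.
    cov = sigma**2 * (np.full((3, 3), correlation) + (1 - correlation) * np.eye(3))
    outcomes = rng.multivariate_normal(np.zeros(3), cov, size=n_sims)
    # Standard deviation of an equally weighted three-bet portfolio.
    return outcomes.mean(axis=1).std()

print("highly correlated (within one cause, corr=0.9):", round(portfolio_sd(0.9), 3))
print("weakly correlated (across causes, corr=0.1):  ", round(portfolio_sd(0.1), 3))
```

Lower correlation across the bets shrinks the variance of the overall portfolio's outcome, which is the sense in which diversification across causes builds resilience.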
We're also the largest single source of people getting involved in the effective altruism community, according to the most recent EA Survey.
Just to note that the even more recent EA Survey is here, and 80K are indeed still the single source most commonly cited as important for people getting involved!
Thanks James!
We previously asked this question in 2019. The modal response was 0 such connections, followed by 1-2, though there was a long tail of respondents with >10 connections. Connections were also far higher for highly engaged EAs (60.7% of highly engaged EAs had >10 connections, compared to 13.7% of considerably engaged (4/5) EAs and <2% of anyone less engaged).
We cut the question due to lack of space and because it was not prioritized by core orgs making requests of us. But we'd be happy to reintroduce it if there is sufficient interest.
Agreed. I think this is another important confound.
People's concern for relative status may seem clearer when we consider cases of 'moving up' into an area where people are relatively wealthier: even if the environment were materially much nicer, I think most people would find it very salient if they were the only non-wealthy person there.