
The cause prioritization landscape in EA is changing.

  • Focus has shifted away from evaluation of general cause areas or cross-cause comparisons, with the vast majority of research now comparing interventions within particular cause areas.
  • Artificial Intelligence does not comfortably fit into any of the traditional cause buckets of Global Health, Animal Welfare, and Existential Risk. As EA becomes increasingly focused on AI, traditional cause comparisons may ignore important considerations.
  • While some traditional cause prioritization cruxes remain central (e.g. animal vs. human moral weights, cluelessness about the longterm future), we expect new cruxes have emerged that are important for people’s giving decisions today but have received much less attention.

We want to get a better picture of what the most pressing cause prioritization questions are right now. This will help us, as a community, decide what research is most needed and open up new lines of inquiry. Some of these questions may be well known in EA but still unanswered. Some may be known elsewhere but neglected in EA. Some may be brand new. To elicit these cruxes, consider the following question:

Imagine that you are to receive $20 million at the beginning of 2026. You are committed to giving all of it away, but you don’t have to donate on any particular timeline. What are the most important questions that you would want answers to before deciding how, where, and when to give?


Great question, thank you for working on this. An inter-cause-prio-crux that I have been wondering about is something along the lines of:

"How likely is it that a world where AI goes well for humans also goes well for other sentient beings?"

It could probably be made much more precise and nuanced, but specifically, I would want to assess whether "trying to make AI go well for all sentient beings" is better supported on the margin by directly related work (e.g., AIxAnimals work) or by conventional AI safety work. The latter would be supported if, for example, making AI go well for humans inevitably makes it go well for all sentient beings, or is a necessary precondition for that. If it is merely necessary, the answer would depend further on how likely AI is to go well for humans; either way, a general assessment of AI futures that go well for humans would be a great and useful starting point for me.

I also think explicit estimates of exactly how neglected a given (sub-)cause area is (e.g., in FTEs or total funding) would greatly inform some inter-cause-prioritization questions I have been wondering about. Assuming explicit marginal cost-effectiveness estimates aren't really possible, neglectedness is the proxy I most commonly fall back on, and I am missing solid numbers for it.

I would like to see research on how to increase the welfare of soil animals and microorganisms, including on how to build capacity to do this. I think whether interventions increase or decrease welfare in expectation is determined by highly uncertain but dominant effects on such potential beings. I do not even know whether electrically stunning shrimp increases or decreases overall welfare, even if I were certain it increased the welfare of the shrimp themselves, conditional on them being sentient.

As a panpsychist and suffering abolitionist, I'm one of the most sympathetic people in the world to the cause of reducing suffering even in the smallest beings. And yet, I do not want to see more research on how to increase the welfare of microorganisms on the margin (or at least not with EA resources).

I probably won't change your mind about metaethics, but I strongly disagree with the aggregationist QALY approach to comparing the welfare of humans vs. e.g. soil animals (e.g. here). I hope to write more about this at some point, but as an intuition pump, I think there's a good chance that the problem of reducing soil animal or microorganism suffering is somewhat analogous to the problem of reducing, say, pin pricks in humans. I would not support EA efforts to reduce the number of pin pricks in humans, no matter how vast that number, given that we also have humans who are actually being tortured right now.

As much as I care about insects and other small organisms, I'm really sad that the EA community invests far more resources into their well-being than into reducing torture in humans (e.g. there are 120 EA Forum posts on Invertebrate Welfare and only 7 on cluster headache; and there isn't even an... (read more)

Vasco Grilo🔸
Thanks for sharing your thoughts, Alfredo! Would you avert 2 h of pain of intensity 0.999 instead of 1 h of pain of intensity 1? If so, would you avert 4 h (= 2*2) of pain of intensity 0.998 (= 0.999^2) instead of 2 h of pain of intensity 0.999? If so, why not generalise, and conclude you would avert 2^N h of pain of intensity 0.999^N instead of 1 h of pain of intensity 1?

You could endorse this, and still value averting pain more than proportionally to its intensity. The expected pain averted by picking the 1st option would be 2^N*0.999^N/(1*1) = 1.998^N times the expected pain averted by picking the 2nd option. However, averting sufficiently many hours of pain of a very low intensity would still be better than averting 1 h of pain of a very high intensity.

I think less funding is a better proxy for higher cost-effectiveness than fewer EA Forum posts. Do you know how much funding is spent globally per year on preventing human torture?

I am not aware of any project studying the welfare of soil springtails, mites, and nematodes, which are the most abundant soil animals. There are no results for “springtail”, “mite”, and “nematode” on Wild Animal Initiative's (WAI's) grantees page.
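For concreteness, here is a minimal sketch of the 1.998^N arithmetic above, under the comment's assumption that expected pain averted is simply duration times intensity (all numbers illustrative):

```python
def expected_pain_ratio(n: int, duration_factor: float = 2.0,
                        intensity_factor: float = 0.999) -> float:
    """Ratio of expected pain averted by option 1 (2^n hours at intensity
    0.999^n) to option 2 (1 hour at intensity 1), assuming expected pain
    averted = duration * intensity."""
    option_1 = (duration_factor ** n) * (intensity_factor ** n)
    option_2 = 1.0 * 1.0
    return option_1 / option_2

for n in (1, 10, 100):
    print(n, expected_pain_ratio(n))  # grows as 1.998^n: ~2, ~1.0e3, ~1.2e30
```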
Alfredo Parra 🔸
Thanks for your answer! :) I think the procedure might not be generalizable, for the following reason. I currently think that a moment of conscious experience corresponds to a specific configuration of the electromagnetic field. As such, it can undergo phase transitions, analogous to how water goes abruptly from liquid to gas at 100°C. Using the 1-dimensional quantity "temperature" can be useful in some contexts but is insufficient in others. Steam is not simply "liquid water but a bit warmer"; steam has very different properties altogether.

To extend this (very imperfect) analogy, imagine we lived in a world where steam killed people but (liquid) water didn't (because of properties specific to steam, like being inhalable or something). In this case, the claim "reducing sufficiently many units of lukewarm water would still be better than reducing a unit of steam" would miss the point by the lights of someone who cares about death. (Here are some thoughts on phase transitions in certain altered states of consciousness.)

I don't know how much funding is spent globally per year on preventing human torture! That's the sort of question I'd like to see more research on (or discussed more on the Forum if such research already exists), as well as which torture-prevention orgs/programs are most cost-effective, etc.
Vasco Grilo🔸
Thanks for clarifying. I agree pains of different intensities have different properties. My understanding is that the Welfare Footprint Institute (WFI) relies on this to some extent to define their 4 pain categories. However, I do not understand how that undermines my point. Water and water vapor have different properties, but we can still compare their temperature. Likewise, I think we can compare the intensity of different pain experiences even if they have different properties.

I seem to agree. Assuming water had a potential to kill people of exactly 0, and steam had a potential to kill people above 0, no amount of water would have the potential to kill as many people as some amount of steam. However, I do not think this undermines my point. When I say that "averting sufficiently many hours of pain of a very low intensity would still be better than averting 1 h of pain of a very high intensity", the very low intensity still has to be higher than an intensity of exactly 0.

Approximately how much absorptive capacity/room for more funding is there in each cause area? How many good additional opportunities are there beyond what is currently being funded? How steep are the diminishing returns on an additional $10m, $50m, $100m, $500m?
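As one illustrative way to operationalize "how steep", one could assume impact grows logarithmically with total funding in a cause area and read off the marginal impact of each extra tranche. The functional form and the $200M baseline below are assumptions made for the sketch, not claims about any actual cause area:

```python
import math

def impact(total_funding_m: float) -> float:
    """Toy model: impact proportional to the log of total funding (in $M)."""
    return math.log(total_funding_m)

baseline_m = 200.0  # assumed funding already in the cause area (illustrative)
for extra_m in (10, 50, 100, 500):
    marginal = impact(baseline_m + extra_m) - impact(baseline_m)
    print(f"+${extra_m}M: marginal impact {marginal:.3f}, "
          f"per $M {marginal / extra_m:.5f}")
```

Under this toy model the impact per marginal dollar roughly halves between the $10m and the $500m tranche; the empirical question is what the real curve looks like in each cause area.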

EA is pivoting hard into policy development, political organising, and comms work, driven from the AI side of things.

To what extent is it possible to leverage the resulting political-ability capital EA develops towards lobbying governments for the restoration of effective international aid commitments?

(The ability, or lack thereof, to dual-use political infrastructure across EA causes affects the extent to which I would personally whack $20 million at political infrastructure development for EA, given that I am not convinced AI safety is far and away the top cause area.)

This is a very pertinent question. I will be interested to see in the next several years how this capital is leveraged - I would recommend checking out @Open Philanthropy's current strategy on Global Aid Policy. They are already doing some great work in this area.

This might not fit the idea of a prioritization question, but it seems like there are a lot of "sure bets" in global development, where you can feel highly confident an intervention will be useful, and not that many in AI-related causes (where there is a high chance an intervention either ends up doing nothing or being harmful), with animal welfare somewhere in between. It would be interesting to find projects in global development that look good for risk-tolerant donors, and ones in AI (and maybe animal welfare or other "longtermist" causes) that look good for less risk-tolerant donors.

A lot of what @Open Philanthropy does in GHD is pretty high-risk stuff. Risk-tolerant GHD donors could do worse than look at what they are funding and maybe get on the bandwagon.

Some questions that might be cruxy and important for money allocation: 

Given some evidence that superforecaster aggregation might underperform on AI capabilities questions, how should epistemic weight be distributed between generalist forecasters, domain experts, and algorithmic prediction models? What evidence exists, or can be gathered, about their relative track records? (An illustrative sketch of what such a weighting could look like follows after these questions.)

Are there better ways to do cost-effectiveness analysis (CEA) for AI safety? What are they?

Is there productive work to be done on inter-cause comparison among new potential cause areas (e.g. digital minds, space governance)? What types of assumptions do these comparisons rely on? I ask because it seems like people typically go into these fields because "woah, those numbers are really big," but that sort of reasoning applies to lots of these fields and doesn't tell you very much about resource distribution.

What are the reputational effects for EA (for people inside and outside the movement) of going (more) all in on certain causes and then being wrong (e.g. if AI is, and continues to be, a bubble)? Should this change how all in EA goes on particular causes? Under what assumptions?
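One way to make "distributing epistemic weight" (from the first question above) concrete is to pool the three sources' probability forecasts with explicit weights. The sketch below uses a weighted geometric mean of odds; the forecasts and the weights are made up purely for illustration:

```python
import math

def pool_probabilities(probs, weights):
    """Weighted geometric-mean-of-odds pooling of probability forecasts."""
    total_weight = sum(weights.values())
    mean_log_odds = sum(
        weights[source] * math.log(p / (1.0 - p)) for source, p in probs.items()
    ) / total_weight
    return 1.0 / (1.0 + math.exp(-mean_log_odds))

# Illustrative forecasts for some AI capabilities question (all numbers made up).
forecasts = {"generalist_forecasters": 0.10, "domain_experts": 0.40, "ml_models": 0.25}
weights = {"generalist_forecasters": 0.5, "domain_experts": 0.3, "ml_models": 0.2}
print(pool_probabilities(forecasts, weights))  # ~0.19
```

The substantive question is what track-record evidence (e.g. calibration on resolved AI questions) would justify one set of weights over another.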

Some questions which feel alive for me:

  1. Should we expect risks to come from categories and places we can see coming, or from places we would not have anticipated beforehand? What proportion of the largest risks are black swans?
  2. How do we incorporate a term for doing good in a way that helps us do more good in the future? Companies can sell stock, whereas nonprofits can't; the more Elon Musk does the more money he makes, even while pursuing some notion of the good, whereas the more a philanthropist gives away the less he has. This seems like a strategic disadvantage. This is more like an operational decision than like a research question though.

I'd like to see more rigorous engagement with big questions like where value comes from, what makes a good future, how we know, and how this affects cause prioritization. I think it's generally assumed "consciousness is where value comes from, so maximize it in some way." Yet some of the people who have studied consciousness most closely from a phenomenological perspective seem to not think that (e.g. zen masters, Tibetan lamas, other contemplatives, etc), let alone scale it to cosmic levels. Why? Is third person philosophical analysis alone missing something? 

The experiences of these people add up to millions of years of contemplation across thousands of years. If we accept this as a sort of "long reflection" what does that mean? If we don't, what do we envision differently and why? And are we really going to be able to do serious sustained reflection if/once we have everything we think we want within our grasp due to strong AI?

These are the kinds of things I'm currently thinking through most in my spare time and writing my thoughts up on. 

If I were committed to allocating $20M starting in 2026, the key uncertainties I would want resolved before deciding how to give fall into two clusters: questions about the near term tractability of global health interventions and questions about the risk landscape surrounding increasingly capable AI systems. What I find most neglected is research on how these domains interact.

1. How do we compare the marginal value of global health scaling vs. AI safety risk mitigation, given uncertainty around timelines and tractability?
Much of the Global Health and Development community has built strong methodologies for cost-effectiveness modeling and iteration. AI safety, by contrast, remains dominated by high uncertainty and expert priors. I would want to better understand whether some AI governance or alignment work can be made more measurable such that it can be compared, even imperfectly, to global health opportunities like malaria control, vaccine delivery, or unconditional cash transfers. This feels especially relevant given @Open Philanthropy's portfolio across both spaces.

2. What are the highest leverage AI related opportunities for improving global health outcomes?
Even before AGI, frontier systems will shape how health systems operate. We are already seeing early benefits (diagnostics, forecasting, logistics) and risks (deepfake misinformation, automated biodesign, widening inequalities). I would want rigorous analysis of whether supporting AI policy capacity in LMICs, or designing AI systems specifically to serve low-resource contexts, could outperform traditional global health spending on a per-dollar basis. @Evidence Action-style, evidence-based delivery may become relevant here.

3. What is the risk profile of AI deployed in global health, and does it create new systemic vulnerabilities?
For example, how does reliance on AI created surveillance or epidemiological systems change the risk of catastrophic misuse or failure? Could global health deployments inadvertently increase existential bio risk? This feels like a question currently falling between cause areas and I am not sure any actor is systematically prioritizing it.

4. What are the optimal philanthropic strategies if AI timelines shorten significantly?
Should a donor pivot from long term global health institution building to technical alignment research or policy advocacy? Or is it more valuable to improve the resilience and welfare of populations who will otherwise be least protected from AI driven shocks? This is a core strategic question for anyone trying to maximize expected value across time.

If I had to name the most important meta-question: what frameworks allow us to compare uncertain, systemic, long-tail risk reduction (e.g. alignment, global governance) with concrete, short-timeline health and development interventions, without resorting to hand-waving or relying purely on moral intuitions?
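One partial answer, sketched below purely for illustration (all parameters invented), is to compare full outcome distributions rather than point estimates: for example, a Monte Carlo draw for a "concrete" intervention with a narrow impact distribution against a "long-tail" one whose expected value comes almost entirely from rare scenarios.

```python
import random

random.seed(0)

def concrete_intervention():
    """Toy 'sure bet': impact per dollar tightly clustered around 1 unit."""
    return random.gauss(1.0, 0.2)

def long_tail_intervention():
    """Toy long-tail bet: almost always ~0, huge impact in rare scenarios."""
    return 10_000.0 if random.random() < 0.001 else 0.0

n_draws = 100_000
concrete = [concrete_intervention() for _ in range(n_draws)]
long_tail = [long_tail_intervention() for _ in range(n_draws)]
print(sum(concrete) / n_draws)   # ~1.0
print(sum(long_tail) / n_draws)  # ~10 in expectation, but most draws are 0
```

Even with the distributions in hand, whether to rank by expected value, by a risk-adjusted criterion, or with explicit moral weights is exactly the framework question being asked.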

I think that expanding cross-cause prioritization frameworks to include AI safety explicitly, and especially to explore interactions between AI and global health, is a major gap in current EA work. If I had $20M and time to wait, the research agenda I'd want to commission would sit precisely at that intersection.

Which work that might reduce animal suffering looks effective (or positive at all) once we stop ignoring or downplaying the backfire risks from unintended indirect effects? See What to do about near-term cluelessness in animal welfare and If wild animal welfare is intractable, everything is intractable.

"What does 'negative lives' really look like?"

Given its gravity, I find this question underexplored and current tools (QALYs, WELLBYs, YLSSs etc.) either underdeveloped or inadequate. At the same time I think answers should fit into existing frameworks when possible, since these tools often form a cornerstone, or are heavily implied, when doing cause prioritisation.

There are at least two things I would like to know more about:

  1. When does the transition from positive to negative lives happen? What characterizes this shift? Some of the above tools have a defined neutral point, but in practice, evaluations around it are somewhat distorted.

  2. How far in the negative is it possible to be compared to the positive?

I think it should be possible to at least get some empirical anchors tied to commonly used frameworks, preferably more than one.
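As a toy illustration of why the neutral point matters (all numbers invented): the same life-satisfaction data can read as net-positive or net-negative lives depending on where the neutral point is set.

```python
# Toy WELLBY-style sum: wellbeing is measured relative to an assumed neutral
# point on a 0-10 life-satisfaction scale (all numbers illustrative).
life_satisfaction_scores = [2.5, 3.0, 3.5, 4.0]  # hypothetical population

for neutral_point in (2.0, 5.0):
    net_wellbeing = sum(score - neutral_point for score in life_satisfaction_scores)
    print(f"neutral point {neutral_point}: net wellbeing {net_wellbeing:+.1f}")

# neutral point 2.0: net wellbeing +5.0 (lives read as net positive)
# neutral point 5.0: net wellbeing -7.0 (lives read as net negative)
```

Empirical anchors for where that neutral point sits, and for how far below it lives can go, would change which of these readings is correct.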

(And I have some ideas for how to do so, but it doesn't seem like there are many funding opportunities, so I'd probably want $500 000 for myself to pursue them, or maybe just $50 000 to do some small tests)

This is interesting to me as well. It seems like more of a philosophical question to me, but I have not given it thorough enough consideration to say. If you don't mind sharing, how would empirical anchors inform this?

This seems a bit related to the “Pivotal questions”: an Unjournal trial initiative -- we've engaged with a small group of organizations and elicited some of these -- see here.

To highlight some that seem potentially relevant to your ask:

What are the effects of increasing the availability of animal-free foods on animal product consumption? Are alternatives to animal products actually used to replace animal products, and especially those that involve the most suffering? Which plant-based offerings are being used as substitutes versus complements for animal products and why?

Wellbeing measures: how to convert between DALY and WELLBY welfare measurements when assessing charities and interventions.

Is the WELLBY the most appropriate (useful, reliable, ...) measure [for interventions that may have impacts on mental health]?

What is cell-cultured meat likely to cost, by year, as a function of the level of investments made?

How often do countries honor their (international) agreements in the event of large catastrophes (and what determines this?)

How probable is it that cell-cultured meat will gain widespread consumer acceptance, and to what timescale? To what extent will consumers replace conventional meat with cell-cultured meat?

How important is democracy for resilience against global catastrophic risk?

How generalizable is evidence on the effectiveness of corporate animal welfare outreach [in the North] to the Global South?

How much will the US government use subjective forecasting approaches (in the way the DoD does) in the next ~50 years?

How politicized will AI get in the next (1, 2, 5) years, and what will those trends look like?

I think we as a community are investing more in policy/advocacy/research, but the value of these things might be heavily a function of the politicization/toxicity of AI. Not a strong prior, but I'd assume that OMB and think tanks get to write a really large share of the policy for boring, non-electorally-dominant issues, but have much less hard power when the issue at hand is something like healthcare, crime, or immigration.

 

Here are some cruxes that I don't see addressed as much:

If there is a future with sentient beings living in it, are their lives, on average, likely to be net positive or net negative?

This weighs on all existential risk cause areas. If preventing existential risk is possible but the future is net negative, then such interventions may be harmful, due to increasing s-risks.

How much will the governance of developed countries influence the governance of underdeveloped countries?

A country that does not value net welfare at all but instead dominance and power can transfer its values to the rest of the world through popular media and economic dependency. If this happens, it could lead to a future where coordination on issues becomes difficult, if not impossible.

I would like to know more about the tractability of economic growth research and interventions in low-income nations. It seems like it has the potential to be much more effective than traditional global health interventions, but there's a lot of uncertainty surrounding it. 

I've done very little research into this, but I would also like to know if economic growth reduces the risk of large-scale anthropogenic violence. Perhaps people living rich and happy lives are much less likely to do things that increase the risk of various global catastrophes occurring. Perhaps the opposite is true. 

Most of these aren't so much well-formed questions, as research/methodological issues I would like to see more focus on:

  • Operationalisations of AI safety that don't exacerbate geopolitical tensions with China - or ideally that actively seek ways to collaborate with China on reducing the major risks.
  • Ways to materially incentivise good work and disincentivise bad work within nonprofit organisations, especially effectiveness-minded organisations
  • Ways to do data-driven analyses of political work, especially advocacy; correct me if I'm wrong, but recommendations in the EA space for political advocacy seem to boil down to a lot of gut instinct about whether someone having successfully executed Project A means their work on Project B has high expected value
  • Research into the difficulty of becoming a successful civilisation after recovery from civilisational collapse (I wrote more about this here)
  • How much scope is there for more work or more funding in the nuclear safety space, and what is its current state? Last I heard, it had lost a bunch of funding, such that highly skilled/experienced diplomats in the space were having to find unrelated jobs. Is that still true? 

1. Ultimate Insurance or Immediate Band-Aid?

Do we buy the ultimate fire insurance to protect the whole future, or do we buy immediate bandages for today's wounds?

2. Lawyers or Leverage?

Are the lawyers forcing us to spend this money today, or can we let it grow bigger for maximum impact later?

3. Which Charity is Broke?

Out of all the good charities, which one is actually running on empty, and which ones already have enough cash?

Footnote:

Urgent Safety Funding for Interactive Tech: Future inseparable (non-detachable, e.g., smart-home integration, always-on biometrics) and portable (detachable) interactive devices pose a profound, new risk of subliminal coercion and mass control.

Dedicated funding is immediately required to establish independent, international safety regulation, specifically to test against hidden, AI-driven manipulative techniques (subliminal influencing) to safeguard the autonomy of the masses against exploitation by powerful AI systems or unethical organizations. 

This is the ultimate fire insurance for cognitive freedom.


This may be less fun but, for completeness, I want to present an alternative perspective -- I think I know exactly how I'd spend it and don't have any particular questions. Feel free to send the $20M over whenever works best.

Call off the Long Reflection

Completely agree, just give @Peter Wildeford the money right now. Every minute we delay is lost expected value...

Thanks for posting this! I will be coming back to this post next year when I'm planning debate weeks. 
 
