
The cause prioritization landscape in EA is changing.

  • Focus has shifted away from evaluation of general cause areas or cross-cause comparisons, with the vast majority of research now comparing interventions within particular cause areas.
  • Artificial Intelligence does not comfortably fit into any of the traditional cause buckets of Global Health, Animal Welfare, and Existential Risk. As EA becomes increasingly focused on AI, traditional cause comparisons may ignore important considerations.
  • While some traditional cause prioritization cruxes remain central (e.g. animal vs. human moral weights, cluelessness about the long-term future), we expect that new cruxes have emerged that are important for people’s giving decisions today but have received much less attention.

We want to get a better picture of what the most pressing cause prioritization questions are right now. This will help us, as a community, decide what research is most needed and open up new lines of inquiry. Some of these questions may be well known in EA but still unanswered. Some may be known elsewhere but neglected in EA. Some may be brand new. To elicit these cruxes, consider the following question:

Imagine that you are to receive $20 million at the beginning of 2026. You are committed to giving all of it away, but you don’t have to donate on any particular timeline. What are the most important questions that you would want answers to before deciding how, where, and when to give?

4 Answers

Great question, thank you for working on this. An inter-cause-prio-crux that I have been wondering about is something along the lines of:

"How likely is it that a world where AI goes well for humans also goes well for other sentient beings?"

It could probably be much more precise and nuanced, but specifically, I would want to assess whether "trying to make AI go well for all sentient beings" is marginally better supported through directly related work (e.g., AIxAnimals work) or through conventional AI safety measures - the latter of which would be supported if, e.g., making AI go well for humans inevitably also makes it go well for all sentient beings, or is a necessary precondition for doing so. If it is merely necessary, the answer would further depend on how likely AI is to go well for humans, and so on; but I think a general assessment of AI futures that go well for humans would be a great and useful starting point for me.

I also think explicit estimates of exactly how neglected a (sub-)cause area is (e.g., in FTEs or total funding) would greatly inform some inter-cause-prio questions I have been wondering about. Assuming that explicit marginal cost-effectiveness estimates aren't really possible, this seems like the most common proxy I rely on, yet one I am missing solid numbers for.
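To illustrate (not as a recommendation): here is a minimal sketch of how such neglectedness figures could be turned into a crude, comparable proxy score. The cause areas, FTE counts, funding figures, and the weighting are all hypothetical placeholders, not real estimates.

```python
# Hypothetical sketch: using neglectedness (FTEs and annual funding) as a proxy
# when explicit marginal cost-effectiveness estimates aren't available.
# All names and figures below are placeholders, not real data.

from dataclasses import dataclass


@dataclass
class SubCause:
    name: str
    ftes: float            # full-time-equivalent people working on the area
    annual_funding: float  # USD per year

    def neglectedness_proxy(self) -> float:
        # Crude proxy: fewer people and less money -> more neglected.
        # The $100k-per-FTE conversion and the functional form are arbitrary.
        return 1.0 / (self.ftes + self.annual_funding / 100_000)


areas = [
    SubCause("AI x animals", ftes=15, annual_funding=2_000_000),
    SubCause("Conventional AI safety", ftes=400, annual_funding=300_000_000),
]

for area in sorted(areas, key=lambda a: a.neglectedness_proxy(), reverse=True):
    print(f"{area.name}: neglectedness proxy = {area.neglectedness_proxy():.5f}")
```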

I'd like to see more rigorous engagement with big questions like where value comes from, what makes a good future, how we know, and how this affects cause prioritization. I think it's generally assumed that "consciousness is where value comes from, so maximize it in some way." Yet some of the people who have studied consciousness most closely from a phenomenological perspective (e.g., Zen masters, Tibetan lamas, and other contemplatives) seem not to think that, let alone endorse scaling it to cosmic levels. Why? Is third-person philosophical analysis alone missing something?

The experience of these practitioners adds up to millions of person-years of contemplation, accumulated over thousands of years. If we accept this as a sort of "long reflection," what does that mean? If we don't, what do we envision differently, and why? And are we really going to be able to do serious, sustained reflection if/once we have everything we think we want within our grasp due to strong AI?

These are the kinds of things I'm currently thinking through most in my spare time and writing my thoughts up on. 

Here are some cruxes that I don't see addressed as much:

If there is a future with sentient beings living in it, are their lives, on average, likely to be net positive or net negative?

This bears on all existential risk cause areas. If preventing existential risk is possible, but the future is net negative, then such interventions may be harmful, since they increase s-risks.
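To make the sign-flip concrete, here is a minimal worked example with entirely hypothetical numbers; the probabilities and welfare values are placeholders, and extinction is simplistically treated as zero value.

```python
# Hypothetical illustration of the point above: the expected value of reducing
# extinction risk flips sign with the expected welfare of the surviving future.
# All numbers are made up for illustration only.

def value_of_xrisk_reduction(p_extinction_before: float,
                             p_extinction_after: float,
                             expected_future_value: float) -> float:
    """Change in expected value from lowering the extinction probability.

    expected_future_value is the signed welfare of the future conditional on
    survival; extinction is treated as zero value for simplicity.
    """
    survival_gain = p_extinction_before - p_extinction_after
    return survival_gain * expected_future_value


# Net-positive future: the risk reduction looks clearly good (roughly +20)...
print(value_of_xrisk_reduction(0.10, 0.08, +1000))
# ...net-negative future: the identical intervention looks harmful (roughly -20).
print(value_of_xrisk_reduction(0.10, 0.08, -1000))
```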

How much will the governance of developed countries influence the governance of underdeveloped countries?

A country that does not value net welfare at all, but instead values dominance and power, can transfer its values to the rest of the world through popular media and economic dependency. If this happens, it could lead to a future where coordination on global issues becomes difficult, if not impossible.

How politicized will AI get in the next (1, 2, 5) years, and what will those trends look like?

I think we as a community are investing more in policy/advocacy/research, but the value of these efforts might be heavily a function of the politicization/toxicity of AI. Not a strong prior, but I'd assume that OMB and think tanks get to write a really large share of the policy for boring, non-electorally-dominant issues, yet have much less hard power when the issue at hand is something like healthcare, crime, or immigration.

 

Comments (1)

This may be less fun but, for completeness, I want to present an alternative perspective -- I think I know exactly how I'd spend it and don't have any particular questions. Feel free to send the $20M over whenever works best.
