
As a community, EA sometimes talks about finding "Cause X" (example 1, example 2).

The search for "Cause X" featured prominently in the billing for last year's EA Global (a).

I understand "Cause X" to mean "new cause area that is competitive with the existing EA cause areas in terms of impact-per-dollar."

This afternoon, I realized I don't really know how many people in EA are actively pursuing the "search for cause X." (I thought of a couple people, who I'll note in comments to this thread. But my map feels very incomplete.)

9 Answers

In my understanding, "Cause X" is something we almost take for granted today, but that people in the future will see as a moral catastrophe (similar to how we now see slavery, versus how people in the past saw it). So it has a bit more nuance than just being a "new cause area that is competitive with the existing EA cause areas in terms of impact-per-dollar".

I think there are many candidates that seem to be overlooked by the majority of society. You could also argue that none of these is a real Cause X, since they are still recognised as problems by a large number of people. But this could just be the baseline of "recognition" a neglected moral problem will start from in a world as interconnected as ours. Here is what comes to mind:

  • Wild animal suffering (probably not recognised as a moral problem by the majority of the population)
  • Aging (many people probably ascribe it a neutral moral value, maybe because it is regarded as a "natural part of life". That description may be right, but it doesn't settle its moral value or how many resources we should devote to the problem)
  • "Resurrection" or, in practice, right now, cryonics. (Probably neutral value/not even remotely in the radar of the general population, with many people possibly even ascribing it a negative moral value)
  • Something related to subjective experience? (Aspects of subjective experience that people don't deem worthy of moral value because "times are still too rough to notice them", or aspects of subjective experience that we are missing out on but could achieve today with the right interventions.)

Cause areas that I think don't fit the definition above:

  • Mental Health, since it is recognised as a moral problem by a large enough fraction of the population (though still probably not large enough?), even if it remains too neglected.
  • X-risk. Recognised as a moral problem (who wants the apocalypse?) but too neglected for reasons probably not related to ethics.

But who is working on finding Cause X? I believe you could argue that every organisation devoted to finding new potential cause areas is. You could probably argue that moral philosophers, or even just thoughtful people, have a chance of recognising it. I'm not sure if there is a project or organisation devoted specifically to this task, but judging from the other answers here, probably not.

I believe you could argue that every organisation devoted to finding new potential cause areas is.

What organizations do you have in mind?

Emanuele_Ascani: Open Philanthropy, GiveWell, and Rethink Priorities probably qualify. To clarify: my phrase didn't mean "devoted exclusively to finding new potential cause areas".

Thanks!

Very curious why this was downvoted. (This idea has been floated before, e.g. on the 80,000 Hours podcast, and seems like a plausible Cause X.)

I think working to prevent the collapse of civilization, given loss of electricity/industry due to an extreme solar storm, high-altitude electromagnetic pulses, or a narrow-AI computer virus, is a Cause X (disclaimer: I'm a co-founder of ALLFED).

This is not a solution/answer, but someone should design a clever way for us to be constantly searching for Cause X. I think a general contest could help, such as an "Effective Thesis Prize" to reward good work aligned with EA goals; perhaps Cause X could be the aim of a contest of its own.

Rethink Priorities seems to be the obvious organization focused on this.

From their website:

Right now, our research agenda is primarily focused on:

  • prioritization and research work within interventions aimed at nonhuman animals (as research progress here looks uniquely tractable compared to other cause areas)
  • understanding EA movement growth by running the EA Survey and assisting LEAN and SHIC in gathering evidence about EA movement building (as research here looks tractable and neglected)

Sounds like they're currently focused on new animal welfare & community-building interventions, rather than finding an entirely different…

We're also working on understanding invertebrate sentience and wild animal welfare - maybe not "cause X" because other EAs are aware of this cause already, but I think it will help unlock important new interventions.

Additionally, we're doing some analysis of nuclear war scenarios and paths toward non-proliferation. I think this is understudied in EA, though again maybe not "cause X" because EAs are already aware of it.

Lastly, we're also working on examining ballot initiatives and other political methods of achieving EA aims - maybe not cause X because it isn't a new cause area, but I think it will help unlock important new ways of achieving progress on our existing causes.

Milan Griffes: Thanks! Is there a public-facing prioritized list of Rethink Priorities projects? (Just curious)

Peter Wildeford: Right now everything I mentioned is in https://forum.effectivealtruism.org/posts/6cgRR6fMyrC4cG3m2/rethink-priorities-plans-for-2019 We're working on writing up an update.

Between this, some ideas about AI x-risk and progress, and the unique position of the EA community, I'm beginning to think that "move Silicon Valley to cooperate with the US government and defense on AI technology" is Cause X. I intend to post something substantial in the future.

[anonymous]

Can you expand on this answer? E.g. how much this is a focus for you, how long you've been doing this, how long you expect to continue doing this, etc.

Peter Wildeford: I'd refer you to the comments of https://forum.effectivealtruism.org/posts/AChFG9AiNKkpr3Z3e/who-is-working-on-finding-cause-x#Jp9J9fKkJKsWkjmcj

[anonymous]: The link didn't work properly for me. Did you mean the following comment?

Peter Wildeford: Yep :)

GiveWell is searching for cost-competitive causes in many different areas (see the "investigating opportunities" table).

Good point. Plausibly this is Cause X research (especially if they team up with Mark Lutter & co.); I'll be curious to see how far outside their traditional remit they go.

Arguably it was the philosophers who found the last few. Once the missing moral reasoning was shored up, the cause-area conclusion followed pretty deductively.

12 Comments

One great example is the pain gap / access abyss. Only coined around 2017, it got some attention at EA Global London 2017 (?), then OPIS stepped up. I don't think the OPIS staff were doing a cause-neutral search for this (they were founded in 2016) so much as it was independent convergence.

Their website suggests it wasn't independent.

'The primary issue for OPIS is the ethical imperative to reduce suffering. Linked to the effective altruism movement, they choose causes that are most likely to produce the largest impact, determined by what Leighton calls “a clear underlying philosophy which is suffering-focused”.'

I may be wrong, but I remember reading an EA profile report and seeing Leighton comment that it inspired OPIS's move toward working on the problem.

Michael Plant's cause profile on mental health seems like a plausible Cause X.

Wild-animal-suffering research seems like a plausible Cause X.

Founders Pledge cause report on climate change seems like a plausible Cause X.

I've always thought of "Cause X" as a theme for events like EAG, meant to prompt thinking in EA, and never intended as something to take seriously and literally in actual EA action. If it was intended to be that, I don't think it ever should have been, and I don't think it should be treated as such either. I don't see how it makes sense to anyone as a practical pursuit.

There have been some cause prioritization efforts that took 'Cause X' seriously. Yet given the presence of x-risk reduction in EA as a top priority, the #1 question has been to verify the validity and soundness of the fundamental assumptions underlying x-risk reduction as the top global priority. That's because, by its nature, whether x-risk is or isn't the top priority is basically binary, depending on the overall soundness of the fundamental assumptions behind it. For prioritizers willing to work within the boundary that the assumptions determining x-risk as the top moral priority are all true, cause prioritization has focused on how actors should be working on x-risk reduction.

Since the question was reformulated as "Is x-risk reduction Cause X?," much cause prioritization research has been reduced to research on questions in relevant areas of still-great uncertainty (e.g., population ethics and other moral philosophy, forecasting, etc.). As far as I'm aware, no other cause pri efforts have been predicated on the theme of 'finding Cause X.'

In general, I've never thought it made much sense. Any cause that has gained traction in EA already entails a partial answer to that question, along some common lines that arguably define what EA is.

While they're disparate, all the causes in EA combine some form of practical aggregate consequentialism with global-scale interventions to impact the well-being of as large a population as feasible, within whatever other constraints one is working with. This is true of the initial cause areas EA prioritized: global poverty alleviation; farm animal welfare; and AI alignment. Other causes, like public policy reform, life extension, mental health interventions, wild animal welfare, and other existential risks, all fit with this framework.

It's taken for granted in EA conversations, but there are shared assumptions that go into this common perspective that distinguish EA from other efforts to do good. If someone disagrees with that framework, and has different fundamental assumptions about what is important, then they naturally sort themselves into different kinds of extant movements that align better with their perspective, such as more overtly political movements. In essence, what separates EA from any other movement, in terms of how any of us, and other private individuals, choose in which socially conscious community to spend our own time, is the different assumptions we make in trying to answer the question: 'What is Cause X?'

They're not brought to attention much, but there are sources outlining what the 'fundamental assumptions' of EA are (what are typically called 'EA values'), which I can provide upon request. Within EA, I think pursuing what someone thinks Cause X is takes the following form:

1. If one is confident one's current priority is the best available option one can realistically impact within the EA framework, working on it directly makes sense. An example is the work of any EA-aligned organization permanently dedicated to one or more specific causes, and efforts to support them.

2. If one is confident one's current priority is the best available option, but one needs more evidence to convincingly justify it as a plausible top priority in EA, or doesn't know how individuals can do work to realistically have an impact on the cause, doing research to figure that out makes sense. An example of this kind of work is the research Rethink Priorities is undertaking to identify crucial evidence underpinning fundamental assumptions in causes like wild animal welfare.

3. If one is confident the best available option one will identify is within the EA framework, but has little to no confidence in what those options will be, it makes sense to do very fundamental research that intellectually explores the principles of effective altruism. An example of this kind of work in EA is that of the Global Priorities Institute.

As far as I'm aware, no other cause pri efforts have been predicated on the theme of 'finding Cause X.'

https://www.openphilanthropy.org/research/cause-reports

I don't see how it makes sense to anyone as a practical pursuit.

GiveWell & Open Phil have at times undertaken systematic reviews of plausible cause areas; their general framework for this seems quite practical.

That's because, by its nature, whether x-risk is or isn't the top priority is basically binary, depending on the overall soundness of the fundamental assumptions behind it.

Pretty strongly disagree with this. I think there's a strong case for x-risk being a priority cause area, but I don't think it dominates all other contenders. (More on this here.)

The concerns you raise in your linked post are actually concerns that a lot of the people I have in mind have cited for why they don't currently prioritize AI alignment, existential risk reduction, or the long-term future. Most EAs I've talked to who don't share those priorities say they'd be open to shifting their priorities in that direction in the future, but currently they have unresolved issues with the level of uncertainty and speculation in these fields. Notably, EA is now focusing more and more effort on the sources of unresolved concerns with existential risk reduction, such as our demonstrated ability to predict the long-term future. That work is only beginning, though.

GiveWell's and Open Phil's work wasn't termed 'Cause X,' but I think a lot of the stuff you're pointing to would've started before 'Cause X' was a common term in EA. They definitely qualify. One thing to note is that GiveWell and Open Phil are much bigger organizations than most in EA, so they are unusually able to pursue these things. So my contention that this kind of research is impractical for most organizations still holds up. It may be falsified in the near future, though. Aside from GiveWell and Open Phil, the organizations that can permanently focus on cause prioritization are:

  • institutes at public universities with large endowments, like the Future of Humanity Institute and the Global Priorities Institute at Oxford University.
  • small, private non-profit organizations like Rethink Priorities.

Honestly, I am impressed and pleasantly surprised that organizations like Rethink Priorities can grow from a small team into a growing organization in EA. Cause prioritization is such a niche cause, unique to EA, that I didn't know if there was hope for it to keep growing sustainably. So far, the growth of the field has proven sustainable. I hope it keeps up.

The Qualia Research Institute is a good generator of hypotheses for Cause X candidates. Here's a recent example (a).
