I worry that some EAs treat an intervention as having high expected value (high EV) because it tackles a major EA cause area, rather than using major EA cause areas as a tool for identifying high-EV interventions. This strikes me as “thinking in the wrong direction”, and it seems wrong because we should expect there to be many, many potential interventions in global health and development, existential risk reduction and animal welfare that have low expected value (low EV).

As a result of this error, I think some EAs overvalue some interventions in major EA cause areas, and undervalue some interventions that are not in major EA cause areas.

Because of one of the problems with the ITN framework (that we seem to switch between comparing problems and comparing interventions as we move between importance, tractability and neglectedness), I think it may be more helpful and more accurate to view the major EA cause areas as areas where high-EV interventions should be easier to find, and to view other cause areas as areas where high-EV interventions should be harder to find.

Thinking in these terms would mean being more open to interventions that aren't in major EA cause areas.

The main examples where I think EAs may underestimate an intervention's EV because it doesn't involve a major EA cause area are those where a particular form of activism, social movement or organisation could potentially be made more efficient or effective at a low cost. There are probably many such examples, some with much greater EV and some with much smaller EV, but two examples I'd provide are:

a) starting a campaign for the USA to recognise Palestine (https://forum.effectivealtruism.org/posts/qHhLrcDyhGQoPgsDg/should-someone-start-a-grassroots-campaign-for-usa-to)

b) identifying areas and ethnic groups internationally at greatest risk of genocide / ethnic violence, and trying to redirect funding for western anti-racism movements towards these areas

From discussion in comments: One general point I'd like to make is that if a proposed intervention is "improving the efficiency of work on cause X", a large amount of resources already being poured into cause X should actually increase the EV of the proposed intervention (assuming, of course, that the work on cause X is positive in expectation).
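To make the leverage point concrete, here's a minimal sketch of the reasoning with entirely made-up numbers (none of these figures refer to any real cause; the function and its parameters are illustrative assumptions, not estimates):

```python
# Toy leverage model: the value of an "improve cause X's efficiency"
# intervention scales with the resources already flowing into cause X.
# All numbers below are illustrative assumptions, not real estimates.

def leverage_ev(resources_per_year, efficiency_gain, value_per_dollar, years):
    """Expected value gained by making existing spending slightly more effective."""
    return resources_per_year * efficiency_gain * value_per_dollar * years

# Hypothetical: $500M/yr flows into cause X, each dollar currently buys
# 1 unit of value, and a cheap intervention raises effectiveness by 0.1%
# for 5 years.
gain = leverage_ev(
    resources_per_year=500_000_000,
    efficiency_gain=0.001,
    value_per_dollar=1.0,
    years=5,
)
print(gain)  # 2,500,000 units of value

# Note: doubling resources_per_year doubles the gain. More money already
# in the cause makes the meta-intervention *more* valuable, not less.
```

The key design point is that `resources_per_year` multiplies the result, which is why non-neglectedness of the underlying cause helps rather than hurts this kind of intervention (conditional on the underlying work being net positive).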

Comments

Hi! I was one of the downvoters on your earlier post about Israel/Palestine, but looking at the link again now, I see that nobody ever gave a good explanation for why the post got such a negative reception. I'm sorry that we gave such a hostile reaction without explaining. I can't speak for all EAs, but I suspect that some of the main reasons for hesitation might be:

  • Israel-related issues are extremely politically charged, so taking any stance whatsoever might risk damaging the carefully non-politicized reputation that other parts of the EA movement have built up. I imagine that EAs would have similar hesitation about taking a strong stance on abortion rights (even though EAs often have strong views on population ethics), or officially endorsing a candidate in a US presidential election (even though the majority of EAs are probably Democrats).
  • The Israel/Palestine conflict is the opposite of neglected -- tons of media coverage, hundreds of activist groups, and lots of funding on both sides. A typical EA might argue that it would be better for a newly-formed activist group to focus on something like the current situation in Chad, which attracts hundreds of times less media coverage although a much larger number of people have died. (Of course, raw death toll isn't the final arbiter of cause importance -- Israel is a nuclear power, after all, so its decisions have wide ramifications.)
  • For whatever reason, the Israel/Palestine conflict has gained a specific reputation as a devilishly intractable diplomatic puzzle -- there's little agreement on any obvious solutions that seem like they could resolve the biggest problems.

I'm more positive about your second idea -- trying to identify the areas at greatest risk of conflict throughout the whole world and take actions to calm tensions before violence erupts. To some extent, this is the traditional work of diplomacy, international NGOs, etc, but these efforts could perhaps be better-targeted, and there are probably some unique angles here that EAs could look into. While international attention from diplomats and NGOs seems to parachute into regions right at the moment of crisis, I could imagine EAs trying to intervene earlier in the lead-up to conflicts, perhaps running low-cost radio programs trying to spread American-style values of tolerance and anti-racism. I could also imagine taking an even longer-term view, and trying to investigate ways to head off the root causes of political tension and violence on a timespan of decades or centuries. (Here is a somewhat similar project examining what gave rise to positive social movements like slavery abolitionism.)

Hi, thanks for providing those reasons, I can totally see the rationale!

One general point I'd like to make is if a proposed intervention is "improving the efficiency of work on cause X", a large amount of resources already being poured into cause X should actually increase the EV of the proposed intervention (but obviously, this is assuming that the work on cause X is positive in expectation, and as you say, some may not feel this way about some pro-Palestinian activism).

FWIW, this is pretty much the rationale behind the climate recs of FP: we recommend orgs we think can leverage the enormous societal resources poured into climate towards the most productive uses within the space. In line with your reasoning, we also think that events which increase the overall allocation to climate might improve the cost-effectiveness of the climate recs (e.g. Biden's victory leading to higher returns).

I would also think (though don't know for certain) that OPP's recent bid to hire in global aid advocacy would draw on a similar theory of change, improving resource allocation in a field that is, comparatively speaking, not neglected.

identifying areas and ethnic groups internationally at greatest risk of genocide / ethnic violence and trying to direct funding for anti-racism movements towards these areas

You might be interested in previous discussion of genocide prevention as a cause area here.

I'm skeptical that funding 'anti-racism' movements would make sense as an intervention though, at least in the contemporary 'woke' sense of the phrase. Many prominent 'anti-racist' memes, like that the relative lack of success of one ethnic group should be attributed to exploitation by another, can increase racial tensions, and are similar to those used to justify genocides in the past.

I see this sentence as suggesting capitalizing on the (relative) popularity of anti-racism movements and trying to use society's interest in anti-racism toward genocide prevention.

Yep exactly that!

It would help if you provided examples.

Thanks for the suggestion, I've added an attempt at this to the post.

Now that you’ve given examples, can you provide an account of how increased funding in these areas could lead to improved well-being / lives preserved / DALYs averted etc. in expectation? Do you expect that targeted funds could be cost-competitive with GW top charities or likewise?

So in both of the examples provided, EAs would be funding / carrying out interventions that improve the effectiveness of other work, and it is this other work that would improve well-being / preserve lives in expectation.

Because I suspect that these interventions would be relatively cheap, and because the other work would already have lots of resources behind it, I think these interventions would slightly improve the effectiveness with which a large amount of resources is spent, to the extent that they could compare with GW top charities in terms of expected value.
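A hedged back-of-envelope version of this claim, where every number is hypothetical except the roughly $5,000-per-life figure often cited for GiveWell top charities:

```python
# Back-of-envelope: can a cheap meta-intervention compare with GiveWell
# top charities? Assumes ~$5,000 per life saved for top charities (a
# commonly cited rough benchmark); every other number is hypothetical.

INTERVENTION_COST = 50_000        # cost of the meta-intervention, $
LEVERAGED_SPENDING = 500_000_000  # existing resources it improves, $
EFFICIENCY_GAIN = 0.001           # 0.1% effectiveness improvement
BASELINE_COST_PER_LIFE = 100_000  # assumed $/life for the existing work
GW_COST_PER_LIFE = 5_000          # rough GiveWell top-charity benchmark

extra_lives = LEVERAGED_SPENDING * EFFICIENCY_GAIN / BASELINE_COST_PER_LIFE
cost_per_life = INTERVENTION_COST / extra_lives

print(extra_lives)    # 5.0 additional lives saved in expectation
print(cost_per_life)  # $10,000 per life, same order as the GW benchmark
```

Under these (entirely assumed) inputs, a $50k intervention that improves a $500M field's effectiveness by 0.1% lands within an order of magnitude of the top-charity bar; whether any real intervention achieves this is exactly the open question.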

While I’m skeptical about the idea that particular causes you’ve mentioned could truly end up being cost effective paths to reducing suffering, I’m sympathetic to the idea that improving the effectiveness of activity in putatively non-effective causes is potentially itself effective. What interventions do you have in mind to improve effectiveness within these domains?

I think the interventions would be very specific to the domain. I mentioned an intervention to direct pro-Palestinian activism towards a tangible goal. As for redirecting western anti-racism work towards international genocide prevention, this could possibly be done by getting western anti-racism organisations to partner with similar organisations in countries at greater risk of genocide, which could lead to resource and expertise sharing over a long period of time.

Props for writing the post you were thinking about!

Overwhelmingly, the things you think of as "EA cause areas" translate to "areas where people have used common EA principles to evaluate opportunities". And the things you think of as "not in major EA cause areas" are overwhelmingly "areas where people have not tried very hard to evaluate opportunities".

Many of the "haven't tried hard" areas are justifiably ignored, because there are major factors implying there probably aren't great opportunities (very few people are affected, very little harm is done, or progress has been made despite enormous investment from reasonable people, etc.)

But many other areas are ignored because there just... aren't very many people in EA. Maybe 150 people whose job description is something like "full-time researcher", plus another few dozen people doing research internships or summer programs? Compare this to the scale of open questions within well-established areas, and you'll see that we are already overwhelmed. (Plus, many of these researchers aren't very flexible; if you work for Animal Charity Evaluators, Palestine isn't going to be within your purview.)

Fortunately, there's a lot of funding available for people to do impact-focused research, at least in areas with some plausible connection to long-term impact (not sure what's out there for e.g. "new approaches in global development"). It just takes time and skill to put together a good application and develop the basic case for something being promising enough to spend $10k-50k investigating.

I'll follow in your footsteps and say that I want to write a full post about this (the argument that "EA doesn't prioritize X highly enough") sometime in the next few months.
