Cause prioritization
Identifying and comparing promising focus areas for doing good

Quick takes

4 · 13h · 1
With @gergo's support, I'm starting a pilot project aiming to link up people who are extremely skilled at social media content creation (probably uni students, but not necessarily!) with academics who would love to spread awareness/knowledge of their subject area to the general public. This is part of a wider effort to get more EAs with a diverse but previously under-utilised range of skills started on their impact journey. What are some neglected academic ideas/bits of knowledge that would benefit from being widely spread to the general public through the medium of social media? And... do you know anyone who's extremely skilled at social media whom I could approach? Someone who would either be interested in making the content or coaching aspiring content creators? Thanks in advance for your help!
4 · 2d · 1
Researchers simulate an entire fly brain on a laptop. Is a human brain next? What are the implications of this for EA thinking? Does the fly that purely exists in the computer warrant moral consideration, and could we increase the overall welfare of the world by making millions of these simulations with ideal fruit-fly conditions? They fully copied the brain of the fly, so from my understanding it should in theory also feel pleasure and pain. I think this poses a real conundrum for EA morality.
7 · 10d
It might genuinely be time to boycott ChatGPT and start campaigns targeting corporate partners. But this isn't yet obvious. Even if so, what would be the appropriate concrete and reasonable asks? I think there is a bit of an epistemic crisis emerging at the moment. If there's a case to be made, it needs to be made sooner rather than later. And then we need coordination.
122 · 1y · 14
I sometimes say, in a provocative/hyperbolic sense, that the concept of "neglectedness" has been a disaster for EA. I do think the concept is significantly over-used (ironically, it's not neglected!), and people should just look directly at the importance and tractability of a cause at current margins. Maybe neglectedness is useful as a heuristic for scanning thousands of potential cause areas. But ultimately, it's just a heuristic for tractability: how many resources are going towards something is evidence about whether additional resources are likely to be impactful at the margin, because more resources mean it's more likely that the most cost-effective solutions have already been tried or implemented.

But these resources are often deployed ineffectively, such that it's often easier to just directly assess the impact of resources at the margin than to do what the formal ITN framework suggests, which is to break this hard question into two hard ones: you have to assess something like the abstract overall solvability of a cause (namely, "percent of the problem solved for each percent increase in resources," as if this is likely to be a constant!) and the neglectedness of the cause. That brings me to another problem: assessing neglectedness might sound easier than assessing abstract tractability, but how do you weigh up the resources in question, especially if many of them are going to inefficient solutions?

I think EAs have indeed found lots of surprisingly neglected (and important, and tractable) sub-areas within extremely crowded overall fields when they've gone looking. Open Phil has an entire program area for scientific research, on which the world spends >$2 trillion, and that program has supported Nobel Prize-winning work on computational design of proteins. US politics is a frequently cited example of a non-neglected cause area, and yet EAs have been able to start or fund work in polling and message-testing that has outcompeted incumbent orgs by looking for the highest-v
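For readers unfamiliar with the decomposition being criticized above: in the standard factored form of the ITN framework, importance, tractability, and neglectedness are ratios chosen so that the intermediate units cancel, leaving "good done per extra dollar". A minimal sketch with entirely made-up illustrative numbers (the function name and figures are my own, not from the quick take):

```python
def itn_marginal_value(importance, tractability, neglectedness):
    """Good done per extra dollar; the intermediate units cancel:
    (good / % solved) * (% solved / % more resources) * (% more resources / $).
    """
    return importance * tractability * neglectedness

# Hypothetical cause: solving 1% of the problem is worth 1000 "utility units",
# each 1% increase in resources solves 0.1% of the problem, and current
# spending is $10M, so one extra dollar is a 1e-5 % increase in resources.
importance = 1000                   # utility units per % of problem solved
tractability = 0.1                  # % solved per % resource increase
neglectedness = 100 / 10_000_000    # % resource increase per extra dollar

print(itn_marginal_value(importance, tractability, neglectedness))  # 0.001 utility/$
```

The quick take's complaint maps onto this sketch directly: the product is only as good as the assumption that `tractability` is roughly constant across margins, and `neglectedness` hides the question of how to count resources spent on inefficient solutions.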
28 · 2mo · 5
Gavi's investment opportunity for 2026-2030 says they expect to save 8 to 9 million lives, for which they would require a budget of at least $11.9 billion[1]. Unfortunately, Gavi only raised $9 billion, so they have to make some cuts to their plans[2]. And you really can't reduce spending by $3 billion without making some life-or-death decisions. Gavi's CEO has said that "for every $1.5 billion less, your ability to save 1.1 million lives is compromised"[3]. This would equal a marginal cost of $1,363 per life saved, which seems a bit low to me. But I think there is a good chance Gavi's marginal cost per life saved is still cheap enough to clear GiveWell's cost-effectiveness bar. GiveWell hasn't made grants to Gavi, though. Why?

----------------------------------------

1. https://www.gavi.org/sites/default/files/investing/funding/resource-mobilisation/Gavi-Investment-Opportunity-2026-2030.pdf, pp. 20 & 43 ↩︎
2. https://www.devex.com/news/gavi-s-board-tasked-with-strategy-shift-in-light-of-3b-funding-gap-110595 ↩︎
3. https://www.nature.com/articles/d41586-025-02270-x ↩︎
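A quick check of the arithmetic in the take above, treating the CEO's "$1.5 billion less per 1.1 million lives" quote as a constant marginal rate (a simplifying assumption; the variable names are mine):

```python
# Back-of-the-envelope check of the quoted Gavi figures.
budget_ask = 11.9e9   # minimum budget requested for 2026-2030, USD
raised = 9e9          # amount actually raised, USD
shortfall = budget_ask - raised  # ~$2.9B

# From "for every $1.5 billion less ... 1.1 million lives is compromised":
marginal_cost_per_life = 1.5e9 / 1.1e6
print(round(marginal_cost_per_life))  # 1364 USD per life saved

# Lives implied lost to the shortfall, if that marginal rate held throughout:
print(round(shortfall / 1.5e9 * 1.1e6))  # ~2.1 million
```

This reproduces the ~$1,363-$1,364 figure in the take; whether the rate really holds at every margin is exactly the uncertainty the take flags.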
36 · 4mo · 6
An informal research agenda on robust animal welfare interventions and adjacent cause prioritization questions

Context: As I started filling out this expression of interest form to be a mentor for Sentient Futures' project incubator program, I came up with the following list of topics I might be interested in mentoring. And I thought it was worth sharing here. :) (Feedback welcome!) Last small update to add links to new things: January 30, 2026.

Animal-welfare-related research/work:
1. What are the safest (i.e., most backfire-proof)[1] consensual EAA interventions? (overlaps with #3.c and may require #6.)
   1. How should we compare their cost-effectiveness to that of interventions that require something like spotlighting or bracketing (or more thereof) to be considered positive?[2] (may require A.)
2. Robust ways to reduce wild animal suffering
   1. New/underrated arguments regarding whether reducing some wild animal populations is good for wild animals (a brief overview of the academic debate so far here).
   2. Consensual ways of affecting the size of some wild animal populations (contingent planning that might become relevant depending on results from the above kind of research).
      1. How do these and the safest consensual EAA interventions (see 1) interact?
   3. Preventing the off-Earth replication of wild ecosystems.
3. Uncertainty on moral weights (some relevant context in this comment thread).
   1. Red-teaming of different moral weights that have been explicitly proposed and defended (by Rethink Priorities, Vasco Grilo, ...).
   2. How and how much do cluelessness arguments apply to moral weights and inter-species tradeoffs?
   3. What actions are robust to severe uncertainty about inter-species tradeoffs? (overlaps with #1.)
4. Considerations regarding the impact of saving human lives (c.f. top-GiveWell charities) on farmed and wild animals. (may require 3 and 5.)
5. The impact of agriculture on soil nematodes and other numerous soi
29 · 3mo · 6
* Re the new 2024 Rethink Cause Prio survey: "The EA community should defer to mainstream experts on most topics, rather than embrace contrarian views. [“Defer to experts”]": 3% strongly agree, 18% somewhat agree, 35% somewhat disagree, 15% strongly disagree.
* This seems pretty bad to me, especially for a group that frames itself as valuing intellectual humility and as recognizing how often we (like any intellectual movement, going by base rates) are wrong.
* (Charitable interpretation) It's also just the case that EAs tend to hold lots of contrarian views because they're trying to maximize the expected value of information (often justified with something like: "usually contrarians are wrong, but when they are right, they provide more valuable information than the average person who just agrees").
* If this is the case, though, I fear that some of us are confusing being contrarian for instrumental reasons with being contrarian for "being correct" reasons. Though let me know if you disagree.
1 · 2d
I know EA leads to some weird places, but at the same time I think the EA movement is good at not getting too involved in questions of the day where an EA perspective is not needed and could repel some people from the movement. Presumably peace in the Middle East would be very good from an EA perspective, but there is a lot of debate on the Middle East already; there's no reason to try to inject a formal EA perspective into it. This is not to say that EA-adjacent individuals can't engage in the debate, maybe as a personal hobby.