Cause prioritization
Identifying and comparing promising focus areas for doing good

Quick takes

Given that effective altruism is "a project that aims to find the best ways to help others, and put them into practice"[1], it seems surprisingly rare to me that people actually do the hard work of:

1. (Systematically) exploring cause areas
2. Writing up their (working hypothesis of a) ranked or tiered list, with good reasoning transparency
3. Sharing their list and reasons publicly.[2]

The lists I can think of that do this best are 80,000 Hours', Open Philanthropy's, and CEARCH's. Related things I appreciate, but that aren't quite what I'm envisioning:

* Tools and models like those by Rethink Priorities and Mercy For Animals, though they're less focused on explaining specific prioritisation decisions.
* Longlists of causes by Nuno Sempere and CEARCH, though these don't provide ratings, rankings, or reasoning.
* Various posts pitching a single cause area and giving reasons to consider it a top priority, without integrating it into an individual's or organisation's broader prioritisation process.

There are also some lists of cause area priorities from outside effective altruism / the importance, neglectedness, tractability framework, although these often lack any explicit methodology, e.g. from the UN, the World Economic Forum, or the Copenhagen Consensus.

If you know of other public writeups and explanations of ranked lists, please share them in the comments![3]

[1] Of course, this is only one definition. But my impression is that many definitions share some focus on cause prioritisation, or first working out what doing the most good actually means.

[2] I'm a hypocrite of course, because my own thoughts on cause prioritisation are scattered across various docs, spreadsheets, long-forgotten corners of my brain... and not at all systematic or thorough. I think I roughly: came at effective altruism with a hypothesis of a top cause area based on arbitrary and contingent factors from my youth/adolescence (ending factory farming),
The meat-eater problem is under-discussed. I've spent more than 500 hours consuming EA content, and I had never encountered the meat-eater problem until today: https://forum.effectivealtruism.org/topics/meat-eater-problem (I had sometimes thought about the problem, but I didn't even know it had a name.)
In July 2022, Jeff Masters wrote an article (https://yaleclimateconnections.org/2022/07/the-future-of-global-catastrophic-risk-events-from-climate-change/) summarizing findings from a United Nations report on the increasing risk of global catastrophic risk (GCR) events due to climate change. The report defines GCRs as catastrophes that kill over 10 million people or cause over $10 trillion in damage. It warned that by increasingly pushing beyond safe planetary boundaries, human activity is raising the odds of climate-related GCRs. The article argued that societies are more vulnerable to sudden collapse when multiple environmental shocks occur, and that the combined impacts of climate change pose a serious risk of total societal collapse if we continue business as usual.

Although the article and report are from mid-2022, the scientific community has since been warning that climate change effects are arriving faster than models predicted. So I'm curious: what has the EA community been doing over the past year to understand, prepare for, and mitigate these climate-related GCRs? Some questions I have:

* What new work has been done in EA on these risks since mid-2022, and what are the key open problems?
* How much intellectual priority and how many resources are the EA community putting towards climate GCRs compared to other GCRs? Has this changed in the past year, and is it enough given the magnitude of the risks? I see this as different from investing in interventions that address GHGs and warming.
* How can we ensure these risks get adequate attention?

I'm very interested to hear others' thoughts. While a lot of great climate-related work is happening in EA, I worry that climate GCRs remain relatively neglected compared to other GCRs.
Often people post cost-effectiveness analyses of potential interventions, which invariably conclude that the intervention could rival GiveWell's top charities. (I'm guilty of this too!) But this happens with such frequency, and I am basically never convinced that the intervention is actually competitive with GWTC.

The reason is that they are comparing ex-ante cost-effectiveness (where you make a bunch of assumptions about costs, program delivery mechanisms, etc.) with GiveWell's calculated ex-post cost-effectiveness (where the intervention has already been delivered, so there are far fewer assumptions). Usually, people acknowledge that ex-ante cost-effectiveness is less reliable than ex-post cost-effectiveness. But I haven't seen any acknowledgement that this systematically overestimates cost-effectiveness, because people who are motivated to pursue an intervention are going to be optimistic about unknown factors. Also, many costs are "unknown unknowns" that you might only discover after implementing the project, so leaving them out underestimates costs. (There's also the planning fallacy in general.)

And I haven't seen any discussion of how large the gap between these estimates could be. I think it could be orders of magnitude, simply because costs are in the denominator of a benefit-cost ratio, so uncertainty in costs can have huge effects on cost-effectiveness.

One straightforward way to estimate this gap is to redo a GiveWell CEA, but assuming that you were setting up a charity to deliver that intervention for the first time. If GiveWell's ex-post estimate is X and your ex-ante estimate is K*X for the same intervention, then we would conclude that ex-ante cost-effectiveness is K times too optimistic, and deflate ex-ante estimates by a factor of K. I might try to do this myself, but I don't have any experience with CEAs, and would welcome someone else doing it.
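The proposed deflation adjustment can be sketched in a few lines. All figures below are hypothetical, chosen only to show the structure of the calculation, not drawn from any actual CEA:

```python
# Hypothetical illustration of deflating ex-ante cost-effectiveness.
# All numbers are made up; only the shape of the adjustment matters.

benefit = 1000.0          # benefit units the program is expected to produce
estimated_cost = 100.0    # planner's ex-ante cost assumption

ex_ante = benefit / estimated_cost     # ex-ante cost-effectiveness: 10.0

# Ex-post, realized costs include overruns and "unknown unknowns".
realized_cost = estimated_cost * 2.5   # assume a 150% cost overrun
ex_post = benefit / realized_cost      # ex-post cost-effectiveness: 4.0

# The optimism factor K, estimated by redoing the same CEA both ways:
K = ex_ante / ex_post                  # 2.5

# Future ex-ante estimates for similar interventions get deflated by K:
def deflate(ex_ante_estimate, k=K):
    return ex_ante_estimate / k

print(deflate(10.0))  # 4.0
```

Because costs sit in the denominator, a 2.5x cost overrun cuts the benefit-cost ratio by exactly that factor, which is why the gap between ex-ante and ex-post estimates could plausibly be large.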
Anyone else ever feel a strong discordance between emotional response and cognitive worldview when it comes to EA issues?

Like, emotionally I'm like "save the animals! All animals deserve love and protection, and we should make sure they can all thrive and be happy with autonomy and evolve toward more intelligent species so we can live together in a diverse human-animal utopia, yay big tent EA..."

But logically I'm like "AI and/or other exponential technologies are right around the corner and make animal issues completely immaterial. Anything that detracts from progress on that is a distraction and should be completely and deliberately ignored. Optimally we will build an AI or other system that determines maximum utility per unit of matter, possibly including agency as a factor and quite possibly not, so that we can tile the universe with sentient simulations of whatever the answer is."

OR, a similar discordance between what was just described and the view that we should also co-optimize for agency, diversity of values and experience, fun, decentralization, etc., EVEN IF that means possibly locking in a state where ~99.9999+% of possible utility goes unrealized.

Very frustrating. I usually try to push myself toward my rational conclusion of what is best, with a wide berth for uncertainty and epistemic humility, but it feels depressing, painful, and self-dehumanizing to do so.
[Edit: a day after posting, I think this perhaps reads more combatively than I intended? It was meant to be more 'crisis of faith, looking for reassurance if it exists' than 'dunk on those crazy longtermists'. I'll leave the quick take as-is, but clarification of my intentions might be useful to others.]

Warning! Hot Take! 🔥🔥🔥 (Also v rambly and not rigorous)

A creeping thought has entered my head recently that I haven't been able to get rid of...

The EA move toward AI Safety and Longtermism is often based on EV calculations showing that the long-term future is overwhelmingly valuable, and thus that longtermist interventions are the most cost-effective. However, more in-depth looks at the EV of x-risk prevention (1, 2) cast significant doubt on those EV calculations, which might make longtermist interventions much less cost-effective than the most effective "neartermist" ones.

But my doubts get worse... GiveWell estimates around $5k to save a life. So I went looking for some longtermist calculations, and I really couldn't find any robust ones![1] Can anyone point me to some robust calculations for longtermist funds/organisations where they go 'yep, under our assumptions and data, our interventions are at least competitive with top Global Health charities'? Because it seems to me like that hasn't been done.

But if we're being EA, people with a high and intractable p(doom) from AI shouldn't work on AI; they should probably EtG for Global Health instead (if we're going to fully maximise EV about everything). Like, if we're taking EA seriously, shouldn't MIRI shut down all AI operations and become a Global Health org? Wouldn't that be a strongly +EV move, given their pessimistic assessments of reducing x-risk and their knowledge of +EV global health interventions?

But it gets worse... Suppose that we go, 'ok, let's take EV estimates seriously but not literally'. In which case fine, but that undermines the whole 'longtermist interventions overwhelmingly dominate EV' move t
The Happier Lives Institute have helped many people (including me) open their eyes to subjective wellbeing (SWB) and perhaps even updated us toward its potential value. The recent heavy discussion (60+ comments) on their fundraising thread disheartened me. Although I agree with much of the criticism against them, the hammering they took felt at best rough and perhaps even unfair. I'm not sure exactly why I felt this way, but here are a few ideas.

* (High certainty) HLI have openly published their research and ideas, posted almost everything on the forum, and engaged deeply with criticism, which is amazing; more than perhaps any other org I have seen. This may (uncertain) have hurt them more than it has helped them.
* (High certainty) When other orgs are criticised or asked questions, they often don't reply at all, or get surprisingly little criticism for what I and many EAs might consider poor epistemics and defensiveness in their posts (for charity I'm not going to link to the handful I can think of). Why does HLI get such a hard time while others get a pass? Especially when HLI's funding is less than that of many orgs that have not been scrutinised as much.
* (Low certainty) The degree of scrutiny and analysis of development orgs like HLI seems to exceed that of AI orgs, funding orgs, and community-building orgs. This scrutiny has been intense: more than one amazing statistician has picked apart their analysis. This expert-level scrutiny is fantastic; I just wish it could be applied to other orgs as well. Very few EA orgs (at least among those posted on the forum) produce full papers with publishable-level deep statistical analysis, as HLI have at least attempted to do.

Does there need to be a "scrutiny rebalancing" of sorts? I would rather that other orgs got more scrutiny than that development orgs got less. Other orgs might see threads like the HLI funding thread hammering and compare it with other threads where orgs are criticised and don't eng
Radar speed signs currently seem like one of the more cost-effective traffic-calming measures, since they don't require roadwork, but they still, surprisingly, cost thousands of dollars each. Mass-producing cheaper radar speed signs seems like a tractable public health initiative.