Cause prioritization
Identifying and comparing promising focus areas for doing good

Quick takes

20 · 2d · 4
* Re the new 2024 Rethink Cause Prio survey: "The EA community should defer to mainstream experts on most topics, rather than embrace contrarian views. [“Defer to experts”]": 3% strongly agree, 18% somewhat agree, 35% somewhat disagree, 15% strongly disagree.
* This seems pretty bad to me, especially for a group that frames itself as recognizing intellectual humility, i.e. that we (given the base rate for an intellectual movement) are so often wrong.
* (Charitable interpretation) It's also just the case that EAs tend to hold lots of contrarian views because they're trying to maximize the expected value of information (often justified with something like: "usually contrarians are wrong, but when they are right, they provide more valuable information than the average person who just agrees").
* If this is the case, though, I fear that some of us are confusing being contrarian for instrumental reasons with being contrarian for "being correct" reasons. Though let me know if you disagree.
33 · 18d · 4
An informal research agenda on robust animal welfare interventions and adjacent cause prioritization questions

Context: As I started filling out this expression of interest form to be a mentor for Sentient Futures' project incubator program, I came up with the following list of topics I might be interested in mentoring. And I thought it was worth sharing here. :) (Feedback welcome!)

Animal-welfare-related research/work:

1. What are the safest (i.e., most backfire-proof)[1] consensual EAA interventions? (overlaps with #3.c and may require #6.)
   a. How should we compare their cost-effectiveness to that of interventions that require something like spotlighting or bracketing (or more thereof) to be considered positive?[2] (may require A.)
2. Robust ways to reduce wild animal suffering
   a. New/underrated arguments regarding whether reducing some wild animal populations is good for wild animals (a brief overview of the academic debate so far here).
   b. Consensual ways of affecting the size of some wild animal populations (contingent planning that might become relevant depending on results from the above kind of research).
      i. How do these and the safest consensual EAA interventions (see #1) interact?
   c. Preventing the off-Earth replication of wild ecosystems.
3. Uncertainty on moral weights (some relevant context in this comment thread).
   a. Red-teaming of different moral weights that have been explicitly proposed and defended (by Rethink Priorities, Vasco Grilo, ...).
   b. How and how much do cluelessness arguments apply to moral weights and inter-species tradeoffs?
   c. What actions are robust to severe uncertainty about inter-species tradeoffs? (overlaps with #1.)
4. Considerations regarding the impact of saving human lives (cf. top GiveWell charities) on farmed and wild animals. (may require 3 and 5.)
5. The impact of agriculture on soil nematodes and other numerous soil animals, in terms of total population.
6. Evaluating the backfir
121 · 9mo · 14
I sometimes say, in a provocative/hyperbolic sense, that the concept of "neglectedness" has been a disaster for EA. I do think the concept is significantly over-used (ironically, it's not neglected!), and people should just look directly at the importance and tractability of a cause at current margins.

Maybe neglectedness is useful as a heuristic for scanning thousands of potential cause areas. But ultimately, it's just a heuristic for tractability: how many resources are going towards something is evidence about whether additional resources are likely to be impactful at the margin, because more resources mean it's more likely that the most cost-effective solutions have already been tried or implemented. But these resources are often deployed ineffectively, such that it's often easier to just directly assess the impact of resources at the margin than to do what the formal ITN framework suggests, which is to break this hard question into two hard ones: you have to assess something like the abstract overall solvability of a cause (namely, "percent of the problem solved for each percent increase in resources," as if this is likely to be a constant!) and the neglectedness of the cause.

That brings me to another problem: assessing neglectedness might sound easier than assessing abstract tractability, but how do you weigh up the resources in question, especially if many of them are going to inefficient solutions?

I think EAs have indeed found lots of surprisingly neglected (and important, and tractable) sub-areas within extremely crowded overall fields when they've gone looking. Open Phil has an entire program area for scientific research, on which the world spends >$2 trillion, and that program has supported Nobel Prize-winning work on computational design of proteins. US politics is a frequently cited example of a non-neglected cause area, and yet EAs have been able to start or fund work in polling and message-testing that has outcompeted incumbent orgs by looking for the highest-v
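For concreteness, here is a toy sketch (all numbers invented) of the ITN factoring the take is pushing back on: the intermediate units cancel, so the product is just "good done per extra dollar", which is the quantity one could instead try to estimate directly at the current margin.

```python
# A toy illustration of the ITN factoring discussed above, with made-up numbers.
# The three terms multiply out to "good done per extra dollar" because the
# intermediate units cancel:
#
#   importance    = good done               / % of problem solved
#   tractability  = % of problem solved     / % increase in resources
#   neglectedness = % increase in resources / extra dollar

importance = 1_000_000   # good done per % of the problem solved (assumed)
tractability = 0.01      # % solved per % increase in resources (assumed constant!)
neglectedness = 2e-7     # % increase in resources per extra dollar (~$500M already spent)

marginal_cost_effectiveness = importance * tractability * neglectedness
print(marginal_cost_effectiveness)  # good done per extra dollar

# The quick take's point: this same quantity can often be estimated directly
# at the current margin, without assuming tractability is a constant or
# tallying up how much is already being spent.
```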
52 · 1y · 2
I'd love to see an 'Animal Welfare vs. AI Safety/Governance Debate Week' happening on the Forum. The AI risk cause has grown massively in importance in recent years, and has become a priority career choice for many in the community. At the same time, the Animal Welfare vs Global Health Debate Week demonstrated just how important and neglected the cause of animal welfare remains. I know several people (including myself) who are uncertain/torn about whether to pursue careers focused on reducing animal suffering or mitigating existential risks related to AI. It would help to have rich discussions comparing both causes' current priorities and bottlenecks, and a debate week would hopefully expose some useful crucial considerations.
30 · 7mo · 1
As a community builder, I've started donating directly to my local EA group—and I encourage you to consider doing the same. Managing budgets and navigating inflexible grant applications consume valuable time and energy that could otherwise be spent directly fostering impactful community engagement. As someone deeply involved, I possess unique insights into what our group specifically needs, how to effectively meet those needs, and what actions are most conducive to achieving genuine impact. Of course, seeking funding from organizations like OpenPhil remains highly valuable—they've dedicated extensive thought to effective community building. Yet, don't underestimate the power and efficiency of utilizing your intimate knowledge of your group's immediate requirements. Your direct donations can streamline processes, empower quick responses to pressing needs, and ultimately enhance the impact of your local EA community.
44 · 1y · 11
I'm currently facing a career choice between a role working on AI safety directly and a role at 80,000 Hours. I don't want to go into the details too much publicly, but one really key component is how to think about the basic leverage argument in favour of 80k. This is the claim that's like: well, in fact I heard about the AIS job from 80k. If I ensure even two (additional) people hear about AIS jobs by working at 80k, isn't it possible that going to 80k could be even better for AIS than doing the job could be?

In that form, the argument is naive and implausible. But I don't think I know what the "sophisticated" argument that replaces it is. Here are some thoughts:

* Working in AIS also promotes growth of AIS. It would be a mistake to only consider the second-order effects of a job when you're forced to by the lack of first-order effects.
* OK, but focusing on org growth fulltime seems surely better for org growth than having it be a side effect of the main thing you're doing.
* One way to think about this is to compare two strategies of improving talent at a target org: "try to find people to move them into roles in the org, as part of cultivating a whole overall talent pipeline into the org and related orgs", versus "put all of your fulltime effort into having a single person, i.e. you, do a job at the org". It seems pretty easy to imagine that the former would be a better strategy?
* I think this is the same intuition that makes pyramid schemes seem appealing (something like: "surely I can recruit at least 2 people into the scheme, and surely they can recruit more people, and surely the norm is actually that you recruit a tonne of people"), and it's really only by looking at the mathematics of the population as a whole that you can see that it can't possibly work, and that actually it's necessarily the case that most people in the scheme will recruit exactly zero people ever.
* Maybe a pyramid scheme is the extreme of "what if literally everyone in EA work
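A small sanity check of the pyramid-scheme point above (the population size is an assumed number; the arithmetic is general): in any finite scheme the average number of recruits per member is below one, so if some members recruit many people, most must recruit none.

```python
# Toy check (assumed numbers) of the pyramid-scheme arithmetic above:
# every member except the founder was recruited by exactly one person,
# so total recruitments = N - 1 and the average recruits per member < 1.

N = 10_000                      # total people who ever join (assumed)
total_recruitments = N - 1      # everyone but the founder was recruited once
print(total_recruitments / N)   # 0.9999 -> strictly less than 1 on average

# If the scheme is a tree in which each recruiter brings in k = 2 people,
# members who recruit nobody make up just over half of everyone:
k = 2
zero_recruiters = (N * (k - 1) + 1) / k  # leaf count in a full k-ary tree with N nodes
print(zero_recruiters / N)      # ~0.5 -> "most people recruit exactly zero"
```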
21 · 6mo · 3
I'm a 36 year old iOS Engineer/Software Engineer who switched to working on image classification systems via TensorFlow a year ago. Last month I was made redundant with a fairly generous severance package and a good buffer of savings to get me by while unemployed. The risky step I had long considered of quitting my non-impactful job was taken for me. I'm hoping to capitalize on my free time by determining what career path to take that best fits my goals. I'm pretty excited about it. I created a weighted factor model to figure out what projects or learning to take on first. I welcome feedback on it. There's also a schedule tab for how I'm planning to spend my time this year, and a template if anyone wishes to use this spreadsheet themselves. I got feedback from my 80,000 Hours advisor to get involved in EA communities more often. I also want to learn more publicly, be it via forums or by blogging. This somewhat unstructured dumping of my thoughts is a first step towards that.
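For readers unfamiliar with the approach, here is a minimal sketch of a weighted factor model of the kind described (the factors, weights, and options are invented for illustration; the author's actual spreadsheet will differ): score each option on each factor, multiply by the factor weights, and rank by total.

```python
# Minimal weighted-factor-model sketch (invented factors, weights, and options).

factors = {"impact": 0.4, "fit": 0.3, "learning value": 0.2, "cost": 0.1}  # weights sum to 1

options = {
    "AI safety upskilling project": {"impact": 8, "fit": 6, "learning value": 9, "cost": 5},
    "Open-source ML contribution":  {"impact": 6, "fit": 8, "learning value": 7, "cost": 7},
    "Blogging about career change": {"impact": 4, "fit": 7, "learning value": 6, "cost": 9},
}

def weighted_score(scores: dict[str, float]) -> float:
    """Sum of factor score times factor weight for one option."""
    return sum(factors[f] * scores[f] for f in factors)

# Rank options from highest to lowest weighted score.
for name, scores in sorted(options.items(), key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{name}: {weighted_score(scores):.2f}")
```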
29 · 10mo · 7
Is anyone in EA coordinating a response to the PEPFAR pause? Seems like a very high priority thing for US-based EAs to do, and I'm keen to help if so and start something if not.