Cause prioritization
Identifying and comparing promising focus areas for doing good

Quick takes

44 · 18d · 11
I'm currently facing a career choice between a role working on AI safety directly and a role at 80,000 Hours. I don't want to go into the details too much publicly, but one really key component is how to think about the basic leverage argument in favour of 80k. This is the claim that goes: well, in fact I heard about the AIS job from 80k. If I ensure even two (additional) people hear about AIS jobs by working at 80k, isn't it possible that going to 80k could be even better for AIS than doing the job myself? In that form, the argument is naive and implausible. But I don't think I know what the "sophisticated" argument that replaces it is. Here are some thoughts:

* Working in AIS also promotes the growth of AIS. It would be a mistake to only consider the second-order effects of a job when you're forced to by the lack of first-order effects.
* OK, but focusing on org growth full-time surely seems better for org growth than having it be a side effect of the main thing you're doing.
* One way to think about this is to compare two strategies for improving talent at a target org: "try to find people and move them into roles in the org, as part of cultivating a whole talent pipeline into the org and related orgs", versus "put all of your full-time effort into having a single person, i.e. you, do a job at the org". It seems pretty easy to imagine that the former would be the better strategy?
* I think this is the same intuition that makes pyramid schemes seem appealing (something like: "surely I can recruit at least 2 people into the scheme, and surely they can recruit more people, and surely the norm is actually that you recruit a tonne of people"). It's really only by looking at the mathematics of the population as a whole that you can see it can't possibly work: most people in the scheme must recruit exactly zero people ever (a rough simulation of this is sketched below).
* Maybe a pyramid scheme is the extreme of "what if literally everyone in EA work
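The population-level point in the pyramid-scheme bullet can be made concrete with a small simulation. This is a minimal sketch, not from the original quick take, and all numbers are illustrative: it grows a recruitment tree in which every member except the founder is recruited by exactly one existing member, so the mean number of recruits per member is necessarily (N - 1)/N, just under one, and a large share of members end up recruiting nobody.

```python
# Minimal sketch of the pyramid-scheme arithmetic: in any finite group where
# every member except the founder was recruited by exactly one other member,
# total recruits = N - 1, so mean recruits per member is (N - 1) / N < 1.
# If a few members recruit many people, most members must recruit nobody.

import random
from collections import Counter

def simulate_recruitment(population_size: int, seed: int = 0) -> Counter:
    """Grow a recruitment tree one member at a time; each newcomer is
    recruited by a uniformly random existing member."""
    rng = random.Random(seed)
    recruits = Counter()   # member id -> number of people they recruited
    members = [0]          # member 0 is the founder
    while len(members) < population_size:
        recruiter = rng.choice(members)
        recruits[recruiter] += 1
        members.append(len(members))
    return recruits

n = 10_000
counts = simulate_recruitment(n)
zero_recruiters = n - len(counts)  # members who recruited nobody at all
print(f"mean recruits per member: {sum(counts.values()) / n:.4f}")   # (n - 1) / n
print(f"share who recruited zero people: {zero_recruiters / n:.1%}") # roughly half
```

Whatever the recruitment process, the mean can never reach one recruit per member, which is why "surely I can recruit at least 2 people" cannot be the typical experience.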
30 · 15d
After following the Ukraine war closely for almost three years, I naturally also watch China's potential for military expansionism. Whereas past leaders of China talked about "forceful if necessary" reunification with Taiwan, Xi Jinping seems like a much more aggressive person, one who would actually do it―especially since the U.S. is frankly showing so much weakness in Ukraine.

I know this isn't how EAs are used to thinking, but you have to start from the way dictators think. Xi, much like Putin, seems to idolize the excesses of his country's communist past, and is a conservative gambler: that is, he will take a gamble if the odds seem enough in his favor. Putin badly miscalculated his odds in Ukraine, but Russia's GDP and population were 1.843 trillion and 145 million, versus 17.8 trillion and 1.4 billion for China. At the same time, Taiwan is much less populous than Ukraine, and its would-be defenders in the USA/EU/Japan are not as strong naval powers as China (yet would have to operate over a longer range). Last but not least, China is the factory of the world―if they should decide they want to pursue world domination military-style, they can probably do that fairly well while simultaneously selling us vital goods at suddenly-inflated prices. So when I hear that China has ramped up nuclear weapon production, I immediately read it as a nod toward Taiwan.

If we don't want an invasion of Taiwan, what do we do? Liberals have a habit of magical thinking in military matters, talking of diplomacy, complaining about U.S. "war mongers", and running protests with "No Nukes" signs. But the invasion of Taiwan has nothing to do with the U.S.; Xi simply *wants* Taiwan and has the power to take it. If he makes that decision, no words can stop him. So the Free World has no role to play here other than (1) to deter and (2) to optionally help out Taiwan if Xi invades anyway. Not all deterrents are military, of course; China and the USA will surely do huge economic damage to each other if China
51 · 1mo · 2
I'd love to see an 'Animal Welfare vs. AI Safety/Governance Debate Week' happening on the Forum. The AI risk cause has grown massively in importance in recent years and has become a priority career choice for many in the community. At the same time, the Animal Welfare vs Global Health Debate Week demonstrated just how important and neglected the cause of animal welfare remains. I know several people (including myself) who are uncertain or torn about whether to pursue careers focused on reducing animal suffering or on mitigating existential risks from AI. It would help to have rich discussions comparing both causes' current priorities and bottlenecks, and a debate week would hopefully surface some useful crucial considerations.
10 · 1mo · 2
I'm working on a project to estimate the cost-effectiveness of AIS orgs, something like what Animal Charity Evaluators does. This involves gathering data on metrics such as:

* People impacted (e.g., scholars trained).
* Research output (papers, citations).
* Funding received and allocated.

While some organizations (e.g., MATS, AISC) share impact analyses, there's no broad comparison across them. AI safety orgs operate on diverse theories of change, making standardized evaluation tricky, but I think rough estimates could help with prioritization (see the illustrative sketch below). I'm looking for:

1. Previous work
2. Collaborators
3. Feedback on the idea

If you have ideas for useful metrics or feedback on the approach, let me know!
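As one illustration of what such rough estimates could look like, here is a minimal sketch using entirely hypothetical org names and figures; it simply divides annual funding by each output metric to get naive cost-per-outcome numbers, and deliberately ignores the differing theories of change mentioned above.

```python
# Minimal sketch of a naive cost-per-outcome comparison across AI safety orgs.
# All names and numbers below are hypothetical placeholders, not real data.

from dataclasses import dataclass

@dataclass
class OrgData:
    name: str
    annual_funding_usd: float
    scholars_trained: int
    papers_published: int
    citations: int

def cost_per_outcome(org: OrgData) -> dict:
    """Crude ratios: annual funding divided by each output metric."""
    return {
        "usd_per_scholar": org.annual_funding_usd / max(org.scholars_trained, 1),
        "usd_per_paper": org.annual_funding_usd / max(org.papers_published, 1),
        "usd_per_citation": org.annual_funding_usd / max(org.citations, 1),
    }

orgs = [
    OrgData("Hypothetical Org A", 2_000_000, scholars_trained=60,
            papers_published=25, citations=400),
    OrgData("Hypothetical Org B", 500_000, scholars_trained=10,
            papers_published=12, citations=150),
]

for org in orgs:
    print(org.name, cost_per_outcome(org))
```

A real evaluation would need to weight these metrics (and others) by theory of change rather than treating them as interchangeable.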
39 · 5mo · 4
I just read Stephen Clare's excellent 80k article about the risks of stable totalitarianism. I've been interested in this area for some time (though my focus is somewhat different) and I'm really glad more people are working on this. In the article, Stephen puts the probability that a totalitarian regime will control the world indefinitely at about 1 in 30,000. My probability on a totalitarian regime controlling a non-trivial fraction of humanity's future is considerably higher (though I haven't thought much about this). One point of disagreement may be the following. Stephen writes:

This is not clear to me. Stephen most likely understands the relevant topics far better than I do, but I worry that autocratic regimes often seem to cooperate. This has happened historically—e.g., Nazi Germany, fascist Italy, and Imperial Japan—and also seems to be happening today. My sense is that Russia, China, Venezuela, Iran, and North Korea have formed some type of loose alliance, at least to some extent (see also Anne Applebaum's Autocracy Inc.). Perhaps this doesn't apply to strictly totalitarian regimes (though it did for Germany, Italy and Japan in the 1940s).

Autocratic regimes control a non-trivial fraction (like 20-25%?) of world GDP. A naive extrapolation could thus suggest that some type of coalition of autocratic regimes will control 20-25% of humanity's future (assuming these regimes won't reform themselves). Depending on the offense-defense balance (and on how people trade off reducing suffering/injustice against other values such as national sovereignty, non-interference, isolationism, personal costs to themselves, etc.), this arrangement may very well persist.

It's unclear how much suffering such regimes would create—perhaps there would be fairly little; e.g. in China, ignoring political prisoners, the Uyghurs, etc., most people are probably doing fairly well (though a lot of people in, say, Iran aren't doing too well, see more below).
9 · 1mo · 2
Two weeks out from the new GiveWell/GiveDirectly analysis, I was wondering how GHD charities are evaluating the impact of these results. For Kaya Guides, this has got us thinking much more explicitly about what we're comparing to. GiveWell and GiveDirectly have a lot more resources, so they can do things like go out to communities and measure second-order and spillover effects.

On the one hand, this has got us thinking about other impacts we can incorporate into our analyses. Like GiveDirectly, we probably also have community spillover effects, we probably also avert deaths, and we probably also increase our beneficiaries' incomes by improving productivity. I suspect this is true for many GHD charities!

On the other, it doesn't seem fair to compare our analysis of individual subjective wellbeing to GiveDirectly's analysis that incorporates many more things. Unless we believe that GiveDirectly is likely to be systematically better, it's not the case that many GHD charities got 3–4× less cost-effective relative to cash transfers overnight; they may just count 3–4× fewer things! (A toy illustration of this is sketched below.)

So I wonder if the standard cash-transfers benchmark might have to include more nuance in the near term. Kaya Guides already only makes claims about cost-effectiveness 'at improving subjective wellbeing' to try and cover for this. Are other GHD charities starting to think the same way? Do people have other angles on this?
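To make the comparability point concrete, here is a minimal sketch with hypothetical value-per-dollar figures (not Kaya Guides' or GiveDirectly's actual numbers): the charity's own wellbeing-only estimate stays fixed, but its multiple over the cash-transfer benchmark shrinks once the benchmark starts counting extra benefit streams.

```python
# Minimal sketch of the benchmark-comparability point. All figures are
# hypothetical "value per dollar" numbers, not real estimates.

def multiple_of_benchmark(charity_value: float, benchmark_value: float) -> float:
    """How many times more cost-effective the charity looks than the benchmark."""
    return charity_value / benchmark_value

charity_wellbeing_only = 8.0    # charity's estimate, counting wellbeing only
benchmark_wellbeing_only = 1.0  # benchmark measured on the same single stream

# Suppose the benchmark's newer analysis also counts spillovers, averted deaths
# and income gains, tripling its measured value per dollar.
benchmark_all_streams = 3.0

print(multiple_of_benchmark(charity_wellbeing_only, benchmark_wellbeing_only))  # 8.0
print(multiple_of_benchmark(charity_wellbeing_only, benchmark_all_streams))     # ~2.7
```

Nothing about the charity changes between the two lines; only the benchmark's scope does, which is the sense in which "getting 3–4× less cost-effective overnight" can be an artefact of counting.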
23 · 3mo · 11
Of the 1,500 climate policies that have been implemented over the past 25 years, the 63 most successful ones are covered in this article (which I don't have access to, but a good summary is here). The 63 policies reduced emissions by between 0.6 and 1.8 billion metric tonnes of CO2. The typical effects of the 63 most effective policies could close the emissions gap by 26–41%. Pricing is most effective in developed countries, while regulations are the most effective policies in developing countries. The climate policy explorer shows the best policies for different countries and sectors. I just wanted to share this in case EAs who are interested in climate change and policy have missed it. Kind regards, Ulf Graf
25 · 4mo · 9
An idea that's been percolating in my head recently, probably thanks to the EA Community Choice, is more experiments in democratic altruism. One of the stronger leftist critiques of charity revolves around the massive concentration of power in a handful of donors. In particular, we leave it up to donors to determine if they're actually doing good with their money, but people are horribly bad at self-perception and very few people would be good at admitting that their past donations were harmful (or merely morally suboptimal).

It seems clear to me that Dustin & Cari are particularly worried about this, and Open Philanthropy was designed as an institution to protect them from themselves. However, (1) Dustin & Cari still have a lot of control over which cause areas to pick, and sort of informally defer to community consensus on this (please correct me if I have the wrong read on that) and (2) although it was intended to, I doubt it can scale beyond Dustin & Cari in practice. If Open Phil was funding harmful projects, it's only relying on the diversity of its internal opinions to diffuse that; and those opinions are subject to a self-selection effect in applying for OP, and also an unwillingness to criticise your employer.

If some form of EA were to be practiced on a national scale, I wonder if it could take the form of an institution which selects cause areas democratically, and has a department of accountable fund managers to determine the most effective way to achieve those. I think this differs from the Community Choice and other charity elections because it doesn't require donors to think through implementation (except through accountability measures on the fund managers, which would come up much more rarely), and I think members of the public (and many EAs!) are much more confident in their desired outcomes than their desired implementations; in this way, it reflects how political elections take place in practice. In the near-term, EA could bootstrap such a fun