
I previously included a link to this as part of my trilogy on anti-philanthropic misdirection, but a commenter asked me to post the full text here for the automated audio conversion. Apologies to anyone who has already read it.

As I wrote in ‘Why Not Effective Altruism?’, I find the extreme hostility towards effective altruism from some quarters to be rather baffling. Group evaluations can be vexing: perhaps what the critics have in mind when they hate on EA has little or no overlap with what I have in mind when I support it? It’s hard to know without getting into details, which the critics rarely do. So here are some concrete claims that I think are true and important. If you disagree with any of them, I’d be curious to hear which ones, and why!

What I think:

  1. It’s good and virtuous to be beneficent and want to help others, for example by taking the Giving What We Can 10% pledge.
  2. It’s good and virtuous to want to help others effectively: to help more rather than less with one’s efforts.
  3. We have the potential to do a lot of good in the face of severe global problems (including global poverty, factory-farmed animal welfare, and protecting against global catastrophic risks such as future pandemics).
  4. In all these areas, it is worth making deliberate, informed efforts to act effectively. Better targeting our efforts may make even more of a difference than the initial decision to help at all.
  5. In all these areas, we can find interventions that we can reasonably be confident are very positive in expectation. (One can never be so confident of actual outcomes in any given instance, but being robustly positive in prospect is what’s decision-relevant.)
  6. Beneficent efforts can be expected to prove (much) more effective if guided by careful, in-depth empirical research. Quantitative tools and evidence, used wisely, can help us to do more good.
  7. So it’s good and virtuous to use quantitative tools and evidence wisely.
  8. GiveWell does incredibly careful, in-depth empirical research evaluating promising-seeming global charities, using quantitative tools and evidence wisely.
  9. So it’s good and virtuous to be guided by GiveWell (or comparably high-quality evaluators) rather than less-effective alternatives like choosing charities based on locality, personal passion, or gut feelings.
  10. There’s no good reason to think that GiveWell’s top charities are net harmful.[1]
  11. But even if you’re the world’s most extreme aid skeptic, it’s clearly good and virtuous to voluntarily redistribute your own wealth to some of the world’s poorest people via GiveDirectly. (And again: more good and virtuous than typical alternatives.)
  12. Many are repelled by how “hands-off” effective philanthropy is compared to (e.g.) local volunteering. But it’s good and virtuous to care more about saving and improving lives than about being hands on. To prioritize the latter over the former would be morally self-indulgent.
  13. Hits-based giving is a good idea. A portfolio of long shots can collectively be expected to do more good than putting all your resources into lower-expected-value “sure things”. In such cases, this is worth doing.
  14. Even in one-off cases, it is often better and more virtuous to accept some risk of inefficacy in exchange for a reasonable shot at proportionately greater positive impact. (But reasonable people can disagree about which trade-offs of this sort are worth it.)
  15. The above point encompasses much relating to politics and “systemic change”, in addition to longtermist long-shots. It’s very possible for well-targeted efforts in these areas to be even better in expectation than traditional philanthropy—just note that this potential impact comes at the cost of both (i) far greater uncertainty, contestability, and potential for bias; and often (ii) potential for immense harm if you get it wrong.
  16. Anti-capitalist critics of effective altruism are absurdly overconfident about the value of their preferred political interventions. Many objections to speculative longtermism apply at least as strongly to speculative politics.
  17. In general, I don’t think that doing good through one’s advocacy should be treated as a substitute for “putting one’s money where one’s mouth is”. It strikes me as overly convenient, and potentially morally corrupt, when I hear people (whether political advocates or longtermists) excusing not making any personal financial sacrifices to improve the world, when we know we can do so much. But I’m completely open to judging political donations (when epistemically justified) as constituting “effective philanthropy”—I don’t think we should put narrow constraints on the latter concept, or limit it to traditional charities.
  18. Decision theory provides useful tools (in particular, the concept of expected value) for thinking about these trade-offs between certainty and potential impact. (A short worked sketch of this arithmetic appears just after this list.)
  19. It would be very bad for humanity to go extinct. We should take reasonable precautions to try to reduce the risk of this.
  20. Ethical cosmopolitanism is correct: It’s better and more virtuous for one’s sympathy to extend to a broader moral circle (including distant strangers) than to be narrowly limited. Entering your field of sight does not make someone matter more.
  21. Insofar as one’s natural sympathy falls short, it’s better and more virtuous to at least be “continent” (as Aristotle would say) and allow one’s reason to set one on the path that the fully virtuous agent would follow from apt feelings.
  22. Since we can do so much good via effective donations, we have—in principle—excellent moral reason to want to make more money (via permissible means) in order to give it away to these good causes.
  23. Many individuals have in fact successfully pursued this path. (So Rousseauian predictions of inevitable corruption seem misguided.)
  24. Someone who shares all my above beliefs is likely to do more good as a result. (For example, they are likely to donate more to effective charities, which is indeed a good thing to do.)
  25. When the stakes are high, there are no “safe” options. For example, discouraging someone from earning to give, when they would have otherwise given $50k per year to GiveWell’s top charities, would make you causally responsible for approximately ten deaths every year. That’s really bad! You should only cause this clear harm if you have good grounds for believing that the alternative would be even worse. (If you do have good grounds for thinking this, then of course EA principles support your criticism.)
  26. Most public critics of effective altruism display reckless disregard for these predictable costs of discouraging acts of effective altruism. (They don’t, for example, provide evidence to think that alternative acts would do more good for the world.) They are either deliberately or negligently making the world worse.
  27. Deliberately or negligently making the world worse is vicious, bad, and wrong.
  28. Most (all?) of us are not as effectively beneficent as would be morally ideal.
  29. Our moral motivations are very shaped by social norms and expectations—by community and culture.
  30. This means it is good and virtuous to be public about one’s efforts to do good effectively.
  31. If there’s a risk that others will perceive you negatively (e.g. as boastful), accepting this reputational cost for the sake of better promoting norms of beneficence is even more virtuous. Staying quiet for fear of seeming arrogant or boastful would be selfish in comparison.
  32. In principle, we should expect it to be good for the world to have a community of do-gooders who are explicitly aiming to be more effectively beneficent, together.
  33. For most individuals: it would be good (and improve their moral character) to be part of a community whose culture, social norms, and expectations promoted greater effective beneficence.
  34. That’s what the “Effective Altruism” community constitutively aims to do.
  35. It clearly failed in the case of SBF: he seems to have been influenced by EA ideas, but his fraud was not remotely effectively beneficent or good for the world (even in prospect).
  36. Community leaders (e.g. the Centre for Effective Altruism) should carefully investigate / reflect on how they can reduce the risk of the EA community generating more bad actors in future.
  37. Such reflection has indeed happened. (I don’t know exactly how much.) For example, EA messaging now includes much greater attention to downside risks, and the value of moral constraints. This seems like a good development. (It’s not entirely new, of course: SBF’s fraud flagrantly violated extant EA norms;[2] everyone I know was genuinely shocked by it. But greater emphasis on the practical wisdom of commonsense moral constraints seems like a good idea. As does changing the culture to be more “professional” in various ways.)
  38. No community is foolproof against bad actors. It would not be fair or reasonable to tar others with “guilt by association”, merely for sharing a community with someone who turned out to be very bad. The existence of SBF (n=1) is extremely weak evidence that EA is generally a force for ill in the world.
  39. The actually-existing EA community has (very) positive expected value for the world. We should expect that having more people exposed to EA ideas would result in more acts of (successful) effective beneficence, and hence we should view the prospect favorably.
  40. The truth of the above claims does not much depend upon how likeable or annoying EAs in general turn out to be.
  41. If you find the EA community annoying, it’s fine to say so (and reject the “EA” label), but it would still be good and virtuous to practice, and publicly promote, the underlying principles of effective beneficence. It would be very vicious to let children die of malaria because you find EAs annoying or don’t want to be associated with them.
  42. None of the above assumes utilitarianism. (Rossian pluralists and cosmopolitan virtue ethicists could plausibly agree with all the relevant normative claims.)
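
To make the arithmetic behind points 13, 18, and 25 concrete, here is a minimal sketch in Python. The ~$5,000-per-life figure is just the ratio implied above ($50k per year ≈ ten deaths averted per year), not an official GiveWell estimate, and the long-shot numbers are made up for illustration:

```python
# Minimal sketch of the expected-value reasoning in points 13, 18, and 25.
# Assumption: ~$5,000 per life saved, back-derived from the post's own ratio
# ($50k/year ~ ten deaths/year); GiveWell's real estimates vary by charity.

COST_PER_LIFE_SAVED = 5_000  # USD, assumed for illustration

def expected_lives_saved(donation_usd: float) -> float:
    """Expected lives saved at the assumed cost-effectiveness."""
    return donation_usd / COST_PER_LIFE_SAVED

def expected_value(outcomes: list[tuple[float, float]]) -> float:
    """Expected value of a prospect given (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

# Point 25: discouraging a $50k/year earning-to-give donor forgoes ~10 lives/year.
print(expected_lives_saved(50_000))                 # 10.0

# Point 13: a long shot (1% chance of helping 10,000 people) can beat a
# "sure thing" (certainly helping 50) in expectation.
print(expected_value([(0.01, 10_000), (0.99, 0)]))  # 100.0
print(expected_value([(1.0, 50)]))                  # 50.0
```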

Things I don’t think

I don’t think:

  • that people should blindly follow crude calculations, or otherwise attempt to directly implement ideal theory in practice without properly taking into account our cognitive limitations
  • that you’re obligated to dedicate your entire life to maximizing the good, neglecting your loved ones and personal projects. (The suggestion is just that it would be good and virtuous for advancing impartially effective beneficence to be among one’s life projects.)
  • that we should care about numbers rather than people (rather, as suggested above, I think we should use numbers as a tool to enable us to help more people)
  • that we should completely ignore present-day needs in pursuit of tiling the universe with digital experience machines
  • that double-or-nothing existence gambles are worth taking
  • that inexperienced, self-styled “rationalist” EAs are thereby competent to run important organizations (just based on a priori first principles)
  • that you should trust someone with great power (e.g. unregulated control of AI) just because they identify as an “EA” (let alone a “rationalist”).

Conclusion: Beware of Stereotypes

A few months ago, Dustin Moskovitz (the billionaire funder behind Open Philanthropy) wrote some very thoughtful reflections on “the long journey to doing good better”. I highly recommend it. I was especially taken by his comments on why outside perceptions of a movement can seem so alien to those within it:

> When a group has a shared sense of identity, the people within it are still not all one thing, a homogenous group with one big set of shared beliefs — and yet they often are perceived that way. Necessarily, the way that you engage in characterizing a group is by giving it broad, sweeping attributes that describe how the people in the group are similar, or distinctive relative to the broader world. As an individual within a group trying to understand yourself, however, this gets flipped, and you can more easily see how you differ. Any one of those sweeping attributes do apply to some of the group, and it’s hard to identify with the group when you clearly don’t identify with many of the individuals, in particular the ones with the strongest beliefs. I often observe that the people with the most fringe opinions inside a group paradoxically get the most visibility outside the group, precisely because they are saying something unfamiliar and controversial.

(Though I also think that critics often just straw man their targets.)

Anyway, I hope my above listing proves illuminating to some. I would be especially curious to hear from the haters of EA about which numbered points they actually disagree with (and why).[3] Perhaps there will turn out to be such fundamental disagreements that reasoned conversation is pointless? But you never know until you try.

 

  1. ^

    For example, what empirical evidence we have on the question suggests that Deaton’s speculative worries about political accountability are easily addressed: “Political accountability is not necessarily undermined by foreign aid: even illiterate and semi-literate folks in rural Bangladesh appear to be quite sophisticated about how they evaluate their leaders, given the information they possess. Further, any unintended negative accountability consequences were effectively countered by a simple, scalable information campaign.”

  2. ^

    Not to mention the standard practical advice of the utilitarian tradition, as I’ve known ever since I was an undergrad (sadly many senior philosophers persist in misrepresenting it).

  3. ^

    To explain my curiosity: most anti-EA criticism I’ve come across to date, especially by philosophers, has struck me as painfully stupid, entirely missing the point. It doesn’t help that it’s all so unrelentingly hostile—which makes me question whether it’s in good faith, as it prima facie seems a rather inexplicably vicious attitude to take towards people who are trying to do good, often at significant personal cost! If any critics reading this are capable of explaining their precise disagreements with me (not an imagined straw-EA) in a civil tone, I’d be delighted to hear it.

Comments

You've caught me stuck in bed, and I'm probably the most EA-critical person that regularly posts here, so I'll take a stab at responding point by point to your list:

> 1. It’s good and virtuous to be beneficent and want to help others, for example by taking the Giving What We Can 10% pledge.

1. Agree.

> 2. It’s good and virtuous to want to help others effectively: to help more rather than less with one’s efforts.

2. Agree.

> 3. We have the potential to do a lot of good in the face of severe global problems (including global poverty, factory-farmed animal welfare, and protecting against global catastrophic risks such as future pandemics).

3. Agree on global poverty and animal welfare, but I think it might be difficult to do "a lot of good" in some catastrophic risk areas. 

> 4. In all these areas, it is worth making deliberate, informed efforts to act effectively. Better targeting our efforts may make even more of a difference than the initial decision to help at all.

4. Agreed, although I should note that efforts to better target our giving can have diminishing returns, especially when a problem is speculative and not well understood.

> 5. In all these areas, we can find interventions that we can reasonably be confident are very positive in expectation. (One can never be so confident of actual outcomes in any given instance, but being robustly positive in prospect is what’s decision-relevant.)

5. Agreed for global poverty and animal welfare, but I'm mixed on this for speculative causes like AI risk, where there's a decent chance that efforts could backfire and make things worse, and there's no real way to tell until after the fact.

> 6. Beneficent efforts can be expected to prove (much) more effective if guided by careful, in-depth empirical research. Quantitative tools and evidence, used wisely, can help us to do more good.

6. Agreed. Unfortunately, EA often fails to live up to this idea.

> 7. So it’s good and virtuous to use quantitative tools and evidence wisely.

7. Agreed, but see above.

> 8. GiveWell does incredibly careful, in-depth empirical research evaluating promising-seeming global charities, using quantitative tools and evidence wisely.

8. Agreed; I like GiveWell in general.

> 9. So it’s good and virtuous to be guided by GiveWell (or comparably high-quality evaluators) rather than less-effective alternatives like choosing charities based on locality, personal passion, or gut feelings.

9. Agreed, with regards to the area GiveWell specialises in.

> 10. There’s no good reason to think that GiveWell’s top charities are net harmful.[1]

10. I think the chance that GiveWell's top charities are net good is very high, but not 100%. See mosquito-net fishing for a possible pitfall.

> 11. But even if you’re the world’s most extreme aid skeptic, it’s clearly good and virtuous to voluntarily redistribute your own wealth to some of the world’s poorest people via GiveDirectly. (And again: more good and virtuous than typical alternatives.)

11. Agreed.

> 12. Many are repelled by how “hands-off” effective philanthropy is compared to (e.g.) local volunteering. But it’s good and virtuous to care more about saving and improving lives than about being hands on. To prioritize the latter over the former would be morally self-indulgent.

12. Agreed, but sometimes being hands-on can help with improving lives. For example, being hands-on can allow one to more easily receive feedback, understand overlooked problems with an intervention, and ensure it goes to the right place. I don't think voluntourism is good at this, but I would like to see support for more grassroots projects by people actually from impoverished communities.

> 13. Hits-based giving is a good idea. A portfolio of long shots can collectively be expected to do more good than putting all your resources into lower-expected-value “sure things”. In such cases, this is worth doing.

13. I agree in principle, but disagree in practice, given that EA's "hits-based giving" can be pretty bad. The effectiveness of hits-based giving very much depends on how much each miss costs and the likely effectiveness of a hit. I don't think the $100,000 grant for a failed video game was a good idea, nor the $28,000 to print out Harry Potter fanfiction that was freely available online anyway.

> 14. Even in one-off cases, it is often better and more virtuous to accept some risk of inefficacy in exchange for a reasonable shot at proportionately greater positive impact. (But reasonable people can disagree about which trade-offs of this sort are worth it.)

14. This is so broad as to be trivially true, but in practice I often disagree with the judgements here. 

> 15. The above point encompasses much relating to politics and “systemic change”, in addition to longtermist long-shots. It’s very possible for well-targeted efforts in these areas to be even better in expectation than traditional philanthropy—just note that this potential impact comes at the cost of both (i) far greater uncertainty, contestability, and potential for bias; and often (ii) potential for immense harm if you get it wrong.

15. Generally agree. 

> 16. Anti-capitalist critics of effective altruism are absurdly overconfident about the value of their preferred political interventions. Many objections to speculative longtermism apply at least as strongly to speculative politics.

16. Anti-capitalist is a pretty broad tent. I agree that some people who adopt that label are dumb and naive, but others have pretty good ideas. I think it would be really dumb if capitalism were still the dominant system 1000 years from now, and there are political interventions that can be predicted to reliably help people. "Overthrow the government for communism" gets the side-eye; "universal healthcare" does not.

> 17. In general, I don’t think that doing good through one’s advocacy should be treated as a substitute for “putting one’s money where one’s mouth is”. It strikes me as overly convenient, and potentially morally corrupt, when I hear people (whether political advocates or longtermists) excusing not making any personal financial sacrifices to improve the world, when we know we can do so much. But I’m completely open to judging political donations (when epistemically justified) as constituting “effective philanthropy”—I don’t think we should put narrow constraints on the latter concept, or limit it to traditional charities.

Some people are poor and cannot contribute much without kneecapping themselves. I don't think those people are useless, and I think for a lot of people political action is a rational choice for how to effectively help. Similarly, some people are very good at political action, but not so good at making large amounts of money, and they should do the former, not the latter. 

> 18. Decision theory provides useful tools (in particular, the concept of expected value) for thinking about these trade-offs between certainty and potential impact.

I agree it provides useful tools. But if you take the tools like expected value too seriously, you end up doing insane things (see SBF). In general EA is way too willing to swallow the math even when it gives bad results. 

> 19. It would be very bad for humanity to go extinct. We should take reasonable precautions to try to reduce the risk of this.

Agreed, depending on what you mean by "reasonable". 

> 20. Ethical cosmopolitanism is correct: It’s better and more virtuous for one’s sympathy to extend to a broader moral circle (including distant strangers) than to be narrowly limited. Entering your field of sight does not make someone matter more.

Agreed, with the caveat that we are talking about beings that currently exist or have a high probability of existing in the future.

> 21. Insofar as one’s natural sympathy falls short, it’s better and more virtuous to at least be “continent” (as Aristotle would say) and allow one’s reason to set one on the path that the fully virtuous agent would follow from apt feelings.

The term "fully virtuous agent" raises my eyebrows. I don't think that's a thing that can actually exist. 

> 22. Since we can do so much good via effective donations, we have—in principle—excellent moral reason to want to make more money (via permissible means) in order to give it away to these good causes.

Agreed, with emphasis on the "permissible means". 

> 23. Many individuals have in fact successfully pursued this path. (So Rousseauian predictions of inevitable corruption seem misguided.)

It looks like it, although of course this could be negated if they got their fortunes from more harmful than average means. I don't see evidence that this is the case for these examples. 

> 24. Someone who shares all my above beliefs is likely to do more good as a result. (For example, they are likely to donate more to effective charities, which is indeed a good thing to do.)

Agreed.

> 25. When the stakes are high, there are no “safe” options. For example, discouraging someone from earning to give, when they would have otherwise given $50k per year to GiveWell’s top charities, would make you causally responsible for approximately ten deaths every year. That’s really bad! You should only cause this clear harm if you have good grounds for believing that the alternative would be even worse. (If you do have good grounds for thinking this, then of course EA principles support your criticism.)

Agreed, although I'll note that from my perspective, persuading an EAer to donate to AI x-risk instead of GiveWell will have a similar effect, and should be subjected to the same level of scrutiny.

> 26. Most public critics of effective altruism display reckless disregard for these predictable costs of discouraging acts of effective altruism. (They don’t, for example, provide evidence to think that alternative acts would do more good for the world.) They are either deliberately or negligently making the world worse.

Agreed for some critiques of GiveWell/AMF in particular, like the recent Time article. However, I don't think this applies to critiques of AI x-risk, because I don't think AI x-risk charities are effective. If that turns people away and they donate to Oxfam or something instead, that is a net good.

> 27. Deliberately or negligently making the world worse is vicious, bad, and wrong.

Agreed.

> 28. Most (all?) of us are not as effectively beneficent as would be morally ideal.

Agreed.

> 29. Our moral motivations are very shaped by social norms and expectations—by community and culture.

Agreed.

> 30. This means it is good and virtuous to be public about one’s efforts to do good effectively.

Generally agreed.

> 31. If there’s a risk that others will perceive you negatively (e.g. as boastful), accepting this reputational cost for the sake of better promoting norms of beneficence is even more virtuous. Staying quiet for fear of seeming arrogant or boastful would be selfish in comparison.

Agreed.

> 32. In principle, we should expect it to be good for the world to have a community of do-gooders who are explicitly aiming to be more effectively beneficent, together.

Agreed, but "in principle" is doing a lot of work here. I think the initial Bolshevik party broadly fit this description, for an example of how this could go wrong. 

> 33. For most individuals: it would be good (and improve their moral character) to be part of a community whose culture, social norms, and expectations promoted greater effective beneficence.

Depends on which community we are talking about. See again: the Bolsheviks. 

> 34. That’s what the “Effective Altruism” community constitutively aims to do.

Agreed.

> 35. It clearly failed in the case of SBF: he seems to have been influenced by EA ideas, but his fraud was not remotely effectively beneficent or good for the world (even in prospect).

Agreed on all statements.

> 36. Community leaders (e.g. the Centre for Effective Altruism) should carefully investigate / reflect on how they can reduce the risk of the EA community generating more bad actors in future.

Agreed.

> 37. Such reflection has indeed happened. (I don’t know exactly how much.) For example, EA messaging now includes much greater attention to downside risks, and the value of moral constraints. This seems like a good development. (It’s not entirely new, of course: SBF’s fraud flagrantly violated extant EA norms;[2] everyone I know was genuinely shocked by it. But greater emphasis on the practical wisdom of commonsense moral constraints seems like a good idea. As does changing the culture to be more “professional” in various ways.)

There have definitely been some reflections and changes, many of which I approve of. But it has not been smooth sailing, and I think the response to other scandals leaves a lot to be desired. It remains to be seen whether ongoing efforts are enough.

> 38. No community is foolproof against bad actors. It would not be fair or reasonable to tar others with “guilt by association”, merely for sharing a community with someone who turned out to be very bad. The existence of SBF (n=1) is extremely weak evidence that EA is generally a force for ill in the world.

I agree that individuals should not be tarred by SBF, but I don't think this same protection applies to the movement as a whole. We care about outcomes. If a fringe minority does bad things, those things still occur. SBF conducted one of the largest frauds in history: you don't see Oxfam having this kind of effect. It's n=1 for billion-dollar frauds, but the n is a lot higher if we consider abuse of power, sexual harassment, and other smaller harms.

The more power and influence EA amasses, the more appropriate it is to be concerned about bad things within the community.

> 39. The actually-existing EA community has (very) positive expected value for the world. We should expect that having more people exposed to EA ideas would result in more acts of (successful) effective beneficence, and hence we should view the prospect favorably.

I think EA has totally flubbed on AI x-risk. Therefore, if I have the choice between recommending EA in general or just GiveWell's top charities, doing the latter will be better.

> 40. The truth of the above claims does not much depend upon how likeable or annoying EAs in general turn out to be.

Agreed, but certain types of dickish behaviour are a flaw of the community that has a detrimental effect on its health, and makes its decision-making and effectiveness worse.

> 41. If you find the EA community annoying, it’s fine to say so (and reject the “EA” label), but it would still be good and virtuous to practice, and publicly promote, the underlying principles of effective beneficence. It would be very vicious to let children die of malaria because you find EAs annoying or don’t want to be associated with them.

Agreed. I generally steer people to GiveWell or its charities, rather than to EA as a whole.

> 42. None of the above assumes utilitarianism. (Rossian pluralists and cosmopolitan virtue ethicists could plausibly agree with all the relevant normative claims.)

I think some of the claims are less valuable outside of utilitarianism, but whatever.

With that all answered, let me add my own take on why I don't recommend EA to people anymore:

I think that the non-speculative side of EA (global poverty and animal welfare) is nice and good, and is on net making the world a better place. I think the speculative side of EA, and in particular AI risk, contains some reasonable people, but also enough people who are ridiculously wrong, overconfident, and power-seeking to drag the whole operation into net-negative territory.

Most of this bad thinking originates from the Rationalist community, which is generally a punchline in wider intellectual circles. I think the Rationalist community is on the whole epistemically atrocious, overconfident about things for baffling reasons, prone to hero worship, and apt to spread factually dubious ideas with very poor justification. I find some of the heroes they adore to be unpleasant people who spread harmful norms, ideas, and behaviour.

Putting it all together, I think that overall EA is a net positive, but that recommending EA is not the most positive thing you can do. Attacking the bad parts of EA while acknowledging that malaria nets are still good seems like a completely rational and good thing to do, either to put pressure on EA to improve, or to provide impetus for the good parts of EA to split off. 

> Says he's stuck in bed and only going to take a stab

> Posts a thorough, thoughtful, point-by-point response to the OP in good faith

> Just titotal things

---

On a serious note, as Richard says, it seems like you agree with most of his points, at least on the 'EA values/EA-as-ideas' set of things. It sounds like atm you think that you can't recommend EA without recommending the speculative AI part of it, which I don't think has to be true.

I continue to appreciate your thoughts and contributions to the Forum and have learned a lot from them, and given the reception you get[1] I think I'm clearly not alone there :)

  1. ^

You're probably by far the highest-upvoted person who considers themselves EA-critical here? (though maybe Habryka would also count)

"enough people who are ridiculously wrong, overconfident, and power-seeking to drag the whole operation into the net-negative territory"

Do you mean drag just longtermist EA spending into net-negative territory, or EA spending as a whole? Do you expect actual bad effects from longtermist EA, or just wasted money that could have been spent on short-term stuff? I think AI safety money is likely wasted (even though I've ended up doing quite a lot of work paid for by it!), but probably mostly harmless. I expect the big impact of longtermist money, for good or ill, to come from biorisk spending, where it's clear that at least catastrophic risks are real, even if not existential ones. So I think everything you say about rationalism could be true and longtermist spending could still be quite net positive in expectation if biorisk work goes well.

Given how many of the frontier AI labs have an EA-related origin story, I think it's totally plausible that the EA AI x-risk project has been net negative.

Yeah, that makes sense if you think X-risk from AI is a significant concern, or if you really buy reasoning about even tiny increases in X-risk being very bad. But actually, "net negative in expectation" is compatible with "probably mostly harmless". I.e. the expected value of X can be very negative, even while the chance of the claim "X did (actual not expected) harm" turning out to be true is low. If you don't really buy the arguments for AI X-risk but you do buy the argument for "very small increases in X-risk are really bad" you might think that. On some days, I think I think that, though my views on all this aren't very stable. 
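
A toy calculation (with entirely made-up numbers) may help illustrate the distinction between a prospect's expected value and the probability that it does actual harm:

```python
# Hypothetical numbers only: an action with a 0.1% chance of a catastrophic
# outcome and a 99.9% chance of changing nothing.
p_catastrophe = 0.001
harm_if_catastrophe = -1_000_000  # arbitrary units of value

ev = p_catastrophe * harm_if_catastrophe  # the no-harm branch contributes 0
print(ev)             # -1000.0: strongly net negative in expectation
print(p_catastrophe)  # 0.001: yet "probably mostly harmless" in practice
```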

That seems reasonable to me! I'm most confident that the underlying principles of effective altruism are important and good, and you seem to agree on that. I agree there's plenty of room for people to disagree about speculative cause prioritization, and if you think the EA movement is getting things systematically wrong there then it makes sense to (in effect, not in these words) "do EA better" by just sticking with GiveWell or whatever you think is actually best.

I enjoyed reading your responses to these points. Thanks for taking the time to write them out.

> There’s no good reason to think that GiveWell’s top charities are net harmful.

The effects on farmed animals and wild animals could make GiveWell top charities net harmful in the near term. See “Comparison between the hedonic utility of human life and poultry living time” and “Finding bugs in GiveWell's top charities” by Vasco Grilo.

My own best guess is that they're net good for wild animals based on my suffering-focused views and the resulting reductions of wild arthropod populations. I also endorse hedging in portfolios of interventions.

Thanks for pointing that out, Michael! I should note I Fermi-estimated that accounting for farmed animals only decreases the cost-effectiveness of GiveWell's top charities by 8.72%. However, this was without considering future increases in the consumption of animals throughout the lives of the people who are saved, which usually follow economic growth. I also Fermi-estimated that the badness of the experiences of all farmed animals alive is 4.64 times the goodness of the experiences of all humans alive, which suggests saving a random human life results in a near-term increase in suffering.

Related, and this is likely a nitpick, but I think there might be some steelman-able views of the form "GiveWell top charities might be net-negative on a longtermist lens, which could outweigh the shorter-term benefits".

Personally, I have a ton of uncertainty here (I assume most do) and have not thought about this much. Also, I assume that from a longtermist lens, the net impact either way is likely small compared to more direct longtermist actions.

But I think that on many hard and complex issues, it's really hard to say "there's no good reason for one side" very safely. Often there are some good reasons on both sides.

I find that it's often the case where there aren't any highly-coherent arguments raised for one side of an issue - but that's a different question than asking if intelligent arguments could be raised.

Ya, someone might argue that the average person contributes to economic growth and technological development, and so accelerates and increases x-risk. So, saving lives and increasing incomes could increase x-risk. Some subgroups of people may be exceptions, like EAs/x-risk people or poor people in low-income countries (who are far from the frontier of technological development), but even those could be questionable.

I think one could reasonably judge GiveWell-style saving and improving lives to constitute reliable global capacity growth, and (if very skeptical of attempts at more "direct", explicitly longtermist long-shots) think that this category of intervention is among our best longtermist options. I suggest something along these lines as a candidate EA "worldview" here.

I'd be curious to hear more about longtermist reasons to view GiveWell top charities as net-negative.

> I think one could reasonably judge GiveWell-style saving and improving lives to constitute reliable global capacity growth

I think I've almost never heard this argued, and I'd be surprised if it were true. 
[Edit: Sorry - I just saw your link, where this was argued. I think the discussion there in the comments is good]
- GiveWell selected very heavily for QALYs gained in the next 10-40 years. Generally, when you heavily optimize for one variable (short-term welfare), you trade off against others.
- As Robin Hanson noted, if you'd just save up money, you could often make a higher return than by donating it to people today.
- I believe that there's little evidence yet to show that funding AMF/GiveDirectly results in large (>5-7% per year) long-term economic / political gains. I would be very happy if this evidence existed! (links appreciated, at any point)

Some people have argued that EAs should fund this, to get better media support, which would be useful in the long run, but this seems very indirect to me (though possible).

As for it being *possibly* net-negative:
- We have a lot of uncertainty on if many actions are good or bad. Arguably, we would find that many to be net-bad in the long-run. (This is arguably more of a "meta-reason" than a "reason").
- If AI is divided equally among beings, we might prefer there being a greater number of beings with values more similar to ours.

----REMINDER - PLEASE DON'T TAKE THIS OUT OF CONTEXT----

- Maybe marginal population now is net-harmful in certain populations. Perhaps these areas will have limited resources soon, and more people will lead to greater risks later on (poorer average outcomes, more immigration and instability). Relatedly, I've heard arguments that the Black Death might have been a net positive, as it gave workers more power and might have helped lead to the Renaissance. (Again, this is SUPER SPECULATIVE AND UNCERTAIN, just a possibility).
- If we think AI is likely to come soonish, we might want to preserve most resources for after it. 
- This is an awkward/hazardous thing to discuss. If it were the case that there were good arguments, perhaps we'd expect them to not be said. This might increase the chances that there could be good arguments, if one were to really investigate it. 

Again, I have an absolute ton of uncertainty on this, and my quick guess is more, "it's probably a small-ish longtermist deal, with a huge probability spread", than "I'm fairly sure it's net-negative."

I feel like it's probably important for EAs to have reasonable/nuanced views on this topic, which is why I wrote these thoughts above.

I'll get annoyed if the above gets greatly taken out of context later, as has been done to many other EAs discussing such topics. (See Beckstead's dissertation.) I added that ugly line in between to maybe help a bit here.

I should have done this earlier, but would flag that LLMs can summarize a lot of the existing literature on the topic, though most of it isn't from EAs specifically. I would argue that many of these arguments are still about "optimizing for the long-term", they just often use different underlying assumptions than EAs do. 

https://chatgpt.com/share/b8a9a3f5-d2f3-4dc6-921c-dba1226d25c1

I'll also add that many direct longtermist risks have significant potential downsides too. It seems very possible to me that we'll wind up finding out that many were net-negative, or that there were good reasons for us to realize they were net-negative in advance. 

Yeah, that's interesting, but the argument "we should consider just letting people die, even when we could easily save them, because they eat too much chicken," is very much not what anti-EAs like Leif Wenar have in mind when they talk about GiveWell being "harmful"!

(Aside: have you heard anyone argue for domestic policies, like cuts to health care / insurance coverage, on the grounds that more human deaths would actually be a good thing? It seems to follow from the view you mention [not your view, I understand], but one doesn't hear that implication expressed so often.)

You probably didn't have someone like me in mind when you wrote this, but it seems a good opportunity to write down some of my thoughts about EA.

On 1, I think despite paying lip service to moral uncertainty, EA encourages too much certainty in the normative correctness of altruism (and more specific ideas like utilitarianism), perhaps attracting people like SBF with too much philosophical certainty in general (such as about how much risk aversion is normative), or even causing such general overconfidence (by implying that philosophical questions in general aren't that hard to answer, or by suggesting how much confidence is appropriate given a certain amount of argumentation/reflection).

I think EA also encourages too much certainty in descriptive assessment of people's altruism, e.g., viewing a philanthropic action or commitment as directly virtuous, instead of an instance of virtue signaling (that only gives probabilistic information about someone's true values/motivations, and that has to be interpreted through the lenses of game theory and human psychology).

On 25, I think the "safe option" is to give people information/arguments in a non-manipulative way and let them make up their own minds. If some critics are using things like social pressure or rhetoric to manipulate people into being anti-EA (as you seem to be implying - I haven't looked into it myself), then that seems bad on their part.

On 37, where has EA messaging emphasized downside risk more? A text search for "downside" and "risk" on https://www.effectivealtruism.org/articles/introduction-to-effective-altruism both came up empty, for example. In general it seems like there has been insufficient reflection on SBF and also AI safety (where EA made some clear mistakes, e.g. with OpenAI, and generally contributed to the current AGI race in a potentially net-negative way, but seems to have produced no public reflections on these topics).

On 39, seeing statements like this (which seems overconfident to me) makes me more worried about EA, similar to how my concern about each AI company is inversely related to how optimistic it is about AI safety.

> There’s no good reason to think that GiveWell’s top charities are net harmful.

Blanket deworming is a Pascalian wager. GiveWell's assessment is that the few supporting studies are probably wrong, but the claimed effect is so big that it's worth trying. Net of this zero effect, you must still subtract the cost: drugs so awful they cause riots. GiveWell does not attempt to measure this cost. Maybe you accept the gamble, but this item seems worded to avoid that framing. Or maybe you drop the huge income effects and retreat to the health effects. How many children should you poison to cure one of parasites? GiveWell does not say.

You should be suspicious of the reality of the income effect because it is so much larger than the health effect. The really bad hypothesis is that the income effect is real, but unrelated to health (alt link).
