I am an Effective Altruist. I think it is good to help people in the most efficient way we can. I believe quite firmly in "shut up and multiply": we cannot lose sight of magnitudes. If a positive and a negative are of wildly different weights, we should go with whichever is much larger, not simply note that there are positives and negatives.
In the past couple of years, EA was hurt by the collapse of Sam Bankman-Fried's network of companies. This has led to skepticism about becoming too closely associated with any one figure. We want to be independent, and to be seen as independent.
But I think this is hooey. We need to consistently shut up and multiply! SBF donated about $190 million to charity. The scale of this is perhaps hard to grasp. Suppose the reputational damage stopped some people from donating to effective charities, and that each of them would have donated $5,000 (roughly the cost of saving one life from malaria). The reputational damage from SBF would have had to stop 38,000 such people from donating for the episode to be a net negative. Is that plausible?
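The break-even point above is simple division; a minimal sketch of the calculation, using only the figures stated in the text:

```python
# Back-of-the-envelope check: how many deterred $5,000 donors would it
# take for the SBF reputational damage to outweigh his donations?
sbf_donations = 190_000_000  # ~$190M donated to charity (figure from the text)
cost_per_life = 5_000        # rough cost to save one life from malaria

break_even_donors = sbf_donations / cost_per_life
print(break_even_donors)     # 38000.0 deterred donors to break even
```

Any estimate of deterred donors below that threshold leaves the net effect positive under these (admittedly crude) assumptions.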
A billion dollars is a lot of money. It's a stupefying amount of money, truly. The gain from convincing one billionaire to donate a billion dollars is greater than that from encouraging hundreds of thousands of people to chip in a few thousand each. As such, EA should explicitly focus on recruiting the rich, or the soon-to-be rich. Effective Altruism MIT seems extremely cost-effective! Surely this is something worth investing relatively trivial resources in.
I've wondered about this as well. Scott Alexander's mistake #57 seems like a relevant starting point:
One guess as to how Scott's link (US VSL) might underestimate the value destruction: in Nov 2021, GiveWell aimed to direct ~$1bn annually by 2025. The year after, they revised their future funding projections downward, due in part to their main donor Open Phil cutting its planned 2022 allocation to GiveWell. Open Phil's asset base had shrunk ~40% since the end of the previous year from "the recent stock market decline", which changed their portfolio allocation and, in particular, more than proportionally reduced their GiveWell allocation (partly offset by GiveWell overachieving by ~40%, RFMF-wise, in finding cost-effective opportunities). My low-confidence guess is that "the recent stock market decline" has quite a bit to do with FTX. It seems likely to me that the NPV of the projected funding reduction to GiveWell is >$1bn over the next, say, decade. At ~$5k per life saved (roughly 1/1,000th of the US VSL Scott linked to), that's >200k lives that could have been saved but weren't, most of them children under 5. (This galls me, to be honest, so I'd like to be told my reasoning is wrong or something.)
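The lives-not-saved figure above follows from two stated numbers; a quick sketch, assuming the comment's >$1bn NPV lower bound and ~$5k cost per life:

```python
# BOTEC from the comment: projected GiveWell funding shortfall
# converted into lives not saved, at GiveWell-level cost-effectiveness.
funding_shortfall_npv = 1_000_000_000  # assumed lower bound, NPV over ~a decade
cost_per_life = 5_000                  # ~$5k per life saved

lives_not_saved = funding_shortfall_npv / cost_per_life
print(lives_not_saved)                 # 200000.0 lives as a lower bound
```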
That's one guess; I'm sure there are more I'm missing that are still BOTEC-able, let alone the knock-on effects on social trust from fraud that Scott mentioned.
To the OP: I think it's worth reflecting on the warning that maximization is perilous.