Dylan Matthews has an interesting piece up in Vox, 'How effective altruism let SBF happen'. I feel very conflicted about it, as I think it contains some criticisms that are importantly correct, but then takes it in a direction I think is importantly mistaken. I'll be curious to hear others' thoughts.
Here's what I think is most right about it:
> There’s still plenty we don’t know, but based on what we do know, I don’t think the problem was earning to give, or billionaire money, or longtermism per se. But the problem does lie in the culture of effective altruism... it is deeply immature and myopic, in a way that enabled Bankman-Fried and Ellison, and it desperately needs to grow up. That means emulating the kinds of practices that more mature philanthropic institutions and movements have used for centuries, and becoming much more risk-averse.
Like many youth-led movements, there's a tendency within EA to be skeptical of established institutions and ways of running things. Such skepticism is healthy in moderation, but taken to extremes can lead to things like FTX's apparent total failure of financial oversight and corporate governance. Installing SBF as a corporate "philosopher-king" turns out not to have been great for FTX, in much the same way that we might predict installing a philosopher-king as absolute dictator would not be great for a country.
I'm obviously very pro-philosophy, and think it offers important practical guidance too, but it's not a substitute for robust institutions. So here is where I feel most conflicted about the article. Because I agree we should be wary of philosopher-kings. But that's mostly just because we should be wary of "kings" (or immature dictators) in general.
So I'm not thrilled with a framing that says (as Matthews goes on to say) that "the problem is the dominance of philosophy", because I don't think philosophy tells you to install philosopher-kings. Instead, I'd say, the problem is immaturity, and lack of respect for established institutional guard-rails for good governance (i.e., bureaucracy). What EA needs to learn, IMO, is this missing respect for "established" procedures, and a culture of consulting with more senior advisers who understand how institutions work (and why).
It's important to get this diagnosis right, since there's no reason to think that replacing 30 y/o philosophers with equally young anticapitalist activists (say) would do any good here. What's needed is people with more institutional experience (which will often mean significantly older people), and a sensible division of labour between philosophy and policy, ideas and implementation.
There are parts of the article that sort of point in this direction, but then it spins away and doesn't quite articulate the problem correctly. Or so it seems to me. But again, curious to hear others' thoughts.
I think that effective altruism centrally involves taking the ideas of philosophers and using them to inform real-world decision-making. I am very glad we’re attempting this, but we must recognise that this is an extraordinarily risky business. Even the wisest humans are unqualified for this role. Many of our attempts are 51:49 bets at best—sometimes worth trying, rarely without grave downside risk, never without an accompanying imperative to listen carefully for feedback from the world. And yes—diverse, hedged experiments in overconfidence also make sense. And no, SBF was not hedged anything like enough to take his 51:49 bets—to the point of blameworthy, perhaps criminal negligence.
A notable exception to the “we’re mostly clueless” situation is: catastrophes are bad. This view passes the “common sense” test, and the “nearly all the reasonable takes on moral philosophy” test too (negative utilitarianism is the notable exception). But our global resource allocation mechanisms are not taking “catastrophes are bad” seriously enough. So, EA—along with other groups and individuals—has a role to play in pushing sensible measures to reduce catastrophic risks up the agenda (as well as sensible disaster-mitigation prep).
(Derek Parfit’s “extinction is much worse than 99.9% wipeout” claim is far more questionable—I put some of my chips on this, but not the majority.)
As you suggest, the transform function from “abstract philosophical idea” to “what do” is complicated and messy, and involves a lot of deference to existing norms and customs. Sadly, I think that many people with a “physics and philosophy” sensibility underrate just how complicated and messy the transform function really has to be. So they sometimes make bad decisions on principle instead of good decisions grounded in messy common sense.
I’m glad you shared the J.S. Mill quote.
EAs should not be encouraged to grant themselves practical exception from “the rules of morality for the multitude” if they think of themselves as philosophers. Genius, wise philosophers are extremely rare (cold take: Parfit wasn’t one of them).
To be clear: I am strongly in favour of attempts to act on important insights from philosophy. I just think that this is hard to do well. One reason is that there is a notable minority of “physics and philosophy” folks who should not be made kings, because their “need for systematisation” is so dominant as to be a disastrous impediment for that role.
In my other comment, I shared links to Karnofsky, Beckstead and Cowen expressing views in the spirit of the above. From memory, Carl Shulman is in a similar place, and so are Alexander Berger and Ajeya Cotra.
My impression is that more than half of the most influential people in effective altruism are roughly where they should be on these topics, but some of the top “influencers”, and many of the “second tier”, are not.
(Views my own. Sword meme credit: the artist currently known as John Stewart Chill.)