On August 8th, Robert Wiblin, owner of probably the most intellectually stimulating Facebook wall of all time, asked: “What past social movement is effective altruism most similar to?” This is a good question and there were some interesting answers. In the end, the most-liked answer (well, the second most-liked, after Kony 2012) was ‘evidence-based medicine’. I think effective altruism used to have a lot of similarities to evidence-based medicine but is increasingly moving in the opposite direction.
What is it that makes them similar? Obviously a focus on evidence. “Effective altruism is a philosophy and social movement that applies evidence and reason to determine the most effective ways to improve the world.” (Wikipedia)
The trouble is, evidence and reason aren’t the same thing.
Reason, in effective altruism, often seems to be equated with maximising expected utility. It is characterised by organisations like the Future of Humanity Institute and often ends up prioritising things for which we have almost no evidence, like protection against deathbots.
Evidence is very different. It’s about ambiguity aversion, not maximising expected utility. It is a lot more modest in its aims and is characterised by organisations like GiveWell, which prioritise charities like AMF or SCI, for which we have a decent idea of the magnitude of their effects.
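To make the contrast concrete, here is a toy sketch of my own (the interventions and numbers are invented for illustration, not drawn from GiveWell or anyone else): an expected-utility maximiser and an ambiguity-averse (maximin) reasoner can pick opposite options when one option’s probabilities are well-evidenced and the other’s could plausibly lie anywhere in a wide range.

```python
# Toy contrast between expected-utility maximisation and ambiguity aversion.
# All names and numbers are hypothetical.

def expected_utility(outcomes):
    """Expected value over (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# Speculative intervention: huge payoff, but its success probability is
# ambiguous -- three assignments all seem plausible given our evidence.
speculative_scenarios = [
    expected_utility([(0.01, 10_000), (0.99, 0)]),     # optimistic:  EU = 100
    expected_utility([(0.001, 10_000), (0.999, 0)]),   # middling:    EU = 10
    expected_utility([(0.0001, 10_000), (0.9999, 0)]), # pessimistic: EU = 1
]

# Proven intervention: well-evidenced, so one tight estimate.
proven = expected_utility([(0.9, 50), (0.1, 0)])  # EU = 45

# Expected-utility maximiser acts on its single best-guess assignment
# (here, the optimistic one).
eu_choice = "speculative" if speculative_scenarios[0] > proven else "proven"

# Ambiguity-averse (maximin) reasoner judges the ambiguous act by its
# worst plausible assignment.
maximin_choice = "speculative" if min(speculative_scenarios) > proven else "proven"

print(eu_choice)       # speculative
print(maximin_choice)  # proven
```

The same utilities, differently aggregated over the ambiguity, deliver opposite verdicts; this is roughly the divide between the two camps described above.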
I place myself in the evidence camp. One of the strengths of evidence-based medicine in my view is that it realises the limits of our rationality. It realises that, actually, we are VERY bad at working out how to maximise expected utility through abstract reasoning so we should actually test stuff empirically to find out what works. It also allows consensus-building by decreasing uncertainty.
I’m not saying there isn’t room for both. There should definitely be people in the world who think about existential risk and there should definitely be people in the world providing evidence on the effectiveness of charitable interventions. I’m just not sure they should be the same people.
There are also similarities between the two camps. They’re both motivated by altruism and they’re both explicitly consequentialist, or at least act like it. The trouble is, they also both claim to be doing the most good and so in a way they disagree. Maybe I shouldn’t be worried about this. After all, healthy debate within social movements is a good thing. On the other hand, the two camps often seem to have such fundamentally different approaches to the question of how to do the most good that it is difficult to know if they can be reconciled.
In any case, I think it can only be a good thing that this difference is explicitly recognised.
I think most philosophers would say that evidence and reason are different because even if practical rationality requires that you maximize expected utility in one way or another---just as theoretical rationality requires that you conditionalize on your evidence---neither thing tells you that MORE evidence is better. You can be a perfectly rational, perfectly ignorant agent. That more evidence is better than less is a separate kind of epistemological principle from the one that tells you to conditionalize on whatever you've managed to get.(1)
Another way to put it: more evidence is better from a first-person point of view: if you can get more evidence before you decide to act, you should do it! But from the third-person point of view, you shouldn't criticize people who maximize expected utility on the basis of bad or scarce evidence.
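A small worked sketch of that point (my own construction, with made-up numbers): an agent can be perfectly rational relative to its current beliefs yet poorly informed, and conditionalizing on new evidence can flip which act maximizes expected utility.

```python
# Toy illustration: maximizing expected utility before vs. after
# conditionalizing on evidence. All numbers are hypothetical.

def posterior(prior_h, likelihood_h, likelihood_not_h):
    """Bayes' rule: P(H|E) from P(H), P(E|H), P(E|~H)."""
    num = prior_h * likelihood_h
    return num / (num + (1 - prior_h) * likelihood_not_h)

# Hypothesis H: "intervention A works".
# Act A pays 100 if H, 0 otherwise; act B pays a safe 40 either way.
def eu_act_a(p_h):
    return p_h * 100

EU_ACT_B = 40

prior = 0.5
print(eu_act_a(prior))  # 50.0 -- before evidence, act A maximizes EU

# A study (evidence E) comes back negative: P(E|H)=0.1, P(E|~H)=0.8.
post = posterior(prior, 0.1, 0.8)  # about 0.111
print(eu_act_a(post))   # about 11.1 -- after conditionalizing, act B wins
```

Both choices are expected-utility-maximizing relative to the beliefs held at the time; what changed is the evidence, which is exactly why "get more evidence" is a separate norm from "maximize expected utility given what you have".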
Here's a quote from James Joyce (a causal decision theorist):
"CDT [causal decision theory] is committed to two principles that jointly entail that initial opinions should fix actions most of the time, but not [always]...
CURRENT EVALUATION. If Prob_t(.) characterizes your beliefs at t, then at t you should evaluate each act by its causal expected utility using Prob_t(.).
FULL INFORMATION. You should act on your time-t utility assessment only if those assessments are based on beliefs that incorporate all the evidence that is both freely available to you at t and relevant to the question about what your acts are likely to cause." (Joyce, "Regret and Instability in Causal Decision Theory," 126-127)
...only the first principle is determined by the utility-maximizing equation that's at the mathematical core of causal decision theory. Anyway, that's my nerdy philosophical lit contribution to the issue ;).
(1) In an extreme case, suppose you have NO evidence---you are in the "a priori position" mentioned by RyanCarey. Then reason is like an empty stomach, with no evidence to digest. But still it would contribute the tautologies of pure logic---those are propositions that are true no matter what you conditionalize on, indeed whether you conditionalize on anything at all.