A familiar pattern: EA organizations promote charities that help people in the developing world. A critic accuses EA of forcing people to be rationality robots. EA defends the use of rationality in altruistic decisions. But both sides miss the point: it demonstrates at best a lack of imagination and at worst coldheartedness to think that only a rationality robot would believe that African lives matter. I'm guilty of this too: it reveals my own prejudices when I think about helping people in the developing world (or livestock) as “giving from the head”, rather than “giving from the heart”. Promoting EA will require changing values, not just making people more rational.
People are not malfunctioning utilitarian robots
Frequently, EA outreach starts from the implicit assumption that, deep down, people value all lives equally. In this narrative, the reason people don't give to GiveWell-recommended charities is Kahneman-style irrationality: people supposedly have biases, such as scope neglect, that prevent them from acting on their consequentialist values.
A typical EA example is the comparison between paying for a guide dog to help a blind person in the developed world versus curing many people of blindness in the developing world. To a utilitarian, choosing the former could only result from irrationality. But it's plausible that most people aren't utilitarians and don't care very much about people in the developing world. Even in surveys of philosophers, who would be expected to be more utilitarian than the general population, only a quarter are purely consequentialist.
Rationality alone probably won't lead to EA
Some people might argue that non-utilitarians will become utilitarian if they become more rational. This argument implicitly relies on a belief in moral convergence, which is difficult to defend if one rejects moral realism, as many EAs do. These are very complex debates, which I'll discuss more in a follow-up post, but the idea that EAs can be created through rationality training alone should be viewed with skepticism. (This is another reason I'm skeptical of the ability of CFAR and similar organizations to have a positive effect outside of some very specific populations.)
A comes before E
In short, people can't optimize for values they don't have. For the majority of ordinary people, who don't share the egalitarian, utilitarian-ish values of EA, “the most good you can do” is meaningless. This means we need to start by spreading our values before talking about implementation. Rationality exercises won't be useful for this, but countless social movements have shown that it is possible to change people's values, typically by combining various kinds of emotional appeals. Research into what causes values to change will be extremely important for the future of EA.
Expanding the circle of compassion
Instead of “the most good you can do”, a better message for some audiences may be “expanding the circle of compassion”. The idea that human culture has become more enlightened by being compassionate to those different from ourselves is catchy, emotionally appealing, and tends to approximate utilitarianism in practice. It may be particularly suited to some audiences, such as religious organizations.
During the holiday season, it's nice to return to the compassionate roots of effective altruism. As Julia Wise says in this excellent post, there's no shame in giving from the heart.
Good points. I agree that EA's message is often framed in a way that can seem alienating to people who don't share all its assumptions. And I agree that people who don't share those assumptions are not necessarily being irrational.
FWIW, I think there's indeed a trend here. Teaching rationality can be kind of a dick move (only in a specific sense) because it forces you to think about consequentialist goals and opportunity costs, which is not necessarily good for your self-image if you're not able to look back on huge accomplishments or promising future prospects. As long as your self-image as a "morally good person" is tied to common-sense morality, you can do well by just not being an asshole to the people around you. And where common-sense morality is called into question, you can always rationalize as long as you're not yet forced to look too closely. So people will say things like "I'm an emotional person" so they can ignore all the arguments these "rationalists" are making, which usually end with "This is why you should change your life and donate". Or they adopt a self-image as someone who is "just not into that philosophy stuff" and thus simply stop thinking about it once the discussions go too far.
LW or EA discourse breaks down these alternatives: once your brain starts spotting its own blatant attempts at rationalizing, it's too late, and you're forced to either self-identify as an (effective) altruist or not, or at least state what percentage of your utility function corresponds to which. And self-identifying as someone who really doesn't care about people far away, as opposed to someone who still cares but "community comes first" and "money often doesn't reach its destination anyway" and "isn't it all so uncertain, and stop with these unrealistic thought experiments already!" and "why are these EAs so dogmatic?", is usually much harder. (At least for those who are empathetic/social/altruistic, or those who are in search of moral meaning in their lives.)
I suspect that this is why rationality doesn't correlate with making people happier. It's easier to be happy if your goal is to do alright in life and not be an asshole. It gets harder if your goal is to help fix this whole mess that includes wild animals suffering and worries about the fate of the galaxy.
Arguably, people are being quite rational, on an intuitive level, when they can't tell you what their precise consequentialist goal is. They're satisficing, and it's all good for them, so why make things more complicated? A heretic could ask: why create a billion new ways in which they can fail to reach their goals? Maybe the best thing is to just never think about goals that are hard to reach. Edit: Just to be clear, I'm not saying people shouldn't have consequentialist goals; I'm just pointing out that the picture, as I understand it, is kind of messy.
Handling the inherent demandingness of consequentialist goals is a big challenge imo, for EAs themselves as well as for making the movement more broadly appealing. I have written some thoughts on this here.