A familiar pattern: EA organizations promote charities that help people in the developing world. A critic accuses EA of forcing people to be rationality robots. EA defends the use of rationality in altruistic decisions. But both sides miss the point: it demonstrates at best a lack of imagination and at worst coldheartedness to think that only a rationality robot would believe that African lives matter. I'm guilty of this too: it reveals my own prejudices when I think about helping people in the developing world (or livestock) as “giving from the head”, rather than “giving from the heart”. Promoting EA will require changing values, not just making people more rational.
People are not malfunctioning utilitarian robots
Frequently, EA outreach starts from the implicit assumption that, deep down, people value all lives equally. In this narrative, the reason that people don't give to GiveWell-recommended charities is Kahneman-style irrationality: biases such as scope neglect supposedly prevent them from acting on their consequentialist values.
A typical EA example is the comparison between paying for a guide dog to help one blind person in the developed world and curing many people of blindness in the developing world. To a utilitarian, choosing the former could only result from irrationality. But it's plausible that most people aren't utilitarians and don't care very much about people in the developing world. Even in surveys of philosophers, who would be expected to be more utilitarian than the general population, only a quarter are purely consequentialist.
Rationality alone probably won't lead to EA
Some people might argue that non-utilitarians will become utilitarian if they become more rational. This argument relies implicitly on a belief in moral convergence, which is difficult to defend if one rejects moral realism, as many EAs do. These are very complex debates, which I'll discuss more in a followup post, but the idea that EAs can be created through rationality training alone should be viewed with skepticism. (This is another reason I'm skeptical of the ability of CFAR and similar organizations to have a positive effect outside of some very specific populations.)
A comes before E
In short, people can't optimize for values that they don't have. For the majority of ordinary people, who don't share the egalitarian, utilitarian-ish values of EA, “the most good you can do” is meaningless. This means that we need to start by spreading our values before talking about implementation. Rationality exercises won't be useful for this, but countless social movements have shown that it is possible to change people's values, typically through various kinds of emotional appeal. Research into what causes values to change will be extremely important for the future of EA.
Expanding the circle of compassion
Instead of “the most good you can do”, a better message for some audiences may be “expanding the circle of compassion”. The idea that human culture has become more enlightened by being compassionate to those different from ourselves is catchy, emotionally appealing, and tends to approximate utilitarianism in practice. It may be particularly suited to some audiences, such as religious organizations.
During the holiday season, it's nice to return to the compassionate roots of effective altruism. As Julia Wise says in this excellent post, there's no shame in giving from the heart.
I think the idea presented here is to get people to care about others without alienating them through an appeal to rationality. The discussion of why pure rationality probably isn't the answer either works as a nice compromise (it pays respect to those inclined to argue back, and keeps the audience engaged). This step could even be skipped by simply presuming that people already want to care about others as best they can with their spare resources, and just haven't come across the latest materials on the topic.
Then the conversation can go something like: OK, so we're trying to do good here [presuming shared meaning without explaining it], excellent. Who donates to charities? Has anyone done thorough research comparing them (cooperation as well as competition, since they're charities)? By what means? Well, we/some philanthropists are paying a lot of money for research that identifies the most impactful ones in terms of cost-effectiveness. You can view them here and here. Definitely recommended, and these organizations are also cooperative, so if there's an opportunity to make a greater impact with donors' funding, they will go for it.
Wow, hehe, I have questions. [Note that the pitch shifts attention from care or a feeling of responsibility to impact, but doesn't require people to understand utilitarianism.]
I would not suggest pitching “expanding the circle of compassion” upfront, since it doesn't entertain people so much as present them with work they wouldn't otherwise need to do, which may make them reluctant to adopt EA principles.