
A familiar pattern: EA organizations promote charities that help people in the developing world. A critic accuses EA of forcing people to be rationality robots. EA defends the use of rationality in altruistic decisions. But both sides miss the point: it demonstrates at best a lack of imagination and at worst coldheartedness to think that only a rationality robot would believe that African lives matter. I'm guilty of this too: it reveals my own prejudices when I think about helping people in the developing world (or livestock) as “giving from the head”, rather than “giving from the heart”. Promoting EA will require changing values, not just making people more rational.

 

People are not malfunctioning utilitarian robots

 

Frequently, EA outreach starts from the implicit assumption that, deep down, people value all lives equally. In this narrative, the reason that people don't give to GiveWell-recommended charities is Kahneman-style irrationality. For example, supposedly people have biases such as scope neglect that prevent them from implementing their consequentialist values.

 

A typical EA example is the comparison between paying for a guide dog to help a blind person in the developed world versus curing many people of blindness in the developing world. To a utilitarian, choosing the former could only result from irrationality. But it's plausible that most people aren't utilitarians and don't care very much about people in the developing world. Even in surveys of philosophers, who would be expected to be more utilitarian than the general population, only a quarter are purely consequentialist.

 

Rationality alone probably won't lead to EA

 

Some people might argue that non-utilitarians will become utilitarian if they become more rational. This argument relies implicitly on a belief in moral convergence, which is difficult to defend if one rejects moral realism, as many EAs do. These are very complex debates, which I'll discuss more in a followup post, but the idea that EAs can be created through rationality training alone should be viewed with skepticism. (This is another reason I'm skeptical of the ability of CFAR and similar organizations to have a positive effect outside of some very specific populations.)

 

A comes before E

 

In short, people can't optimize for values that they don't have. For the majority of ordinary people, who don't share the egalitarian, utilitarian-ish values of EA, “the most good you can do” is meaningless. This means that we need to start by spreading our values, before talking about implementation. Though rationality exercises won't be useful for this, countless social movements have proven that it is possible to change people's values, typically by combining various types of emotional appeals. Research into the causes of changes in values will be extremely important for the future of EA.

 

Expanding the circle of compassion

 

Instead of “the most good you can do”, a better message for some audiences may be “expanding the circle of compassion”. The idea that human culture has become more enlightened by being compassionate to those different from ourselves is catchy, emotionally appealing, and tends to approximate utilitarianism in practice. It may be particularly suited to some audiences, such as religious organizations.

 

During the holiday season, it's nice to return to the compassionate roots of effective altruism. As Julia Wise says in this excellent post, there's no shame in giving from the heart.

Comments

Instead of “the most good you can do”, a better message for some audiences may be “expanding the circle of compassion”.

If anyone tests this, it'd be interesting if they reported back here, sharing how it goes.

Also, I think there are definitely some audiences for which expanding the circle of compassion is a better message to put forth than doing the most good. At this point, though, we're being vague: what are the different "audiences", and what determines who belongs to which one? I think testing which messages are better for which audiences would be more useful than testing the assumption that one message is better in general.

Wouldn't this be difficult to test? The goal of effective altruism outreach is to reach thousands of people and convince them to join the movement. So if someone posits that it's better to reach out with a message of expanding the circle of compassion rather than doing the most good, it seems we'll need to do prior reasoning about what we expect to work better. Will anyone really randomly assign activists to spread one message or the other, wait several years to see how convincing each was, track all sorts of hard-to-measure behaviors across hundreds of people, ensure those reached receive only one message and not the other even as they join effective altruism, and then reach a conclusion?

When it reaches the scale of building a social movement, it might be beyond the scope of science. In the meantime, others might think it imperative to try building the movement or changing people's values rather than waiting for test results to come back. I'm not saying it's impossible; it just seems so hard to test, about as hard as anything I can think of, that it might defeat the purpose. Vegan and human rights activists don't wait for tests, and that hasn't stopped them from being successful. It's possible they could have been more successful, but knowing there will be some success might be better to them than expecting maybe none or maybe more.

Psychologists run experiments at that sort of scale, but they do so in controlled environments. We won't have that privilege. Maybe I'm thinking of something too grand. Maybe you're thinking of small, short-term experiments where people are exposed to one message or the other and fill out a survey on how much it changed their impression of helping others. I doubt something like that would tell us anything interesting about long-term behavior change, though, which is the goal of effective altruism outreach.

I will caution people not to go the other way and be too emotional. It's true that EAs are not known for our emotional appeals, but charities in general are, and our ability not only to be different but to signal our difference from conventional appeals to charity is, I think, part of how we managed to grow so explosively in the first place.

Anecdatally, I'm much more successful with persuading people by being high-minded and rational and giving stylized facts than by directly appealing to their emotions.

Here are some things I've done that worked: https://www.facebook.com/groups/effective.altruists/permalink/963258230397201/?comment_id=963339963722361&comment_tracking=%7B%22tn%22%3A%22R6%22%7D

I very much agree that it is key to use emotional appeals in order to promote effective giving. I talk about this topic here and elsewhere. Here's one way to do so that I found effective.

The phrase "expanding the circle of compassion" might be nice one to use, and I agree with Tom about the benefit of test-marketing it. I suspect the Unitarian Universalist religious movement would be a good target audience for that concept, for example. So might Sunday Assemblies and various humanist groups.

I do want to caution about the benefit of separating the promotion of effective giving from Effective Altruism. Promoting the ideas of Effective Altruism to a broad audience is very worthwhile, but we should be wary of promoting the movement itself by using the phrase "expanding the circle of compassion." Doing so risks bringing newcomers into the movement who might not be value-aligned with it.

To prevent that, I suggest using the concept of "effective giving" when we do outreach to people who are not the typical head-oriented audience of EA. With them, we can use emotional appeals, content marketing strategies, and so on to promote EA ideas as opposed to the movement itself.

Thanks again for raising this point, Lila!

I agree completely that talking with people about values is the right way to go. Also, I don't think we need to try to convince them to be utilitarians or nearly utilitarian. Stressing that all people are equal and pointing to the terrible injustice of the current situation is already powerful, and those ideas aren't distinctively utilitarian.

I think the idea presented here is to get people to care about others without making them reject the idea through an appeal to rationality. The discussion of pure rationality probably not being the answer either is a good 'compromise.' (One can say that respect is paid to the people who are able to argue against it, or that the audience is pleasantly entertained.) This step could be skipped by expressing the presumption that people already want to care about others as best they can with their spare resources, and just haven't come across the latest materials on the topic.

Then the conversation can go like this: OK, so we are trying to do good here [presuming shared meaning but not explaining it]; excellent; well, who donates to charities? OK, has anyone done thorough research comparing charities and how they cooperate? OK, by what means? We, or some philanthropists, are paying a lot of money to do this research and identify the most impactful charities in terms of cost-effectiveness. You can view them here and here. Definitely recommended; these organizations are also cooperative, so if there is an opportunity to make a greater impact with donors' funding, they will go for it.

Wow, hehe, I have questions. [Note that the pitch distracts from the concept of care/feeling of responsibility by focusing on impact, but does not require people to understand utilitarianism.]

I would not suggest pitching 'expanding the circle of compassion' upfront, since it is not entertaining to people; it seems like work they should otherwise not need to do, and so they may be reluctant to implement some EA principles.

[anonymous]

If we buy this argument there's also an important question of how we sequence outreach. Typical startup advice for example, is to start in a small market of early-adopters that you can dominate and then use that dominance to access a wider market.

It seems that EA is an extremely powerful set of ideas for mathematically/philosophically inclined, well-off, ambitious people. It may be that we should continue to target this set of early adopters for the time being and then worry about widespread value change in the future.

Also, a premise in this post is that emotional appeals are better able to achieve value change. That might be true, but it also might be false. I certainly wouldn't accept it as a given.

Good points. I agree that EA's message is often framed in a way that can seem alienating to people who don't share all its assumptions. And I agree that the people who don't share all the assumptions are not necessarily being irrational.

Some people might argue that non-utilitarians will become utilitarian if they become more rational.

FWIW, I think there's indeed a trend. Teaching rationality can be kind of a dick move (only in a specific sense) because it forces you to think about consequentialist goals and opportunity costs, which is not necessarily good for your self-image if you're not able to look back on huge accomplishments or promising future prospects. As long as your self-image as a "morally good person" is tied to common-sense morality, you can do well by just not being an asshole to the people around you. And where common-sense morality is called into question, you can always rationalize as long as you're not yet being forced to look too closely. So people will say things like "I'm an emotional person" in order to be able to ignore all these arguments these "rationalists" are making, which usually end with "This is why you should change your life and donate". Or they adopt a self-image as someone who is "just not into that philosophy-stuff" and thus will just not bother to think about it anymore once the discussions get too far.

LW or EA discourse breaks down these alternatives. Once your brain spots the blatant attempts at rationalizing, it's too late: people are forced to either self-identify as (effective) altruists or not, or at least to state what percentage of their utility function corresponds to which. And self-identifying as someone who really doesn't care about people far away, as opposed to someone who still cares but "community comes first" and "money often doesn't reach its destination anyway" and "isn't it all so uncertain, and stop with these unrealistic thought experiments already!" and "why are these EAs so dogmatic?", is usually much harder. (At least for those who are empathetic/social/altruistic, or those who are in search of moral meaning in their lives.)

I suspect that this is why rationality doesn't correlate with making people happier. It's easier to be happy if your goal is to do alright in life and not be an asshole. It gets harder if your goal is to help fix this whole mess that includes wild animals suffering and worries about the fate of the galaxy.

Arguably, people are being quite rational, on an intuitive level, by not being able to tell you what their precise consequentialist goal is. They're satisficing, and it's all good for them, so why make things more complicated? A heretic could ask: Why create a billion new ways in which they can fail to reach their goals? – Maybe the best thing is to just never think about goals that are hard to reach. Edit: Just to be clear, I'm not saying people shouldn't have consequentialist goals, I'm just pointing out that the picture as I understand it is kind of messy.

Handling the inherent demandingness of consequentialist goals is a big challenge imo, for EAs themselves as well as for making the movement more broadly appealing. I have written some thoughts on this here.

A possible response to such people is to ask (or, better, elucidate) how much more they value other lives relative to the lives of those whom AMF or SCI save or help, and then to see whether donating to AMF or SCI still does more good according to their non-cosmopolitan values.

I think people wouldn't be honest about how much they value people in the developing world if asked directly. Instead, they would give euphemistic responses like "charity begins at home". To elucidate how much they value people in the developing world, we can look at how much the typical person donates to developing world charities or advocates for causes relevant to the developing world. Every result indicates that they value these people very little.

We could try to shame people for letting their stated values contradict their true values. But shaming is a risky strategy, and I'm not sure how effective it would be.

[anonymous]

(deleted)

[This comment is no longer endorsed by its author]

Not all of the developing world is in dire shape, but most charities recommended by EA organizations work in the developing world.

So far EA outreach hasn't been focused on people in the developing world. There are a number of reasons for this, both good and bad. But consider the fact that the richest 5% of Indians are poorer than the poorest 5% of Americans. I'd feel pretty uncomfortable asking the poorest 5% of Americans to donate to charity, so I suspect this is one reason that there haven't been widespread efforts to ask Indians to donate (though some EAs are Indian).

[anonymous]

Deleted.

[This comment is no longer endorsed by its author]

Okay, I think this account is going to rage-quit, but for any witnesses, I want it to be clear that this person is obviously being disingenuous and is purposely distorting what I'm saying. It should be clear that EA is not excluding people from the developing world. The main reason we haven't done more outreach there is probably inertia - few current EAs are natives of the developing world. One reason that there hasn't been a huge impetus to recruit in the developing world is the far lower average incomes. But EAs have no intention to exclude anyone.

Lila, thanks for handling this in such a mature manner :-)

[anonymous]

(deleted the comments)

"This means that we need to start by spreading our values, before talking about implementation."

I guess that can be summed up as putting the 'altruism' before the 'effective'.

I brought this up a few months ago, but it seems to be a recurring theme. A lot of people, myself included, seem to be drawing attention to the problem without offering a solid solution or recommendation. I'm wondering if it would be useful to start a small task force to dive into this a bit deeper. Is anyone doing anything like that?
