
I’d like to raise an aspect of effective altruism that I personally find difficult. My hope is that this will promote discussion and help others who feel similarly.

I want to preface this by saying I am incredibly glad that I got involved in the effective altruism community, and that I think overall my life is a lot better because of it. I feel like I have a much greater sense of purpose and belief that I can do good in the world, I have an incredible network of friends with similar values and interests, and a great deal more ambition. Plus I’m actually doing more to make the world better, of course.

But there’s also something about EA that causes me a distress I didn’t have before.

The whole point of effective altruism is that we don’t just want to do good; we want to do the most good possible. But figuring out the “best possible” thing for any person to do is incredibly difficult. And a strong desire to do the best possible thing coupled with a huge amount of uncertainty is a recipe for dissatisfaction.

The first part of the problem is that it’s easy to put a lot of pressure on yourself as an effective altruist.

I personally spend a lot of time thinking about whether what I’m doing right now really is the best thing, or whether I could be doing something better. And whilst some doubt and scepticism is definitely useful, I can’t help but think I’m doubting myself more than is useful. It seems quite likely I would be happier and more effective in what I’m doing if I doubted myself less. At the very least, I’d simply be doing more things. I spend far too much of my time in analysis paralysis.

To give a concrete example: I’ve just finished my first year of a PhD program. And I’ve spent a good proportion of that first year agonising with myself over the question: “What should my thesis topic be?”. Every time someone suggested a topic or I thought about narrowing down on an area a little voice in my head would go “But what if that’s not the best thing for you to research? What if this other thing was better?”. You can see how this kind of thinking might start to drive you crazy, and how it might also lead me to feel quite unmotivated to work on my PhD.

Part two of the problem is that it’s also easy to end up putting a lot of pressure on each other.

One of the key parts of effective altruism is that we don’t take claims of do-gooding at face value: just because something sounds like it does good, or produces a warm glow, doesn’t mean that it’s actually doing good. So every time we hear about some kind of altruistic endeavour, we automatically think, “But is that really effective?”. This also happens when we talk to each other about our plans. When I’m telling another effective altruist about what I’m working on or what my long-term plan is, there’s often that niggling voice in the back of my head going, “I wonder if they’re thinking this isn’t very effective and judging me....”. This can also make us appear unwelcoming to people who are new to the ideas of effective altruism. Unfortunately scepticism and warmth just don’t seem to be that well correlated.

I don’t think I’m alone in worrying about this - I’ve spoken to a number of other effective altruists who experience something similar. It seems like quite a few people worry about not being “effective enough” or “not doing enough good” more than is helpful. And I think this is actually a pretty big problem. It may be stopping people from making as much difference as they could. It may also be preventing us from being as warm and encouraging towards each other as we could be, especially towards people new to the ideas of effective altruism.

So what can we do about this?

The crux of the problem seems to be that it’s difficult to be sceptical and supportive: both of other people, and of ourselves. But I don’t think it’s an inevitable consequence of being sceptical that we’ll end up putting pressure on ourselves and others.

I think one thing that’s useful in itself is just for people to admit that they feel this pressure sometimes. Hearing that other people feel the way I do - that other people worry about whether they’re doing the best thing, and worry about what others think of them - helped me to realise that it wasn’t just me, and accept that what I was feeling was natural. That’s a big part of the reason I’m writing this post: so people who feel this way, even a little bit, can realise that it may be a pretty common feeling. I’d be really interested to hear the extent to which people can identify with what I’ve described, to know how common it is. Obviously if you don’t feel this way, that’s great - maybe you have some great strategies or ways of thinking that those of us who do can use!

The next step is for us to just make sure we are being kind and encouraging - both towards each other and ourselves. We could also reward this behaviour so it becomes more of a community norm. I find actually trying to change my patterns of thought really useful - it’s so easy to get into the habit of thinking hypercritically every time you hear a claim. Now, when I hear a new idea from someone, I try to think about what its virtues might be instead of, or as well as, thinking of ways it might be improved (note: improved, not wrong). Mental habits are relatively malleable if you practice.

To conclude:

Being kind and being sceptical aren't mutually exclusive, but it's easy to end up feeling like they are. When we think of incredibly warm and kind people, they tend to be the kind of people who will say any idea is wonderful. Of course, this is exactly what effective altruism is trying to oppose: we don't want to lose our habits of rigorously examining arguments and evidence. But just as important is ensuring we don't go too far in the other direction; that we don't become cold towards each other and ourselves. If we can't suspend our scepticism for long enough to make decisions, or listen to someone else's plans without immediately writing them off, then we're unlikely to get very far.


Thanks for this post, Jess! I agree that these are important issues.

Personally, for uncertainty and decision anxiety, there are a couple of ideas that actually do help me stop that cycle when I think of them:

1. Hard decisions are (usually) the least important - "Do I want to eat or do I want to slam my head against that wall?" is an easy decision because there is a big difference between the options. "What should I order off this menu?" is a harder decision because the options have almost identical expected payoffs (eating delicious food - anything on the menu should be good), and therefore it doesn't really matter what you decide.

2. Do I need to optimize this decision? (Answer: probably not.) - Recognizing that trying to optimize most of your decisions means you wouldn't ever actually get anything done, and so deciding to be explicitly okay with just satisficing on most of them. When I catch myself putting too much effort into a decision that I haven't explicitly decided to optimize, I'll say out loud something like "This doesn't actually really matter", and that tends to help me make a "good enough" decision and move on.

3. I will be happier once I've made my decision and locked it in - studies have shown that if you have the ability to change your decision, you will be less satisfied with it than if you were locked in to it. Making decisions is also stressful, so leaving them open is going to make you less happy. So picking a dress at a store that doesn't do returns is better (by which I mean you will both like the dress more and be generally happier) than buying a dress with the idea that you can come back and exchange it for another one later, which in turn is better than buying both dresses with the thought that you will later return the one you like less.

Hard decisions are (usually) the least important

A decision can be hard because the possible outcomes are finely balanced in expected payoff, or because you are quite lacking in knowledge about the possible outcomes and/or their likelihood. If it's the latter, then it can be hard and matter a lot! For effective altruists there can be a bit of both. "Should I buy this pen or this other one? A better pen might help me write more effectively!" is probably the former, but "What career should I choose?" is probably the latter.

Plus, the latter kind of decision holds the promise of high value of information. If only you devote a bit more time to thinking about it or researching, you might improve your estimates a lot (or not). So that's another incentive to worry about and delay such a decision.

I totally agree, Michael!

There are also decisions that are hard and important, where you don't have enough information AND the cost of getting more information is too high. It especially helps to try this thought experiment: if I tried to optimize every decision of a similar level of importance to this one, how much would I actually accomplish?

Even for career decisions, once you've narrowed it down to a handful that meet your criteria, there needs to come a time when you just pick one and run with it. Especially considering that a lot of the information that is very important is also very hard to get (it's hard to know how good a fit you are for a job until you've actually done it for a while).

Great points Erica, thanks! I've been using very similar ways of thinking recently, actually, and it's helped a lot.

One thing I've found, though, is that it's easy to reflectively know that all of these points are true, but still not believe them on an emotional level, and so still find it difficult to make decisions. I think the main thing that's helped me here is just time and persistence - I'm gradually coming to believe these things on a more gut level the more times I do just make a decision, even though I'm not certain, and it turns out ok. I think this is a classic situation where your system 2 can believe something, but your system 1 needs repeated experience of the thing actually happening - decisions you're not certain of turning out ok, in this case - to really internalise it.

Great post Jess! I agree, it's so important that we give each other positive feedback as well as criticism (constructive or otherwise). I suspect sometimes we feel it's tougher, stronger or somehow virtuous not to need support, but for many people the need or desire for peer approval and support is a real one.

Thanks Dette :)

I suspect sometimes we feel it's tougher, stronger or somehow virtuous not to need support

Yeah, agree. I think the solution to this is just for more people to stand up and admit they need support, and for us to reward those people for doing so, so that it becomes more socially acceptable. This can be hard to do though, of course. But it's easy to forget that everyone is trying to project their most confident image, and that we may not always be as confident as we try to project!

Is hiding emotional struggle to appear stronger considered more valuable in effective altruist communities? I hope not. I mean, it's valued in all sorts of communities, so I understand if by some wacky process it became an implicit norm without anybody noticing. However, I don't believe it's conducive to what Jess' original post is getting across.

Not letting our emotions sway our decisions about, e.g., cost-effectiveness is one thing. However, 'supportive skepticism' is on the other side of being an effective altruist: personal motivation. If people feel mentally paralyzed, then the effective thing to do for the heart is to be more supportive of each other. Being ashamed of some self-doubt shouldn't be normalized in this community. I would find it odd to discover an effective altruist who never reconsidered their decisions.

I'm going to normalize being open with each other by expressing my need for support, and praising others for sharing.

Thanks for writing this, Jess!

One approach for dealing with decision paralysis that I've found helpful is to proceed by making a series of concrete pairwise comparisons rather than by trying to compare several different alternatives simultaneously. The goal is to first identify an option that one is at least minimally satisfied with, and then compare that default to some alternative. If, and only if, one concludes that the alternative is superior to the original, this alternative becomes the new default against which future alternatives are compared. One then repeats this process as many times as necessary to deal with the options one hasn't yet evaluated.

(I'm not claiming any originality here; the approach seems so obvious that there's probably a name for it. Yet it is often overlooked, so it seemed worth mentioning.)
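For concreteness, the procedure described above can be sketched as a simple "running default" loop. This is just an illustration, not anything from the original comment: the function and parameter names (`pick_by_pairwise_comparison`, `prefer`) are invented, and `prefer(a, b)` stands in for whatever judgment process you use to decide that an alternative clearly beats the current default.

```python
def pick_by_pairwise_comparison(options, prefer):
    """Sequentially compare options pairwise, keeping a running 'default'.

    `options` is a list of candidates; `prefer(a, b)` should return True
    only when alternative b is judged clearly better than the current
    default a. (Both names are hypothetical, for illustration.)
    """
    if not options:
        raise ValueError("need at least one option")
    default = options[0]  # start with an option you're minimally satisfied with
    for alternative in options[1:]:
        if prefer(default, alternative):  # switch only on a clear win
            default = alternative
    return default

# Toy usage: "prefer the shorter task name", just to make the loop concrete.
best = pick_by_pairwise_comparison(
    ["write thesis chapter", "email supervisor", "plan study"],
    prefer=lambda a, b: len(b) < len(a))
print(best)  # → "plan study"
```

The appeal of this structure is that you never hold more than two options in mind at once, which is exactly what makes simultaneous comparison of many alternatives so paralysing.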

Yeah, agree that this is a simple but useful idea!

One concern I would have with this in some situations is that it might cause you to anchor on your initial option too much - you might miss some good alternatives because you're just looking for things that most easily come to mind as comparable to your first option. But I don't know how often this would actually be a problem.

I've come across a development of this technique in a management decision-making course. It's known as even swaps, and it's helpful for choosing among hard-to-compare options. However, Jess' comment below correctly picks up on one of the downsides of such an approach: our pairing of choices may not have a neutral effect on the process.

One factor at play here is that the “best possible” thing to do isn't just incredibly difficult to figure out, as you rightly say, but also an extremely (indeed impossibly) high standard. It's human nature to have many different motivations, drives and values (in a broad, loose sense of 'values') besides purely altruistic ones. Few if any EAs I know are motivated purely by altruism, and I certainly don't judge them for this or want them to feel bad about it - apart from anything else, that wouldn't help anyone :)

I expect pretending otherwise would place counter-productive pressure on people. Of course, when there are valuable actions that people can viably take, it can be good to nudge them towards these, but there's a balance to strike here, and it does no good to push people too far or too hard. But as you say, there's definitely a way to do this that's nice, gentle and focused on concrete improvements - after all, we're all working together towards shared ends. The attitude that common-sense practice and morality would recommend taking here plausibly embodies some wisdom, informed by an understanding of human nature built up over the centuries.

Great post Jess!

It seems to me as if promoting effective altruism as doing the 'most good possible' exists for public relations purposes. For example, if effective altruism had a slogan that claimed less than doing the most good - only, like, 'pretty good' - then other altruistic endeavours could just state that they're just as pretty good, or slightly better, and then effective altruism loses its footing. I concur that for most individuals, doing literally the most good possible won't happen.

I'd re-frame our personal goal as at least 'choosing the best option of those identified, given resources, available attention, time constraints, and not straining ourselves too much'. That's not a motto Peter Singer can quip at the end of a lecture, but it's something effective altruists can keep in mind once they're on board with effective altruism.

Excellent post. I think that the importance of kindness and generosity is often underestimated. In most communities, movements or workplaces where you are working together towards a shared goal, the interpersonal atmosphere is only discussed if there are serious problems (e.g. open conflicts or hostility). In the absence of that, leaders and others won't bother too much with how people behave towards each other. However, my hunch is that the positive effects of a better atmosphere do not stop at the point where there is no longer open hostility. On the contrary, movements where people are kind and encouraging in the way you suggest we should be are, I would guess, more effective than those where there is merely an absence of open hostility.

The following quote from The Economist sheds some light on these matters:

Condor works by sifting through data from Twitter, Facebook and other social media, and using them to predict how a public protest will evolve. It does so by performing what Dr Gloor calls “sentiment analysis” on the data.

Sentiment analysis first classifies protesters by their clout. An influential Twitter user, for instance, is one who has many followers but follows few people himself. His tweets are typically upbeat (containing words or phrases such as “great”, “fun”, “funny”, “good time”, “hilarious movie”, “you'll love” and so forth), are rapidly retweeted, and appear to sway others. In a nod to the methods developed by Google, Dr Gloor refers to this process as “PageRanking for people”.

Having thus ranked protesters, Condor then follows those at the top of the list to see how their output changes. Dr Gloor has found that, in Western countries at least, non-violent protest movements begin to burn out when the upbeat tweets turn negative, with “not”, “never”, “lame”, “I hate”, “idiot” and so on becoming more frequent. Abundant complaints about idiots in the government or in an ideologically opposed group are a good signal of a movement's decline. Complaints about idiots in one's own movement or such infelicities as the theft of beer by a fellow demonstrator suggest the whole thing is almost over.

Perhaps we could let Condor analyze this forum, to see if we use a sufficient number of upbeat phrases...

Nice quote, and very relevant - thanks for sharing! A general worry is that EA is often framed as inherently critical - as being sceptical of typical ways of doing good, as debunking and criticising ineffective attempts at altruism etc. - and this will mean we naturally end up using a lot of negative words.

I think there's some evidence that being critical outside of a group can make people within the group feel closer to each other - which makes sense, because it strengthens the feeling of "us" versus "them." But doing this with EA seems actively harmful, both because we want to attract as many people to be part of the "group" as possible, and because it's unclear exactly where the line of the "group" lies, so we inevitably end up being critical of each other too.

Important follow-up to this post: Supportive scepticism in practice

I wonder whether another reason effective altruists might be particularly prone to these kinds of worries is the fact that it’s sufficiently broad that it isn’t even clear what you should be trying to optimise (donations? time working?), or optimise for (helping animals? helping those in the developing world?). That seems like it could be a problem both for feeling judged by others, and for giving yourself a hard time. An example of the former: I have a lot of effective altruists come and stay at my house, and I feel like very often I find myself worrying about what I should cook, and what they’ll think of the house – will they judge me for having meat in the house? Or if it seemed like I had expensive meat substitutes? Or if I had vegan foods that would take a long time to cook (when I could be working)? (Of course, this is mostly in my head – I haven’t had anyone actually complain about any of these.) Likewise, when wondering what you should do, it feels as if it would be easier if there was a narrower range within which to optimise (for example, it feels much clearer to optimise for being a PhD student, because there are more specific aims, and also a clearly limited sphere).

I don’t know how we might be able to counteract this. One thing might be to just to try to be extra accepting of the fact that ‘doing good’ is going to look pretty different for different people, and try not to give ourselves or each other a hard time for that being the case.

It seems there are some common states where this comes up, such as when one person is doing a thing which they think is good, given personal constraints which are hidden to their conversation partner, and worries that they are harshly judged because the constraints are hidden. Or where one person is trying out a thing, because they think it might be very good, however they don't already think it is very good (except for VOI), and worry that others think they are actually advocating for something suboptimal. Or where one person doesn't think what they are doing is likely to be optimal, but struggles to find something actually better that they could feasibly do.

Perhaps it would be helpful if there was a thing you could say in these recognized circumstances to let your conversation partner know that you know that what you are doing doesn't look optimal, and you are already aware of the situation.

I personally spend a lot of time thinking about whether what I’m doing right now really is the best thing, or whether I could be doing something better.

I think there's an inherent problem that if you want to do the best possible thing, you'll never know that you've succeeded (you'll often know that you didn't), and therefore never get that satisfaction.

People earning to give seem to deal with this by setting goal amounts or income percentages. If they reach or exceed their goal, they've succeeded, even if they had some dollars that they could have donated but didn't.

This kind of trick is harder to use for something like choosing a thesis topic, but you could try something like it. You could have a goal that your thesis should "substantially further the goals of effective altruism" or maybe more usefully that it should relate to some aspect of EA like "international development" or "human truthseeking" that seems useful to investigate further. You can consider candidate topics that fit your goal and choose freely using any criteria you want, whether they include "looks higher value" or "looks like more fun," and as long as your topic meets your original goal you can count yourself as having succeeded.

But yeah, a supportive community also seems really important (-:

I think one thing that’s useful in itself is just for people to admit that they feel this pressure sometimes. Hearing that other people feel the way I do - that other people worry about whether they’re doing the best thing, and worry about what others think of them - helped me to realise that it wasn’t just me, and accept that what I was feeling was natural.

This is super important, but I think there's actually another failure mode to worry about: at some point, people in my communities started talking more openly about feeling pressure and burn-out. This was helpful. But then, when a bunch of people burned out at once, the result was that several people I knew were frequently very negative (myself included), and this created a cycle of despair.

My tentative solution is that if a community or organization seems at high risk of that, people should take vacations and/or change their environment more, so that the negative feelings don't all get concentrated in one place.

To give a concrete example: I’ve just finished my first year of a PhD program. And I’ve spent a good proportion of that first year agonising with myself over the question: “What should my thesis topic be?”. Every time someone suggested a topic or I thought about narrowing down on an area a little voice in my head would go “But what if that’s not the best thing for you to research? What if this other thing was better?”

This sounds like the classic exploration/exploitation tradeoff, which occurs any time you're trying to optimize a system. Maybe we should apply some of the research in this area to our lives as EAs. (Although I'm not sure how practical that would be.)
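One standard formalisation of this tradeoff from the multi-armed-bandit literature is the epsilon-greedy rule: most of the time commit to the option that currently looks best, and occasionally sample something else. This is just an illustrative sketch, not anything proposed in the thread; the topic names and value estimates below are invented.

```python
import random

def epsilon_greedy(estimates, epsilon=0.1, rng=random):
    """With probability epsilon, explore a random option;
    otherwise exploit the option with the highest current estimate.

    `estimates` maps each option to its current estimated value.
    """
    if rng.random() < epsilon:
        return rng.choice(list(estimates))    # explore: try something new
    return max(estimates, key=estimates.get)  # exploit: best option so far

# Invented value estimates for three candidate thesis topics.
estimates = {"topic A": 0.3, "topic B": 0.7, "topic C": 0.5}
print(epsilon_greedy(estimates, epsilon=0.1))  # usually "topic B", occasionally a random topic
```

The analogy to the PhD-topic example is loose but suggestive: spend a bounded fraction of your time exploring alternatives, and commit to the current best the rest of the time, rather than re-litigating the whole choice continuously.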

I would recommend Stephen Guise's book "How to be an Imperfectionist": http://imperfectionistblog.com/2015/01/what-is-an-imperfectionist/
