Introduction
As a community builder, I sometimes get into conversations with EA-skeptics that aren't going to sway the person I'm talking to. The Tree of Questions is a tool I use to have more effective conversations and identify the crux faster. Much of this is inspired by Scott Alexander's "tower of assumptions" and Benjamin Todd's ideas of The Core of EA.
The Tree
- A trunk with core ideas almost all EAs accept - and without which you have to jump through some very specific hoops in order to agree with standard EA stances.
- Branches for different cause areas or ideas within EA, such as longtermism. If you reject the trunk, there’s no point debating branches.
All too often, I find people cutting off a branch of this tree and then believing they've cut down the entire thing. "I'm not an EA because I don't believe in x-risk" is an example. Deciding what assumptions you have to accept in order to be on those branches is a job for people more knowledgeable about the philosophy behind them. What I present here are the questions I ask to find out whether someone can even get up the trunk - if they can't, it's meaningless to help them reach for the branches.
This post is focused on the kinds of conversations where there is some cost to debating. It could be the social cost of yapping too much at a family dinner, it could be the risk of seeming pushy to a friend who's skeptical, or just that you're tabling and you could be talking to someone else. That's why I've listed a few points that I think aren't worth the time/effort to argue against, if someone raises them as objections to the trunk of the EA tree. I'll also list some bad counterarguments that you should practice countering. These are all real examples from my experience in EA Lund (Sweden), so I'm interested to hear from you in the comments if your mileage varies.
Altruism
The first part of the trunk is to ask "do you care about helping others?" These are actually the first words I say to people when tabling, and I think it's important to frame it in this very normal, easy-to-grasp way. I've heard people talk about EA as maximizing or optimizing, but that framing is much less attractive and often carries negative connotations.
Concede:
- Narrow moral circles. Some only want to help their family, city, or religious group. One person even answered "this is college man, I'm just here to party!"
- Self-reliance/non-interventionism. This could be based on the empirical claim that intervening makes things worse, or on the moral claim that it's valuable for people to help themselves. You can get away with one follow-up question here if you find their argument particularly unsound, but I haven't found people convinced even when I can show that it's a bad policy.
Debate briefly:
- "Altruism is self-serving/virtue signalling." This is a non-sequitur; I asked if you wanted to help others, not why people want to help others.
- "Giving to one means I have to give to everyone." This is a classic slippery slope, and I trust you to convince them that it's ok to only do what is feasible for you.
Effectiveness
The second question is "is helping a lot better than helping a little?" Saying no to this means effectiveness isn't interesting, and (most likely) neither is EA. I rarely ask this directly because it begs the question of what "a lot" means, but I do give examples of differences in cost-effectiveness and gauge their reaction.
Concede:
- "I’m content as long as I do some good every now and then." I think this one is especially important to be respectful about, so you don't come across as pushy. I'm also afraid that many people are already put off by EA seeming demanding, and that fear makes me extra unwilling to argue against this objection.
- Negative vs. positive obligations. Some consider it more important that their own CO2 emissions are low than that global emissions are. The focus is on them not doing harm, rather than on no harm being done - contrary to what most EAs believe.
Debate briefly:
- Uncertainty about others' preferences. While true in one sense, you know for sure that no one wants their child to die from malaria, to be tortured, or to see our species go extinct. This is the level of problems EA operates on, so they might still be interested.
- Worries about burnout from trying too hard. This is the flip side of EA, as a community, seeming demanding to some. You can make big wins here by saying clearly that we'd happily help them avoid trying too hard while still doing something. You can refer to research showing that do-gooders are happier, if they're amenable to that.
Comparability
The third question is "can we quantitatively compare how much good different actions do?" This is often snuck in with the Effectiveness question, because a comparison has already been made when we compare a lot to a little. However, I find it important to be attentive to when someone's turned off by the idea of quantifying altruism.
Concede:
- "I don’t want to use imperfect metrics." QALYs are imperfect, and so are many similar metrics we use to measure our impact. We miss second-order effects which might dominate (e.g. the Meat-Eater Problem), and there can be errors in how they're determined empirically. This is an important conversation to have within EA, but I don't think it's a good first EA conversation for someone who might join. I just say something like "Absolutely - they're imperfect, but the best tools available for now. You're welcome to join one of our meetings where we chat about this type of consideration."
- Anti-prioritarianism. You could claim that it's wrong of me to only give one of my children a banana, even if that's the only child who's hungry. Some would say I should always split that banana in half, for egalitarian reasons. This is in stark contrast to EA and hard to rebut respectfully with rigor.
Institutional Trust
To embrace EA, you need to believe that at least some of its flagship organizations and leaders—80,000 Hours, Will MacAskill, Giving What We Can, etc.—are both well-intentioned and capable. Importantly, many skeptics leap straight to this “top of the trunk,” accusing EA groups of corruption or undue influence (e.g., “Open Philanthropy takes dirty billionaire money”).
While those concerns deserve a thoughtful debate, they should come after someone already agrees that (i) helping strangers matters, (ii) helping a lot is better than helping a little, and (iii) we can meaningfully compare different interventions. In other words, don't let institutional distrust be the very first deal-breaker - work your way up the trunk before you reach for the branches.
Further Discussions
There are more points central to the thought patterns in EA - expected value, longtermism, sentience considerations, population ethics - but they're not as integral to EA as the ones above. If someone rejects one of them and claims that it's why they reject EA, I'd say they've only sawed off a cluster of branches.
For a more realistic example, I talked to one person who said they'd split their resources between homelessness in their own city and homelessness in Rwanda, because they found it unfair not to divide them. They're not doing the most good, because they consider dividing their resources the more ethical choice.
When I say I'd give everything to Rwanda, I'm answering "what does the most good?" and not "what's the most fair/just?" Being clear about which of those questions someone is answering makes it much easier to discuss egalitarianism/prioritarianism with laymen in their own terms, and I'll keep that distinction in mind the next time this objection comes up.