There are two pieces of common effective altruist thinking that I think are in tension with each other, but for which a more sophisticated version of a similar view makes sense and dissolves that tension. In my experience, this means people can see others holding the more sophisticated view and end up adopting the simple one themselves, without really examining either (and so without discovering the tension).
This proposed tension is between two statements/beliefs. The first is the common (and core!) community belief that the impact of different interventions is power-law distributed, meaning the very best intervention is several times more impactful than even the almost-best ones. The second is a statement or belief along the lines of "I am so glad someone has done so much work thinking about which areas/interventions would have the most impact, as that means my task of choosing among them is easier", or the extreme version, which continues "as that means I don't have to think hard about choosing among them." I will refer to this as the uniform belief.
Now, there are on the face of it many things to take issue with in how I phrased the uniform belief[1], but I want to deal with two: 1) I think the uniform belief is a fairly common thing to "casually" believe, one that is easy to form automatically after cursorily engaging with EA topics, and 2) it goes strictly against the belief that impact is power-law distributed.
On a psychological level, I think people come to hold the uniform belief when they fail to adequately reflect on and internalise that interventions are power-law distributed; once they do, the tension between the power-law belief and the uniform belief becomes clear. If the power law (or simply a right-skewed distribution) holds, then even among the interventions and cause areas already identified, the true impacts might be very different from each other. We just don't know which ones have the highest impact.
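To make that spread concrete, here is a minimal simulation of what power-law-distributed impacts across ten already-identified interventions could look like. This is my own illustration, not an empirical estimate; the Pareto shape parameter is an arbitrary assumption.

```python
# Minimal sketch: draw hypothetical "true impacts" for ten already-identified
# interventions from a Pareto (power-law) distribution and compare the spread.
# The shape parameter alpha is an illustrative assumption, not an estimate.
import random

random.seed(0)
alpha = 1.2  # smaller alpha means a heavier tail
impacts = sorted((random.paretovariate(alpha) for _ in range(10)), reverse=True)

best, runner_up, median = impacts[0], impacts[1], impacts[len(impacts) // 2]
print(f"best / runner-up: {best / runner_up:.1f}x")
print(f"best / median:    {best / median:.1f}x")
```

Every draw here comes from the same distribution, so ex ante the ten interventions look identical; only after the fact does one dominate. That is the sense in which we don't know which ones have the highest impact.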
Holding the uniform belief is a trap that people who don't reflect too heavily can fall into. I know because I was in it myself for a while, making statements like "Can't go wrong with choosing among the EA-recommended topics". Now I think you can go wrong in choosing among them, and in many different ways. To be clear, I don't think many people stay in this trap for long; EA has good social mechanisms for correcting others' beliefs[2], and I would think it is often caught early. But it is the kind of thing I am afraid new or casual EAs might come away permanently believing: that someone else has already done all of the work of figuring out which interventions are the best.
The more sophisticated view, which I think is correct, is that because no one knows ex ante the "true" impact of an intervention, or the total positive consequences of work in an area, you personally cannot know, before you do the difficult work of figuring out what you think, which of the interventions you will end up thinking is the most important one. So at first blush, at first encounter with the 80k problem profiles or whatever, it is fine to think that all the areas have equal expected impact[3]. You probably won't come in thinking this, because you have some prior knowledge, but it would be fine if you did. What is not fine is to (automatically, unconsciously) go on to directly choose a career path among them without figuring out what you think is important, what the evidence for each problem area is, and which area you would be a good personal fit for.
So newcomers see that EA has several problem areas and a wide selection of possible interventions, and can come away thinking "any of these are high impact", when the more correct view, taking into account the power-law distribution, would be more like "any of these could be the most impactful intervention, but we don't know which one yet. After reflecting on myself and the evidence, I think problem area X is likely to be the most impactful or most important."[4]
There is no one who has done your hard cognitive work for you. You still have to think about which things you think will lead to high impact, and which things you are a good personal fit for.
Thanks to Sam and Conor for feedback.
I’d be interested to hear if you think I’m overstating how common this trap might be.
For example, issues regarding deferring, personal fit, and probably more. ↩︎
Now there's an ominous sentence if I've ever seen one. ↩︎
You can of course have meta-beliefs about your expected posterior beliefs about the distribution of impact (that it will be power-law distributed), but not about the position of any single intervention/cause area in that distribution. ↩︎
Yes, I am sneaking in a transformation here from "this area/intervention is the most impactful" to "I can do my most impactful work in this area/intervention", but I don't think that changes anything substantial. ↩︎
I love the point about the dangers of "can't go wrong" style reasoning. I think we're used to giving advice like this to friends when they're stressing out about a relatively low-stakes choice, like "which of these all-pretty-decent jobs [in not-super-high-impact areas] should I take?" It's true that for the person getting the advice, all the jobs would probably be fine, but the intuition doesn't carry over when the stakes for others are very high. Impact is likely so heavy-tailed that even if you're doing a job at the 99th percentile of your options, it's probably (?) orders of magnitude worse than the 99.9th percentile, meaning you're giving up more than 90% of your impact.
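For what it's worth, that percentile claim checks out under a simple model. Here is a quick sketch; the Pareto distribution and the shape parameters are my assumptions, not anything measured.

```python
# Analytic quantiles of a Pareto(alpha) distribution with minimum value 1,
# used only as a stand-in model for heavy-tailed impact.
def pareto_quantile(q: float, alpha: float) -> float:
    return (1 - q) ** (-1 / alpha)

for alpha in (1.0, 1.5, 2.0):
    ratio = pareto_quantile(0.999, alpha) / pareto_quantile(0.99, alpha)
    print(f"alpha={alpha}: 99.9th / 99th percentile = {ratio:.1f}x")
# alpha=1.0 -> 10.0x, alpha=1.5 -> 4.6x, alpha=2.0 -> 3.2x
```

At alpha = 1.0 the 99th-percentile option captures only about a tenth of the 99.9th-percentile option's impact, matching the "giving up more than 90%" figure; lighter tails shrink the gap, so how much you lose really does depend on how heavy the tail is.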
A corollary is that different roles and projects within each cause area are also likely to be heavy-tailed, and once again, I hear the advice of "can't go wrong" in pretty inappropriate contexts. Picking the second-best option likely means giving up most of your impact, which is measured in expected lives saved. You can definitely go wrong!
Now, we all have limited cognition, and for these kinds of choices we ultimately have to decide (doing nothing is also a choice), we'll inevitably make mistakes, and we should treat ourselves with some compassion. But maybe we should reframe comments like "you can't go wrong" as something more like "sounds like you have some really exciting options and difficult choices ahead" and, if you have the bandwidth to actually do this, "let me know if I can help you think them through!"