This is a rough explanation of relative ranges, a heuristic that I've found very helpful for quickly comparing two options that trade off between two dimensions. Consider the following examples of tradeoffs:
- Should we prioritize helping small animals or large animals? There are more small animals, but large animals have a higher capacity for suffering.
- Should we fund medical research on the most promising candidates across diseases, or should we focus only on the most important diseases? Broad search is more likely to lead to a successful treatment, but targeted search can lead to treatments for higher-burden diseases.
- If we fund a recurring health/consumption survey, should we fund it annually or quarterly? More rounds yield higher-frequency information, but at a higher cost.
You could answer these questions by carefully quantifying the value of each parameter – the actual population of different types of animals, the actual welfare ranges of each species, etc. But I think more often than not, that isn't necessary. The relative range heuristic is: prioritize options based on the dimension that varies across a wider range.
In the examples above, that would mean:
- Larger animals might have more capacity for suffering, but intuitively they might have 10x more or 100x more at most. Meanwhile, small animals are 1000x or 10000x more numerous than large animals. So the scale advantage of small animals is more important, and thus we should prioritize small animals.
- The best drug candidates across all diseases could have 10x higher chances of success than the best drug candidates for specific diseases, but the highest burden diseases have 1000x higher burden than the average disease. So the burden advantage of the most important diseases is more important, and thus we should prioritize research into those diseases only.
- Higher-frequency information is valuable, but a quarterly survey is 4x more expensive than an annual survey, and its information is probably not 4x more valuable. So the cost advantage of less frequent surveys is more important, and thus we should fund the survey annually rather than quarterly. (A short sketch after this list works through all three comparisons.)
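To make the arithmetic concrete, here is a minimal Python sketch of the heuristic applied to the three examples. Every figure is an intuitive guess taken from the bullets above, except the ~2x value of quarterly information, which is my own placeholder for "probably not 4x more valuable":

```python
# Relative range heuristic: prefer the option whose advantage spans the
# wider range. All figures below are rough intuitive guesses, not data.
tradeoffs = [
    # (description, option A's relative advantage, option B's relative advantage)
    ("small animals (A) vs. large animals (B)", 1_000, 100),  # numerosity vs. welfare capacity
    ("broad search (A) vs. targeted search (B)", 10, 1_000),  # success odds vs. disease burden
    ("annual (A) vs. quarterly (B) survey", 4, 2),            # cost savings vs. information value
]

for description, a_advantage, b_advantage in tradeoffs:
    winner = "A" if a_advantage > b_advantage else "B"
    print(f"{description}: prefer {winner} ({a_advantage}x vs {b_advantage}x)")
```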
The relative range heuristic can help you quickly make comparisons without access to data. It can also reveal when you should find data – when you don't have clear intuitions about which quantity varies more, or when someone else disagrees with your intuitions. It's also a way to make transparent why you think one option is better than another, when they are close but different.
Formalization
You shouldn't trust slippery arguments made by strangers on the internet, so let me formalize why the relative range heuristic works – and when it doesn't.
Formally, imagine the value of option A is $X_A \cdot Y_A \cdot c$ and the value of option B is $X_B \cdot Y_B \cdot c$. Here, $X$ is the criterion that A is better on, $Y$ is the criterion that B is better on, and $c$ is the aggregate of all other criteria that A and B are identical on.
Then A is better than B if and only if
$$X_A \cdot Y_A \cdot c > X_B \cdot Y_B \cdot c$$
Or in other words,
$$\frac{X_A}{X_B} > \frac{Y_B}{Y_A}$$
The left side is the relative advantage that A has on criterion X, and the right side is the relative advantage that B has on criterion Y. If you have specific numbers for these ratios, by all means use them! But the relative range heuristic plugs in the intuitive range of X into the left side, and the intuitive range of Y into the right side.
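As a sanity check, here is a minimal Python sketch (names and numbers are mine, purely illustrative) showing that the shared factor $c$ cancels, so the comparison reduces to the two ratios:

```python
def value(x: float, y: float, c: float) -> float:
    """Value of an option under the model value = X * Y * c."""
    return x * y * c

# Animal example: A (small animals) is ~1000x better on scale (X),
# B (large animals) is at most ~100x better on welfare capacity (Y).
x_a, x_b = 1_000.0, 1.0
y_a, y_b = 1.0, 100.0

for c in (0.1, 1.0, 42.0):  # any shared factor gives the same verdict
    assert (value(x_a, y_a, c) > value(x_b, y_b, c)) == (x_a / x_b > y_b / y_a)
print("A beats B:", x_a / x_b > y_b / y_a)  # True: 1000 > 100
```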
You can theoretically use this heuristic when two options vary on more than two dimensions. Imagine that A is better on the X and Z dimensions, while B is better on the Y dimension. Then the comparison is now
$$\frac{X_A \cdot Z_A}{X_B \cdot Z_B} > \frac{Y_B}{Y_A}$$
So the relative range heuristic now says that A is better than B if $X \cdot Z$ varies across a wider range than $Y$. But this is rarely clean to intuit, so I wouldn't bother with it.
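If you did want to mechanize the multi-dimensional version anyway, it is a one-line extension (a sketch with arbitrary illustrative ratios):

```python
from math import prod

a_advantages = [5.0, 3.0]  # A's relative advantages: X_A/X_B and Z_A/Z_B
b_advantages = [10.0]      # B's relative advantage: Y_B/Y_A
print(prod(a_advantages) > prod(b_advantages))  # True: 15 > 10, so A wins
```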
There are more probabilities in heaven and earth than are dreamt of in your philosophy
I think a lot of arguments in favor of low-probability but high-impact interventions are best understood through the relative range heuristic. When someone argues for an intervention with low probability but high impact, I think their implicit argument is "interventions vary by 10x in their probability of impact, but by 100x or 1000x in their magnitude of impact. Thus, we should prioritize based on magnitude of impact."
This could often be true. But I worry that it exploits a cognitive bias: we are very bad at imagining very low probabilities. "X is very unlikely to occur" could mean a 1% chance, a 0.1% chance, or a 0.000000001% chance – but all of these get blurred together in our heads. As a result, we can't really imagine probabilities varying by 1000x or 10000x. In contrast, the world happily tosses very large ranges of magnitude at us: compare the range of variation in GDP, in the populations of different species, or in the burdens of diseases. This explains why our estimates of expected value are dominated by high-magnitude options rather than high-probability options.
Whether you think this is a problem depends on whether you think probabilities really do vary that widely in real-world decisions. If they do vary widely, but our minds distort that effect, then we should be much more careful about range distortion when evaluating low-probability, high-impact opportunities. If real probabilities are truly confined to a small range, then the arguments are reasonable.
I think it's a useful heuristic and have upvoted it, though I have a couple of criticisms. The main one is that I'm concerned that EA has a heuristics addiction (ITN; extinction risk treated as though it were the only concern in existential risk; the prioritisation of Ivy League and other prestigious universities; the orthogonality thesis as an argument for the likelihood of bad AI outcomes; formulaic approaches to events; generally relying on single research projects by a small team or individual to settle a difficult question). So while having useful tools is good in theory, in practice I worry that there's little middle ground between 'ignored entirely' and 'sees widespread adoption that encourages more lazy thinking'. I'm not sure what to do about this.
The other criticism is an object level application of this concern:
I think there's huge uncertainty in both quantities here:
> Larger animals might have more capacity for suffering, but intuitively they might have 10x more or 100x more at most. Meanwhile, small animals are 1000x or 10000x more numerous than large animals.
So if we understand 'should prioritise' as 'should plan to do substantial research and not treat this as a settled question until we have way more data on such concerns, but start with smaller animals and perhaps lean towards small-animal-favouring decisions in a personal capacity', then this seems reasonable. But if this is meant to be a serious justification for anything more substantial, then it seems like an example of overapplying the research efforts of a small team – whose work I like, to be clear, but which is nowhere near the last word on the subject.
Great post, Karthik! Strongly upvoted.
I think scope neglect with respect to the variation of small factors (like probabilities) results from these very often being subjective guesses. In contrast, the methods used to assess large factors are very often scope sensitive. For example, longtermists typically come up with huge amounts of potential benefits (e.g. 10^50 QALY) based on the physical properties of the universe, but then independently guess an increase in the probability of the potential benefits materialising which is only moderately small (e.g. 10^-10), which results in huge expected benefits (e.g. 10^40 QALY). I think this does not work because the small factors are not independent from the large factors. In particular, I believe the small factors get smaller as the large factors get larger. Solving half of the problem is harder (more costly) for larger problems.
One can see that the expected benefits coming from large benefits may be negligible by modelling the benefits as a distribution. For example, if the benefits $B$ follow a power law distribution with tail index $\alpha > 0$, their probability density will be proportional to $B^{-(1 + \alpha)}$, so the expected benefits linked to a given benefit level will be proportional to $B \cdot B^{-(1 + \alpha)} = B^{-\alpha}$. This decreases with $B$, so the expected benefits coming from astronomically large benefits will be negligible.
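A quick numerical illustration of this point (a sketch; $\alpha = 1.5$ is an arbitrary tail index, and the density is left unnormalized):

```python
# Under a power law with tail index alpha, the density at benefit level B
# is proportional to B**-(1 + alpha), so each level's contribution to the
# expected benefits is proportional to B * B**-(1 + alpha) = B**-alpha.
alpha = 1.5
for exponent in (0, 10, 20, 30, 40, 50):
    b = 10.0 ** exponent
    contribution = b * b ** -(1 + alpha)  # equals b ** -alpha
    print(f"benefits = 1e{exponent}: contribution ~ {contribution:.1e}")
```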
Very interesting post! I really enjoy simple, quantitative models of difficult-to-grasp problems. I have one question and one suggestion.
Is the claim here that X and Y are the only variables with respect to which A and B differ, and that they share the same value for all other variables, which are multiplied together to equal c? That would mean that this model only represents an all-else-being-equal case, right? To Arepo's point, I think this all-else-being-equal model is a good starting place but not the final word; e.g. it doesn't capture flow-through effects.
If we apply the given formula as written, we get a weird result when one variable should be minimized and the other maximized. Using the survey example, say A is an annual survey, B is a quarterly survey, X is annual survey cost, and Y is data quality (indicated by total citations of all surveys conducted over a year). When we plug in some toy values, we get the following:
$$\frac{X_A}{X_B} > \frac{Y_B}{Y_A}$$
$$\frac{\$1000}{\$4000} > \frac{22}{10}$$
$$0.25 > 2.20$$
which isn't true, even though the range of X is wider than that of Y. A representation that captures this scenario could be
$$\left|\log\left(\frac{X_A}{X_B}\right)\right| > \left|\log\left(\frac{Y_B}{Y_A}\right)\right|$$
In this case, instead of $0.25 > 2.20$ for the above example, you get $0.60 > 0.34$ (using base-10 logs), which is true.
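In code, the proposed fix looks like this (a sketch; base-10 logs reproduce the 0.60 and 0.34 figures above):

```python
from math import log10

def a_beats_b(x_a: float, x_b: float, y_a: float, y_b: float) -> bool:
    """Compare absolute log-ratios, so the wider range wins regardless of
    whether a dimension is being minimized (cost) or maximized (quality)."""
    return abs(log10(x_a / x_b)) > abs(log10(y_b / y_a))

# Annual (A) vs. quarterly (B) survey, using the toy values above:
print(abs(log10(1_000 / 4_000)))        # ~0.60 (cost ratio)
print(abs(log10(22 / 10)))              # ~0.34 (citation ratio)
print(a_beats_b(1_000, 4_000, 10, 22))  # True: cost varies more
```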
Disclaimer: I'm a chemist, not a mathematician. Please take my math with a grain of salt!