I want to find a good thought experiment that makes us appreciate how radically uncertain we should be about the very long-term effects of some actions altruistically motivated actors might take. Some have already been proposed in the 'cluelessness' literature -- a nice overview of which is given by Tarsney et al. (2024, §3) -- but I don't find them ideal, for reasons I'll briefly suggest. So let me propose a new one. Call it the 'Dog vs Cat' dilemma:
Say you are a philanthropy advisor with a reputation for being unusually good at forecasting the various direct and indirect effects donations to different causes can have. You are approached by a billionaire with a deep love for companion animals who wants to donate almost all his wealth to animal shelters. He asks you whether he should donate to dog shelters around the world or to cat shelters instead.[1] Despite the relatively narrow set of options he is considering, he specifies that he does not care only about the short-term effects his donation would have on cats and dogs around the world. He carefully explains, and heavily emphasizes, that he wants his choice to be the one that is best, all things considered (i.e., not bracketing out effects on beings other than companion animals, or effects on the long-term future).[2]

You think about his request and, despite your great forecasting abilities, quickly come to appreciate how impossible the task is. The number and complexity of the causal ramifications and potentially decisive flow-through effects to consider are overwhelming. It is highly implausible that a donation of that size would not change important aspects of the course of history in non-negligible ways. However indirectly, it will inevitably affect many people's attitudes towards dogs and cats, the way those people live, their values, their consumption, economic growth, technological development, human and animal population sizes, the likelihood of a third World War and the exact actors that would be involved, and so on. Some aspects of these effects are predictable. Many others are far too chaotic. And you cannot reasonably believe these chaotic changes will be even roughly the same whether the beneficiaries of the donation are dog shelters or cat shelters. If the billionaire picks cats over dogs, this will end up making the world counterfactually better or worse, all things considered, to a significant extent. The problem is that you have no idea which. In fact, you have no idea even whether donating his money to either will turn out better overall than not donating it at all.
I have two questions for you.
1. Can you think of any reasonable objection to the strongly implied takeaway that the philanthropy advisor should be agnostic about the sign of the overall consequences of the donation?
2. Is this a good illustration of the motivations for cluelessness? I like it more than, e.g., Greaves' (2016) grandma-crossing-the-street example and Mogensen's (2021) 'AMF vs Make-A-Wish Foundation' one, because there is no pre-established intuition that one option is "obviously" better than the other (so we avoid biases). Also, it is clear in the above thought experiment that our choice matters a great deal despite our cluelessness: the "future remains unchanged" (or "ripple in the pond") objection obviously doesn't work here (see, e.g., Lenman 2000; Greaves 2016). I also find the story easy to remember. What do you think?
I also hope others will find this thought experiment interesting, and that posting it here will be useful beyond the helpful feedback I might get on it.
[1] For simplicity, let's assume the donation must be 100% one or the other. He cannot split it between the two.
[2] You might wonder why the billionaire only considers donating to dog or cat shelters, and not to other causes, given that he so crucially cares about the overall effects on the world from now until its end. Well, maybe he gets special tax-deductibility benefits from donating to such shelters. Maybe his 12-year-old daughter will get mad at him if he gives to anything else. Maybe the money he wants to give is some sort of coupon that only dog and cat shelters can receive. Maybe you end up asking him why and he answers 'none of your business!'. Anyway, this of course does not matter for the purposes of the thought experiment.
With the warning that this may be unsatisfying -- since I'm recounting a feeling I've had historically, and responding to my impression of a range of accounts rather than providing sharp complaints about a particular account:
(that's incomplete, but I think it's the first-order bit of what seems unsatisfying)
Definitely not saying that!
Instead, I'm saying that in many of the decision situations people find themselves in, they could (somewhat) narrow their credence range by investing more thought, but in practice the returns from that thinking aren't enough to justify it, so they shouldn't do it.
I don't see probabilities as magic absolutes; I see them as a tool. Sometimes it seems helpful to pluck a number out of the air and roll with it (and for that to be better practice than investing cognition in keeping track of an uncertainty range).
That said, I'm not sure it's crucial to me to model there being a single precise credence that is being approximated. What feels more important is to be able to model the (common) phenomenon where you can reduce your uncertainty by investing more time thinking.
Later in your comment you use the phrase "rationally obligated". I tend to shy away from that phrase in this context, because it is vague about whether it refers to fully rational or to boundedly rational actors. In short:
I reject this claim. For a toy example, suppose I could take action X, which will lose me $1 if the 20th digit of pi is odd, and gain me $2 if it is even. Without doing any calculations or looking it up, my range of credences is [0, 1] -- if I think about it long enough (at least with computational aids), I'll resolve it to 0 or 1. But right now I can still make a guess about my expectation of where I'd end up (somewhere close to 50%), and at a 50% credence the bet has an expected value of about +$0.50, so I can think this is a good bet to take -- rather than saying that EV somehow doesn't give me any reason to like the bet.
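To make that arithmetic explicit, here is a minimal sketch in Python. The payoffs are the toy example's own; the function name and structure are just illustrative:

```python
# Toy bet from the example above: lose $1 if the 20th digit of pi is odd,
# gain $2 if it is even.

def bet_ev(p_even: float, gain: float = 2.0, loss: float = -1.0) -> float:
    """Expected value of the bet, given a credence p_even that the digit is even."""
    return p_even * gain + (1.0 - p_even) * loss

# Without calculating the digit, the credence range spans [0, 1],
# so the EV range spans both signs:
print(bet_ev(0.0), bet_ev(1.0))  # -1.0 2.0

# But a best guess of ~50% already gives the bet positive expected value:
print(bet_ev(0.5))               # 0.5
```

The point of the sketch: even though further thinking would collapse the credence to 0 or 1, the pre-deliberation best guess is enough to make the bet look good in expectation.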
For what it's worth, I'm often pretty sympathetic to decision procedures other than committing to a precise best guess (cluelessness or not).
I don't think I'd agree with that, although I could see saying: "yes, this is a valid argument about unknown unknowns; however, it might be overwhelmed by as-yet-undiscovered arguments about unknown unknowns that point in the other direction, so we should be suspicious of resting too much on it".