Hilary Greaves laid out the problem of "moral cluelessness" in her paper Cluelessness: http://users.ox.ac.uk/~mert2255/papers/cluelessness.pdf
Primer on cluelessness
There are some resources on this problem below, taken from the Oxford EA Fellowship materials:
(Edit: one text deprecated and redacted)
Hilary Greaves on Cluelessness, 80,000 Hours podcast (25 min) https://80000hours.org/podcast/episodes/hilary-greaves-global-priorities-institute/
If you value future people, why do you consider short-term effects? (20 min) https://forum.effectivealtruism.org/posts/ajZ8AxhEtny7Hhbv7/if-you-value-future-people-why-do-you-consider-near-term
Simplifying cluelessness (30 min) https://philiptrammell.com/static/simplifying_cluelessness.pdf
Finally, there's this half-hour talk in which Greaves presents her ideas on cluelessness:
https://www.youtube.com/watch?v=fySZIYi2goY
The complex cluelessness problem
Greaves has the following worry about complex cluelessness:
The cases in question have the following structure:
For some pair of actions of interest A1, A2,
- (CC1) We have some reasons to think that the unforeseeable consequences of A1 would systematically tend to be substantially better than those of A2;
- (CC2) We have some reasons to think that the unforeseeable consequences of A2 would systematically tend to be substantially better than those of A1;
- (CC3) It is unclear how to weigh up these reasons against one another.
She then uses donating bednets to poor countries as an example of this. By donating bednets, we can save lives at scale. Saving lives could increase the fertility rate, eventually leading to a higher population. There are good reasons to think that a higher population is net-negative for the long term, or could even constitute an existential threat (CC1, taking A1 to be withholding the donation). On the other hand, it's entirely possible that saving lives in the short term could improve humanity's long-term prospects (CC2): perhaps a higher population now will lead to a larger number of people throughout the rest of the universe's history enjoying their lives, or perhaps the diminished human tragedy in our own century (because of lives saved) could lead to a more stable and better-educated world that better prepares for existential risk. But as I lay out below, I don't see why this should lead us to CC3.
A "set point/Control Theory" solution
This solution applies to the specific example but doesn't address the general problem.
Many dynamic systems have a way of restoring an equilibrium after it has been perturbed. In nature, overpopulation of a species in an ecosystem leads to famine, which leads to a decrease in population, so the long-run population of the species may not change.
The same may hold for human overpopulation: if it becomes a serious problem, higher population growth now is likely to provoke more efforts to constrain population in the future, while lower population growth now would provoke fewer. Thus, by saving lives now (in the short term), we might create a problem that is solved in the medium term, with no long-run consequences.
If many processes tend towards equilibria in this way, then the key question for a longtermist valuing an intervention may be its effect on existential risk over the next few hundred years, and medium-term consequences should be evaluated in that context.
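To make this set-point intuition concrete, here is a minimal simulation sketch, with all numbers hypothetical and chosen purely for illustration: under logistic growth, the carrying capacity acts as a set point, so a one-off shock of extra lives saved washes out of the long-run population.

```python
# A minimal sketch (all numbers hypothetical): logistic population dynamics,
# where the carrying capacity K acts as a set point. A one-time shock --
# extra lives saved by an intervention -- decays away, leaving the long-run
# population essentially unchanged.

def simulate(pop0, shock, r=0.03, K=10_000_000, years=300):
    """Discrete logistic growth with an immediate one-time shock."""
    pop = pop0 + shock
    for _ in range(years):
        pop += r * pop * (1 - pop / K)
    return pop

baseline = simulate(8_000_000, shock=0)
with_intervention = simulate(8_000_000, shock=100_000)  # lives saved now
print(f"after 300 years: baseline={baseline:,.0f}, "
      f"with intervention={with_intervention:,.0f}")  # nearly identical
```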
A general Bayesian joint probability solution
I believe Hilary Greaves describes this solution in her paper:
Just as orthodox subjective Bayesianism holds, here as elsewhere, rationality requires that an agent have well-defined credences. Thus, insofar as we are rational, each of us will simply settle, by whatever means, on her own credence function for the relevant possibilities. And once we have done that, subjective c-betterness is simply a matter of expected value with respect to whatever those credences happen to be. In this model, the subjective c-betterness facts may well vary from one agent to another (even in the absence of any differences in the evidence held by the agents in question), but there is nothing else distinctive of ‘cluelessness’ cases; in particular, (2) there is no obstacle to consequences guiding actions, and (3) there is no rational basis for decision discomfort.
To solve the malaria net problem, we can estimate the probabilities of things like the following (a simulation sketch of the whole procedure follows below):
- Short-run fertility meaningfully impacts long-run fertility
- Likely increase in fertility due to the malaria net intervention
- Each million of population increase will raise existential risk by some amount x1.
- Fewer deaths will yield some level of improved well-being and community resilience; the additional resilience and well-being improve long-run global education and decision-making around existential risk, lowering existential risk by some amount x2
- ...and so on
Then, we consider two scenarios:
- Donate bednets
- Do not donate bednets
For each scenario:
- Calculate the joint probability of existential risk and other long-term consequences under each of these scenarios, given these propositions. We don't need a full model of existential risk; it's enough to start with an estimate of the relationship between existential risk and relevant variables like population increase, global education, etc.
- Weight the estimated value of each action by the joint probability.
- Select the action with the highest estimated value based on the joint probability.
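As a sketch of how mechanical this becomes once credences are fixed, here is a minimal Monte Carlo version of the procedure. Every distribution and constant below is a hypothetical placeholder standing in for the real estimation work, not an actual estimate:

```python
import random

# A minimal Monte Carlo sketch of the procedure above. Every distribution
# and constant here is a hypothetical placeholder, not an actual estimate.

N = 100_000
BASE_XRISK = 0.1  # assumed baseline extinction risk over the next 500 years

def sample_xrisk(donate):
    """Draw one scenario: sample each uncertain proposition, then combine
    the draws into a single existential-risk estimate."""
    risk = BASE_XRISK
    if donate:
        # Proposition: bednets raise long-run population (in millions),
        # which raises existential risk by some small amount per million.
        pop_increase = random.uniform(0.0, 5.0)
        risk_per_million = random.uniform(0.0, 0.002)
        risk += pop_increase * risk_per_million
        # Proposition: fewer deaths improve education and resilience,
        # which lowers existential risk.
        resilience_effect = random.uniform(0.0, 0.01)
        risk -= resilience_effect
    return min(max(risk, 0.0), 1.0)

def expected_value(donate):
    # Value here = probability that humanity survives, a crude stand-in
    # for aggregate long-run well-being.
    return sum(1 - sample_xrisk(donate) for _ in range(N)) / N

print(f"EV(donate)  = {expected_value(True):.4f}")
print(f"EV(abstain) = {expected_value(False):.4f}")
```

The hard work is, of course, in choosing the sampling distributions; the point is only that once we have settled on credences, comparing the two actions is mechanical.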
What am I missing?
Greaves seems to anticipate this response, as above, and goes on to say:
The alternative line I will explore here begins from the suggestion that in the situations we are considering, instead of having some single and completely precise (real-valued) credence function, agents are rationally required to have imprecise credences: that is, to be in a credal state that is represented by a many-membered set of probability functions (call this set the agent’s ‘representor’). Intuitively, the idea here is that when the evidence fails conclusively to recommend any particular credence function above certain others, agents are rationally required to remain neutral between the credence functions in question: to include all such equally-recommended credence functions in their representor.
I am very confused by this turn of reasoning. I don't think I fully understand what she means by a credence function or by imprecise credences, nor do I see why imprecision necessarily leads to a 'many-membered set of probability functions'. For our malaria bednet question, we still have one probability function (you might think of it as defined over aggregate well-being across the history of the universe, which for our purposes can be reduced to existential risk, i.e. the probability that humanity becomes extinct within the next 500 years). We simply:
- Take the probability distributions of each thing we are uncertain about
- Find the joint probability distribution for each of those things under each of our scenarios
- Compare the joint probability distributions to find the action with the highest expected value
and we're done! I don't see how a whole set of probability functions is inevitable, or even why we should anticipate it being a problem here.
Can anyone shed light on this?
I think you just have to make your distribution uninformative enough that reasonable differences in the weights don't change your overall conclusion. If they do, then I would concede that you really are clueless about your specific question. Otherwise, you can probably find an answer.
Rather than thinking directly of an appropriate distribution for the 1,000,000 flips, I'd think of a distribution to model the coin's bias p itself. Then you can run simulations based on the distribution of p to calculate the distribution of the fraction of heads in 1,000,000 flips. We know p ∈ (0.5, 1.0], so we need to select a distribution for p over that range.
There is no one correct probability distribution for p, because any probability is just an expression of our belief, so you may use whatever distribution genuinely reflects your prior belief. A uniform distribution is a reasonable start. Perhaps you really are clueless about p, in which case, yes, there's a certain amount of subjectivity in your choice. But prior beliefs are always inherently subjective, because they simply describe your belief about the state of the world as you know it now. The fact that you might have to select a distribution, or a set of distributions combined by some weighted average, is merely an expression of your uncertainty. This in itself, I think, doesn't stop you from trying to estimate the result.
I think this expresses in Bayesian terms the philosophical idea that we can only make moral choices based on the information available at the time; one can't be held morally responsible for mistakes made on the basis of information one didn't have.
Perhaps you disagree with me that a uniform distribution is the best choice. You reason thus: "we have some idea about the properties of coins in general. It's difficult to make a coin that is 100% biased towards heads, so that seems unlikely." Then we could pick a distribution that better reflects your prior belief. A suitable choice might be Beta(2,2) truncated at 0.5, which gives the greatest density to values of p just above 0.5, declining towards 1.0.
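To make the two candidate priors concrete, here is a simulation sketch (the simulation sizes, and the normal approximation to the binomial, are just convenience choices):

```python
import random

# A sketch comparing two candidate priors for the coin's bias p:
# Uniform(0.5, 1.0] versus Beta(2,2) truncated to (0.5, 1.0]. For each prior
# we simulate the fraction of heads in 1,000,000 flips.

def sample_p_uniform():
    return random.uniform(0.5, 1.0)

def sample_p_trunc_beta(a=2, b=2):
    # Rejection sampling: redraw from Beta(2,2) until the draw exceeds 0.5.
    while True:
        p = random.betavariate(a, b)
        if p > 0.5:
            return p

def mean_fraction_heads(sample_p, n_flips=1_000_000, n_sims=1_000):
    total = 0.0
    for _ in range(n_sims):
        p = sample_p()
        # Normal approximation to Binomial(n_flips, p) keeps this fast.
        total += random.gauss(p, (p * (1 - p) / n_flips) ** 0.5)
    return total / n_sims

print("uniform prior, mean fraction of heads:   ", mean_fraction_heads(sample_p_uniform))
print("truncated Beta(2,2), mean fraction heads:", mean_fraction_heads(sample_p_trunc_beta))
```

The two priors predict noticeably different mean fractions of heads (roughly 0.75 under the uniform prior versus roughly 0.69 under the truncated Beta(2,2)), which sets up exactly the disagreement considered next.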
Maybe you and I just can't agree after all: there is no prior we both consider reasonable, and no compromise either. Say we each run simulations using our own priors, get entirely different results, and can't agree on any suitable weighting between them. In that case, yes, I can see that we have cluelessness. But I don't think it follows that, if we went through the same process for estimating the longtermist moral worth of malaria bednet distribution, we must end up with intractable complex cluelessness about it. I can admit that perhaps, right now, in our current belief state, we are genuinely clueless, but it seems there is work that can be done that might eliminate the cluelessness.
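As a sketch of the kind of work I have in mind, one can check in code whether the conclusion is robust across a set of reasonable-but-different priors, a crude stand-in for Greaves' 'representor'. All priors and parameters below are hypothetical placeholders:

```python
import random

# A sketch of the robustness check: evaluate the same decision under several
# reasonable-but-different priors (a crude stand-in for a 'representor') and
# see whether the conclusion flips. All priors below are hypothetical.

priors = {
    "optimistic":  lambda: random.uniform(0.000, 0.001),  # x-risk per million people
    "moderate":    lambda: random.uniform(0.000, 0.002),
    "pessimistic": lambda: random.uniform(0.001, 0.004),
}

def ev_difference(risk_prior, n=100_000):
    """Estimate EV(donate) - EV(abstain) under one prior for the risk term."""
    pop_increase = lambda: random.uniform(0.0, 5.0)  # millions, hypothetical
    resilience = lambda: random.uniform(0.0, 0.01)   # risk reduction, hypothetical
    total = 0.0
    for _ in range(n):
        delta_risk = pop_increase() * risk_prior() - resilience()
        total += -delta_risk  # lower extinction risk => higher expected value
    return total / n

for name, prior in priors.items():
    print(f"{name:>11}: EV(donate) - EV(abstain) = {ev_difference(prior):+.5f}")
# If the sign agrees across all priors, the conclusion is robust to the
# imprecision; if it flips, this specific question is (for now) clueless.
```

If the sign of the difference agrees under every prior we consider reasonable, the imprecision was harmless; if it flips, we have at least located precisely where the cluelessness lives and what further evidence would resolve it.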