Neutrality about Longtermism and Danaë’s Additions
In this essay I will respond to the chapter “Longtermism and Neutrality about More Lives” by Katie Steele, which forms part of the book “Essays on Longtermism: Present Action for the Distant Future”, recently published by Oxford University Press. I am a Philosophy and Psychology student, and these two disciplines will provide the outline of my essay, because responding with what I have learned and enjoy makes sense. In addition, I extend some of the chapter’s thoughts and give its statements details and examples.
“It is neither better nor worse for there to be no bearer of welfare than for there to be one with positive welfare.” -Katie Steele
Psychological and neuroscientific approach:
The first statement I will respond to is a premise about the nature of our decision-maker’s welfarist reason to reduce the risks of future threats. Neutrality is part of this nature. The nature, in this sense, is a way of thinking, something like a value, that guides us when it comes to making decisions.
And then there is the other “nature” of our decision-making, which can be explained neuroscientifically. The prefrontal cortex gives us patience and strategic thinking, and it collaborates, as every part of the brain does with every other, with the limbic system, which is responsible for emotions and motivation. These parts work consciously, yet some scientists believe that, when it comes to decision-making, the unconscious part of the brain plays the more important role. This claim can be backed with numbers: the unconscious part is said to process 1.2 million frames of information per second, while the conscious part processes only forty frames in that same second. I cannot prove this to be true, but I can use it to sketch a possible nature of decision-making. For if it is true, then human decision-making does not rely on a value system, which is not even located in a specific region of the brain; rather, the decision is made unconsciously and only rationalised afterwards by the conscious cortices, to confirm a decision that has already been reached. At this point it is important to mention that conscious activity, at least, is observable and can be detected.
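Taking these figures at face value (they vary between sources, so treat this only as an illustration of scale), the disproportion can be made explicit:

\[
\frac{1.2 \times 10^{6}\ \text{frames/s}}{40\ \text{frames/s}} = 30\,000
\]

On these numbers, unconscious processing would outpace conscious processing by a factor of thirty thousand.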
The anterior cingulate cortex is another important part of the brain, where emotion and knowledge come together to evaluate the outcomes of a decision that is made voluntarily. This should be easy, and the ACC should prevent humans from deciding unreasonably, if, and only if, the unconscious 1.2 million frames of information humans process per second do not influence this decision-making process, as humans would like to think. Therefore, the actual “nature” of our decision-maker’s decision-making may be hidden in the unconscious, and we can neither argue with it nor use it as a premise that the decision-maker has strong welfarist reason to prevent, for example, a premature human extinction.
Philosophical approach:
Another passage from the text is thought-provoking. It is the premise that acting now to prevent future threats firstly has to work, and secondly has to bring a larger overall benefit than any alternative immediate action concerning the present would.
P1: If humans act now to prevent future threats, then future threats will be avoided.
P2: If humans act now to prevent future threats, this will bring a larger benefit than any alternative immediate action concerning the present would.
C: Therefore, acting now to prevent future threats is the right thing to do.
This argument is not conceptually valid, because I can imagine a world where avoiding future threats, and bringing a larger benefit than any alternative immediate action concerning the present would, is still not “the right thing to do”; what counts as the “right thing” is left undefined. Both premises can be either true or false and are therefore contingent. So there is a case where both premises are true and the conclusion is not, and this case can be understood as Neutrality, on which preventing a premature human extinction, for example, which is one possible future threat, is neither good nor bad when it only concerns the number of existing lives (see the quote above).
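A minimal formalization makes the gap visible; the propositional letters are my own shorthand, not Steele’s notation:

\[
\begin{aligned}
A &: \text{humans act now to prevent future threats}\\
B &: \text{future threats are avoided}\\
C &: \text{acting now brings larger benefit than any alternative immediate action}\\
R &: \text{acting now is the right thing to do}\\[4pt]
\text{P1: } & A \rightarrow B \qquad \text{P2: } A \rightarrow C \qquad \text{C: } R
\end{aligned}
\]

Even if we add $A$ itself as a further premise, all that follows is $B \wedge C$; without a bridging premise such as $(B \wedge C) \rightarrow R$, the conclusion $R$ cannot be derived. Neutrality can be read as precisely a denial of that bridging premise.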
My own addition:
Now I would like to continue this argument and think of examples where the latter premise could be the case. To imagine a scenario, we need to look at present problems whose solution would bring welfare to the present population. The first things that come to mind are hunger, war, and currently incurable diseases. I would like to be more specific about the potential actions we could take.

Hunger is a problem in Burkina Faso and many more countries, most of which are located in Africa. Consider a decision-maker for Africa and his options: investing to reduce future threats, or investing to reduce hunger in his countries. To see the conflict, we may even have to specify how he could invest in the future population. Investing in a population that will live in the currently hungry countries means investing so that its members do not have to suffer a life below zero on the welfare range. But to reduce hunger in the future, he has to take action right now, reducing hunger in the current population and investing in sustainable means from which both the current and the future population will benefit.

Does anything change from a different perspective, for example that of the decision-maker for Europe? If we, and this chapter, talk about “overall”, does overall not mean globally? And if it does, then it does not matter for which part of the world the decision-maker is responsible, because investing in the reduction of hunger in Burkina Faso increases overall welfare. But the premise stated that the future benefit should be greater than the present one. In terms of present problems like hunger, reducing them in more than one part of the world would be transformative for perhaps half of the world population. At this point, we remember the comparison of numbers: the entire present population is incomparably small next to the many future populations there will be. But is it possible to invest in decreasing future hunger without affecting current hunger in the world? The counterargument is that both decision-makers could invest in climate change mitigation, as an example of preventing a future threat that does not necessarily affect the current population.
The second destructive problem the decision-maker, of Europe for example, could invest in would be war. War has a direct link to the future population, since the destruction of many lives leads to their potential future children never being born.
And lastly, medicine against fatal diseases not only improves welfare in the world but also increases the population. The exaggerated consequence of not taking these present-focused actions would be a smaller population now with lower welfare and, after taking action to prevent far-future threats and premature human extinction, a same-sized or bigger population in the far future with better welfare. The latter option seems the more reasonable one for a decision-maker, but Neutrality argues differently.
Neutrality argues that more worthwhile lives are neither better nor worse but neutral, and that the extra lives do not make the world a better place. Neutrality does not argue against a future population; it argues that a premature extinction is neither better nor worse than no premature extinction, as long as the resulting population is incommensurable with there being no one on earth at all, in that it differs only in containing extra lives.

The range of welfare here is unbounded from above: Neutrality goes as high as any good life could be. As the text explains, the lives added to the original population could have any positive welfare whatsoever, and the resulting augmented population would still count as incommensurable with the original population. This contrasts with a totalist view of longtermism, and also with the moderate interpretation of longtermism, where the upper bound of the neutral range is not too much higher than zero. What would that mean? It would still differ from Totalism, in that lives would need a larger amount of welfare to count as good, overall welfare-increasing lives. But applying this moderate interpretation is impossible, unless we could measure, or assume, the value (above or below zero on the welfare range) of any life in the far future.

Neutrality therefore gives us, and the decision-makers, no reason to act for the benefit of lives that are neither good nor bad for the world; but it also gives us no reason to increase the overall welfare of the present population. One way to interpret it is as a responsibility not to bring into existence a person with a bad life, because that welfare would be below zero and would decrease overall welfare; this is called the procreative asymmetry. On this reading, the action we should take is the one that most efficiently prevents overall welfare from falling below zero and therefore keeps it neutral. But because we cannot know the value of the welfare we bring into existence, we could only guarantee this by not bringing any life into existence at all, and from a Neutrality perspective this would be no worse than having a future population with welfare above zero. So maybe this is what Neutrality suggests.
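A rough schematic may help; the value function $V$, the welfare level $w_p$, and the threshold $\varepsilon$ are my own shorthand for the three views, not notation from Steele’s chapter:

\[
\begin{aligned}
\textbf{Totalism:}\quad & V(X \cup \{p\}) = V(X) + w_p\text{, so any life with } w_p > 0 \text{ improves the world.}\\
\textbf{Moderate view:}\quad & \text{lives with } 0 < w_p \le \varepsilon \text{ (for some small } \varepsilon\text{) are neutral;}\\
& \text{only lives with } w_p > \varepsilon \text{ improve the world.}\\
\textbf{Neutrality:}\quad & \text{for every } w_p > 0\text{, } X \cup \{p\} \text{ is incommensurable with } X\text{:}\\
& \text{neither better nor worse.}
\end{aligned}
\]

The decisive feature is that Neutrality’s neutral range is unbounded above, while the moderate view caps it at $\varepsilon$ and Totalism has no neutral range at all.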
Now we stand in front of the conclusion that Neutrality is interested neither in the quality of the future population, in the form of a value lock-in that could be prevented, nor in the number of lives and in there being a population at all, although both are seen as future threats in longtermism.
