While longtermism is an interesting ethical principle, I believe the consequences of the deep uncertainty about how present decisions affect far-future outcomes have not been fully explored. Specifically, while the expected value of an intervention may seem reasonable, the magnitude of the uncertainty is likely to dwarf it. I wrote a post on this, and as far as I can tell, I have not seen a good argument addressing these issues.
https://medium.com/@venky.physics/the-fundamental-problem-with-longtermism-33c9cfbbe7a5
To be clear, I understand the risk-reward tradeoff argument and the point that people are often irrationally risk-averse, but that is not what I am talking about here.
One way to think of this is the following: if the impact of a present intervention on the long-term future is characterized as a random variable $X(t)$, then, while the expected value could be positive,

$$E[X(t)] > 0,$$

the standard deviation as a measure of uncertainty,

$$\sigma(t) = \sqrt{\mathrm{Var}[X(t)]},$$

could be so large that the coefficient of variation is very large:

$$c_v = \frac{\sigma(t)}{E[X(t)]} \gg 1.$$

Further, if the probability of a large downside, $P(X(t) < -L)$, is not negligible for some large loss threshold $L > 0$, then I don't think the intervention is very effective.
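As a rough numerical sketch of this pattern (my own illustration; the mean, standard deviation, and loss threshold are assumed values, not from the linked post):

```python
# Toy model: an intervention whose payoff X has a small positive mean
# but a standard deviation that dwarfs it.
import numpy as np

rng = np.random.default_rng(0)

mu, sigma = 1.0, 50.0                 # assumed: E[X] = 1, sigma = 50
X = rng.normal(mu, sigma, size=1_000_000)

L = 100.0                             # assumed "large downside" threshold
print("E[X]          ~", X.mean())            # positive, ~1
print("sigma(X)      ~", X.std())             # ~50, dwarfs the mean
print("CV = sigma/E  ~", X.std() / X.mean())  # >> 1
print("P(X < -L)     ~", (X < -L).mean())     # ~0.02, not negligible
```

Even though the sampled mean is positive, the spread is about fifty times larger, and a loss of 100 times the expected gain occurs roughly 2% of the time, which is exactly the situation described above.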
Perhaps I have missed something here, or there are good arguments against this perspective that I am not aware of. I'd be happy to hear about them.
This seems to be an issue of considering only one side of the probability distribution. I think it's very arguable that a post-nuclear-holocaust society is just as likely, if not more likely, to be more racist/sexist, more violent or suspicious of others, more cruel to animals (if only because our progress in, e.g., lab-grown meat would be undone), etc. in the long term. This is especially the case if history keeps going through cycles of civilizational collapse and rebuilding, in which case we might have to suffer for hundreds of thousands of years (and subject animals to that many more years of cruelty) until we finally develop a civilization capable of maximizing human/sentient flourishing (assuming we don't go extinct!).
You cite the example of post-WW2 peace, but I don’t think it’s that simple:
There were many wars afterwards (e.g., the Korean War, Vietnam); they just weren't as global in scale. Thus, WW2 may have been an outlier peak at a unique moment in history.
It's entirely possible WW2 could have led to another, even worse war; we just got lucky. (Consider how people thought WW1 would be the war to end all wars because of its brutality, only for WW2 to follow a few decades later.)
Inventions such as nuclear weapons, the strengthening of the international system in terms of trade and diplomacy, the disenchantment with fascism/totalitarianism (with the exception of communism), and a variety of other factors seem to have helped prevent a WW3; the brutality of WW2 was not the only factor.
Ultimately, the argument that seemingly horrible things like nuclear holocausts (or the Holocaust) or world wars are more likely to produce good outcomes in the long term still seems improbable to me. (I just wish someone more familiar with longtermism would contribute.)