Dear Stan,
I think there are issues with this analysis. As it stands, it presents a model of nuclear winter under the assumption that firestorms are unlikely in a future large-scale nuclear conflict. That would be an optimistic take, and it does not seem to be supported by the evidence.
In addition, there are points to raise on the distribution of detonations, which seems very skewed towards the lower end for a future nuclear conflict between great powers with thousands of weapons in play and strong game-theoretic reasons to “use or lose” much of their arsenals. However, we commented on that in your previous post, and as you say it matters less for your model than the sensitivity to soot lofted per detonation, which seems to be the main contention.
Dear Stan,
Thanks for your work here, and it's always great to see people doing a deep dive on nuclear winter and abrupt sunlight reduction scenarios (ASRS). At the Alliance to Feed the Earth in Disasters (ALLFED), we are highly concerned about these issues and certainly feel they are neglected, and our analysis also suggests that the field is high-impact and cost-effective to mitigate.
However, there are a number of points we would like to raise, where we differ, at least in part, from your analysis:
Thanks again for your work, and the openness with which it was conducted. It’s important to talk about and dig deep into these issues, and we hope others will do the same.
Hi Ulrik,
I would agree with you there in large part, but I don't think that should necessarily move our estimate of the impact away from what I estimated above.
For example, the Los Alamos team did far more detailed fire modeling than Rutgers, but the end result is a model that seems unable to replicate real fire conditions in situations like Hiroshima, Dresden and Hamburg -> more detailed modeling isn't in itself a guarantee of accuracy.
However, the models we have base their estimates at least in part on empirical observations, which potentially gives us enough cause for concern:
- Soot can be lofted in firestorm plumes, for example at Hiroshima.
- Materials like SO2 in the atmosphere from volcanoes can be observed to disrupt the climate, and there is no reason to expect that this is different for soot.
- Materials in the atmosphere can persist for years, though the impact takes time to arrive due to inertia and will diminish over time.
The complexities of modeling you highlight raise the uncertainty in everything above, but they do not disprove nuclear winter. The complexities also seem to raise more uncertainty for Los Alamos and the more skeptical side, who rely heavily on modeling, than for Rutgers, who use modeling only where they cannot use an empirical heuristic like the conditions of past firestorms.
Hi Ulrik, good to hear from you again!
We do not know what will happen if a nuclear weapon is again detonated offensively, other than that the world would be forever changed. This is a fear shared by pretty much everyone who deals with nuclear weaponry (including recent speeches at EAG London, such as by John Gower, whom we met before), and even without immediate retaliation the expected probability of a large-scale future exchange would rise hugely in such a world. That's what I meant by the “all bets are off” line.
Certainly, many countries would seek to acquire weapons under this scenario (especially if the use was against a non-nuclear power, which breaks a further taboo), and even if there are no further detonations within 30 days, the chances of a full-scale exchange in such a world may rise by an order of magnitude.
I'm not sure that second projection is correct, and I would put the mean projected additional detonations higher. However, even if it is an accurate projection, I think the core point of the article holds: an offensive detonation significantly raises the probability of large exchanges, and there is a baseline risk of such an exchange today anyway -> large exchanges with thermonuclear weaponry risk nuclear winters -> this is worth considering in our calculus around the expected impacts of nuclear warfare.
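To make that calculus concrete, here is a minimal back-of-envelope sketch. Every number in it is a placeholder assumption for illustration, not an estimate from the article or from any cited model:

```python
# Back-of-envelope sketch of the risk chain above.
# All numbers are placeholder assumptions, chosen only to illustrate
# how an order-of-magnitude rise propagates through the calculus.

p_exchange_baseline = 0.005    # assumed annual P(full-scale exchange) today
taboo_multiplier = 10          # assumed ~order-of-magnitude rise after an offensive detonation
p_winter_given_exchange = 0.3  # assumed P(nuclear winter | full-scale exchange)

p_winter_today = p_exchange_baseline * p_winter_given_exchange
p_winter_post = p_exchange_baseline * taboo_multiplier * p_winter_given_exchange

print(f"Annual P(nuclear winter) today:              {p_winter_today:.4f}")
print(f"Annual P(nuclear winter) after a detonation: {p_winter_post:.4f}")
```

Under any such placeholder numbers, the taboo-breaking multiplier dominates the result, which is the core point above.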
I feel there are a few things here:
My point from the article is that:
Quick responses, Vasco!
The 3.2 Tg figure is their worst-case figure, based on 1 g/cm2 fuel loading. In their later paper they discuss that this may be too high for a 1 g/cm2 scenario; as you say, they mention that their soot conversion was set high for caution, and they could have it an order of magnitude or so lower, which is what Rutgers do.
However, this presents a bit of an issue for my calculations and factors. I'm comparing headline results there, and the 3.2 Tg is the headline worst-case result. It could be that they actually meant that the 100 fires generated just 0.32 Tg of soot in total (or less), and we could take that as a fair comparison, but then we have a further issue: Hiroshima alone led to an estimated 0.02 Tg, so 100 such fires would imply around 2 Tg, which raises questions about whether the model is calibrated correctly.
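As a rough sanity check on those figures (a sketch only, using the numbers quoted above):

```python
# Rough sanity check on the soot figures discussed above.
n_fires = 100               # fires in the scenario being modeled
soot_hiroshima_tg = 0.02    # estimated Tg of soot from the Hiroshima firestorm alone

scaled_total_tg = n_fires * soot_hiroshima_tg
print(f"100 Hiroshima-scale firestorms: ~{scaled_total_tg} Tg of soot")
print("vs. the 3.2 Tg headline worst case, or ~0.32 Tg with the conversion cut 10x")
```

Scaling Hiroshima linearly is crude, but it shows why the 0.32 Tg reading sits well below what the one empirical data point we have would suggest.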
Again, you can assume that maybe India/Pakistan just don't have the fuel to burn, and that this keeps you that low, but then it's not a relevant comparison for a full-scale exchange on dense cities which do have the fuel. Either way, for the full-scale comparison, it returns to firestorms: will they form? Assumptions around fuel loading/combustion feed into that, but that's the core.
Hi Vasco,
I'm not sure I follow this argument: almost all of the above were serious fires but not firestorms, meaning that they would not be expected to inject soot effectively. We did not see 100+ firestorms in WW2, and the firestorms we did see would not have been expected to generate a signal strong enough to clearly distinguish from background climate noise. That section was simply discussing firestorms, and the fact that they seem to present a channel for soot to reach the stratosphere.
Later on in the article I do discuss this, with both Rutgers and Lawrence Livermore highlighting that firestorms would inject a LOT more soot into the stratosphere as a percentage of the total emitted.
Hi Daniel,
Sorry, I only just saw your comment!
I think Lysenko and Lysenkoism is completely fascinating, but kind of proves the quote above.
Lysenko was a biologist of sorts whose falsified, confused and simply invented results on plants supported Stalinist and Marxist thinking on how people are not innate but created by their environments; he was then brought into GOSPLAN to bring these insights to the economy. This was not because there was a lack of brilliant economists initially, just that those Stalin had were either toeing his party line, hidden in side posts for their own good, or dead.
The problem was both to solve a complex problem (economics) and to do it in a way that was acceptable to your masters and the Marxist thinking of the time, which made the problem more complex than rocket science.
Once we move past Stalin (Red Plenty is very readable on this!) we get people like Kantorovich stepping out of the shadows. They were brilliant thinkers, inventing new tools we still use today, but they had to solve not only the problem of the maths, but also the difficulty of understanding the people they were supposedly commanding, with all their complexity and agency. On top of this, some tools and analyses were still forbidden to them.
Compare this with the rocket programme. Brilliant scientists again, solving really difficult problems, but orbital mechanics does not shift its behaviour to ruin your plan based on complicated politics (you may have missed an interaction, but interactions are a property of your materials and physical forces), and solving physics equations does not contradict Marxist thought (mostly - E=mc^2 was banned for a period as it apparently contradicted Marx).
The point of the Soviet Union's failure, or of that quote, was not that with a few more smart economists or thinkers it would have succeeded, or that economists are somehow better than physicists. The point was that it was trying to do something that could not be done with its technology or ours: fully taming and controlling complexity as if it were a space rocket.
Hi Ed!
One thing that potentially falls into all three categories of difficulty is food stocks/reserves, an issue highly relevant to exposure to shocks and food insecurity, but really hard to track.
It's a tricky issue, but improving it could really help many researchers inside and outside of EA!
A few issues we have found which it would be very useful to see addressed are:
- The USDA PSD and FAOSTAT both have estimates for crop-year-end stocks, but as crop years do not line up, effective stocks are higher than these figures (see the toy sketch after this list). The estimates are based on a few methodologies, do not match reality exactly, and are better for globally traded crops.
- Reconciliation and improvement of these estimates is possible, but requires detailed trade data or insider data, which is often very commercially sensitive. The big traders (the ABCD companies/COFCO) would know this, but they do not disclose it.
- Stock estimates can be reasonably accurate at a global level and when averaged over a period of time; however, fluctuations in demand, smuggling and delays in data releases mean they can be hard to track on a country-by-country basis for the poorest countries. These are often the countries we care most about for food insecurity.
- Stocks in strategic reserves, private reserves and simply in transit can be difficult to divide out. In some cases this suggests chunks of available stocks are missing, or that stocks classed as "private" but actually state controlled would not be available to the market.
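Here is a minimal toy sketch of the crop-year point. The monthly figures are entirely hypothetical; the point is only that each crop's stocks bottom out at its own crop-year end, in different months, so summing crop-year-end figures understates what is actually on hand at any moment:

```python
# Toy illustration with hypothetical monthly stock levels (arbitrary units).
# Wheat stocks bottom out in May (its crop-year end), maize in September.
wheat = [40, 35, 30, 25, 20, 60, 55, 50, 45, 44, 43, 42]  # Jan..Dec, min = 20 in May
maize = [55, 50, 45, 42, 40, 38, 35, 30, 25, 65, 60, 58]  # Jan..Dec, min = 25 in Sep

# Summing each crop's crop-year-end (minimum) stock understates availability,
# because the minima fall in different calendar months.
sum_of_crop_year_ends = min(wheat) + min(maize)                 # 45
lowest_actual_total = min(w + m for w, m in zip(wheat, maize))  # 60, in May

print(f"Sum of crop-year-end stocks:   {sum_of_crop_year_ends}")
print(f"Lowest actual combined stocks: {lowest_actual_total}")
```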
Hi Stan + others,
Around one year after my post on the issue, another study was flagged to me: "Latent Heating Is Required for Firestorm Plumes to Reach the Stratosphere" (https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2022JD036667). The study raises another very important firestorm dynamic: a dry firestorm plume lofts significantly less than a wet one, because the latent heat released as water moves from vapor to liquid is the primary process for generating large lofting storm cells. However, if significant moisture can be assumed in the plume (and this seems likely given the conditions at its inception), lofting is much higher and a nuclear winter more likely.
The Los Alamos analysis only assesses a dry plume - which may be why they found so little risk of a nuclear winter - and in the words of the authors: "Our findings indicate that dry simulations should not be used to investigate firestorm plume lofting and cast doubt on the applicability of past research (e.g., Reisner et al., 2018) that neglected latent heating".
This has pushed me further towards concern about nuclear winter as an issue, and it should also be considered in the context of other analyses that rely upon the Reisner et al. studies originating at Los Alamos (at least until they can add these dynamics to their models). I think this may have relevance for your assessments, and for the article here in general.