Michael Hinge

Comments

Hi Stan + others.

Around one year after my post on the issue, another study was flagged to me: "Latent Heating Is Required for Firestorm Plumes to Reach the Stratosphere" (https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2022JD036667). The study raises another very important firestorm dynamic: a dry firestorm plume lofts significantly less than a wet one, because it lacks the latent heat released as water condenses from vapor to liquid, which is the primary process for generating large lofting storm cells. However, if significant moisture can be assumed in the plume (and this seems likely given the conditions at its inception), lofting is much higher and a nuclear winter more likely.

The Los Alamos analysis only assesses a dry plume, which may be why they found so little risk of a nuclear winter. In the words of the authors: "Our findings indicate that dry simulations should not be used to investigate firestorm plume lofting and cast doubt on the applicability of past research (e.g., Reisner et al., 2018) that neglected latent heating".

This has pushed me further towards being concerned about nuclear winter as an issue, and it should also be considered in the context of other analyses that rely upon the Reisner et al. studies originating at Los Alamos (at least until they can add these dynamics to their models). I think this might have relevance for your assessments, and for the article here in general.

Dear Stan.

I think there are issues with this analysis. As it stands, it presents a model of nuclear winter that assumes firestorms are unlikely in a future large-scale nuclear conflict. That is an optimistic take, and it does not seem to be supported by the evidence:

  • In my post on the subject that you referenced, I discuss how there are serious issues with coming to a highly confident conclusion in relation to nuclear winter. There are only limited studies, which come at the issue from different angles, but to broadly summarize:
    • Rutgers are highly concerned about the threat of nuclear winter via soot lofting in firestorms. They look at fission and fusion weaponry.
    • Los Alamos find that firestorms are highly unlikely to form under nuclear detonations, even at very high fuel loads, and so lofting is negligible. They only look at fission scale weaponry.
    • Lawrence Livermore did not comment on the probability of firestorms forming, just that if they did form there is a significant probability that soot would be lofted. They only look at fission scale weaponry.
  • Comparing the estimates, the main cause of the differences in soot injection is whether firestorms will form. Conditional on firestorms forming, my read of the literature is that at least significant lofting is likely to occur - this isn’t just from Rutgers.
  • We know that firestorms from nuclear weaponry are possible: we saw one in Hiroshima, and it had a plume that reached stratospheric levels (the anvil-shaped cloud photograph shows it reaching and breaching the stratospheric barrier). Los Alamos cannot replicate this in their model; even at high fuel loads they get nothing like our observations of the event. This failure to replicate observations makes me very cautious about weighing their results heavily versus the other two studies, as you implicitly do via a mean soot injection of 0.7 Tg following 100 detonations, which is a heavy skew towards “no firestorms”.
  • Fusion (thermonuclear) weaponry is often at least an order of magnitude larger than the atomic bomb dropped on Hiroshima. This may well raise the probability of firestorms, although this is not easy to determine definitively. It is, however, another issue with projecting a study of firestorm likelihood under atomic bombs onto thermonuclear weaponry.
  • Not all detonations will cause firestorms - Nagasaki did not, due to the location of the blast and local conditions, and this is likely to be true of a future war even with thermonuclear weaponry. However, given the projected lofting if they do occur (which is only modeled in Rutgers and Lawrence Livermore, as a full firestorm only forms in their models), you only need maybe 100 or so firestorms to cause a serious nuclear winter. This may not be a high bar to reach with so many weapons in play.
  • As a result, blending together the Los Alamos model with that of Rutgers doesn’t really work as a baseline: they rest on a very different binary concerning firestorms and lofting, and it excludes other relevant analysis, like that of Lawrence Livermore. Instead, you really need to come up with a distribution of firestorm risk - however you choose to do so - and use that to weight the expected soot injection (see the sketch just below this list). I would expect such analysis to seriously raise the projected soot injection and subsequent cooling versus your assumptions.
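
To make the weighting point concrete, here is a minimal sketch. All numbers (the soot per detonation with and without a firestorm, and the probabilities swept over) are illustrative placeholders of my own, not outputs of Rutgers, Los Alamos or Lawrence Livermore; the only point is that the expected injection is driven by where your probability mass on firestorm formation sits, rather than by averaging two headline figures.

```python
# Sketch: expected soot injection under an explicit firestorm probability.
# All figures below are illustrative assumptions, not study outputs.

soot_if_firestorm = 0.05      # Tg per detonation with a lofting firestorm (assumption)
soot_if_no_firestorm = 0.005  # Tg per detonation without one (assumption)
detonations = 100

def expected_soot(p_firestorm: float) -> float:
    """Expected total soot injection (Tg) for a given per-detonation
    probability that a lofting firestorm forms."""
    per_detonation = (p_firestorm * soot_if_firestorm
                      + (1 - p_firestorm) * soot_if_no_firestorm)
    return detonations * per_detonation

# Sweep over firestorm probabilities instead of adopting one model's
# implicit answer (near 0 for Los Alamos, near 1 for Rutgers).
for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"P(firestorm) = {p:.2f} -> expected soot = {expected_soot(p):.2f} Tg")
```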

In addition, there are points to raise on the distribution of detonations - which seems very skewed towards the lower end for a future nuclear conflict between great powers with thousands of weapons in play and strong game theoretic reasons to “use or lose” much of their arsenals. However, we commented on that in your previous post, and as you say it matters less for your model than the sensitivity of soot lofted per detonation, which seems to be the main contention.

Dear Stan,

Thanks for your work here, and it’s always great to see people doing a deep dive on nuclear winter and abrupt sunlight reduction scenarios (ASRS). At the Alliance to Feed the Earth in Disasters (ALLFED), we are highly concerned about these issues and certainly feel that they are neglected, and our analysis also suggests that the field is high impact and cost effective to mitigate.

However, there are a number of points we would like to raise, where we differ, at least in part, with your analysis:

  • We assign a higher probability that a nuclear conflict occurs compared to your estimates, and also assume that conditional on a nuclear conflict occurring that higher detonation totals are likely. This raises the likelihood and severity of nuclear winters versus your estimates.
     
    • The weighting of the commenters vs Metaculus is your choice, but we would suggest that a prediction market result should have a higher weighting, due to its aggregation of a large number of expert opinions.
    • Your probability analysis excludes some high-quality work (such as peer-reviewed publications) which puts the probability of nuclear conflict higher, potentially at 1% annually. This would primarily be via the risk of an inadvertent exchange; however, the dynamics of an inadvertent exchange can be more damaging than a deliberate conflict, as a response to an enemy launch is likely to focus more on critical infrastructure than on weapon sites (which are assumed to have just fired at you).
    • The threshold for a catastrophic nuclear war in the XPT was very high - causing at least 10% of humanity to die over 5 years or less - and so it should be read as the probability of a nuclear conflict killing at least 800 million people, rather than of any nuclear exchange occurring.
    • However, your readers can also adjust this themselves in their heads reasonably easily if they wish, with a similar scale adjustment to the impact factor.
    • In terms of expected weapon detonations given at least 100, we also feel that your assumption of a uniform distribution results in too low an estimate, and the logic of nuclear warfare suggests that “use it or lose it” would apply for at least the vulnerable land-based weapon systems (including bombers). This pushes the distribution of detonations for a future NATO/Russia/China exchange towards the upper end of deployed weaponry, rather than a more uniform distribution or one skewed towards lower values. This raises the expected severity of the following nuclear winter, if one occurs.
    • Soot lofting is very complicated and has serious uncertainties, but our analysis suggests that far higher levels are possible than your estimates. Dividing Reisner 2018 (which cannot replicate real-world firestorms) by 30 may be driving this, as may your lower assumptions for detonations in a nuclear conflict.
  • We estimate that the expected mortality from supervolcanic eruptions (VEI 8+) would be comparable to VEI 7 eruptions, so their inclusion could increase cost effectiveness significantly.
  • We feel that you are selling short the importance of research in building resilience to nuclear winters in particular and ASRSs in general (page 27, and page 41 onwards), possibly by conflating research with just one of its subsets (pilot schemes and field tests of resilient food technologies). 
     
    • Research covers many activities, and a good amount of the analysis you link to in the report is based upon fundamental research into the likely dynamics of food consumption, production and trade, which did not exist before organizations like ALLFED started working on them.
    • These issues are highly complex and understudied, and there is a significant risk of ineffective or even counterproductive action if one rushes in without proper consideration, so new policy advocacy and engagement should follow from careful research.
    • Research at ALLFED covers many different fields, for example analyzing nutrition and diets in these scenarios, the likely production/yields of these sources under extreme conditions, the cost of their production and the likely dynamics of trade, accessibility, pricing and storage. In addition, we are proposing gathering some experimental data or carrying out pilot studies in cases where it would generate useful insights or build capacity, but this is only part of the story.
    • The impact on the long-term future from the most extreme catastrophes is likely to be relatively larger than their mortality alone would suggest, which is a further reason that we focus on the larger scenarios. Of course, some of this work could provide tangible benefits for tackling smaller-scale events too.
    • For example, you highlight uncertainty about the impact of novel resilient foods in all but the largest scenarios, as they can only provide around 19% of global calories in a no-international-food-trade scenario. Research is a way of bridging this gap in understanding by getting to the core of where they could be useful. For example, where might prices go in a variety of scenarios? How resilient are the different food sources to the different shocks, and how much would they cost to produce? Can they integrate into the food system as feed or biofuels to free up human-edible foods? A 1% shock to output leads to around a 7% rise in prices (see the rough sketch after this list), so being able to produce 19% more food at short notice is not a trivial factor in many crises of varying severity, and resilience like that could save millions of lives.
    • Some resilient foods are already cost effective for small quantities, such as seaweed and greenhouses, so they would be scaled up in lesser shocks. Also, we think of crop relocation to existing planted areas and crop area expansion as resilient foods, and these are likely to be a big part of the response in lesser catastrophes (and these are not included in the 19% figure).
    • Overall, we see research as the foundation on which you then build the policy work and other actions. Broadening and strengthening this foundation is therefore vital in allowing the work that finally effects change to occur - it isn’t an either/or. Now that there is a solid enough research base, it is possible to take some policy action, hence ALLFED’s expansion into this area of work, but more research will allow better and additional resilience-building in the future.
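
As a rough illustration of the price arithmetic mentioned above: the "1% shock to output leads to around a 7% rise in prices" figure implies a short-run price elasticity of roughly -1/7, and the sketch below simply applies that approximation. The shock and offset sizes are placeholders of our own, not ALLFED estimates, and a constant-elasticity approximation is of course crude for large shocks.

```python
# Minimal sketch of the supply-shock / price-rise arithmetic.
# The -1/7 elasticity is implied by "1% output shock -> ~7% price rise";
# the shock and offset magnitudes below are illustrative assumptions.

elasticity = -1 / 7  # implied short-run price elasticity of demand

def price_change(supply_shock_pct: float) -> float:
    """Approximate % change in price for a given % change in supply,
    under a constant-elasticity approximation."""
    return supply_shock_pct / elasticity

print(f"1% output loss  -> ~{price_change(-1):.0f}% price rise")
print(f"10% output loss -> ~{price_change(-10):.0f}% price rise")

# The same 10% shock, partially offset by bringing extra calories online:
offset_pct = 8  # percentage points of output recovered (assumption)
print(f"10% loss with {offset_pct}pp offset -> ~{price_change(-10 + offset_pct):.0f}% price rise")
```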

Thanks again for your work, and the openness with which it was conducted. It’s important to talk about and dig deep into these issues, and we hope others will do the same.

Hi Ulrik,

I would agree with you there in large part, but I don't think that should necessarily reduce our estimate of the impact below what I estimated above.

For example, the Los Alamos team did far more detailed fire modeling than Rutgers, but the end result is a model that seems unable to replicate real fire conditions in situations like Hiroshima, Dresden and Hamburg -> more detailed modeling isn't in itself a guarantee of accuracy.

However, the models we have are basing their estimates at least in part on empirical observations, which potentially give us enough cause for concern:

-Soot can be lofted in firestorm plumes, for example at Hiroshima.

-Materials like SO2 in the atmosphere from volcanoes can be observed to disrupt the climate, and there is no reason to expect that this is different for soot.

-Materials in the atmosphere can persist for years, though the impact takes time to arrive due to inertia and will diminish over time.

The complexities of modeling you highlight raise the uncertainties with everything above, but they do not disprove nuclear winter. The complexities also seem to raise more uncertainty for Los Alamos and the more skeptical side, who rely heavily on modeling, than for Rutgers, who use modeling only where they cannot use an empirical heuristic like the conditions of past firestorms.

Hi Ulrik, good to hear from you again!

We do not know what will happen if a nuclear weapon is again detonated offensively, other than that the world would be forever changed. This is a fear shared by pretty much everyone who deals with nuclear weaponry (including recent speeches at EAG London - such as John Gower, who we met before), and even without immediate retaliation the expected probability of a large scale future exchange would rise hugely in such a world. That's what I meant about the "all bets are off" line.

Certainly, many countries would seek to acquire weapons under this scenario (especially if the use was against a non-nuclear power, which breaks a further taboo), and even if there are no further detonations within 30 days, the chances of a full-scale exchange in such a world may rise by an order of magnitude.

I'm not sure that second projection is correct, and I put the mean projected additional detonations at higher levels. However, even if it is an accurate projection, I think the core point of the article holds: An offensive detonation significantly raises the probability of large exchanges, and there is a baseline risk of such an exchange today anyway -> Large exchanges with thermonuclear weaponry risk nuclear winters -> this is worth considering in our calculus around the expected impacts of nuclear warfare.

I feel there are a few things here:

  • Los Alamos claim that they are being pessimistic, but then end up with very low soot levels compared to observations.
  • They claim that firestorms are difficult to form with 15 kt weapons. If their logic holds, this may be accurate, due to the circle of blast damage nearly overlapping with the circle of fire damage (see the map above), but that would not be the case as weapons get larger (see the 100 kt+ weapon circles). This makes their conclusion less relevant for larger exchanges.
  • Their claimed soot lofting in the 72.6 g/cm2 scenario is still very low. They claim the fire is in the "firestorm regime", but it again doesn't seem to match observations (of lofting post-Hiroshima, for example, with the photo above). This also contradicts other modeling as well as the few observations we have of firestorms: both Rutgers and Lawrence Livermore model soot being far more effectively lofted than Los Alamos do.

My point from the article is that:

  • There is uncertainty, BUT:
  • Some of the Los Alamos critiques may not apply at larger weapon sizes and larger exchanges.
  • Los Alamos' modeling may or may not be correct; we have credible reasons to be concerned about soot lofting from multiple other sources (both observations and models), and there are questions about some of their key model outputs.
  • If Los Alamos is correct in all of their modeling, and it all holds for larger exchanges, then there would likely be no climate shock. If any of the points raised above hold, there is a threat, even without the most pessimistic of Rutgers' projections.
  • Therefore: nuclear winter is a threat.

Quick responses Vasco!

The 3.2 Tg figure is their figure for the worst-case scenario, based on 1 g/cm2 fuel loading. In their later paper they discuss that this may be too high for a 1 g/cm2 scenario; as you say, they mention that their soot conversion was set to be high for caution, and they could have it an order of magnitude or so lower, which Rutgers do.

However, this presents a bit of an issue for my calculations and factors. I'm comparing headline results there, and the 3.2 Tg is the headline worst-case result. It could be that they actually meant that the 100 fires generated just 0.32 Tg of soot in total (or less), and we could take that as a fair comparison, but then we have a further issue: Hiroshima alone led to an estimated 0.02 Tg, which raises questions about whether they are calibrated correctly.
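
To spell that arithmetic out (a quick back-of-envelope using only the figures quoted above; fuel loads obviously differ between Hiroshima and the cities in the India/Pakistan scenario, so this is a sanity check rather than a like-for-like comparison):

```python
# Back-of-envelope check using only the figures quoted in this thread.
hiroshima_soot = 0.02  # Tg, estimated stratospheric soot from the single Hiroshima firestorm
fires = 100            # fires in the India/Pakistan scenario

headline_worst_case = 3.2            # Tg, the Los Alamos worst case at 1 g/cm2 fuel loading
adjusted = headline_worst_case / 10  # Tg, if soot conversion is an order of magnitude lower

scaled_from_hiroshima = fires * hiroshima_soot  # Tg, if each fire matched Hiroshima

print(f"Adjusted Los Alamos total: {adjusted:.2f} Tg")
print(f"100 Hiroshima-scale fires: {scaled_from_hiroshima:.2f} Tg")
```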

Again, you can assume that India/Pakistan just don't have the fuel to burn and that this keeps the figure that low, but then it's not a relevant point of comparison for a full-scale exchange on dense cities which do have the fuel. Either way, for the full-scale comparison it returns to firestorms: will they form? Assumptions around fuel loading and combustion feed into that, but that's the core.

Hi Vasco

I'm not sure I follow this argument: almost all of the above were serious fires but not firestorms, meaning that they would not be expected to effectively inject soot. We did not see 100+ firestorms in WW2, and the firestorms we did see would not have been expected to generate a strong enough signal to clearly distinguish it from background climate noise. That section was simply discussing firestorms, and that they seem to present a channel to stratospheric soot?

Later on in the article I do discuss this, with both Rutgers and Lawrence Livermore highlighting that firestorms would inject a LOT more soot into the stratosphere as a percentage of total emitted.

Hi Daniel,

Sorry, I only just saw your comment!

I think Lysenko and Lysenkoism are completely fascinating, but they kind of prove the quote above.

Lysenko was a biologist of sorts whose falsified, confused and simply invented results on plants supported Stalinist and Marxist thinking on how people are not innate but created by their environments; he then got brought into GOSPLAN to bring these insights to the economy. This is not because there was a lack of brilliant economists initially, just that those Stalin had were either cringing to his party line, hidden in side posts for their own good, or dead.

The problem was both to solve a complex problem (economics) and do it in a way that was acceptable to your masters and Marxist thinking of the time, which made the problem more complex than rocket science.

Once we move past Stalin (Red Plenty is very readable on this!) we get people like Kantorovich stepping out of the shadows. They were brilliant thinkers, inventing new tools we still use today, but they had to solve not only the problem of the maths, but also the difficulty of understanding the people they were supposedly commanding, with all their complexity and agency. On top of this, some tools and analyses were still forbidden to them.

Compare this with the rocket programme. Brilliant scientists again, solving really difficult problems, but orbital mechanics does not shift its behaviour to ruin your plan based on complicated politics (you may have missed an interaction, but interactions are a property of your materials and physical forces), and solving physics equations does not contradict Marxist thought (mostly - E=mc^2 was banned for a period as it apparently contradicted Marx).

The point of the Soviet Union's failure, or that quote, was not that if it had a few more smart economists or thinkers they would have succeeded, or that economists are somehow better than physicists. The point was that they were trying to do something that could not be done with their or our technology: fully tame and control complexity like it was a space rocket.

Hi Ed!

One thing that potentially falls into all three categories of difficulty is food stocks/reserves, which is an issue with high relevance to exposure to shocks and food insecurity, but is really hard to track.

It's a tricky issue, but could really help many researchers inside and outside of EA to improve!

A few issues we have found which would be very useful to see developed are:

The USDA PSD and FAOSTAT both have estimates for crop-year end, but as crop years do not line up, effective stocks are higher than this figure. These results are based on a few methodologies, but do not match reality exactly, and are better for globally traded crops.

Reconciliation and improvement on these estimates is possible, but requires detailed trade data or insider data, which is often very commercially sensitive. Big traders (ABCD companies/COFCO) would know this, but they do not disclose it.

Stock estimates can be reasonably accurate at a global level and when averaged over a period of time; however, fluctuations in demand, smuggling and delays in data releases mean they can be hard to track on a country-by-country basis for the poorest countries. These are often the countries we care most about for food insecurity.

Stocks in strategic reserves, private reserves and simply in transit can be difficult to divide out. In some cases this suggests chunks of available stocks are missing, or that stocks would not be available to the market if classed as "private" when actually state controlled.
