
A paper I have written on a form of geoengineering known as 'stratospheric aerosol injection' has recently been published in Futures. The paper explores whether, assuming that reducing existential risk is overwhelmingly important, stratospheric aerosol injection should be researched. The following aspects are likely to be of some interest to EAs:

  • It provides, to my knowledge, the most comprehensive existing discussion of the scale of the existential risk posed by climate change. 
  • It provides the most comprehensive and up-to-date discussion of geoengineering from an existential risk reduction point of view. 
  • The framework it uses to discuss the problem of 'moral hazard' may be of use in other domains (though the framework is David Morrow's, not my own).

The paper is available on my website, and on my academia page. All views are my own, not my employer's. All comments are welcome. 

 

===

Abstract: In the wake of the continued failure to mitigate greenhouse gases, researchers have explored the possibility of injecting aerosols into the stratosphere in order to cool global temperatures. This paper discusses whether Stratospheric Aerosol Injection (SAI) should be researched, on the controversial ethical assumption that reducing existential risk is overwhelmingly morally important. On the one hand, SAI could eliminate the environmental existential risks of climate change (arguably around a 1% chance of catastrophe), and reduce the risks of interstate conflict associated with extreme warming. Moreover, the risks of termination shock and unilateral deployment are overstated. On the other hand, SAI introduces risks of interstate conflict which are very difficult to quantify. Research into these security risks would be valuable, but also risks reducing willingness to mitigate. I conclude that the decision about whether to research SAI is one of ‘deep uncertainty’ or ‘complex cluelessness’, but that there is a tentative case for research initially primarily focused on the governance and security aspects of SAI.

Highlights

  • It is uncertain whether Stratospheric Aerosol Injection (SAI) research is justifiable, but a tentative case can be made for security-focused research.
  • SAI would eliminate the arguable environmental existential risks of climate change (<1% – 3.5%).
  • It is extremely unclear whether SAI would reduce willingness to mitigate, and extensive efforts should be made to reduce the risk of mitigation obstruction.
  • Termination shock risk is overstated.
  • The risk of unilateral deployment is overstated, but SAI introduces other serious security risks.
Comments



I'd really appreciate a sentence or two on each of the following questions:

  • What is termination shock risk?
  • What is the main concern with unilateral deployment?
  • What is the worry re: interstate conflict?
[anonymous]

Termination shock: the worry that, after SAI is deployed, it is for some reason stopped suddenly, leading to rapid and large warming. Unilateral deployment: the worry that a state or other actor would deploy SAI unilaterally in a way that would damage other states.

My concern about interstate conflict is this: SAI would have to be deployed for decades, up to a century, to provide benefits. Over this period, there would need to be global agreement on SAI - a technology that would have divergent regional climatic effects. If there were adverse weather events (caused by SAI or not), victims would be angry, and this could heighten interstate tension. Generally, maintaining agreement on something like that for decades seems like it would be really hard.

Thanks very much!

My very limited understanding of this topic is that climate models, especially of unusual phenomena, are highly uncertain, and therefore there is some chance that our models are incorrect. This means that SAI could go horribly wrong, not have the intended effects, or make the climate spin out of control in some catastrophic way.

The chance of this might be small but if you are worried about existential risks it should definitely be considered. (In fact I thought this was the main x-risk associated with SAI and similar grand geo-engineering exercises).

I admit I have not read your article (only this post) but I was surprised this was not mentioned and I wanted to flag the matter.

For a similar case, see the work of FHI researchers Toby Ord and Anders Sandberg on the risks of the Large Hadron Collider (LHC) here: https://arxiv.org/abs/0810.5515 - and I am reasonably sure that SAI models are a lot more uncertain than the LHC physics.

[anonymous]

I discuss this in the paper under the heading of 'unknown risks'. I tend to deflate their significance because SAI has natural analogues - volcanoes - which haven't set off such catastrophic spirals. The massive 1991 Pinatubo eruption reduced global temperatures by roughly 0.5°C. There is also already an enormous amount of tropospheric cooling due to industrial emissions of sulphur and other particulates. The effects of this could be very substantial - (from memory) at most cancelling out up to half of the total warming effect of all the CO2 ever emitted. Due to concerns about air pollution, we are now reducing emissions of these tropospheric aerosols. This could have a very substantial warming effect.
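To make the Pinatubo analogue concrete, here is a rough, illustrative back-of-envelope sketch (my own addition, not from the paper). It assumes Pinatubo injected roughly 20 Mt of SO2 and produced roughly 0.5°C of peak global cooling, and that cooling scales linearly with injected mass - a crude simplification that real SAI modelling would not make:

```python
# Crude illustrative model only: linear scaling from the Pinatubo analogue.
# The ~20 Mt SO2 and ~0.5 C figures are approximate; linearity is an assumption.
PINATUBO_SO2_MT = 20.0    # approximate SO2 injected by the 1991 eruption, Mt
PINATUBO_COOLING_C = 0.5  # approximate peak global cooling, deg C

def cooling_for_injection(so2_mt: float) -> float:
    """Estimated global cooling (deg C) for a given SO2 injection (Mt),
    assuming linear scaling from the Pinatubo analogue."""
    return PINATUBO_COOLING_C * (so2_mt / PINATUBO_SO2_MT)

def injection_for_cooling(target_cooling_c: float) -> float:
    """Inverse: SO2 injection (Mt) needed for a target cooling (deg C)."""
    return PINATUBO_SO2_MT * (target_cooling_c / PINATUBO_COOLING_C)

# On this crude model, offsetting 1 C of warming takes ~40 Mt of SO2.
print(injection_for_cooling(1.0))
```

The point of the sketch is only that the Pinatubo data give a natural-analogue anchor for the rough magnitudes involved; it says nothing about regional effects, injection logistics, or sustained versus one-off forcing.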

Concerns about model uncertainty cut in both directions and I think the preponderance of probabilities favours SAI (provided it can be governed safely). Estimates of the sensitivity of the climate to CO2 are also beset by model uncertainty. The main worry is the unprecedented warming effect from CO2 having unexpected runaway effects on the ecosystem. It is clear that SAI would allow us to reduce global temperatures and so would on average reduce the risk of heat-induced tipping points or runaway processes. Moreover, SAI is controllable on tight timescales - we get a response to our action within weeks - allowing us to respond if something weird starts happening as a result of GHGs or of SAI. The downside risk associated with model uncertainty about climate sensitivity to GHGs is much greater than that associated with the effects of SAI, in my opinion. SAI is insurance against this model uncertainty.

Concerns about model uncertainty cut in both directions and I think the preponderance of probabilities favours SAI (provided it can be governed safely)

Good point. Agreed. I had not considered this.

I tend to deflate their significance because SAI has natural analogues... volcanoes ... industrial emissions.

This seems like flawed thinking to me. Data from natural analogues should be built into predictive SAI models. Accepting that model uncertainty is a factor worth considering means questioning whether these analogues are actually good predictors of the full effects of SAI.

(Note: LHC also had natural analogues in atmospheric cosmic rays, I believe this was accounted for in FHI's work on the matter)

-

I think the main thing that model uncertainty suggests is that mitigation or less extreme forms of geoengineering should be prioritised much more.

[anonymous]

I agree that mitigation should be prioritised.

SAI has advantages that other approaches lack, which is why it is insurance against model uncertainty about the sensitivity of the climate to GHGs. Carbon dioxide removal is much slower-acting, would be incredibly expensive, and has other costs. The other main proposed form of solar geoengineering involves tropospheric cooling by brightening clouds, etc. Uncertainties about this are probably greater than for SAI.
