[Important Edit: I have realised there was an error in my model of how much we will emit, as I used the wrong measure of carbon intensity (CO2/$ rather than CO2/kWh). Consequently, I now use a simplified form of the Kaya identity. This suggests that the risk of extreme warming is higher than I initially said. Thanks to Johannes Ackva for pointing this out.]
Understanding the probability of extreme warming of more than 6, 8 or 10 degrees is highly consequential for understanding how we should prioritise climate change relative to other global catastrophic risks. How hot it will get depends on:
● How much we emit
● How sensitive the climate is to emissions
Here, I construct a model of each of these uncertain questions. I conclude that:
● Assigning a probability distribution to a broad range of possible ‘business as usual’ scenarios up to 2100, on what I believe to be the most plausible estimate of climate sensitivity, the probability of eventual warming of more than 6 degrees is around 6%, and of more than 10 degrees around 1 in 1000.
● Assigning a probability distribution to a broad range of possible ‘business as usual’ scenarios up to 2200, on what I believe to be the most plausible estimate of climate sensitivity, the probability of eventual warming of more than 6 degrees is around 16%, and of more than 10 degrees around 1%.
This suggests a lower risk of extreme warming than other leading estimates, such as from Wagner and Weitzman. This is due to differences in priors across climate sensitivity. Nonetheless, the probability of extreme warming is uncomfortably high and strong mitigation remains imperative.
There are two forces here pushing in different directions. On the one hand, many estimates of climate sensitivity are too high due to the faulty use of Bayesian statistics. On the other, focusing only on the most likely ‘business as usual’ pathway ignores the downside risk of higher-than-expected emissions, due, for example, to surprisingly high economic or population growth. Overall, it looks as though the risk is lower than some leading estimates, but still worth worrying about.
I am grateful to Johannes Ackva and Will MacAskill for thoughts and comments. Mistakes are my own.
1. How much will we emit?
How much we emit depends on choices we make. When we are trying to understand how bad climate change could be, I think it is most useful to try to understand how much we will emit if things roughly carry on as they have been doing over the last 20 or 30 years. This gives us a baseline or ‘business as usual’ set of scenarios, which allows us to understand how much danger we are in if we don’t make extra efforts to decarbonise relative to what we are doing at the moment.
Existing literature
There are estimates of how much we are likely to emit in the literature. Rogelj et al (2016) provides a good overview of the literature:
[Figure: median estimated emissions under different policy scenarios, from Rogelj et al (2016)[1]]
The bars here show the median estimate of emissions across different emissions scenarios, and the vertical black lines show the range due to scenario spread, though it is unclear from the text what confidence interval this is supposed to depict. INDCs are the Intended Nationally Determined Contributions that countries have made in accordance with the Paris Agreement.
The range from the bottom of the conditional INDC scenario to the top of the no-policy scenario spreads from 2 trillion tonnes of CO2 to 7 trillion tonnes of CO2. This is equivalent to the span from the bottom end of RCP 4.5 (the medium-low emissions pathway) to the top end of RCP 8.5 (the high emissions pathway). Median cumulative emissions on current policies are 3.5 trillion tonnes of CO2, which is about the middle of RCP 6.0 (the medium-high emissions pathway). You can check how cumulative emissions correspond to emissions pathways with this table:
[Table: cumulative emissions corresponding to each RCP pathway]
My own model of likely emissions
It remains somewhat unclear from the Rogelj et al (2016) estimate how probability should be distributed across these scenarios. How plausible is a global no policies scenario, for example? Thus, I have constructed a model myself which tries to give a plausible probability density function across a range of emissions scenarios. To do this, I have given different estimates of the three parameters in the Kaya Identity:
Total cumulative CO2 emissions are the product of three factors: (1) human population, (2) GDP per capita, and (3) carbon intensity (emissions per $ of GDP).
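In symbols (my restatement of the identity, writing P_t for population, g_t for GDP per capita and c_t for carbon intensity of GDP in year t):

```latex
\text{Cumulative CO}_2\ \text{emissions} = \sum_{t} P_t \cdot g_t \cdot c_t
```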
My estimate of these parameters:
● Uses existing estimates of the likely trends in these parameters over the century, where available
● Where these are not available, I extrapolate from the trends over the past 30 or so years in the parameters of interest.
The model is here. It includes:
● Three estimates up to 2100 of the likely range of business as usual emissions:
○ One is based on extrapolating growth in GDP per capita from the last 30 years.
○ Another is based on the Christensen et al expert survey of forecasts of economic growth.
○ A third assumes that there is an AI explosion leading to growth of 10% per year.
● One estimate of emissions to 2200, which extrapolates GDP per capita growth from our experience over the last 30 years.
The results are here:
(Note: the results will vary slightly in the live model, depending on when you refresh it.)
There is large uncertainty about the parameters that make up the Kaya Identity, and this produces large uncertainty about business as usual emissions. For example, 2% annual economic growth produces an economy that is roughly 5 times larger in 2100, whereas 3% growth produces one roughly 10 times larger. The 95% confidence interval for the UN’s projection of population in 2100 stretches from 9.5 billion to 13 billion people. This is why there is such uncertainty about how much we will emit, assuming that we make no extra effort to reduce emissions.
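To illustrate how these uncertainties propagate, here is a minimal Monte Carlo sketch of the kind of calculation the Guesstimate model performs. All of the distribution parameters below are illustrative assumptions I have chosen for the sketch, not the model’s actual inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
N, YEARS = 100_000, 80  # samples; 2020 to 2100

# Illustrative parameter distributions (assumptions for this sketch only):
pop_2100 = rng.normal(11.2, 0.9, N)        # population in 2100, billions
gdp_growth = rng.normal(0.02, 0.01, N)     # annual growth in GDP per capita
decline = rng.normal(0.02, 0.01, N)        # annual decline in carbon intensity

gdp_pc_2100 = 17_000 * (1 + gdp_growth) ** YEARS  # $ per person, from ~$17k today
intensity_2100 = 0.3 * (1 - decline) ** YEARS     # kg CO2 per $, from ~0.3 today

# Kaya identity: annual emissions = population x GDP per capita x intensity.
# Billions of people x $/person x kg/$ gives millions of tonnes; /1,000 gives Gt.
annual_now = 7.8 * 17_000 * 0.3 / 1_000           # ~40 Gt CO2 per year today
annual_2100 = pop_2100 * gdp_pc_2100 * intensity_2100 / 1_000

# Crudely integrate over the century with a trapezoid, in trillions of tonnes.
cumulative = (annual_now + annual_2100) / 2 * YEARS / 1_000

for q in (5, 50, 95):
    print(f"{q}th percentile: {np.percentile(cumulative, q):.1f} trillion tonnes CO2")
```

Even this toy version produces a median in the broad region of the current-policy estimate above, with a long upper tail, because small differences in growth rates compound over 80 years.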
Emissions and CO2 concentrations to 2100
● The median business as usual scenario to 2100 is the medium-high emissions pathway (in the RCP6 range), which is roughly what happens if countries continue on current policies.
○ This corresponds to atmospheric CO2 concentrations of about 700ppm.
● The upper 5% bound of cumulative emissions is beyond the RCP8.5 range.
Emissions and CO2 concentrations to 2200
If we continue on current trends up to 2200, then median cumulative emissions are 11 trillion tonnes, and there is a 5% chance of more than 31 trillion tonnes (which is bad news).
Flaws in the model
This model assumes that the parameters in the Kaya Identity are independent, which is false: population growth and growth in GDP per capita, for example, are plausibly correlated. So, the model should be taken with a grain of salt. Nevertheless, I do think it is useful for giving a fairly plausible range of uncertainty about what we could emit without making extra effort to mitigate.
2. How hot will it get?
The relationship between CO2 concentrations and warming is logarithmic: at least within a certain range, each doubling of concentrations produces the same amount of warming. Equilibrium climate sensitivity (ECS) measures how much the planet warms after a doubling of CO2 concentrations, once the climate system has reached equilibrium. There is uncertainty about the true equilibrium climate sensitivity. The IPCC does not give a formal probability distribution over equilibrium climate sensitivity, but instead states:
“Based on the combined evidence from observed climate change including the observed 20th century warming, climate models, feedback analysis and paleoclimate, as discussed above, ECS is likely [>66% chance] in the range 1.5°C to 4.5°C with high confidence. ECS is positive, extremely unlikely [<1% chance] less than 1°C (high confidence), and very unlikely [<10% chance] greater than 6°C (medium confidence).”[3]
Lamentably, this leaves the nature of the right tail of climate sensitivity very unclear. In Climate Shock, Wagner and Weitzman discuss how to convert this statement into a probability density function. They end up positing that the underlying distribution is lognormal,[4] which suggests a distribution over climate sensitivity that looks like this:
[Figure: Wagner and Weitzman’s implied lognormal distribution over climate sensitivity]
This is a heavy-tailed distribution which, as we shall see, leaves us with a high chance of extreme warming.
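Before turning to priors, it is worth making the concentration-to-warming arithmetic from the start of this section concrete. A minimal sketch of the logarithmic relationship (the 280ppm pre-industrial baseline is standard; the ECS values and the 700ppm concentration are just the examples used elsewhere in this post):

```python
import math

def equilibrium_warming(conc_ppm: float, ecs: float,
                        baseline_ppm: float = 280.0) -> float:
    """Eventual warming in degrees C: ECS per doubling of CO2, times the
    number of doublings relative to the pre-industrial concentration."""
    return ecs * math.log2(conc_ppm / baseline_ppm)

# e.g. the ~700ppm of the median business-as-usual scenario to 2100:
for ecs in (1.5, 3.0, 4.5):  # the IPCC 'likely' range and its midpoint
    print(f"ECS {ecs}: {equilibrium_warming(700, ecs):.1f} degrees eventual warming")
```

At 700ppm we are log2(700/280) ≈ 1.3 doublings above pre-industrial, so eventual warming is simply about 1.3 times whatever the true climate sensitivity is; this is why the tail of the sensitivity distribution matters so much.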
The influence of uniform priors
I think this estimate of climate sensitivity, and others like it, are flawed. As far as I can tell, the heavy right tail produced in many IPCC estimates of climate sensitivity is entirely a product of the fact that these posterior distributions are updated from a uniform prior over climate sensitivity with an arbitrary cut-off at 10 degrees or 20 degrees. I checked some of the papers behind IPCC models of climate sensitivity that have a long tail, and they either explicitly use a uniform prior, which makes a large difference to tail behaviour,[5] or do not say whether or not they use a uniform prior (but I would guess that they do). When such a prior is combined with the likelihood from the data and evidence that we have, we end up with a posterior distribution with a heavy right tail.
However, as Annan and Hargreaves (2011) have argued, the use of a uniform prior is unjustified. Firstly, climate scientists use these priors on the assumption that they involve “zero information”, but this is not the case. Secondly and relatedly, the cut-off is arbitrary. Why not have a cut-off at 50 degrees?
Thirdly, it is not the case that, before analysing modern instrumental and paleoclimatic data on the climate, we would rationally believe that a doubling from pre-industrial levels of 280ppm to 560ppm would be equally likely to produce warming of 3 degrees or 20 degrees. In fact, before analysing modern data sets, scientists had already settled, in 1979, on a 67% confidence range of 1.5 to 4.5 degrees, and this likely range has barely changed since.[6] As Annan and Hargreaves note:
“This estimate was produced well in advance of any modern probabilistic analysis of the warming trend and much other observational data, and could barely have been affected by the strong multidecadal trend in global temperature that has emerged since around 1975. Therefore, it could be considered a sensible basis for a credible prior to be updated by recent data.”[7]
Arguments from physical laws also suggest that extreme values of 10 degrees or 20 degrees are extremely unlikely.
If we instead use a more plausible prior, based on an expert survey by Webster and Sokolov, and update it with the likelihood from modern data sets, the resulting posterior 95% confidence interval for climate sensitivity is 1.2–3.6 degrees.
For the sake of sensitivity analysis, Annan and Hargreaves also update using a prior following a Cauchy distribution with greatly exaggerated tails. This prior is quite extreme: it implies an 18% probability of climate sensitivity exceeding 6 degrees, and a 5% probability of more than 15 degrees. It seems likely that the experts in 1979, without access to modern data sets, would have thought this implausible. The posterior upper 95% confidence bound from this prior is 4.7 degrees.[8] The influence of different priors is shown here:
[Figure: posterior distributions of climate sensitivity under different priors, from Annan and Hargreaves (2011)]
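The mechanism by which the prior drives the tail can be seen in a toy grid update. Everything here is illustrative: the likelihood is a stand-in normal centred at 3 degrees (real likelihoods from instrumental data are right-skewed and constrain the upper tail even less, so the prior matters more in practice than in this toy), and the prior shapes are not the actual Webster or Annan and Hargreaves distributions.

```python
import numpy as np
from scipy import stats

S = np.linspace(0.01, 20, 4000)  # grid over climate sensitivity (degrees C)
dx = S[1] - S[0]

# Stand-in likelihood: evidence pointing at ~3 degrees (illustrative only).
likelihood = stats.norm.pdf(S, loc=3.0, scale=1.5)

priors = {
    "uniform on [0, 20]": np.ones_like(S),
    "informed (lognormal around 2.5)": stats.lognorm.pdf(S, s=0.5, scale=2.5),
    "Cauchy (exaggerated tails)": stats.cauchy.pdf(S, loc=3.0, scale=2.0),
}

for name, prior in priors.items():
    posterior = prior * likelihood
    posterior /= posterior.sum() * dx       # normalise on the grid
    p_tail = posterior[S > 6].sum() * dx    # posterior P(ECS > 6)
    print(f"{name}: P(ECS > 6) = {p_tail:.4f}")
```

With identical evidence, the uniform prior leaves the most posterior mass above 6 degrees and the informed prior the least, which is exactly the pattern Annan and Hargreaves describe.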
How hot could it get?
In the second Guesstimate model, I have modelled the implications of these different estimates of climate sensitivity for how hot it could get unless we make extra effort to reduce emissions. I have taken the estimates of cumulative emissions from the first model and converted them into the corresponding 95% confidence interval for CO2 concentrations.[9] This gives us the correct 95% confidence interval, which is the main thing we are interested in because we want to know the tail risk. Unfortunately, it doesn’t give us the right median. (I’m not sure how to resolve this in Guesstimate.)
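To make the conversion step concrete: the model uses the Meinshausen et al correspondence,[9] but a crude stand-in is an airborne-fraction approximation, assuming that roughly 45% of emitted CO2 stays in the atmosphere and that ~7.8 Gt of CO2 raises concentrations by about 1ppm. Those round numbers are my assumptions for the sketch, so its output will not exactly match the model’s.

```python
def concentration_2100(cumulative_trillion_tonnes: float,
                       current_ppm: float = 410.0,
                       airborne_fraction: float = 0.45,
                       gt_per_ppm: float = 7.8) -> float:
    """Crude cumulative-emissions -> CO2 concentration conversion;
    a stand-in for the Meinshausen et al lookup used in the model."""
    gt_emitted = cumulative_trillion_tonnes * 1_000
    return current_ppm + gt_emitted * airborne_fraction / gt_per_ppm

# e.g. the ~3.5 trillion tonne current-policy median from section 1:
print(f"{concentration_2100(3.5):.0f} ppm")  # ~610ppm, in the RCP6 vicinity
```

Feeding concentrations like this, together with a sampled climate sensitivity, into the logarithmic warming relationship above is all the final step of the calculation involves.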
Emissions to 2100
For simplicity, I just report results from the estimate of emissions that extrapolates from the Christensen et al economic growth forecasts.
● On the Wagner and Weitzman estimate of climate sensitivity, there is about a 15% chance of more than 6 degrees of eventual warming, and a 1% chance of more than 10 degrees.
● On the Webster estimate, there is a 6% chance of more than 6 degrees of warming, and a 0.1% chance of more than 10 degrees.
● On the Cauchy estimate, there is a 14% chance of warming of more than 6 degrees, and a 3% chance of more than 10 degrees.
Of these estimates, I think the Webster prior is the most plausible, and this suggests that the chance of 6 degrees is markedly lower than Wagner and Weitzman estimate, on business as usual.
Emissions to 2200
If we assume that we will continue business as usual past 2100:
● On the Wagner and Weitzman estimate of climate sensitivity, there is about a 24% chance of more than 6 degrees of warming, and a 6% chance of more than 10 degrees.
● On the Webster estimate, there is a 16% chance of more than 6 degrees of warming, and a 1% chance of more than 10 degrees.
● On the Cauchy estimate, there is a 22% chance of warming of more than 6 degrees, and a 4.3% chance of more than 10 degrees.
Conclusions
Climate Shock by Wagner and Weitzman is one of the best treatments of the importance of tail risk for climate change. Still, I think the very high tail risk suggested by Wagner and Weitzman’s model is a result of the mistaken use of uniform priors in IPCC models of climate sensitivity. If we just take the most likely emissions scenario without extra effort, the chance of 6 degrees is <1%, whereas Wagner and Weitzman estimate that it is >10%.
However, this effect is offset by the risk that emissions are much higher than expected. Once we account for the full distribution of plausible ‘business as usual’ scenarios, the risk of more than 6 degrees is 6%: still lower than Wagner and Weitzman, but certainly worth worrying about. The chance of 10 degrees of warming is in the 1 in 1000 range. 10 degrees has been posited as a plausible threshold at which climate change poses a direct existential risk.[10] This suggests that the direct existential risk of climate change remains a concern.
If we fail to get our act together by the 22nd century, cumulative emissions could be truly massive, with all the civilisational strife this entails.
Will MacAskill has also pointed out to me that if there is an AI explosion, energy demand will increase massively. I am not sure what this possibility implies for how promising climate change is to work on as a problem.
Finally, it is worth mentioning that these estimates of climate sensitivity exclude some potentially important carbon cycle feedbacks. As I argue here, the median view in the literature is that these feedbacks are much less important than anthropogenic CO2 emissions. That said, these feedbacks are understudied, so there is likely considerable model uncertainty.
[1] Joeri Rogelj et al., “Paris Agreement Climate Proposals Need a Boost to Keep Warming Well below 2 °C,” Nature 534, no. 7609 (June 30, 2016): 635, https://doi.org/10.1038/nature18307.
[2] IPCC, Climate Change 2013: The Physical Science Basis: Working Group I Contribution to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, ed. T. F. Stocker et al. (Cambridge University Press, 2013), 27.
[3] IPCC, 84.
[4] Gernot Wagner and Martin L. Weitzman, Climate Shock: The Economic Consequences of a Hotter Planet (Princeton: Princeton University Press, 2015), 182–83.
[5] Roman Olson et al., “A Climate Sensitivity Estimate Using Bayesian Fusion of Instrumental Observations and an Earth System Model,” Journal of Geophysical Research: Atmospheres 117, no. D4 (February 21, 2012), https://doi.org/10.1029/2011JD016620; Lorenzo Tomassini et al., “Robust Bayesian Uncertainty Analysis of Climate System Properties Using Markov Chain Monte Carlo Methods,” Journal of Climate 20, no. 7 (April 1, 2007): 1239–54, https://doi.org/10.1175/JCLI4064.1.
[6] J. D. Annan and J. C. Hargreaves, “On the Generation and Interpretation of Probabilistic Estimates of Climate Sensitivity,” Climatic Change 104, no. 3–4 (February 1, 2011): 429–30, https://doi.org/10.1007/s10584-009-9715-y.
[7] Annan and Hargreaves, 429–30.
[8] Annan and Hargreaves, 431.
[9] This conversion is based on Malte Meinshausen et al., “The RCP Greenhouse Gas Concentrations and Their Extensions from 1765 to 2300,” Climatic Change 109, no. 1–2 (November 1, 2011): Table 4, https://doi.org/10.1007/s10584-011-0156-z.
[10] Martin L. Weitzman, “Fat-Tailed Uncertainty in the Economics of Catastrophic Climate Change,” Review of Environmental Economics and Policy 5, no. 2 (July 1, 2011): 275–92, https://doi.org/10.1093/reep/rer006; Steven C. Sherwood and Matthew Huber, “An Adaptability Limit to Climate Change Due to Heat Stress,” Proceedings of the National Academy of Sciences 107, no. 21 (May 25, 2010): 9552–55, https://doi.org/10.1073/pnas.0913352107.
Hi John,
Thanks for the clarifications and responses!
Regarding your points:
1. Thanks for clarifying the meaning: so it is not a worst case, but more a baseline where extra effort would mean going beyond what we currently see.
It still seems to me that what you model is significantly more pessimistic than that.
I think average marginal carbon prices are not a good proxy for overall climate policy effort, because carbon prices are usually (i) not the only climate policy, (ii) mostly not the dominant climate policy (with the possible exceptions of Sweden and British Columbia, but those are both negligible jurisdictions in terms of emissions), and (iii) accompanied by other, much stronger policies that drive carbon intensity reductions.
E.g. we both mention renewables, electric mobility and advanced nuclear as (potentially) important influences on carbon intensity trends, yet none of those has been brought about by carbon pricing policies, but by innovation and deployment policy. Across Europe, progressive states in the US, and China, we have fairly aggressive policies to stimulate low-carbon tech, often with implied carbon prices (technology specific and realized via subsidies) in the 100s USD/tCO2 range.
So, I think even without extra effort, there are significant efforts underway to drive cost differentials down, at least for electric power and light-duty transport, and that is very clearly the result of climate policy (plus air pollution policy).
This is far from enough, but I don’t think it is well-proxied by the state of average carbon pricing policy.
2.
a. On China: Yes, the growth factor is in the growth parameter, but it is *also* in the intensity parameter as a weight: in the same period in which China rises quickly by burning lots of coal, its economic importance also increases strongly (i.e. its weight in defining the trend).
I would agree that we should expect developing countries to escape poverty as cheaply as possible, though the other aspect is that China’s sheer centralized action capacity and population size are anomalous. Plus, the availability and price of natural gas and renewables have changed somewhat since China’s decision to go all the way with coal.
b. Climate policy kicking off: I think we are talking about different things here. Yes, global climate policy is very weak and I would agree with you that we should, for example, not necessarily expect a change in trajectory from the Paris Agreement.
But despite that, strong climate policy exists in some places and will affect carbon intensity once the technologies it champions scale. This is new, and it has not been reflected in carbon intensity yet, but it likely will be.
c. Technologies in store: (I actually think the most significant technology here to date will be electric mobility.) But even if it is solar and wind, I don’t think that “what solar and wind have done in Germany so far” is a good proxy for “what the technologies accelerated by some governments will do worldwide”, because (i) Germany isn’t very sunny, (ii) we phased out nuclear at the same time (genius, I know!), (iii) we are already experiencing value deflation, which most parts of the world will reach significantly later, (iv) the share of electrification, and thereby the impact of low-carbon electric sources, will increase even in a “no extra effort” case, and (v) we are still at the beginning of seeing the impact of those technologies globally (the data from which you extrapolate the intensity trend ends in 2014).
d. New technologies in store: CCS and advanced nuclear both might or might not happen, and I hope we can make them more likely to happen, and happen faster. But at least for Europe and progressive parts of the US, carbon prices in the range of USD 50 by 2030 (or comparable non-price policies) are part of my prediction of “no extra effort”. I agree with the relative evaluation of CCS and advanced nuclear.
e. Political coordination: I think both your and my “no extra effort” cases assume essentially zero political coordination. You assume carbon intensity trends going forward based on the last 30 years (which end in 2014, i.e. pre-Paris), a period with very little coordination on emissions (in the grand scheme of things, Kyoto doesn’t really matter). Even less coordination might be a plausible worst case, but just assuming continued non-coordination should not change the estimate much. Likewise, I think your estimate is pessimistic not because I am more optimistic about global coordination, but because I think you underplay the non-coordinated-but-present efforts by some governments to change relative costs. If those have some effect, then carbon intensity declines in the future should be higher than in the last 30 years as a matter of the default no-extra-effort prediction.
g. Breakdown of cooperation / arms race: I agree with that. It should widen our range of estimates; I am not sure it should shift the median much (though it would shift the mean).
4. Negative emissions: As discussed above, I think that even in the no-extra-effort scenario there is significant effort to enable low-carbon tech, and it seems a fairly pessimistic assumption that by the end of the century we will not have at least some cheap negative emissions tech (not necessarily enough to offset all emissions, but significantly more than having no effect in expectation). This is not the world I see when I look at what the UK, the EU, and progressive governments in the US are doing to further technological development. We are not in a world where no one is trying to make low-carbon solutions succeed and get cheaper.
And in particular, it seems hard to imagine a world with high climate sensitivity, high growth and no one attempting to bring down the cost of negative emissions approaches.
This seems quite at odds with the typical dynamic whereby higher problem severity and higher capability drive a more active search for solutions, among which negative emissions are attractive because they can still work after we have failed to exercise foresight early on, and they avoid some of the more unpredictable risks of geo-engineering.
On geo-engineering: You seem to answer a different question here, namely the value of geo-engineering. But if the question of the model is “how hot will it get?”, then I think it makes sense to make an explicit assumption about when you would expect it to be used, based on empirical expectation.
In terms of the conclusion, you write:
“All of this suggests that the estimates of carbon intensity decline might be biased a bit upwards. The most important factors seem to be decline in costs of renewables, electric cars and potentially advanced nuclear, as well as factors e and f.”
I think that downplays the issue, and it conflates two distinct effects as if they affected the same variable (carbon intensity), which they do not.
From your list, a-d (and g?) are responses to effects on carbon intensity (the points under II in my list).
From your list, e-f and the issues under III in my list affect the probability that all four variables driving warming (population, GDP per capita, carbon intensity, climate sensitivity) vary in the same direction in their effect on overall warming probabilities, which is probably less likely (we agree on that), and thereby have an effect on expected warming quite different from the potential upward bias in carbon intensity.
This latter point is very different from arguing for a change in the mean/median of the carbon intensity decline rate.
As you suggest, I will try to play around with the model a bit and see what the effects of these different assumptions are. Thanks for the good discussion!