Should we seek to make our scientific institutions more effective? On the one hand, rising material prosperity has so far been largely attributable to scientific and technological progress. On the other hand, new scientific capabilities also expand our powers to cause harm. Last year I wrote a report on this issue, “The Returns to Science in the Presence of Technological Risks.” The report focuses specifically on the net social impact of science when we take into account the potential abuses of new biotechnology capabilities, in addition to benefits to health and income.
The main idea of the report is to develop an economic modeling framework that lets us tally up the benefits of science and weigh them against future costs. To model costs, I start with the assumption that, at some future point, a “time of perils” commences, wherein new scientific capabilities can be abused in ways that increase human mortality (possibly even causing human extinction). In this framework, we can ask whether we would prefer an extra year of science, with all the benefits it brings, or an extra year’s delay in the onset of this time of perils. Delay is good in this model because there is some chance we won’t end up having to go through the time of perils at all.
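To make the trade-off concrete, here is a minimal toy sketch in Python. It is my own illustration of the comparison, not the report's actual model or calibration, and every parameter value below is a placeholder.

```python
# A minimal toy version of the trade-off described above (my own illustrative
# sketch, not the report's actual model or calibration). An extra year of
# science gives a small permanent utility boost but brings the "time of perils"
# forward by a year; an extra year of delay gives one more year in which the
# perils might be averted entirely, and pushes any losses further into the
# discounted future. All parameter values are placeholders.

P = 0.98            # annual discount factor (the report's preferred value)
HORIZON = 500       # years simulated
PERIL_MORT = 1e-4   # extra annual chance civilization is lost once perils begin (assumed)
AVERT_PROB = 1e-3   # annual pre-onset chance the perils are averted for good (assumed)
BOOST = 0.01        # permanent proportional utility gain from one extra year of science (assumed)

def welfare_given_onset(onset: int, utility: float) -> float:
    """Discounted utility if the time of perils begins at year `onset`."""
    total, survival = 0.0, 1.0
    for t in range(HORIZON):
        total += (P ** t) * survival * utility
        if t >= onset:
            survival *= 1.0 - PERIL_MORT
    return total

def expected_welfare(onset: int, utility: float) -> float:
    """Average over whether the perils get averted at some point before onset."""
    p_avoided = 1.0 - (1.0 - AVERT_PROB) ** onset
    return (p_avoided * welfare_given_onset(HORIZON, utility)            # perils never arrive
            + (1.0 - p_avoided) * welfare_given_onset(onset, utility))   # perils arrive on schedule

BASELINE_ONSET = 20  # year the perils would otherwise begin (assumed)
accelerate = expected_welfare(BASELINE_ONSET - 1, 1.0 + BOOST)  # extra science, perils a year sooner
delay      = expected_welfare(BASELINE_ONSET,     1.0)          # no boost, perils a year later

print(f"accelerate science: {accelerate:.3f}")
print(f"delay the perils:   {delay:.3f}")
```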
I rely on historical trends to estimate the plausible benefits of science. To calibrate the risks, I use various forecasts made in the Existential Risk Persuasion Tournament, which asked a large number of superforecasters and domain experts several questions closely related to the concerns of this report. So you can think of the model as helping to assess whether the historical benefits of science outweigh one set of reasonable (in my view) forecasts of risks.
What’s the upshot? From the report’s executive summary:
A variety of forecasts about the potential harms from advanced biotechnology suggest the crux of the issue revolves around civilization-ending catastrophes. Forecasts of other kinds of problems arising from advanced biotechnology are too small to outweigh the historic benefits of science. For example, if the expected increase in annual mortality due to new scientific perils is less than 0.2-0.5% per year (and there is no risk of civilization-ending catastrophes from science), then in this report’s model, the benefits of science will outweigh the costs. I argue the best available forecasts of this parameter, from a large number of superforecasters and domain experts in dialogue with each other during the recent existential risk persuasion tournament, are much smaller than these break-even levels. I show this result is robust to various assumptions about the future course of population growth and the health effects of science, the timing of the new scientific dangers, and the potential for better science to reduce risks (despite accelerating them).
On the other hand, once we consider the more remote but much more serious possibility that faster science could derail advanced civilization, the case for science becomes considerably murkier. In this case, the desirability of accelerating science likely depends on the expected value of the long-run future, as well as on whether we put more stock in the forecasts of the superforecasters or of the domain experts in the existential risk persuasion tournament. These forecasts differ substantially: I estimate domain expert forecasts for annual mortality risk are 20x superforecaster estimates, and domain expert forecasts for annual extinction risk are 140x superforecaster estimates. The domain expert forecasts are high enough, for example, that if we think the future is “worth” more than 400 years of current social welfare, in one version of my model we would not want to accelerate science, because the health and income benefits would be outweighed by the increase in the remote but extremely bad possibility that new technology leads to the end of human civilization. However, if we accept the much lower forecasts of extinction risks from the superforecasters, then we would need to put very, very high values on the long-run future of humanity to be averse to risking it.
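To get a feel for the quoted magnitudes, here is a rough back-of-the-envelope sketch. It is my own simplification, not the report's model: the 0.2-0.5% break-even band, the ~20x and ~140x forecast ratios, and the ~400-year threshold come from the summary above, while everything else is illustrative arithmetic.

```python
# Back-of-the-envelope checks of the two headline comparisons in the summary
# above (my own simplifications, not the report's model).

# (1) Mortality break-even. The summary's break-even band is an extra 0.2-0.5%
# annual mortality. The XPT superforecaster figures (discussed in the comments
# below) imply an increase of roughly 0.002 percentage points per year, and the
# report puts domain-expert mortality forecasts at ~20x the superforecasters'.
break_even_low = 0.2e-2
superforecaster_increase = 0.0041e-2 - 0.0021e-2   # ~0.002 pp per year
domain_expert_increase = 20 * superforecaster_increase

for label, x in [("superforecasters", superforecaster_increase),
                 ("domain experts  ", domain_expert_increase)]:
    print(f"{label}: {x:.4%}/yr, ~{break_even_low / x:.0f}x below the 0.2% break-even bound")

# (2) Value-of-the-future threshold. If accelerating science adds Delta_p to the
# chance of losing a future valued at V while yielding near-term benefits B,
# acceleration looks bad when Delta_p * V > B, i.e. when V > B / Delta_p. The
# ~400-year threshold under domain-expert extinction forecasts is from the
# report; if the threshold scales roughly inversely with the forecast (my
# simplification), the ~140x lower superforecaster numbers push it far higher.
domain_expert_threshold_years = 400
extinction_forecast_ratio = 140
print(f"implied superforecaster threshold: "
      f"~{domain_expert_threshold_years * extinction_forecast_ratio:,} years of current welfare")
```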
Throughout the report I try to neutrally cover different sets of assumptions, but the report’s closing section details my personal views on how we should think about all this, and I thought I would end the post with those views (the following are my views, not necessarily Open Philanthropy’s).
My Take
I end up thinking that better/faster science is very unlikely to be bad on net. As explained in the final section of the report, this rests mainly on three rationales. First, for a few reasons I think lower estimates of existential risk from new biotechnology are probably closer to the mark than more pessimistic ones. Second, I think it’s plausible that dangerous biotech capabilities will be unlocked at some point in the future regardless of what happens to our scientific institutions (for example, because they have already been discovered, or because advances in AI from outside mainstream scientific institutions will enable them). Third, I think there are reasonable chances that better/faster science will reduce risks from new biotechnology in the long run, by discovering effective countermeasures faster.
In my preferred model, investing in science has a social impact of 220x, as measured in Open Philanthropy’s framework. In other words, investing a dollar in science has the same impact on aggregate utility as giving a dollar each to 220 different people earning $50,000/yr. With science, this benefit is realized by increasing a much larger set of people’s incomes by a very small but persistent amount, potentially for generations to come.
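To see what “220x” means in these units, here is a small illustrative calculation. It assumes the standard log-utility reading of the $50,000 benchmark; the horizon and the number of beneficiaries per dollar are made up, so the output is only meant to show how tiny a persistent per-person gain can still add up to 220x.

```python
# How small a persistent proportional income gain would a $1 science investment
# need to create, per beneficiary per year, to be "worth 220x" in Open
# Philanthropy's units? Only the 220x figure, the $50,000 benchmark, and the
# 0.98 discount factor come from the post; the log-utility reading of the
# benchmark, the horizon, and the number of beneficiaries are my assumptions.

BENCHMARK_INCOME = 50_000
target_utils = 220 / BENCHMARK_INCOME   # $1 to a person at $50k adds ~1/50,000 utils under log utility

P = 0.98                 # discount factor used in the report
YEARS = 200              # horizon standing in for "generations to come" (assumed)
BENEFICIARIES = 10_000   # people whose income rises per dollar invested (assumed)

discount_sum = sum(P ** t for t in range(YEARS))
# Under log utility, a proportional income gain g adds ln(1 + g) ~= g utils per person per year.
required_gain = target_utils / (BENEFICIARIES * discount_sum)
print(f"required persistent income gain: ~{required_gain:.1e} (proportional, per person per year)")
```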
That said, while I think it is very unlikely that science is bad on net, I do not think it is so unlikely that these concerns can be dismissed. Moreover, even if the link between better/faster science and increased peril is weak and uncertain, the risks from increased peril are large enough to warrant their own independent concern. My preferred policy stance, in light of this, is to separately and in parallel pursue reforms that accelerate science and reforms that reduce risks from new technologies, without worrying too much about their interaction (with some likely rare exceptions).
It’s a big report (74 pages in the main report, 119 pages with appendices), and there’s a lot more in it that might be of interest to some people. For a more detailed synopsis, check out the executive summary, the table of contents, and the summary at the beginning of section 11. For some intuition about the quantitative magnitudes the model arrives at, section 3.0 has a useful parable. You can read the whole thing on arXiv.
Thanks a bunch for this report! I haven't had the time to read it very carefully, but I've already really enjoyed it and am curating the post.
I'm also sharing some questions I have, my highlights, and my rough understanding of the basic model setup (pulled from my notes as I was skimming the report).
A couple of questions / follow-up discussions (although I might not be able to respond much/fast in the near future)
Some things that were highlights to me
My rough understanding of the basic setup of the main(?) model (please let me know if you see an error! I didn't check this carefully)
S 2.2: "Synthetic biology is not the only technology with the potential to destroy humanity - a short list could also include nuclear weapons, nanotechnology, and geoengineering. But synthetic biology appears to be the most salient at the moment. [...]"
Most actions targeted at some goal A affect a separate goal B far less than an action chosen because it targets B would affect B. (I think this effect is probably even stronger if you filter for the "top 10% most effective actions" targeted at these goals, assuming we believe there are huge differences in impact.) If you want to spend 100 resource units on goals A and B, you should probably just split the resources and target the two goals separately, rather than trying to find things that look fine for both A and B (see the toy numbers sketched after these notes).
(I think the "barbell strategy" is a related concept, although I haven't read much about it.)
(for some reason the thing that comes to mind is this SSC post about marijuana legalization from 2014 — I haven't read it in forever but remember it striking a chord)
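A toy version of this point, with numbers invented purely for illustration:

```python
# Toy numbers for the point above, invented purely for illustration. Suppose
# actions chosen for goal A deliver 10 units of A and 1 unit of B per resource
# unit, actions chosen for B deliver the reverse, and "dual-purpose" actions
# deliver 4 units of each. With 100 resource units:

targeted_a = (10, 1)    # (units of A, units of B) per resource unit
targeted_b = (1, 10)
dual_purpose = (4, 4)

split = tuple(50 * a + 50 * b for a, b in zip(targeted_a, targeted_b))  # 50/50 split, each half targeted
joint = tuple(100 * x for x in dual_purpose)                            # everything on dual-purpose actions

print(f"split & target separately: A={split[0]}, B={split[1]}")  # A=550, B=550
print(f"dual-purpose only:         A={joint[0]}, B={joint[1]}")  # A=400, B=400
```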
It seems that around the year 2038, the superforecasters expect annual mortality rates from engineered pandemics to double, from 0.0021% to 0.0041% (around 1 COVID every 48 years), and annual extinction risk to shift from ~0% to ~0.0002%. The increases are assumed to persist (although the forecasts only went out to 2100?).
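A quick arithmetic check of the COVID comparison (the population and COVID death figures are rough numbers I'm assuming):

```python
# Arithmetic behind the "around 1 COVID every 48 years" gloss. The mortality
# rates are the superforecaster figures above; the world population and the
# COVID death toll (reported deaths, order 7 million) are rough outside numbers
# I'm assuming, so this only roughly reproduces the 48-year figure.

WORLD_POP = 8e9
COVID_DEATHS = 7e6

extra_annual_mortality = 0.0041e-2 - 0.0021e-2           # +0.002 percentage points per year
extra_deaths_per_year = extra_annual_mortality * WORLD_POP

print(f"extra deaths per year:      {extra_deaths_per_year:,.0f}")                # ~160,000
print(f"years per COVID-sized toll: {COVID_DEATHS / extra_deaths_per_year:.0f}")  # ~44
```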
Section 4.2 is a cool collection of different approaches to identifying a discount rate. Ultimately, the author assumes p = 0.98, which is on the slightly lower end, and which he flags will put more weight on near-term events.
I think p can also be understood to incorporate a kind of potential "washout" aspect of scientific progress today (if we don't discover what we would have in 2024, maybe we still mostly catch up in the next few years), although I haven't thought carefully about it.
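As a quick illustration of the "more weight on near-term events" point (my arithmetic, not from the report):

```python
# What p = 0.98 implies about how front-loaded the welfare calculation is: the
# share of the total (infinite-horizon) discounted weight falling in the first
# N years is 1 - p**N.

P = 0.98
for horizon in (25, 50, 100, 200):
    print(f"first {horizon:>3} years carry {1 - P ** horizon:.0%} of the total weight")
```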