Denkenberger

Director, Associate Professor @ Alliance to Feed the Earth in Disasters (ALLFED), University of Canterbury
2470 karma · Joined · Working (6-15 years) · Christchurch, New Zealand

Bio

Dr. David Denkenberger co-founded and directs the Alliance to Feed the Earth in Disasters (ALLFED.info) and donates half his income to it. He received his B.S. from Penn State in Engineering Science, his master's from Princeton in Mechanical and Aerospace Engineering, and his Ph.D. from the University of Colorado at Boulder in the Building Systems Program. His dissertation was on an expanded microchannel heat exchanger, which he patented. He is an associate professor in mechanical engineering at the University of Canterbury. He received the National Merit Scholarship, the Barry Goldwater Scholarship, and the National Science Foundation Graduate Research Fellowship; he is a Penn State distinguished alumnus and a registered professional engineer. He has authored or co-authored 134 publications (>4,000 citations, >50,000 downloads, h-index = 32, second most prolific author in the existential/global catastrophic risk field), including the book Feeding Everyone No Matter What: Managing Food Security After Global Catastrophe. His food work has been featured in over 200 articles across more than 25 countries, including Science, Vox, Business Insider, Wikipedia, Deutschlandfunk (German public radio online), Discovery Channel Online News, Gizmodo, Phys.org, and Science Daily. He has given interviews on the 80,000 Hours podcast (here and here), Estonian Public Radio, WGBH Radio in Boston, and WCAI Radio on Cape Cod, USA. He has given over 80 external presentations, including talks on food at Harvard University, MIT, Princeton University, University of Cambridge, University of Oxford, Cornell University, University of California Los Angeles, Lawrence Berkeley National Lab, Sandia National Labs, Los Alamos National Lab, Imperial College, and University College London.

How others can help me

Referring potential volunteers, workers, board members and donors to ALLFED.

How I can help others

Being effective in academia, balancing direct work and earning to give, time management.

Comments (654)

| Risk | Annual probability | Cumulative probability | Source |
|---|---|---|---|
| Existential catastrophe | 0.30% | 20.04% | David Denkenberger, 2018 |
| Existential catastrophe | 0.10% | 3.85% | Anders Sandberg, 2018 |

You mentioned that some of the risks in the table were for extinction, rather than existential risk. However, the above two were for the reduction in long-term future potential, which could include trajectory changes that do not qualify as existential risk, such as slightly worse values ending up in locked-in AI. Another source using this definition was the 30% reduction in long-term potential from 80,000 Hours' earlier version of this profile. By the way, the source attributed to me was based on a poll of GCR researchers; my own estimate is lower.
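As an aside on reading the table: a minimal sketch of how a constant annual probability compounds into a cumulative one, assuming independence across years (the horizons below are back-solved guesses of mine, not from the source):

```python
# Minimal sketch: compound a constant annual catastrophe probability
# over a time horizon, assuming independence across years.

def cumulative_risk(annual_probability: float, years: int) -> float:
    """Probability of at least one catastrophe over `years` years."""
    return 1 - (1 - annual_probability) ** years

# A 0.30% annual risk compounds to roughly 20% over ~75 years,
# and a 0.10% annual risk to roughly 3.8% over ~39 years:
print(f"{cumulative_risk(0.003, 75):.2%}")  # ~20.2%
print(f"{cumulative_risk(0.001, 39):.2%}")  # ~3.8%
```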

> The conventional wisdom is that a crisis like this leads to a panic-neglect cycle, where we oversupply caution for a while, but can’t keep it up. This was the expectation of many people in biosecurity, with the main strategy being about making sure the response wasn’t too narrowly focused on a re-run of Covid, instead covering a wide range of possible pandemics, and that the funding was ring-fenced so that it couldn’t be funnelled away to other issues when the memory of this tragedy began to fade. But we didn’t even see a panic stage: spending on biodefense for future pandemics was disappointingly weak in the UK and even worse in the US.

Have you seen data on spending for future pandemics before COVID and after?

We do not claim to be an x-risk cause area.

 

I think it’s reasonable to say that biodiversity loss is unlikely to be an existential risk. However, existential risks could significantly impact biodiversity. Abrupt sunlight reduction scenarios such as nuclear winter could cause extinctions in the wild, which could potentially be mitigated by keeping species alive in zoos if there were sufficient food. These catastrophes, plus others that disrupt infrastructure, such as an extreme pandemic that makes people too fearful to show up to work in critical industries, could lead desperate people to hunt species to extinction. But I think the biggest threat is AGI, which could wipe out all biodiversity. Then again, if AGI goes well, it may be able to resurrect extinct species. So it could be that the most cost-effective way of preserving biodiversity is working on AGI safety.

We are deeply saddened to hear the news of the passing of Marisa, a valued former volunteer of ALLFED. Marisa’s dedication and contributions touched many lives and made an impact on our community. Our heartfelt condolences go out to her family and friends at this time. 

I think that saving lives in a catastrophe could have more flow-through effects, such as preventing collapse of civilization (from which we may not recover), reducing the likelihood of global totalitarianism, and reducing the trauma of the catastrophe, perhaps resulting in better values ending up in AGI.

I think the main reason that EA focuses relatively little effort on climate change is that so much money is going to it from outside of EA. So in order to be cost-effective, you have to find very leveraged interventions, such as targeting policy or addressing extreme versions of climate change, particularly resilience, e.g. ALLFED (disclosure: I'm a co-founder).
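To make the leverage point concrete, here is a toy sketch assuming logarithmic returns to funding (a standard assumption from the importance/tractability/neglectedness framework, not something stated in the comment); all numbers are hypothetical:

```python
# Toy neglectedness calculation: with benefit ~ scale * ln(funding),
# the marginal value of an extra dollar is scale / current_funding.

def marginal_value_per_dollar(scale: float, current_funding: float) -> float:
    """Marginal benefit per extra dollar under logarithmic returns."""
    return scale / current_funding

# Hypothetical figures: a cause with ~$1B of existing funding needs ~1000x
# the scale (or equivalent leverage) to match one with ~$1M:
print(marginal_value_per_dollar(scale=1.0, current_funding=1e9))  # crowded area
print(marginal_value_per_dollar(scale=1.0, current_funding=1e6))  # neglected area
```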

I have been recently asking around whether someone has compiled how much money is going into different ways of mitigating GCBRs, so this is quite relevant! Do you have estimates of the current EA (or otherwise) spending in these or similar buckets?

  1. Prevention: AI misuse, DNA synthesis screening, etc.
  2. Suppression: pathogen-agnostic early warning, planning for rapid-response lockdowns, etc.
  3. Containment: UV systems, P4E stockpiling, plans for keeping vital workers onsite, backup plans for providing food, energy, and water non-industrially with low human contact, etc.
  4. Medical countermeasures: platform technologies for medical countermeasures, etc.
  5. Detection of stealth pandemics: different pathogen-agnostic early warning?

I think this is a very valuable project.

> But this is still a combination of two questions, the latter of which longtermists have never, to my knowledge, considered probabilistically:[3]
>
>   • What is the probability that the event kills all living humans?
>   • What effect does the event otherwise have on the probability that we eventually reach an interstellar/existentially secure state,[4] given the possibility of multiple civilisational collapses and ‘reboots’? (where the first reboot is the second civilisation)
>
> [3] The closest thing I know to such an attempt is Luisa Rodriguez’s post What is the likelihood that civilizational collapse would cause technological stagnation? (outdated research), in which she gives some specific probabilities of the chance of a preagricultural civilisation recovering industry based on a grid of extinction rates and scenarios which, after researching the subject, she found reasonably plausible. But this relates only to a single instance of trying to do this (on my reading, specifically the first time, since she imagines the North Antelope Rochelle Coal Mine still having reserves), and only progresses us approximately as far as early 19th century England. Also, per the title’s addendum, she now considers the conclusion too optimistic, but doesn’t feel comfortable giving a quantified update.

I also have not seen analyses of multiple reboots. But in terms of recovery from one loss of civilization, What We Owe the Future touches on it some. Also, my original cost-effectiveness analysis of nuclear war for the long-term future explicitly modeled recovery from collapse. However, I then realized that there were other mechanisms of long-term future impact, such as making global totalitarianism more likely or resulting in worse values in AGI, so I moved to modeling the reduction in long-term future value associated with nuclear war or other catastrophes. I like that you are breaking this up into more terms and more reboots, because I think that will result in more accurate modeling.
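To illustrate the kind of multi-reboot accounting this implies, here is a minimal sketch in Python; the structure (identical, independent reboots with a cap) and every parameter are hypothetical, not taken from either analysis:

```python
# Toy multi-reboot model. Each civilisation either reaches existential
# security, goes extinct outright, or collapses; after a collapse there is
# some chance of a reboot (the first reboot being the second civilisation),
# up to a maximum number of reboots.

def p_eventual_security(p_secure: float, p_extinct: float,
                        p_reboot: float, max_reboots: int) -> float:
    """Probability that some civilisation eventually reaches existential security."""
    p_collapse = 1 - p_secure - p_extinct
    if max_reboots == 0:
        return p_secure
    return p_secure + p_collapse * p_reboot * p_eventual_security(
        p_secure, p_extinct, p_reboot, max_reboots - 1)

# Hypothetical numbers: a 50% single-shot chance of reaching security rises
# to ~73% if up to three reboots each have an 80% chance of occurring:
print(p_eventual_security(p_secure=0.5, p_extinct=0.1, p_reboot=0.8, max_reboots=3))
```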
