Dr. David Denkenberger co-founded and directs the Alliance to Feed the Earth in Disasters (ALLFED.info) and donates half his income to it. He received his B.S. from Penn State in Engineering Science, his master's from Princeton in Mechanical and Aerospace Engineering, and his Ph.D. from the University of Colorado at Boulder in the Building Systems Program. His dissertation was on an expanded microchannel heat exchanger, which he patented. He is an associate professor in mechanical engineering at the University of Canterbury. He received the National Merit Scholarship, the Barry Goldwater Scholarship, and the National Science Foundation Graduate Research Fellowship; he is a Penn State distinguished alumnus and a registered professional engineer. He has authored or co-authored 143 publications (>4800 citations, >50,000 downloads, h-index = 36, second most prolific author in the existential/global catastrophic risk field), including the book Feeding Everyone No Matter What: Managing Food Security after Global Catastrophe. His food work has been featured in over 25 countries and over 300 articles, including Science, Vox, Business Insider, Wikipedia, Deutschlandfunk (German Public Radio online), Discovery Channel Online News, Gizmodo, Phys.org, and Science Daily. He has given interviews on the 80,000 Hours podcast (here and here), Estonian Public Radio, WGBH Radio in Boston, and WCAI Radio on Cape Cod, USA. He has given over 80 external presentations, including ones on food at Harvard University, MIT, Princeton University, University of Cambridge, University of Oxford, Cornell University, University of California Los Angeles, Lawrence Berkeley National Lab, Sandia National Labs, Los Alamos National Lab, Imperial College, and University College London.
Referring potential volunteers, workers, board members and donors to ALLFED.
Being effective in academia, balancing direct work and earning to give, time management.
First, preventing post-AGI autocracy. Superintelligence structurally leads to concentration of power: post-AGI, human labour soon becomes worthless; those who can spend the most on inference-time compute have access to greater cognitive abilities than anyone else; and the military (and whole economy) can in principle be aligned to a single person.
Even if labour becomes worthless, many people own investments, and Foresight Institute has this interesting idea of "Capital dividend funds: National and regional funds holding equity in AI infrastructure, robotics fleets, and automated production facilities, distributing dividends to citizens as universal basic capital."
I think it makes a lot of sense to examine alternate scenarios. Commenting on tool AI:
Nearly every expert interviewed for this project preferred this kind of "Tool AI" future, at least for the near term
This is very interesting, because banning AI agents had little support in my LessWrong survey, and there was only one vote for it out of 39 in the EA Forum survey I ran. To be fair, the option implied banning them forever, so if the ban were temporary, there might be more support.
Capital dividend funds: National and regional funds holding equity in AI infrastructure, robotics fleets, and automated production facilities, distributing dividends to citizens as universal basic capital.
I think this is very important because people often point out that humans will not have influence/income if they don't have a labor wage, but they could still have influence/income through ownership of capital.
You mention how poverty would still be a problem. However, I think if AI starts to automate knowledge work, the increased demand for physical jobs should lift most people out of poverty (at least until robots fill nearly all those jobs).
I think that in the buildup to ASI, nuclear and pandemic risks would increase, but afterwards they would likely be solved. So let's assume someone is trying to minimize existential risk overall. If one eventually wants ASI (or thinks it is inevitable), the question is when the optimal time is. If one thinks that the background existential risk not caused by AI is 0.1% per year, and the existential risk from AI is 10% if it is developed now, then the question is, "How much does existential risk from AI decrease by delaying it?" Delaying a decade would accumulate roughly 1% of background risk, so if one thinks we can get the existential risk from AI below about 9% in a decade, then it would make sense to delay. Otherwise it would not make sense to delay.
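To make the breakeven explicit, here is a minimal sketch of that calculation (my own illustration of the numbers in the comment above; the function name and structure are not from the original):

```python
# Illustrative breakeven calculation for delaying ASI (hypothetical helper,
# using the numbers from the comment above).

def breakeven_ai_risk(ai_risk_now: float, background_risk_per_year: float,
                      delay_years: int) -> float:
    """Return the level AI risk must fall below for the delay to reduce total risk."""
    # Existential risk accumulated from non-AI sources while we wait.
    background_risk_during_delay = 1 - (1 - background_risk_per_year) ** delay_years
    return ai_risk_now - background_risk_during_delay

threshold = breakeven_ai_risk(ai_risk_now=0.10,
                              background_risk_per_year=0.001,
                              delay_years=10)
print(f"Delay a decade only if AI risk can be brought below {threshold:.1%}")
# Prints roughly 9.0%, matching the "less than 9%" figure above.
```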
We should slow AI down
I think we should weigh reducing AI risk by slowing it down against other continuing sources of X-risk. I'm also concerned about a pause becoming permanent, or increasing risk when unpaused, or only getting one chance to pause. However, if AI progress is much faster than now, I think a pause could increase the expected value of the long-run future.
If there is nuclear war without nuclear winter, there would be a dramatic loss of industrial capability, which would cascade through the global system. However, being prepared to scale up alternatives such as wood-gas-powered vehicles producing electricity would significantly speed recovery and reduce mortality. I think that if fewer people are killing each other over scarce resources, values would be better, so global totalitarianism would be less likely and bad values being locked into AI would be less likely. Similarly, if there is nuclear winter, I think the default is countries banning trade and fighting over limited food. But if countries realized they could feed everyone if they cooperated, I think cooperation is more likely, and that would result in better values for the future.
For a pandemic, I think being ready to scale up disease transmission interventions very quickly, including UV, in-room air filtration, ventilation, glycol, and temporary working housing, would make the outcome of the pandemic far better. Even if those don't work and there is a collapse of electricity/industry due to the pandemic, again, being able to use backup ways of meeting basic needs like heating, food, and water[1] would likely result in better values for the future.
Then there is the factor that resilience makes collapse of civilization less likely. There's a lot of uncertainty about whether values would be better or worse the second time around, but I think values are pretty good now compared to what we could have, so it seems like not losing civilization would be a net benefit for the long term (and obviously a net benefit for the short term).
Paper about to be submitted.
I agree that flourishing is very important. I have thought since around 2018 that the largest advantage of resilience to global catastrophes for the long-term future is not preventing extinction, but instead increasing flourishing, such as reducing the chance of other existential catastrophes like global totalitarianism, or making it more likely that better values end up in AI.
At some point, I had to face the fact that I’d wasted years of my life. EA and rationality, at their core (at least from a predictive perspective), were about getting money and living forever. Other values were always secondary. There are exceptions (Yudkowsky seems to have passed the Ring Temptation test), but they’re rare. I tried to salvage something. I gave it one last shot and went to LessOnline/Manifest. If you pressed people even a little, they mostly admitted that their motivations were money and power.
I'm sorry you feel this way. Though I would still disagree with you, I think you mean that the part of EA focused on AI has a primary motivation of getting money and living forever. The majority of EAs are not focused on AI; they are instead focused on nuclear risk, biorisk, global health and development, animal welfare, etc., and they generally are not motivated by living forever. Those doing direct work in these areas nearly all do so on low salaries.
Commenting on d/acc:
While it's possible that batteries will get much cheaper, right now they are prohibitively expensive for days' worth of storage. There are low-cost options at large scale, including compressed air energy storage and pumped hydropower, and there may be reasonable-cost versions involving air at smaller scale, such as systems that liquefy air.
If by container farms you mean using artificial light, that's very inefficient and expensive.
I think some scientists would, but most would prefer to specialize.
I'm a fan of vehicle-to-grid, where vehicles with some form of electric drive can provide grid services, including backup power.
I've done some research on nuclear microreactors, and I think there is potential for isolated areas, like parts of Alaska where diesel has to be shipped in. But I think it will be difficult for them to be competitive with bulk power.
I agree with this.
That sounds reasonable.