Dr. David Denkenberger co-founded and directs the Alliance to Feed the Earth in Disasters (ALLFED.info) and donates half his income to it. He received his B.S. from Penn State in Engineering Science, his master's from Princeton in Mechanical and Aerospace Engineering, and his Ph.D. from the University of Colorado at Boulder in the Building Systems Program. His dissertation was on an expanded microchannel heat exchanger, which he patented. He is an associate professor at the University of Canterbury in mechanical engineering. He received the National Merit Scholarship, the Barry Goldwater Scholarship, and the National Science Foundation Graduate Research Fellowship; he is a Penn State distinguished alumnus and a registered professional engineer. He has authored or co-authored 143 publications (>4800 citations, >50,000 downloads, h-index = 36, second most prolific author in the existential/global catastrophic risk field), including the book Feeding Everyone No Matter What: Managing Food Security After Global Catastrophe. His food work has been featured in over 25 countries and in over 300 articles, including Science, Vox, Business Insider, Wikipedia, Deutschlandfunk (German Public Radio online), Discovery Channel Online News, Gizmodo, Phys.org, and Science Daily. He has given interviews on the 80,000 Hours podcast (here and here), Estonian Public Radio, WGBH Radio in Boston, and WCAI Radio on Cape Cod, USA. He has given over 80 external presentations, including ones on food at Harvard University, MIT, Princeton University, University of Cambridge, University of Oxford, Cornell University, University of California Los Angeles, Lawrence Berkeley National Lab, Sandia National Labs, Los Alamos National Lab, Imperial College, and University College London.
Referring potential volunteers, workers, board members and donors to ALLFED.
Being effective in academia, balancing direct work and earning to give, time management.
Though Carl said that a unilateral pause would be riskier, I'm pretty sure he does not support a universal pause now. He said "To the extent you have a willingness to do a pause, it’s going to be much more impactful later on. And even worse, it’s possible that a pause, especially a voluntary pause, then is disproportionately giving up the opportunity to do pauses at that later stage when things are more important....Now, I might have a different view if we were talking about a binding international agreement that all the great powers were behind. That seems much more suitable. And I’m enthusiastic about measures like the recent US executive order, which requires reporting of information about the training of new powerful models to the government, and provides the opportunity to see what’s happening and then intervene with regulation as evidence of more imminent dangers appear. Those seem like things that are not giving up the pace of AI progress in a significant way, or compromising the ability to do things later, including a later pause...Why didn’t I sign the pause AI letter for a six-month pause around now?
But in terms of expending political capital or what asks would I have of policymakers, indeed, this is going to be quite far down the list, because its political costs and downsides are relatively large for the amount of benefit — or harm. At the object level, when I think it’s probably bad on the merits, it doesn’t arise. But if it were beneficial, I think that the benefit would be smaller than other moves that are possible — like intense work on alignment, like getting the ability of governments to supervise and at least limit disastrous corner-cutting in a race between private companies: that’s something that is much more clearly in the interest of governments that want to be able to steer where this thing is going. And yeah, the space of overlap of things that help to avoid risks of things like AI coups, AI misinformation, or use in bioterrorism, there are just any number of things that we are not currently doing that are helpful on multiple perspectives — and that are, I think, more helpful to pursue at the margin than an early pause."
So he says he might be supportive of a universal pause, but it sounds like he would rather have it later than now.
They aren’t the ones terrified of not having the vision to go for the Singularity, of being seen as “Luddites” for opposing a dangerous and recklessly pursued technology. Frankly they aren’t the influential ones.
I see where you are coming from, but I think it would be more accurate to say that you are disappointed in (or potentially even betrayed by[1]) the minority of EAs who are accelerationists, rather than characterizing it as being betrayed by the community as a whole (which is not accelerationist).
Though I think this is too harsh, as early thinking in AI Safety included Bostrom's differential technological development and MIRI's seed (safe) AI; the former is similar to people trying to shape Anthropic's work, and the latter could be characterized as accelerationist.
Have you seen any polls on this? I would guess that the majority of EAs and rationalists would not support a pause (because they think it would increase overall risk, at least at this point, e.g. Carl Shulman), but that they would also generally not be supportive of what the labs are doing (racing, opposing regulations, etc.).
This has always been the policy of the Giving Pledge:
Since the very beginning of the Giving Pledge, it has focused on those with a net worth of at least one billion dollars (or who would be billionaires if not for their giving) due to the enormous potential of the resources they can deploy.
I think a key consideration here is whether AI disempowerment of humans, in which humans remain at least as well off as now, counts as an X-risk (and, as an aside, whether it counts toward P(doom)). Since it would be a destruction of humanity's long-term potential, I think Bostrom would say that disempowerment is an X-risk, but Ord may not.
As Carl says, society may only get one shot at a pause. So if we used it now, rather than when AI itself is producing a 10x speed-up in AI development, I think that would be worse. It could certainly make sense now to build the field and to draft legislation. But it's also possible to advocate for pausing only when some threshold or trigger is hit, rather than now. It's also possible that advocating for an early pause burns bridges with people who might have supported a pause later.
I personally still have significant uncertainty as to the best course of action, and I understand you are under a lot of stress, but I don't think this characterization is an effective way of shifting people towards your point of view.