Dr. David Denkenberger co-founded and is a director at the Alliance to Feed the Earth in Disasters (ALLFED.info) and donates half his income to it. He received his B.S. from Penn State in Engineering Science, his master's from Princeton in Mechanical and Aerospace Engineering, and his Ph.D. from the University of Colorado at Boulder in the Building Systems Program. His dissertation was on an expanded microchannel heat exchanger, which he patented. He is an associate professor in mechanical engineering at the University of Canterbury. He received the National Merit Scholarship, the Barry Goldwater Scholarship, and the National Science Foundation Graduate Research Fellowship; he is a Penn State distinguished alumnus and a registered professional engineer. He has authored or co-authored 156 publications (>5600 citations, >60,000 downloads, h-index = 38, most prolific author in the existential/global catastrophic risk field), including the book Feeding Everyone No Matter What: Managing Food Security after Global Catastrophe. His food work has been featured in over 300 articles across more than 25 countries, including Science, Vox, Business Insider, Wikipedia, Deutschlandfunk (German public radio online), Discovery Channel Online News, Gizmodo, Phys.org, and Science Daily. He has given interviews on the 80,000 Hours podcast (here and here) and on Estonian Public Radio, Radio New Zealand, WGBH Radio (Boston), and WCAI Radio (Cape Cod, USA). He has given over 80 external presentations, including ones on food at Harvard University, MIT, Princeton University, University of Cambridge, University of Oxford, Cornell University, University of California Los Angeles, Lawrence Berkeley National Lab, Sandia National Labs, Los Alamos National Lab, Imperial College, Australian National University, and University College London.
Referring potential volunteers, workers, board members and donors to ALLFED.
Being effective in academia, balancing direct work and earning to give, time management.
I have a very hard time believing that the average or median person in EA is more aware of issues like p-hacking (or the replication crisis in psychology, or whatever) than the average or median academic working professionally in the social sciences. I don't know why you would think that.
Maybe "aware" is not the right word now. But I do think that EAs updated more quickly than academics that the replication crisis was a big problem. I think this is somewhat understandable, as academics have strong incentives to get statistically significant results in order to publish papers, and they also have more faith in the peer review process. Even now, I would guess that EAs have more appropriate skepticism of social science results than the average social science academic.
I'm not sure exactly what you were referencing Eliezer Yudkowsky as an example of — someone who is good at reducing his own bias? I think Yudkowsky has shown several serious problems with his epistemic practices,
I think his epistemics have gone downhill in the last few years as he has become stressed that the end of the world is nigh. However, I do think he is much more aware of biases than the average academic and has, at least historically, updated his opinions a lot, such as realizing early on that AI might not be all positive (and acknowledging he had been wrong).
You said:
I see no evidence that effective altruism is any better at being unbiased than anyone else.
So that's why I compared to non-EAs. But okay, let's compare to academia. As you pointed out, there are many different parts of academia. I have been a graduate student or professor at five institutions, but in only two countries and only one field (engineering, though I have published some outside of engineering). As I said in the other comment, academia is much more rigorously referenced than the EA Forum, but the disadvantage of this is that academia pushes you to be precisely wrong rather than approximately correct. P-hacking is a big deal in social sciences academia, but not really in engineering, and I think EAs are more aware of the issues than the average academic. Of course, there's a lot of diversity within EA and on the EA Forum.

Perhaps one comparison that could be made is the accuracy of domain experts versus superforecasters (many of whom are EAs, and it's similar to the common EA case of a better-calibrated generalist). I've heard people argue both sides, but I would say they have similar accuracy in most domains. I think that EA is quicker to update, one example being taking COVID seriously in January and February 2020 - in my experience, much more seriously than academia was taking it. As for the XPT, I'm not sure how to characterize it, because I would guess that a higher percentage of the GCR experts were EAs than of the superforecasters. At least in AI, the experts had a better track record of predicting faster AI progress, which was generally the position of EAs.

As for getting to the truth in new areas, academia has some reputation for discussing strange new ideas, but I think EA is significantly more open to them. It has certainly been my experience publishing in AI and other GCR topics (and other people's experience publishing in AI) that it's very hard to find a journal that is a fit for strange new ideas (e.g. versus publishing in energy, where I've published dozens of papers as well). I think this is an extremely important part of epistemics. I think the best combination is subject matter expertise combined with techniques to reduce bias, and you get that combination with subject matter experts in EA. It's true that many other EAs then defer to those EAs who have expertise in the particular field (maybe you disagree with what counts as subject matter expertise and association with EA/using techniques to reduce bias, but I would count people like Yudkowsky, Christiano, Shulman, Bostrom, Cotra, Kokotajlo, and Hanson (though he has long timelines)). So I would expect that on the question of AI timelines, the average EA would be more accurate than the average academic in AI.
Daniel said "I would say that there’s like maybe a 30% or 40% chance that something like this is true, and that the current paradigm basically peters out over the next few years."
It might have been Carl on the Dwarkesh podcast, but I couldn't easily find a transcript. But I've heard from several others (maybe Paul Christiano?) that they assign a 10-40% chance that AGI will take much longer (or may even be impossible), either because the current paradigm doesn't get us there, or because we can't keep scaling compute exponentially as fast as we have over the last decade once compute spending becomes a significant fraction of GDP.
Wouldn’t a global totalitarian government — or a global government of any kind — require advanced technology and a highly developed, highly organized society? So, this implies a high level of recovery from a collapse, but, then, why would global totalitarianism be more likely in such a scenario of recovery than it is right now?
Though it may be more likely for the world to slide into global totalitarianism after recovering from collapse, I was referring to a scenario in which there was no collapse, but the catastrophe pushed us towards totalitarianism. Some people think the world could have ended up totalitarian if World War II had gone differently.
What do you think of the Arch Mission Foundation's Nanofiche archive on the Moon?
I don't think it's the most cost-effective way of mitigating X risk, but I guess you could think of it as plan F:
Plan A: prevent catastrophes
Plan B: contain catastrophes (e.g. not escalating nuclear war or suppressing an extreme pandemic)
Plan C: resilience despite the catastrophe getting very bad (e.g. maintaining civilization despite blocking of the sun, or collapse of infrastructure because workers stay home out of pandemic fear)
Plan D: recover from collapse of civilization
Plan E: refuges in case everyone else dies
Plan F: resurrect civilization
I have personally never bought the idea of “value lock-in” for AGI. It seems like an idea inherited from the MIRI worldview, which is a very specific view on AGI with some very specific and contestable assumptions of what AGI will be like and how it will be built.
I think value lock-in does not depend on the MIRI worldview - here's a relevant article.
The movie's reviews and ratings have been hurt by its rather frustrating ending, but I think that's unfair to its overall dramatic excellence.
The link didn't work.
Spoilers: the fatality estimate is ~1 order of magnitude too high. It's true that if there are lots of nukes headed towards your missile silos, there is great urgency to launch before being destroyed. However, there is not the same urgency to launch if a city is targeted, so that seemed contrived. I was not aware that ground-based interceptors have to physically hit the ICBM, rather than carrying an explosive that would relax the required targeting accuracy.
I agree that extinction has been overemphasized in the discussion of existential risk. I would add that it's not just irrecoverable collapse, but also the potential increased risk of subsequent global totalitarianism or of worse values ending up in AI. Here are some papers I have co-authored that address some of these issues: 1, 2, 3, 4. And here is another relevant paper: 1, and a very relevant project: 2.
I think where academic publishing would be most beneficial for increasing the rigour of EA’s thinking is AGI.
AGI is a subset of global catastrophic risks, so EA-associated people have published extensively on AGI in academic venues - I personally have about 10 publications related to AI.
Examples of scandalously bad epistemic practices include: many people in EA apparently never once having heard that an opposing point of view on LLMs scaling to AGI even exists (despite it being the majority view among AI experts), let alone understanding the reasons behind that view; some people in EA openly mocking people who disagree with them, including world-class AI experts; and, in at least one instance, someone with a prominent role responding to an essay on AI safety/alignment that expressed an opposing opinion without reading it, just based on guessing what it might have said. These are the sort of easily avoidable mistakes that predictably lead to poorly informed and poorly thought-out opinions, which, of course, are more likely to be wrong as a result. Obviously these are worrying signs for the state of the discourse, so what's going on here?
I agree that those links are examples of poor epistemics. But on the example of not being aware that the current paradigm may not scale to AGI, this is commonly discussed in EA, such as here and by Carl Shulman (I think here or here). I would be interested in your overall letter grades for epistemics. My quick take would be:
Ideal: A+
Less Wrong: A
EA Forum: A- (not rigorously referenced, but overall better calibrated to reality and what is most important than academia, more open to updating)
Academia: A- (rigorously referenced, but a bias towards being precisely wrong rather than approximately correct, which actually is related to the rigorously referenced part. Also a big bias towards conventional topics.)
In-person dialog outside these spaces: C
Online dialog outside these spaces: D
Another example of how EA is less biased is EA-associated news sources. Improve the News is explicitly about separating fact from opinion, and Future Perfect and Sentinel focus on more important issues, e.g. malaria and nuclear war rather than plane crashes.
Shameless plug for ALLFED: Four of our former volunteers moved into paid work in biosecurity, and they were volunteers before we did much direct work in biosecurity. Now we are doing more of that work directly. Since ALLFED has had to shrink, the contribution from volunteers has become relatively more important. So I think ALLFED is a good place for young people to skill up in biosecurity and have an impact.