Dr. David Denkenberger co-founded and is a director at the Alliance to Feed the Earth in Disasters (ALLFED.info) and donates half his income to it. He received his B.S. from Penn State in Engineering Science, his master's from Princeton in Mechanical and Aerospace Engineering, and his Ph.D. from the University of Colorado at Boulder in the Building Systems Program. His dissertation was on an expanded microchannel heat exchanger, which he patented. He is an associate professor in mechanical engineering at the University of Canterbury. He received the National Merit Scholarship, the Barry Goldwater Scholarship, and the National Science Foundation Graduate Research Fellowship; he is a Penn State distinguished alumnus and a registered professional engineer. He has authored or co-authored 156 publications (>5,600 citations, >60,000 downloads, h-index = 38, most prolific author in the existential/global catastrophic risk field), including the book Feeding Everyone No Matter What: Managing Food Security After Global Catastrophe. His food work has been featured in over 25 countries and over 300 articles, including in Science, Vox, Business Insider, Wikipedia, Deutschlandfunk (German public radio online), Discovery Channel Online News, Gizmodo, Phys.org, and Science Daily. He has given interviews on the 80,000 Hours podcast (here and here), Estonian Public Radio, Radio New Zealand, WGBH Radio (Boston), and WCAI Radio (Cape Cod, USA). He has given over 80 external presentations, including talks on food at Harvard University, MIT, Princeton University, the University of Cambridge, the University of Oxford, Cornell University, the University of California Los Angeles, Lawrence Berkeley National Lab, Sandia National Labs, Los Alamos National Lab, Imperial College, the Australian National University, and University College London.
How others can help me
Referring potential volunteers, workers, board members and donors to ALLFED.
How I can help others
Being effective in academia, balancing direct work and earning to give, time management.
Microalgae are fairly expensive, so I think macroalgae are more promising - most varieties are low in protein, but there are high-protein varieties. Leaf protein concentrate (e.g. Leaft) seems promising as well.
Plant-based meat prices per pound are based on frozen and refrigerated plant-based meat subcategories from SPINS year ending 12/1/24. Animal-based meat prices per pound are based on data for fresh meat subcategories from the Circana year ending Dec. 2024.
Fresh meat typically costs more than frozen, and it seems like this comparison includes whole-muscle meat, so I think a fair comparison would put PBM at more like double the cost of ground beef.
Nice! Did you consider seaweed or leaf protein concentrate? The numbers I've seen are that PBM is still twice the price of ground beef - did that source compare to all beef?
I haven’t dug into the surveys that Knight cites but I’m super skeptical. I know vegans who don’t have vegan pets, and I know how hard it is to make people go vegan. There are big barriers to getting humans to transition to alternative proteins at scale, and that’s only more true for companion animals.
I'm skeptical as well, but in some ways, the barriers for pets going vegan are lower:
Taste is less of an issue for pets.
Time cost is much lower for pets because you can just pick out one food and buy it every time.
For people concerned about social interactions involving veganism, you don't have to tell anyone that your pet is vegan.
It may be easier to mitigate the health issues of a vegan diet for pets: for salmon, adding just a little methane single cell protein (SCP) to a fully vegan (soy) diet showed a big improvement in gut health. I'd be most confident that this would carry over to other obligate carnivores like cats, but I could see it being beneficial for dogs as well. Methane SCP is not yet approved as human food, but producers are targeting pet food.
In the last few decades, dog food has become more plant-based because plants are cheaper (and manufacturers figured out how to make it appealing to dogs and not offensive to people). If methane SCP can become cheaper than animal byproducts, you could have a healthy, cheaper product with lower environmental impact that probably wouldn't taste as good, but that I think many non-vegans would go for.
I personally do think the probability of eventual disempowerment is high. However, you are implying that it is 100%. If it is 99%, or indeed even 99.9999999%, and one thinks the value of the future is significantly higher with humanity (not necessarily biological humans) in control vs AI, then there are still astronomical stakes of humanity remaining in control.
Let $N$ be the number of parameters in the model, $D$ be the number of data tokens it is trained on, $Q$ be the number of times the model is deployed (e.g. the number of questions it is asked), and $S$ be the number of inference steps each time it is deployed (e.g. the number of tokens per answer). Then this approximately works out to:[9]
Note that scaling up the number of parameters, $N$, increases both pre-training compute and inference compute, because you need to use those parameters each time you run a forward pass in your model.
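The point about parameters driving both cost terms can be sketched with the common rule-of-thumb transformer approximations (training FLOPs ≈ 6·N·D, and ≈ 2·N FLOPs per parameter-touching forward step at inference); these constants are standard estimates I'm assuming here, not taken from this text:

```python
# Rule-of-thumb compute approximations (assumed): training ~ 6*N*D FLOPs,
# inference ~ 2*N FLOPs per generated token, over Q deployments of S steps each.
def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate pre-training compute: ~6 FLOPs per parameter per training token."""
    return 6 * n_params * n_tokens

def inference_flops(n_params: float, n_deployments: float, n_steps: float) -> float:
    """Approximate lifetime inference compute: ~2 FLOPs per parameter
    per generated token, summed over all deployments."""
    return 2 * n_params * n_deployments * n_steps

# Doubling N doubles both quantities, because every forward pass
# (in training and in inference) touches every parameter.
assert training_flops(2e9, 1e12) == 2 * training_flops(1e9, 1e12)
assert inference_flops(2e9, 1e6, 1e3) == 2 * inference_flops(1e9, 1e6, 1e3)
```

This makes concrete why $N$ appears in both terms, whereas $D$ only affects training and $Q$, $S$ only affect inference.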
If AI systems replace humanity, that outcome would undoubtedly be an absolute disaster for the eight billion human beings currently alive on Earth. However, it would be a localized, short-term disaster rather than an astronomical one. Bostrom's argument, strictly interpreted, no longer applies to this situation. The reason is that the risk is confined to the present generation of humans: the question at stake is simply whether the eight billion people alive today will be killed or allowed to continue living. Even if you accept that killing eight billion people would be an extraordinarily terrible outcome, it does not automatically follow that this harm carries the same moral weight as a catastrophe that permanently eliminates the possibility of 10^23 future lives.
This only holds if the future value in the universe of AIs that took over is almost exactly the same as the future value if humans remained in control (meaning varying less than one part in a billion (and I think less than one part in a billion billion billion billion billion billion)). Some people argue that the value of the universe would be higher if AIs took over, and the vast majority of people argue that it would be lower. But it is extremely unlikely to have exactly the same value. Therefore, in all likelihood, whether AI takes over or not does have long-term and enormous implications.
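The expected-value arithmetic behind this can be illustrated with a toy calculation (all numbers below are illustrative assumptions, not claims from the comment):

```python
# Toy expected-value comparison; every number here is an illustrative assumption.
p_takeover = 0.999999999      # assume takeover is near-certain (one-in-a-billion chance it isn't)
value_if_human_control = 1e23 # e.g. potential future lives, per the Bostrom-style figure above
value_if_ai_control = 0.0     # pessimistic assumption about an AI-controlled future

# Expected stakes of the remaining probability mass:
stakes = (1 - p_takeover) * (value_if_human_control - value_if_ai_control)

# Even at a one-in-a-billion chance of retaining control, the expected
# difference is on the order of 1e14 lives - still astronomical.
print(f"{stakes:.3g}")
```

The conclusion only collapses if the two future values are nearly identical, which is the "one part in a billion" condition in the comment.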
Do we need a scared reaction option on the EA Forum?