
I made a very rough spreadsheet that estimates the expected impact of work on different causes, for screening purposes. For the risks, it is based on expected damage, the amount of work that has been done so far, and a functional form for marginal impact. I did it from several perspectives, including conventional economics (which discounts future human utility), positive utilitarianism (maximizing net utility without discounting), and biodiversity. Note that a pure negative utilitarian focused on reducing aggregate suffering may prefer human extinction (I don't subscribe to this viewpoint). I believe that the future will generally be net beneficial.
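
As a minimal sketch of how that screening works (the numbers are made up, and the logarithmic-returns form is just one plausible choice for the marginal-impact function, not necessarily the spreadsheet's):

```python
# Minimal sketch of the screening logic (illustrative numbers, not the
# actual spreadsheet). Total impact of work on a cause is modeled as
# k * log(1 + work_done), so the marginal impact of one more unit of
# work is k * expected_damage / (1 + work_done): big risks matter more,
# crowded causes matter less.

def marginal_impact(expected_damage, work_done, k=1.0):
    """Marginal impact of one additional unit of work on a cause."""
    return k * expected_damage / (1.0 + work_done)

# Hypothetical causes: (expected damage in arbitrary utility units,
# units of work done so far)
causes = {
    "AI alignment":        (1e6, 500),
    "Alternate foods":     (1e5, 5),
    "Asteroid deflection": (1e5, 2000),
}

for name, (damage, work) in causes.items():
    print(f"{name:20s} marginal impact ~ {marginal_impact(damage, work):,.0f}")
```

With these placeholder inputs, the neglected cause comes out ahead even though its expected damage is an order of magnitude smaller, which is the point of screening on marginal rather than total impact.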

Of course there has been a lot of talk recently that if one places non-negligible value on future generations, reducing global catastrophic risk is of overwhelming importance. But I have not seen the point that even if you do discount future generations exponentially, there could still be an overwhelming number of discounted consciousnesses if you assign a non-negligible probability to computer consciousness this century. This is plausible because an efficient computer consciousness would use much less energy than a typical human. Furthermore, it would not take very long to construct a traditional Dyson sphere: a swarm of independent satellites orbiting the sun that absorb most of the sun's output. The satellites would be ~micron-thick solar cells plus CPUs, and would require a small fraction of the matter in the solar system. Note that this means that even if one thinks artificial general intelligence will be friendly, it is still of overwhelming importance to reduce the risk of never reaching those computer consciousnesses, which could simply mean a global catastrophe after which technological civilization does not recover. I am open to arguments about far-future trajectory changes other than global catastrophic risks, but I think they need to be developed further. This includes potential mass animal suffering associated with galactic colonization or with simulations of worlds.
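
For concreteness, here is a rough back-of-envelope version of this point; the discount rate is a conventional economic one, and the probability and number of minds are purely illustrative assumptions of mine:

```python
import math

# Rough back-of-envelope (illustrative assumptions, not settled figures)
discount_rate = 0.03     # per year, a conventional economic discount rate
years = 80               # roughly "this century"
p_computer_minds = 0.1   # assumed probability of computer consciousness
n_minds = 1e20           # assumed number of minds a Dyson swarm could run

discount_factor = math.exp(-discount_rate * years)    # ~0.09
discounted_minds = p_computer_minds * n_minds * discount_factor

print(f"discount factor after {years} years: {discount_factor:.3f}")
print(f"expected discounted minds: {discounted_minds:.2e}")   # ~9e+17
# Even after heavy discounting, ~1e18 expected minds dwarfs the ~1e10
# biological humans alive today, so reducing the risk of never reaching
# that future still dominates the expected-value calculation.
```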

Looking across these global catastrophic risks, I find the most promising causes are artificial intelligence alignment, molecular manufacturing, high-energy physics experiments, engineered pandemics with ~100% lethality, global totalitarianism, and alternate foods as solutions to global agricultural disruption. Some of these are familiar, so I will focus on the less familiar ones.

Though many regard high-energy physics experiments as safe, a risk on the order of one in a billion per year of turning the earth into a strangelet or black hole, or of destroying the entire visible universe, is still very bad. And the risk could be higher because of model error. Excluding this risk, I believe the net benefit of these experiments, considering all the costs, is quite low, so I personally think they should be banned.
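
A quick expected-value check makes the point; the one-in-a-billion figure is from above, the population figure is approximate, and only the present generation is counted:

```python
# Expected-value check on the physics-experiment risk (the 1e-9/year
# figure is from the text; the population figure is approximate, and
# future generations are ignored entirely).
p_catastrophe_per_year = 1e-9
world_population = 7e9

expected_deaths_per_year = p_catastrophe_per_year * world_population
print(f"expected deaths per year: {expected_deaths_per_year:.0f}")  # ~7
# Several expected deaths per year from one class of experiments is far
# from negligible, even before counting any lost future generations.
```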

Local totalitarianism, as in North Korea, eventually gets outcompeted. Global totalitarianism, however, would face no competition and could last indefinitely. This would be bad for the people living under it, and it could also stifle future potential such as galaxy colonization and artificial general intelligence. I am less familiar with interventions to prevent it.

Global agricultural disruption could occur from risks like nuclear winter, asteroid or comet impact, supervolcanic eruption, abrupt climate change, or agroterrorism. Though many of these risks have been studied extensively, there is a new class of interventions, called alternate foods, that do not rely on the sun (disclosure: I proposed them as catastrophe solutions). Examples include growing mushrooms on dead trees and growing edible bacteria on natural gas. I have done some modeling of the cost-effectiveness of alternate food interventions, including planning, research, and development. This will hopefully be published soon, and it indicates that expected lives in the present generation can be saved at significantly lower cost than with typical global poverty interventions. Furthermore, alternate foods would reduce the chance of civilization collapsing, and therefore the chance that civilization never recovers.
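
To illustrate the shape of that calculation (every number below is a placeholder of mine, not a figure from the forthcoming paper):

```python
# Shape of a cost-per-expected-life-saved estimate for alternate-food
# preparation. All numbers are placeholders, not figures from the paper.
spend = 1e8                  # assumed one-time planning/R&D spend, USD
p_catastrophe = 0.01         # assumed chance of agricultural collapse
                             # over the period considered
lives_saved_if_needed = 1e9  # assumed lives saved if alternate foods deploy

expected_lives_saved = p_catastrophe * lives_saved_if_needed
cost_per_expected_life = spend / expected_lives_saved
print(f"cost per expected life saved: ${cost_per_expected_life:,.2f}")  # ~$10
# Even with crude placeholder numbers, this lands well below the roughly
# $3,000+ per life of top global poverty interventions, which is why the
# expected-value case can be strong despite the uncertainty.
```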

There was earlier discussion of what might constitute a fifth area within effective altruism: effective environmentalism. I would propose that this should not be regulating pollution to save lives in developed countries at $5 million apiece. However, there are frameworks that value biodiversity highly. One could argue that we will eventually be able to reconstruct extinct species, or put organisms in zoos to prevent extinction, but the safer route is to keep species alive in the wild. In an agricultural catastrophe, not only would many species go extinct without human intervention, but desperate humans would actively eat many species to extinction. I have therefore estimated that the most cost-effective way of saving species is furthering alternate foods.

Overall, there are several promising causes, but I think the most promising is alternate foods. It is competitive with other global catastrophic risk causes, and it has the further benefits of being more cost-effective at saving lives in the present generation than global poverty interventions and more cost-effective at saving species than conventional interventions like buying rainforest land. I am working on a mechanism to allow people to support this cause.

Edit: here is the cost-per-life-saved paper, now that it is published.

 

Comments (7)



Wow, I didn't expect to see any submissions so quickly! Thanks Dave!

A couple questions:

The satellites would be ~micron-thick solar cells plus CPUs, and would require a small fraction of the matter in the solar system.

Do you have calculations to justify this claim?

Do you expect that computer consciousness would be net positive? Why?

You talk about preserving natural life as a good thing. Are you at all concerned about wild-animal suffering?

More generally, alternate foods are an interesting idea that I hadn't heard before, and the area looks neglected among EAs (I don't know if there's much research being done on it, but from what you say it sounds like there's not). I'd be happy to see what you publish on alternate food interventions.

Thanks for the questions, and thanks for the opportunity to post! See the link for calculations on mass requirements. They estimate that a chunk of Mercury could envelop the sun in a few years using self-replicating nanotechnology. To get maximum computational power, you would then want to use the waste heat from a shell near the sun to power another shell further out (Matrioshka brains). We actually already have "solar" cells that work with lower-temperature radiation. Making many of these shells would require much more mass, but it would still be feasible.
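
As a rough independent sanity check on the single-shell mass (my own back-of-envelope with approximate constants, not the linked calculation):

```python
import math

# Back-of-envelope mass for a single micron-thick shell at 1 AU,
# compared with Mercury (approximate constants).
AU = 1.496e11          # meters
thickness = 1e-6       # ~1 micron
density = 2330         # kg/m^3, roughly silicon
mercury_mass = 3.3e23  # kg

shell_area = 4 * math.pi * AU**2               # ~2.8e23 m^2
swarm_mass = shell_area * thickness * density  # ~6.6e20 kg

print(f"swarm mass: {swarm_mass:.2e} kg")
print(f"fraction of Mercury's mass: {swarm_mass / mercury_mass:.4f}")  # ~0.002
# One full shell at 1 AU needs only ~0.2% of Mercury's mass; nested
# Matrioshka shells would need considerably more but remain feasible.
```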

I think that computer consciousnesses could be much happier than humans. However, they could also be much less happy. So I think this is important to work on, though I am optimistic overall.

My reference to reducing animal suffering in galactic colonization and ancestor simulations includes concern about wild animal suffering. I am just less concerned about the wild animal suffering going on now because it involves much smaller quantities. However, I will note that there may be low-cost ways of reducing wild animal suffering without compromising biodiversity, simply by keeping fewer organisms per species. For instance, there are huge numbers of copepods (a type of invertebrate) over large areas of the ocean. If we fertilize the oceans, we can have a food chain that goes directly from algae to fish. This would reduce the amount of agricultural land required, probably increasing wild animal suffering on land, but it might be a win overall. This could be justified even in non-catastrophe times, and fertilizing the oceans is one of my alternate foods if some sunlight remains.

Yes, very few people are working on alternate foods now. So that means the marginal impact of additional work is very high.

This is really cool, and I'm glad to see that one of the first posts in this series is on a somewhat unconventional EA idea.

I'm curious, though - you seem to conclude pretty decisively that GCRs, or at least general far future trajectory changes, are the highest-impact cause, but then you select alternate foods rather than other far future interventions because it is competitive with other GCR causes but is also good for global poverty and environmentalism. If GCRs or far future trajectory changes are overwhelmingly important, it seems you should just choose whichever intervention seems most important for the far future and call it a day. A marginal difference in impact on the far future should outweigh impact on other causes that are far less important. Of course, it could be that GCRs are only slightly more important than the other causes, in which case this calculation makes sense. But it seems like you think they're likely to be far more important, so I'm curious about how you reconcile this.

Thanks, and good points. There is a lot of uncertainty in the cost-effectiveness of GCR interventions (especially given my crude framework so far - this would be much more accurate). So I would not have much confidence in saying that any one is the best. I would have slightly more confidence in saying that the group I mentioned is likely to be more cost-effective than the GCRs I did not mention. Because I have two independent lines of reasoning pointing towards the overwhelming importance of reducing GCR, I am fairly confident in that. But I am not 100% confident, so there is an advantage to having benefits in other frameworks ("no regrets"). Also, I am particularly excited that alternate foods present an opportunity to unite some of the factions within EA.

I believe that the future will generally be net beneficial.

That seems.... optimistic. Why do you believe that?

On the biological human side: since we have figured out how to grow our economies faster than our population, our standard of living has risen far beyond subsistence. Many would argue that even at subsistence, human existence was net positive, but I think it is fairly clear that human existence in developed countries is currently net positive. In the future, barring a global catastrophe, I think we can maintain or increase our standard of living (see my second comment here).

On the computer consciousness side, it is much less straightforward. Robin Hanson has written a lot about what the future might be like if there are many competing computer consciousnesses (e.g. link). Since it is so easy to create a copy of software, he argues that the large supply of labor will reduce wages to subsistence levels unless we somehow manage to regulate the process. I couldn't find exactly where, but I believe he argues that subsistence levels might be quite happy: the logic was something like an optimally productive worker is generally a happy and highly motivated worker, like a workaholic.

However, if there is a fast takeoff of an individual computer consciousness, it could become completely dominant. Making that a happy outcome is where MIRI comes in. I am currently pretty scared about our chances in this scenario. But now that even Bill Gates is concerned about it (though not donating yet), I am hopeful we can improve our odds soon.

Thanks for answering. I don't really care about computer consciousnesses because I'm somewhat of a carbon chauvinist; I only care what happens to biological humans and other vertebrates who share my ancestry and brain architecture. I think the rest is just our empathy misfiring.

AI or em catastrophe would be terrible, but likely not hellish, so it would be merely a dead future, not a net-negative one.

The things I'm most concerned about are blind spots like animal suffering, and political risks like irrational policies that cause more harm than benefit. If we include these, I think it's plausible that aggregate welfare is net negative even in developed countries. Technology might change this, but I think political risks and human biases (moral blind spots) can make any innovation useless or net harmful. I don't know how to address these because I don't believe advocacy actually works.
