I made a very rough spreadsheet that estimates the expected impact of work on different causes, for screening purposes. For the risks, it was based on expected damage, the amount of work that has been done so far, and a functional form for marginal impact. I did it from several perspectives, including conventional economic (discounting future human utility), positive utilitarian (maximizing net utility without discounting), and biodiversity. Note that a pure negative utilitarian, who aims only to reduce aggregate suffering, may prefer human extinction (I don't subscribe to this viewpoint). I believe that the future will generally be net beneficial.
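To make the screening logic concrete, here is a minimal sketch in Python of the kind of calculation the spreadsheet does. The power-law form for diminishing returns and all of the numbers below are illustrative placeholders, not the spreadsheet's actual inputs.

```python
# A minimal sketch of the screening calculation, with illustrative
# numbers only. The power-law form for diminishing returns is an
# assumption here; the actual spreadsheet's functional form may differ.

def marginal_impact(expected_damage, work_so_far, alpha=1.0):
    """Expected damage averted per unit of additional work, assuming
    returns diminish with the cumulative work already done."""
    return expected_damage / (work_so_far ** alpha)

# Hypothetical inputs: (expected damage in arbitrary utility units,
# person-years of work done so far).
causes = {
    "Cause A": (1e6, 300),
    "Cause B": (5e5, 100),
    "Cause C": (3e5, 10),
}

for name, (damage, work) in causes.items():
    print(f"{name}: marginal impact ~ {marginal_impact(damage, work):,.0f}")
```

With these placeholder inputs, the most neglected cause comes out ahead even though its expected damage is smallest, which is the point of including work-to-date in the screen.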
Of course there has been a lot of talk recently that if one places non-negligible value on future generations, reducing global catastrophic risk is of overwhelming importance. But I have not seen the point that even if you do discount future generations exponentially, there could still be an overwhelming number of discounted consciousnesses if you assign a non-negligible probability to computer consciousnesses arriving this century. This is reasonable because an efficient computer consciousness would use much less energy than a typical human. Furthermore, it would not take very long to construct a traditional Dyson sphere: independent satellites orbiting the sun that absorb most of the sun's output. The satellites would be ~micron-thick solar cells plus CPUs, and would require a small fraction of the matter in the solar system. Note that this means that even if one thinks that artificial general intelligence will be friendly, it is still of overwhelming importance to reduce the risk of never reaching those computer consciousnesses, which could simply mean a global catastrophe from which technological civilization does not recover. I am open to arguments about far-future trajectory changes other than global catastrophic risks, but I think they need to be developed further. This also includes potential mass animal suffering associated with galactic colonization or simulation of worlds.
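To see why exponential discounting does not change the conclusion, here is a rough worked example. The discount rate, arrival date, and number of consciousnesses are all assumptions made for illustration:

```python
import math

# Back-of-envelope: even with exponential discounting, a large number
# of computer consciousnesses arriving this century can dominate.
# All figures are illustrative assumptions, not established estimates.

discount_rate = 0.03       # assumed 3%/year, a conventional economic rate
years_until_arrival = 80   # assumed arrival of computer consciousnesses
n_consciousnesses = 1e20   # assumed count a Dyson sphere could support

discount_factor = math.exp(-discount_rate * years_until_arrival)  # ~0.09
discounted_count = n_consciousnesses * discount_factor            # ~9e18

print(f"discount factor: {discount_factor:.3f}")
print(f"discounted consciousnesses: {discounted_count:.1e}")
print(f"current human population, for comparison: {7e9:.1e}")
```

Even after shrinking by an order of magnitude under discounting, the assumed future population outweighs the present one by a factor of about a billion.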
Looking across these global catastrophic risks, I find the most promising are artificial intelligence alignment, molecular manufacturing, high-energy physics experiments, a 100%-lethal engineered pandemic, global totalitarianism, and alternate foods as solutions to global agricultural disruption. Some of these are familiar, so I will focus on the less familiar ones.
Though many regard high-energy physics experiments as safe, a risk of one in ~1 billion per year of turning the Earth into a strangelet or black hole, or of destroying the entire visible universe, is still very bad. And the risk could be higher because of model error. Setting the risk aside, I believe the net benefit of these experiments, considering all the costs, is quite low, so I personally think they should be banned.
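To put that probability in perspective, here is the expected-loss arithmetic, counting only the present generation (the far-future loss would dominate if included):

```python
# Expected-loss sketch for the one-in-~1-billion-per-year figure above,
# counting only present lives. The population figure is approximate.

p_catastrophe_per_year = 1e-9   # risk cited above
world_population = 7e9          # approximate

expected_deaths_per_year = p_catastrophe_per_year * world_population
print(f"~{expected_deaths_per_year:.0f} expected deaths per year")  # ~7
```

So even ignoring the future entirely, the expected cost is several lives per year, and any model error multiplies that figure.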
Local totalitarianism, like North Korea's, eventually gets outcompeted. However, global totalitarianism would have no competition, and could last indefinitely. This would be bad for the people living under it, but it could also stifle future potential like galaxy colonization and artificial general intelligence. I am less familiar with the interventions to prevent this.
Global agricultural disruption could occur from risks like nuclear winter, asteroid/comet impact, supervolcanic eruption, abrupt climate change, or agroterrorism. Though many of these risks have been studied extensively, there is a new class of interventions called alternate foods that do not rely on the sun (disclosure: I came up with them as solutions to catastrophes). Examples include growing mushrooms on dead trees and growing edible bacteria on natural gas. I have done some modeling of the cost-effectiveness of alternate food interventions, including planning, research, and development. This will hopefully be published soon, and it indicates that expected lives in the present generation can be saved at significantly lower cost than with typical global poverty interventions. Furthermore, alternate foods would reduce the chance of civilization collapsing, and therefore the chance that civilization never recovers.
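For a sense of how such a calculation works, here is a simplified sketch; every input is a placeholder assumption, not a number from the forthcoming paper:

```python
# Sketch of a cost-per-expected-life-saved calculation of the kind the
# modeling above performs. Every number is a placeholder assumption;
# the forthcoming paper's inputs and structure may differ.

intervention_cost = 1e8       # assumed: planning + R&D, in dollars
p_catastrophe = 0.01          # assumed: chance the preparations are needed
lives_saved_if_needed = 1e9   # assumed: lives saved if alternate foods deploy

expected_lives_saved = p_catastrophe * lives_saved_if_needed
cost_per_life = intervention_cost / expected_lives_saved
print(f"~${cost_per_life:,.0f} per expected life saved")  # ~$10 here
```

The key structural point is that a modest up-front cost is divided over an enormous number of lives at stake, so even a small probability of the catastrophe can yield a very low cost per expected life saved.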
There was earlier discussion of what might constitute a fifth cause area within effective altruism: effective environmentalism. I would propose that this should not be regulating pollution to save lives in developed countries at $5 million apiece. However, there are frameworks that value biodiversity highly. One could argue that we will eventually be able to reconstruct extinct species, or put organisms in zoos to prevent extinction. But the safer route is to keep species alive in the wild. In an agricultural catastrophe, not only would many species go extinct without human intervention, but desperate humans would actively eat many species to extinction. Therefore, I have estimated that the most cost-effective way of saving species is furthering alternate foods.
Overall, there are several promising causes, but I think the most promising is alternate foods. This is because it is competitive with other global catastrophic risk causes, but it has the further benefit of being more cost-effective at saving lives in the present generation than global poverty interventions, and more cost-effective at saving species than conventional interventions like buying rainforest land. I am working on a mechanism to allow people to support this cause.
Edit: here is the cost per life saved paper now that it is published.
Wow, I didn't expect to see any submissions so quickly! Thanks Dave!
A couple questions:
Do you have calculations to justify this claim?
Do you expect that computer consciousness would be net positive? Why?
You talk about preserving natural life as a good thing. Are you at all concerned about wild-animal suffering?
More generally, alternate foods are an interesting idea that I hadn't heard before, and the area looks neglected among EAs (I don't know if there's much research being done on it, but from what you say it sounds like there's not). I'd be happy to see what you publish on alternate food interventions.
Thanks for the questions and thanks for the opportunity to post! See link for calculations on mass requirements. They estimate that a chunk of Mercury could envelop the sun in a few years using self-replicating nanotechnology. But then, to get maximum computation power, you would want to use the waste heat from a shell near the sun to power another shell further out (Matrioshka Brains). We actually already have "solar" cells that work with lower-temperature radiation. Making many of these shells would require a lot more mass, but it would still be feasible.
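As a rough sanity check on the mass claim, here is a back-of-envelope calculation; the shell radius, thickness, and material density are assumed figures rather than the linked source's:

```python
import math

# Back-of-envelope for the mass of micron-thick satellites enveloping
# the sun at ~1 AU. The orbital radius, thickness, and material density
# are assumptions; the linked calculations may use different figures.

au = 1.496e11      # meters, Earth-sun distance (assumed shell radius)
thickness = 1e-6   # meters, the ~micron-thick cells described above
density = 2300.0   # kg/m^3, assumed roughly silicon-like material

shell_area = 4 * math.pi * au**2       # ~2.8e23 m^2
shell_mass = shell_area * thickness * density

mercury_mass = 3.3e23                  # kg
print(f"shell mass: {shell_mass:.1e} kg")
print(f"fraction of Mercury's mass: {shell_mass / mercury_mass:.1%}")
```

Under these assumptions a single shell needs only ~0.2% of Mercury's mass, so even many Matrioshka shells would remain a small fraction of the matter in the solar system.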
I think that computer consciousnesses could be much happier than humans. However, they could be much less happy. So I think this is important to work on, though I am optimistic overall.
My reference to reducing animal suffering in galactic colonization and ancestor simulations includes concern about wild animal suffering. I am just less concerned about the wild animal suffering that is going on now because it involves much smaller quantities. However, I will note that there may be low-cost ways of reducing wild animal suffering without compromising biodiversity, by keeping fewer organisms per species. For instance, there are huge numbers of copepods (a type of invertebrate) over large areas of the ocean. However, if we fertilize the oceans, we can have a food chain that goes directly from algae to fish. This would reduce the amount of agricultural land required, probably increasing wild animal suffering on land, but it might be a win overall. This could be justified even in non-catastrophe times, but fertilizing the oceans is one of my alternate foods if there is some sunlight remaining.
Yes, very few people are working on alternate foods now. So that means the marginal impact of additional work is very high.
This is really cool, and I'm glad to see that one of the first posts in this series is on a somewhat unconventional EA idea.
I'm curious, though - you seem to conclude pretty decisively that GCRs, or at least general far future trajectory changes, are the highest-impact cause, but then you select alternate foods rather than other far future interventions because it is competitive with other GCR causes but is also good for global poverty and environmentalism. If GCRs or far future trajectory changes are overwhelmingly important, it seems you should just choose whichever intervention seems most important for the far future and call it a day. A marginal difference in impact on the far future should outweigh impact on other causes that are far less important. Of course, it could be that GCRs are only slightly more important than the other causes, in which case this calculation makes sense. But it seems like you think they're likely to be far more important, so I'm curious about how you reconcile this.
Thanks, and good points. There is a lot of uncertainty in the cost-effectiveness of GCR interventions (especially given my crude framework so far - this would be much more accurate). So I would not have much confidence in saying that any one of them is the best. I have slightly more confidence in saying that the group I mentioned is likely to be more cost-effective than the GCRs I did not mention. Because I have two independent lines of reasoning pointing towards the overwhelming importance of reducing GCR, I am fairly confident in that. But I am not 100% confident, so there is an advantage to having benefits in other frameworks ("no regrets"). Also, I am particularly excited about the fact that alternate foods have an opportunity to unite some of the factions within EA.
That seems... optimistic. Why do you believe that?
On the biological human side, since we have figured out how to grow our economies faster than our population, our standard of living has risen far beyond subsistence. Many would argue that even at subsistence, human existence was still net positive, but I think it is fairly clear that human existence in developed countries today is net positive. In the future, barring a global catastrophe, I think we could maintain or increase our standard of living (see my second comment here).
On the computer consciousness side, it is much less straightforward. Robin Hanson has written a lot on what the future might be like if there are many competing computer consciousnesses (e.g. link). Since it is so easy to create a copy of software, he argues that the huge supply of labor will reduce wages to subsistence levels, unless we somehow manage to regulate the process. I couldn't find exactly where, but I believe he argues that subsistence for them might be quite happy. The logic went something like this: an optimally productive worker is generally a happy and highly motivated worker, like a workaholic.
However, if there is a fast takeoff of an individual computer consciousness, it could become completely dominant. Making that a happy outcome is where MIRI comes in. I am currently pretty scared about our chances in this scenario. But now that we even have Bill Gates concerned about it (though not donating yet), I am hopeful we can improve our odds soon.
Thanks for answering. I don't really care about computer consciousnesses because I'm somewhat of a carbon chauvinist; I only care what happens to biological humans and other vertebrates who share my ancestry and brain architecture. I think the rest is just our empathy misfiring.
AI or em catastrophe would be terrible, but likely not hellish, so it would be merely a dead future, not a net-negative one.
The things I'm most concerned about are blind spots like animal suffering and political risks like irrational policies that cause more harm than benefit. If we include these, I think it's plausible there is net-negative aggregate welfare even in developed countries. Technology might change these, but I think political risks and human biases (moral blind spots) can make any innovation useless or net harmful. I don't know how to address these because I don't believe advocacy actually works.