
Currently, I'm pursuing a bachelor's degree in Biological Sciences in order to become a researcher in the area of biorisk, because I was confident that humanity would stop inflicting tremendous amounts of suffering on other animals and would come to have net positive value in the future.

However, there was a nagging thought in the back of my head about the possibility that it would not do so, and I found this article suggesting that there is a real possibility that such a horrible scenario might actually happen.

If there is indeed a very considerable chance that humanity will keep torturing animals at an ever-growing scale, and thus keep having net negative value for an extremely large portion of its history, doesn't that mean that we should strive to make humanity more likely to go extinct, not less?


This is the question. I agree with finm that we should stay alive since: 1) we just might figure out a way to stop the mass suffering, and 2) we just might develop the intention to do something about it. 

To add a third point, I would say: 3) if humanity goes extinct, then there is a possibility that either:

  • a) no other species with humanity's intelligence and empathy ever comes into being, while nature carries on, thus guaranteeing mass suffering until the end of the universe; or
  • b) even if another species like humanity (or humanity itself) emerges, that would require hundreds of millions of years, during which sentient beings would suffer.

So I'm of the belief that humanity should be kept alive, because it is the only glimmer of hope, however small, for sentient beings. Now, I am a bit more hopeful than you, simply because within the span of a mere 4000 years of civilization (which is a blink of an eye in the grand scheme of things), humanity has, in many places:

  • recognized the evils of slavery, the caste system, etc.;
  • outlawed discrimination on the basis of race, ethnicity, and sex;
  • done away with the belief that war is "glorious";
  • even passed laws outlawing certain practices against animals (e.g., California's Proposition 12);
  • actually tried to realize utopia (e.g., the French and Russian Revolutions), even though those attempts failed spectacularly.

Vive humanity! Well, of course we have done as many horrible things (if not far more) to each other and to animals, but ultimately... upon whom else can we rest our hopes, my friend?

I agree that, right now, we're partly in the dark about whether the future will be good if humanity survives. But if humanity survives, and continues to commit moral crimes, then there will still be humans around to notice that problem. And I expect that those humans will be better informed about (i) ways to end those moral crimes, and (ii) the chance those efforts will eventually succeed.

If future efforts to end moral crimes succeed, then of course it would be a great mistake to go extinct before that point. But even for the information value of knowing more about the prospects for humans and animals (and everything else that matters), it seems well worth staying alive.

I'm not convinced that efforts to end factory farming will (by default) become more likely to succeed over time - what's your thinking behind this? Given the current trajectory of society (see below), whilst I'm hopeful that is the case, it's far from what I would expect. For example, I can imagine the "defensive capabilities" of the actors trying to uphold factory farming improving at the same or a faster rate than the capabilities of farmed animal advocates.

Additionally, I'm not sure that the information value about our future prospects, stated so simply, outweighs the suffering of trillions of animals over the coming decades. This feels like a statement that is easy for us to make as humans, who largely aren't subject to suffering as intense as that faced by many farmed animals, but it might be different if we thought about this from behind a veil of ignorance, where the likely outcome for a sentient being is a life of imprisonment and pain.

finm
Thanks, I think both those points make sense. On the second point about value of information: the future for animals without humans would likely still be bad (because of wild animal suffering), and a future with humans could be less bad for animals (because we alleviate both wild and farmed animal suffering). So I don't think it's necessarily true that something as abstract as ‘a clearer picture of the future’ can't be worth the price of present animal suffering, since one of the upshots of learning that picture might be to choose to live on and reduce overall animal suffering over the long run. Although of course you could just be very sceptical that the information value alone would be enough to justify another ⩾ half-century of animal suffering (and it certainly shouldn't be used as an excuse to wait around and not do things to urgently reduce that suffering).

I don't know exactly what you're pointing at re “defensive capabilities” of factory farming, and I also think I share your short-term (say, ⩽ 25-year) pessimism about farmed animals. But in the longer run, I think there are some reasons for hope (if alt proteins get much cheaper and better, or if humans do eventually decide to move away from animal agriculture for roughly ethical reasons, despite the track record of activism so far).

Of course, there is the question of what to do if you are much more pessimistic even over the long run for animal (or nonhuman) welfare. Even here, if “cause the end of human civilisation” were a serious option, I'd be very surprised if there weren't many other serious options available to end factory farming without also causing the worst calamity ever. (Don't mean to represent you as taking a stand on whether extinction would be good, fwiw.)

Some people make the argument that the difference in suffering between a worst-case scenario (s-risk) and a business-as-usual scenario is likely much larger than the difference in suffering between a business-as-usual scenario and a future without humans. This suggests focusing on ways to reduce s-risks rather than increasing extinction risk.

A helpful comment from a while back: https://forum.effectivealtruism.org/posts/rRpDeniy9FBmAwMqr/arguments-for-why-preventing-human-extinction-is-wrong?commentId=fPcdCpAgsmTobjJRB

Personally, I suspect there's a lot of overlap between risk factors for extinction and risk factors for s-risks. In a world where extinction is a serious possibility, it's likely that a lot of things would be very wrong, and those things could lead to even worse outcomes like s-risks or hyperexistential risks.

No, there is no way to be confident. 

I think humanity is intellectually on a trajectory towards greater concern for non-human animals. But this is not a reliable argument. Trajectories can reverse or stall, and most of the world is likely to remain, at best, indifferent to and complicit in the increasing suffering of farmed animals for decades to come. We could easily "lock in" our (fairly horrific) modern norms.

But I think we should probably still lean towards preventing human extinction. 

The main reason for this is the pursuit of convergent goals.

It's just way harder to integrate pro-extinction actions into the other things that we care about and are trying to do as a movement. 

We care about making people and animals healthier and happier, avoiding mass suffering events / pandemics / global conflict, improving global institutions, and pursuing moral progress. There are many actions that can improve these metrics - reducing pandemic risk, making AI safer, supporting global development, preventing great power conflict - which also tend to reduce extinction risk. But there are very few things we can do that improve these metrics while increasing x-risk. 

Even if extinction itself would be positive expected value, trying to make humans go extinct is a bit all-or-nothing, and you probably won't ever be presented with a choice where x-risk is the only variable at play. Most of the things you can do that increase human x-risk at the margins also probably increase the chance of other bad things happening. This means that there are very few actions that you could take with a view towards increasing x-risk that are positive expected value.

I know this is hardly a rousing argument to inspire you in your career in biorisk, but I think it should at least help you guard against taking a stronger pro-extinction view. 

If humans go extinct, surely wild animal suffering will continue for another billion years on Earth - and there are a lot more wild animals than farmed animals. If we survive, we can continue the work of improving the lives of both farmed and wild animals.

Unfortunately, it is not worth the risk of us spreading wild animals throughout the galaxies. Then there's the fact that we might torture digital beings.

Utilitarians aware of the cosmic endowment, at least, can take comfort in the fact that the prospect of quadrillions of animals suffering isn't even a feather in the scales. They shut up and multiply.

(Many others should also hope humanity doesn't go extinct soon, for various moral and empirical reasons. But the above point is often missed among people I know.)

I worry about this line of reasoning because it's ends-justify-the-means thinking.

Let's say billions of people were being tortured right now, and some longtermists wrote about how this isn't even a feather in the scales compared to the cosmic endowment. These longtermists would be accused of callously gambling billions of years of suffering on a theoretical idea. I can just imagine The Guardian's articles about how SBF's naive utilitarianism is alive and well in EA.

The difference between the scenario for animals and the scenario for humans is that the former is socially acceptable but the latter is not. There isn't a difference in the actual badness.

Separately, to engage with the utilitarian merits of your argument, my main skepticism is an unwillingness to go all-in on ideas which remain theoretical when the stakes are billions of years of torture. (For example, let's say we ignore factory farming, and then there's a still unknown consideration which prevents us or anyone else from accessing the cosmic endowment. That scares me.) Also, though I'm not a negative utilitarian, I think I take arguments for suffering-focused views more seriously than you might.

I'm skeptical that humans will ever realize the full cosmic endowment, and that even if we do, the future will be positive for most of the quintillions of beings involved.

First, as this video discusses, it may be difficult to spread beyond our own star system, because habitable planets may be few and far between. The prospect of finding a few habitable planets might not justify the expense of sending generation ships (even ones populated with digital minds) out into deep space to search for them. And since Earth will remain habitable for the next billion y... (read more)

Thanks for the comment, Zach. I upvoted it.

I fully endorse expected total hedonistic utilitarianism[1], but this does not imply that any reduction in extinction risk is way more valuable than a reduction in near-term suffering. I guess you want to make this case by making a comparison like the following:

  • If extinction risk is reduced in absolute terms by 10^-10, and the value of the future is 10^50 lives, then one would save 10^40 (= 10^(50 - 10)) lives.
  • However, animal welfare or global health and development interventions have an astronomically low impact compar
... (read more)
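A minimal sketch of the expected-value arithmetic in the first bullet above, using only the hypothetical 10^-10 risk reduction and 10^50-life future stated there (the helper function and its names are purely illustrative, not anyone's actual model):

```python
# Illustrative expected-value arithmetic for the comparison in the bullet above.
# The numbers are the hypothetical ones stated there, not empirical estimates.

def expected_lives_saved(absolute_risk_reduction: float, future_value_in_lives: float) -> float:
    """Expected lives saved by an absolute reduction in extinction risk."""
    return absolute_risk_reduction * future_value_in_lives

# A 10^-10 absolute reduction applied to a 10^50-life future gives 10^40 expected lives.
print(f"{expected_lives_saved(1e-10, 1e50):.2e}")  # prints 1.00e+40
```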

To answer your question very directly on confidence about millions of years in the future, the answer, I think, is "no", because I don't think we can be reasonably confident and precise about any significant belief about the state of the universe millions of years into the future.[1] I'd note that the article you link isn't very convincing for someone who doesn't share the same premises, though I can see it leading to 'nagging thoughts' as you put it.

Other ways to answer the latter question about human extinction could be:

  • That humanity is positive (if human moral value is taken to be larger than the effect on animals)
  • That humanity is net-positive (if the total effect of humanity is positive, most likely because of the belief that wild-animal suffering is even worse)
  • Option value, or the belief that humanity has the capacity to change (as others have stated)

In practice though, I think if you reach a point where you might consider it to be a moral course of action to make all of humanity extinct, perhaps consider this a modus tollens of the principles that brought you to that conclusion, rather than a logical consequence that you ought to believe and act on. (I see David made a similar comment at basically the same time.)

  1. ^

    Some exceptions for physics, especially outside of our lightcone, yada yada, but I think this holds for the class of beliefs (hence "significant beliefs") that are similar to this question.

Comments

It only definitely follows from humans being net negative in expectation that we should try to make humans go extinct if you are both a full utilitarian and "naive" about it, i.e. prepared to break usually sacrosanct moral rules when you personally judge that doing so is likely to have the best consequences - something which most utilitarians take to usually result in bad consequences and therefore to be discouraged. Another way to describe 'make humanity more likely to go extinct' is 'murder more people than all the worst dictators in history combined'. That is the sort of thing that is going to look like a prime candidate for 'do not do this, even if it has the best consequences' on non-utilitarian moral views. And it is also obviously breaking standard moral rules.

I don't have a good answer to this, but I did read a blog post recently which might be relevant. In it, two philosophers summarize their paper, which argues against drawing the conclusion that longtermists should hasten extinction rather than prevent it. (The impetus for their paper was this paper by Richard Pettigrew, which argued that longtermism should be highly risk-averse. I realize that this is a slightly separate question, but the discussion seems relevant.) Hope this helps!
