
Many (e.g. Parfit, Bostrom, and MacAskill) have convincingly argued that we should care about future people (longtermism), and thus that extinction is as bad as the loss of 10^35 lives, or possibly much more, because there might be 10^35 humans yet to be born.

I believe, with medium confidence, that these numbers are far too high and that, when fertility patterns are fully accounted for, 10^35 might become 10^10—approximately the current human population. I believe with much stronger confidence that EAs should be explicit about the assumptions underlying numbers like 10^35, because concern for future people is necessary but not sufficient to support such claims.

I first defend these claims, then offer some ancillary thoughts about implications of longtermism that EAs should take more seriously.

 

Extinction isn’t that much worse than 1 death

The main point is that if you kill a random person, you also kill off all of their descendants. And since the average person is responsible for ~10^35/(current human population) ≈ 10^25 of the future lives, their death is only ~10^10 times less bad than extinction.

The general response to this is a form of Malthusianism—that after a death, the human population regains its previous level because fertility increases. Given that current fertility rates are below 2 in much of the developed world, I have low confidence this claim is true. More importantly, you need very high credence in some type of Malthusianism to bump up the 10^10 number significantly. Even if Malthusianism is 99% likely to be correct, extinction is only 10^12 times worse than one death. Let X be the harm of extinction, with X arbitrarily large: there is a 99% chance one death is negligibly bad compared to extinction, but a 1% chance it is 1/10^10 as bad as extinction, and 0.99(0 * X) + 0.01(1/10^10 * X) = 1/10^12 * X.
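
For concreteness, here is that expected-value calculation as a minimal sketch. The numbers are the illustrative ones used above: N = 10^35 future lives (playing the role of X), C = 10^10 current people, and p the credence in Malthusianism.

```python
# Minimal sketch of the calculation above (illustrative numbers from the post).
N = 1e35   # assumed future lives if extinction is avoided (the "X" above)
C = 1e10   # approximate current human population
p = 0.99   # credence that Malthusian replacement makes a death cost only ~1 life

# With probability p, one death costs ~1 life (negligible next to N);
# with probability 1 - p, it also forfeits that person's ~N/C descendants.
ev_one_death = p * 1 + (1 - p) * (N / C)   # ~1e23
ev_extinction = N                          # 1e35

print(ev_extinction / ev_one_death)        # ~1e12, i.e. extinction ~10^12x worse
```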

There are many other claims one could make in response. Popular ones include digital people, simulated lives, and artificial uteruses. I don’t have developed thoughts on how these technologies interact with fertility rates, but the same point about needing high credence applies. More importantly, if any of these or other claims is the lynchpin of arguments for why extinction should be a main priority, EAs should make that explicit, because none of these claims is obvious. Even Malthusian-type claims should be made more explicit.

Finally, I think arguments for why extinction might be less than 10^10 times worse are often ignored. I’ll point out two. First, people can have large positive externalities on others’ lives, and on future people’s lives, by sharing ideas; fewer people means the externality from each life is smaller. Second, the insecurity that might result from seeing another’s death might lower fertility and thus lower the number of future lives.

Other implications of longtermism

I'd like to end by zooming out on longtermism as a whole. The idea that future people matter is a powerful claim and opens a deep rabbit hole. In my view, EAs have found the first exit out of the rabbit hole—that extinction might be really bad—and left even more unintuitive implications buried below.

A few of these:

  1. Fertility might be an important cause area. If you can raise the fertility rate by 1% for one generation, you increase the total future population by 1%, assuming away Malthusianism and similar claims. If you can effect a long-term shift in fertility rates (for example, through genetic editing), you could do much, much better—100% x [1.01^n - 1] times better, where n is the number of future generations, which is a very large number (see the sketch after this list).
  2. Maybe we should prioritize young lives over older lives. Under longtermism, the main value most people have is their progeny. If there are 10^35 more people left to live, saving the life of someone who will have kids is > 10^25 times more valuable than saving the life of someone who won’t.
  3. Abortion might be a great evil. See 1…no matter your view on whether an unborn baby is a life, banning abortion could easily effect a significant and long-term increase in the fertility rate.
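
As a toy sketch of the compounding effect in point 1, under the same assume-away-Malthusianism simplification: the generation count and the non-overlapping-generations model below are illustrative assumptions, not from the post.

```python
# Toy model: non-overlapping generations, no Malthusian constraints (illustrative).
n = 500                                            # assumed number of future generations
baseline  = [1.0] * n                              # each baseline generation normalized to 1
one_off   = [1.01] * n                             # a one-generation 1% boost propagates to all descendants
permanent = [1.01 ** (k + 1) for k in range(n)]    # a lasting 1% fertility shift compounds each generation

print(sum(one_off) / sum(baseline))    # = 1.01: total future population up ~1%
print(sum(permanent) / sum(baseline))  # ~29 for n = 500, and it keeps growing with n
```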

Comments



I think your calculations must be wrong somewhere, although I can't quite follow them well enough to see exactly where. 

If you have a 10% credence in Malthusianism, then the expected badness of extinction is 0.1*10^35, or whatever value you think a big future is. That's still a lot closer to 10^35 times the badness of one death than 10^10 times.

Does that seem right?

No, because you have to compare the two harms. 

Take the number of future lives as N and the current population as C.

Extinction is as bad as N lives lost.

With 10% credence, one death is only approximately as bad as 1 life lost, because of Malthusianism. But with 90% credence, it is as bad as N/C lives lost.

So, plugging in 10^35 for N and 10^10 for C, the EV of one death is 1(0.1) + (N/C)(0.9) ~ 0.9 * N/C ~ 9e24, which makes extinction only ~1.1e10 times worse than one death.


In general, if you have credence p in Malthusianism, extinction becomes roughly 10^10 * 1/(1-p) times worse than one death.
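
As a quick sketch of that formula across a few credences (same illustrative N and C as above):

```python
# Sketch of the formula above: with credence p in Malthusianism,
# extinction is roughly 10^10 / (1 - p) times worse than one death.
N, C = 1e35, 1e10    # assumed future lives and current population

def extinction_vs_one_death(p):
    ev_one_death = p * 1 + (1 - p) * (N / C)
    return N / ev_one_death

for p in (0.1, 0.5, 0.9, 0.99):
    print(p, extinction_vs_one_death(p))   # ~1.1e10, 2e10, 1e11, 1e12
```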

Ah nice, thanks for explaining! I'm not following all the calculations still, but that's on me, and I think they're probably right.

But I don't think your argument is actually that relevant to what we should do, even if it's right. That's because we don't care about how good our actions are as a fraction/multiple of what our other options are. Instead, we just want to do whatever leads to the best expected outcomes. 

Suppose there was a hypothetical world where there was a one-in-ten chance the total population was a billion, and a 90% chance the population was two. And suppose we have two options: save one person, or save half the people.

In that case, the expected value of saving half the people would be 0.9*1 + 0.1*500,000,000 = about 50,000,001. That's compared to an expected value of 1 for saving one person. Imo, this is a strong reason for picking the "save half the people" option.

But the expected fraction of people saved tells a quite different story. The "save half" option always results in half being saved. And the expected fraction for the "save one" option is also close to half: 0.9*0.5 + 0.1*(1/1,000,000,000) ≈ 0.45. Even though the two interventions look very similar from this perspective, I think it's basically irrelevant - expected value is the relevant thing.
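
Here is a minimal sketch of that toy example, computing both quantities with the numbers stated above:

```python
# Toy example from the comment: 90% chance the population is 2, 10% chance it is 1e9.
scenarios = [(0.9, 2), (0.1, 1_000_000_000)]    # (probability, population)

ev_save_half   = sum(p * pop / 2   for p, pop in scenarios)   # ~50,000,001
ev_save_one    = sum(p * 1         for p, pop in scenarios)   # = 1.0
frac_save_half = sum(p * 0.5       for p, pop in scenarios)   # = 0.5 exactly
frac_save_one  = sum(p * (1 / pop) for p, pop in scenarios)   # ~0.45

print(ev_save_half, ev_save_one)       # huge gap in expected lives saved
print(frac_save_half, frac_save_one)   # similar expected fractions saved
```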

What do you think? I might well have made a mistake, or misunderstood still.

Hmm, I’m not sure I understand your point, so maybe let me add some more numbers to what I’m saying and you could say if you think your point is responsive?

What I think you’re saying is that I’m estimating E[value saving one life / value stopping extinction] rather than E[value of saving one life] / E[value of stopping extinction]. I think this is wrong and that I’m doing the latter.

I start from the premise that we want to save the most lives in expectation (current and future lives count equally). Let’s say I have two options…I can prevent extinction or directly stop a random living person from dying. Assume there are 10^35 future lives (I just want N >> C) and 10^10 current lives. Now assume I believe there is a 99% chance that when I save this one life, fertility in the future somehow goes up such that the individual’s progeny are replaced, but a 1% chance the individual’s progeny are not replaced. The individual is responsible for 10^35/10^10 = 10^25 progeny. This gives E[stopping a random living person from dying] ~ 1% * 10^25 = 10^23.

And we’d agree E[preventing extinction] = 10^35. So E[value of saving one life] / E[value of stopping extinction] ~ 10^-12.

Interestingly E[value of saving one life / value of stopping extinction] is the same in this case because the denominator is just a constant random variable…though E[value of stopping extinction/value of saving one life] is very very large (much larger than 10^12).

Thanks, this back and forth is very helpful. I think I've got a clearer idea about what you're saying. 

I think I disagree that it's reasonable to assume that there will be a fixed N = 10^35 future lives, regardless of whether it ends up Malthusian. If it ends up not Malthusian, I think I'd expect the number of people in the future to be far less than whatever the max imposed by resource constraints is, i.e. much less than 10^35.

So I think that changes the calculation of E[saving one life], without much changing E[preventing extinction], because you need to split out the cases where Malthusianism is true vs false.

E[saving one life] is 1 if Malthusianism is true, or some fraction of the future if Malthusianism is false; but if it's false, then we should expect the future to be much smaller than 10^35. So the EV will be much less than 10^35.

E[preventing extinction] is 10^35 if Malthusianism is true, and much less if it's false. But you don't need that high a credence to get an EV around 10^35.

So I guess all that to say that I think your argument is right and also action relevant, except I think the future is much smaller in non-Malthusian worlds, so there's a somewhat bigger gap than "just" 10^10. I'm not sure how much bigger. 

What do you think about that?

Edit: I misread and thought you were saying non-Malthusian worlds had more lives at first; realized you said the opposite, so we're saying the same thing and we agree. Will have to do more math about this.

This is an interesting point that I hadn't considered! I think you're right that non-Malthusian futures are much larger than Malthusian futures in some cases...though if, e.g., the "Malthusian" constraint is digital lives or some such, I'm not sure.

I think the argument you make actually cuts the other way. To go back to the expected value...the case the single death derives its EV from is precisely the non-Malthusian scenario (where its progeny are not replaced by future progeny), so its EV actually remains the same. The extinction EV is the one that shrinks...so you'll actually get a number much less than 10^10 if you have high credence that Malthusianism is true and think non-Malthusian worlds have more people.

But, if you believe the opposite...that Malthusian worlds have more people, which I have not thought about but actually think might be true, yes a bigger gap than 10^10; will have to think about this.

Thanks! Does this make sense to you?
 

We've talked about this, but I wanted to include my two counterarguments as a comment to this post: 

  1. It seems like there's a good likelihood that we have semi-Malthusian constraints nowadays. While I would admit that one should be skeptical of total Malthusianism (i.e. for every person dying another one lives because we are at max carrying capacity), I think it is much more reasonable to think that carrying-capacity constraints actually do exist, and maybe it's something like for every death you get 0.2 lives or something (see the sketch after this list). If this is true, I think this argument weakens a bunch.
  2. This argument only works if, conditional on existential risk not happening, we don't hit Malthusian constraints at any point in the future, which seems quite implausible. If we don't get existential risk and the pie just keeps growing, it seems like we would get super-abundance, and the only thing holding people back would be Malthusian physical constraints on creating happy people. Therefore, we just need some people to live past that time of super-abundance to have massive growth. Additionally, even if you think those people wouldn't have kids (which I find pretty implausible -- one person's preference for children would lead to many kids given abundance), you could talk about those lives being extremely happy, which holds most of the weight. This also
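
One possible way to make the "for every death you get 0.2 lives" idea in point 1 concrete (this is a toy formalization with illustrative numbers, not necessarily the commenter's intended model): treat partial Malthusian pressure as refilling 20% of the remaining lineage deficit each generation.

```python
# Toy model (illustrative assumptions): a death leaves a lineage deficit that
# partial Malthusian pressure refills at 20% per generation.
N, C = 1e35, 1e10        # assumed future lives and current population
m = 1000                 # assumed number of future generations (illustrative)
per_gen = (N / C) / m    # the lost person's descendants, spread evenly across generations
refill = 0.2             # fraction of the remaining deficit replaced each generation

deficit, lost_lives = 1.0, 1.0     # start with the person's own life lost
for _ in range(m):
    lost_lives += deficit * per_gen
    deficit *= 1 - refill          # the gap keeps shrinking over time

print(N / lost_lives)    # ~2e12 here: well above the post's ~10^10 figure
```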

Side note: this argument seems to rely on some ideas about astronomical waste that I won't discuss here (I also haven't done much thinking on the topic), but it might be worth framing the post around that debate.
