As far as I am aware, radical life extension is not considered one of the focuses of Effective Altruism. The 80,000 Hours Problem Profiles page doesn't mention it; Wikipedia and the AIs I asked say it isn't one. The most recent mention of life extension I found on this forum is a request for questions in preparation for an interview to evaluate the Life Extension Advocacy Foundation (LEAF) as a cause. No questions were asked and I couldn't find the interview; it seems there simply wasn't any interest in the topic.

But why?

About 166,000 people die in the world each day, or roughly 60 million per year, most of them from age-related disease. If we could postpone those deaths by just one year, we would gain tens of millions of QALYs, or, in economic terms, raise world GDP by 1-2%. In my view, this should make causes focused on extending lives worthy of consideration; shouldn't it?
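To make the arithmetic explicit, here is a minimal back-of-envelope sketch. The daily death count is the figure cited above; the age-related share and the QALY weight of an extra late-life year are rough assumptions I've picked purely for illustration.

```python
# Back-of-envelope check of the figures above.
# 166,000 deaths/day is the figure cited in the post; the age-related
# share (~2/3) and the QALY weight of an extra late-life year (~0.7)
# are rough assumptions chosen purely for illustration.

DEATHS_PER_DAY = 166_000
AGE_RELATED_SHARE = 2 / 3   # assumption
QALY_WEIGHT = 0.7           # assumption

deaths_per_year = DEATHS_PER_DAY * 365                      # ~60.6 million
age_related_deaths = deaths_per_year * AGE_RELATED_SHARE    # ~40 million
qalys_gained = age_related_deaths * 1 * QALY_WEIGHT         # one extra year each

print(f"Deaths per year:         {deaths_per_year / 1e6:.1f} million")
print(f"Age-related deaths/year: {age_related_deaths / 1e6:.1f} million")
print(f"QALYs from 1-year delay: {qalys_gained / 1e6:.1f} million")
```

Even with these conservative placeholder weights, a one-year postponement of age-related deaths comes out in the tens of millions of QALYs per year.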

Then why can I find no sign that radical life extension has even been considered as an Effective Altruism cause area? The AI told me that radical life extension is speculative, but so is preventing risks from AI, isn't it?

Is it because the interests of the originators of the EA movement lie elsewhere (and the movement followed)?

Comments

Approximately how much money have effective altruism organizations (GiveWell, Giving What We Can) given to science with the aim of curing aging?

 

Do we know for a fact that GiveWell and the other EA orgs are not giving a lot?

Welcome to the Forum!

I think part of the explanation relates to what you point out at the end: “radical life extension is speculative”. You note that “so is preventing risks from AI”, but preventing such risks seems to be higher impact than extending life. In general, causes vary both in how “speculative” they are and in what their expected impact is, and EA may be seen as an attempt to have the most impact for varying levels of “speculativeness”. One could argue that, while the impact of life extension as a cause is relatively high, its “speculativeness-adjusted” impact is considerably lower — just like the risk-adjusted return of some financial instruments is low despite having comparatively high expected returns. This may in part explain why it is relatively neglected among EAs. (I do not think it fully explains it.)
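One (over-simplified) way to make that analogy concrete is to discount each cause's impact-if-it-works by a subjective probability that the intervention works at all. The numbers below are pure placeholders chosen for illustration, not estimates made anywhere in this thread.

```python
# Toy "speculativeness-adjusted impact" comparison.
# Both the impact figures and the success probabilities below are
# illustrative placeholders, not estimates from this thread.

causes = {
    # cause: (impact if the intervention works, subjective probability it works)
    "global health (bednets)": (1.0, 0.95),
    "radical life extension":  (50.0, 0.02),
    "AI risk reduction":       (1000.0, 0.01),
}

for name, (impact, p_works) in causes.items():
    adjusted = impact * p_works  # crude analogue of a risk-adjusted return
    print(f"{name:26s} raw impact {impact:7.1f}   adjusted impact {adjusted:6.2f}")
```

On placeholder numbers like these, a cause with very high raw impact can still end up with a modest "speculativeness-adjusted" impact, which is roughly the shape of the comparison being made above.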

(I have added the tag Aging research. There's a list of relevant posts at the end which you may want to consult.)

This seems false. Dramatic gains in life extension have been happening ever since the invention of modern medicine, so it's strange to call the field too speculative to even consider.

To be clear, I was only trying to describe what I believe is going on here, without necessarily endorsing the relative neglect of this cause area. And it does seem many EA folk consider radical life extension a “speculative” way of improving global health, whether or not they are justified in this belief. 

CEARCH did a shallow dive into this (https://docs.google.com/spreadsheets/d/116DqgnzADo8zAmJ_QAp9AKcjKPMlgBs2Hc9E7SSAASM/edit#gid=0), and our preliminary conclusion is that the marginal expected value of funding life extension research doesn't meet our threshold of 10x GiveWell. There is a lot of uncertainty, obviously, but causes generally look better at the shallow stage and worse on deeper investigation, so this wasn't a promising sign, and we decided not to spend more time on it.

Radical life extension is IMO a big part of the rationalist worldview, if not the EA movement.  (Although recent progress in AI has taken attention away from anti-aging, on the grounds that if we get AI alignment wrong, we're all dead, and if we get alignment right, the superintelligent AI will easily solve aging for us.)

One of the problems with radical life extension as an EA cause area is that it seems like other people ought to be rationally self-interested in funding anti-aging research, so it's not clear why EA should foot the bill:

Health interventions in the world's poorest countries -- a lot of the leverage comes from the fact that poor people often don't have the resources or knowledge to help themselves

Animal welfare & longtermism -- animals obviously have about ~0 ability to advocate for themselves.  Ditto for unborn far-future generations of human civilization.

Anti-aging -- sure, there is a pretty significant collective-action problem (I might pay for anti-aging research because I personally don't want to die, but the benefits are then diffused across all of humanity), but still, wouldn't there be plenty of especially rich people willing to pay the $$$ for anti-aging research? Yes, indeed, this is what we see.

Personally, I cheer these billionaires on; I think they're doing a great thing, and I think people in general ought to wise up about the badness of death and aging (it would be great if millions more people had read "The Fable of the Dragon-Tyrant"!) and support more anti-aging research through government-funded health agencies. HOWEVER, even with all that enthusiasm... I don't think it's really a great fit for an EA cause, since so many other people have a self-interested incentive to fund this stuff.

I'd also note that hundreds of billions of dollars are spent on biomedical research generally each year. While most of this isn't targeted at anti-aging specifically, there will be a fair amount of spillover that benefits anti-aging research, in terms of increased understanding of genes, proteins, cell biology etc.

My understanding of what's happening is, more or less, "~everyone who's willing to bite the bullet on anti-aging being higher EV than GH&D is willing to bite the further bullet on X-risk reduction being higher EV than both". I could imagine some combination of views leading to thinking anti-aging is the highest impact – for instance, someone who subscribed to all of: a person-affecting view, longtermism, and long AI timelines – but that's a pretty unusual combination.

"the" focus violates the spirit of EA, and feels like a weird title for the post. "a" focus is what the correct title would be. 

It's not disinterest; it has always been an open question and viable for prioritization. Either properties of particular people's subjective EV formulas (which presumably incorporate each of the ITN factors) or social contingencies (like the patents/IP culture of biotech investing feeling weird from a more software-industry perspective, or something more like "people not going to enough of the same parties", etc.) lead to it appearing to have very little EA cashflow.

I recently saw this comment asking a similar question 9 years ago.

A friend had a draft of a GiveWell-style "QALYs bought by the marginal dollar spent in the longevity field" model in Guesstimate, but I don't want to put his name on blast in case he thinks the draft is really bad or didn't get far enough. My guess is that more projects like this could meaningfully change the conversation.

Quick thought: under some population models, in particular ones where the human population peaks in the next 50-100 years and then keeps declining (see Three mistakes in the moral mathematics of existential risk), there is a longtermist case to be made that without life extension the economy will not be sustainable, because:

  1. Elderly people comprise a large part of society and thus require more societal help
  2. There are fewer people to fill niche sub-specialties

Of course, a lot of things change with AGI as well, so it all depends on your timelines. I generally believe that EA doesn't emphasize aging enough, and I would like to help you make the case if you're interested.
