
This is something I originally wanted to send to Jack_H, but I think it makes a potentially useful post. Jack asked me whether there's anything I would add to Linch's RP work trial on aging; the post below is my answer. I haven't crafted this post extremely carefully, so there may be important things missing (and some grammatical errors), but I figured it would be better to post it here than to leave it in a FB chat.

The two areas Linch identified that I find most promising

I don't have other large areas in mind toward which to direct effort, but I do have a few specific suggestions for how to direct research within the areas he proposes:

The two areas I find most interesting are: "fundamental research on how to think about the effects of anti-aging" and "intervention research on how to do anti-aging".

Regarding the area "fundamental research on how to think about the effects of anti-aging"

To me, one of the most interesting questions to try to answer is close to one I discussed with Heye in a call. Matthew Barnett summarizes it as: "A mental shift among both elites and regular citizens about the best way to prepare for the future. Here, I imagine that politicians and other elites would regularly talk about the future thousands of years hence because it's reasonable that people will be around that long".

Basically, we need social science research to tell us whether longer lives change what people care about. Willbradshaw also wrote a comment in which, among other points, he explains why he thinks anti-aging needs more social science research if we want to answer longtermism-relevant questions.

A very specific point that ought to be evaluated is this: if people would worry more about x-risk once they no longer died from aging, then this is a point in favor of achieving anti-aging before technologies that pose an existential risk to humanity are developed (see Bostrom's arguments for differential technological development, etc.). Solving aging would align egoistic and altruistic instrumental values: x-risk might become the most significant risk of death even for the individual, not only for civilization.

But this kind of reasoning has a problem that I identified under Matthew Barnett's post, although people didn't seem receptive to it (maybe because it's incorrect? I don't know). The point is this: real-world interventions on anti-aging research mainly speed the research up or slow it down, and a difference of just a few years is very unlikely to influence differential technological development. Therefore, even if we get an answer to such questions (e.g. whether people start caring about x-risk and the long-term future once they can live for thousands of years or more), it could be useless. But I might be wrong here; I haven't thought about this for long enough.

Longtermist interventions in the anti-aging space that don't suffer from that argument might be more promising. For example: preemptively influencing policymaking in the developed world to ensure that anti-aging research brings about a good future (mainly, making the transition to a post-aging world go smoothly), so that we don't get locked into some kind of bad attractor state.

Regarding the area "intervention research on how to do anti-aging"

What I think would be useful is this:

  • We already know that NIH and NIA spending on aging research is vastly inefficient, but we need a good writeup on this: something that analyzes all of their grants and contrasts them with what would actually be helpful. The real field of aging research looks much smaller once we remove all the obviously misguided work.
  • We need to identify more promising underfunded labs, and potentially even labs at risk of being shut down for stupid reasons such as feuds between academics (a point suggested to me by gavintaylor years ago).
  • We need a clear "roadmap" of what basic research to finance, if we want to finance it. I would build it like this: treat each of the hallmarks of aging as a bottleneck until we have better information on which ones actually are bottlenecks; or, if we already have a pretty good sense of which hallmarks are bottlenecks to putting aging under medical control, consider only those. In addition to the hallmarks, we should include other particularly important areas that cut across them, evaluated separately. Every hallmark or area should be associated with neglectedness and tractability scores (scope is not relevant if we only use bottlenecks, but it is relevant for the stuff that cuts across them) and a complete list of the groups working on it. Lifespan.io's Rejuvenation Roadmap would help here, but it's probably not complete. A minimal code sketch of the scoring idea follows this list.
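To make the scoring idea concrete, here is a minimal sketch of what such a roadmap table could look like in code. The hallmark names come from the standard "hallmarks of aging" literature, but every number below is a placeholder assumption for illustration, not a real assessment, and multiplying the two scores is just one possible way of combining them.

```python
# A minimal sketch of the roadmap scoring idea. All scores below are
# placeholder assumptions, not real assessments.

# Each hallmark, treated as a potential bottleneck, gets neglectedness
# and tractability scores on a 1-10 scale (scope is deliberately left
# out, since every bottleneck has to be cleared anyway).
hallmarks = {
    "genomic instability":    {"neglectedness": 4, "tractability": 3},
    "telomere attrition":     {"neglectedness": 2, "tractability": 5},
    "epigenetic alterations": {"neglectedness": 3, "tractability": 6},
    "loss of proteostasis":   {"neglectedness": 5, "tractability": 4},
    "cellular senescence":    {"neglectedness": 2, "tractability": 7},
}

def priority(scores):
    # Multiplying the two scores is one simple way to combine them.
    return scores["neglectedness"] * scores["tractability"]

# Rank hallmarks from highest to lowest priority.
for name, scores in sorted(hallmarks.items(), key=lambda kv: -priority(kv[1])):
    print(f"{name}: priority {priority(scores)}")
```

In a real roadmap each entry would also carry the list of groups working on it, so funders could see at a glance where the gaps are.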
     

But wait, there's more!

Finally, some work needs to be done that cuts across all of this (both the longtermist considerations and the others): building Guesstimate models that evaluate different kinds of interventions under different models of impact. The interventions might be basic research, trials, advocacy, policy change, social science research, etc.
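As an illustration, here is a minimal sketch of what one such model could look like for a single hypothetical intervention (funding a basic-research grant), written as a plain Monte Carlo simulation rather than in Guesstimate itself. Every distribution and number below is an assumption made up for the example.

```python
# A minimal Guesstimate-style Monte Carlo model for one hypothetical
# intervention, evaluated under the "LEV brought closer in time" model
# of impact. All distributions and numbers are illustrative assumptions.
import random

N = 100_000
samples = []
for _ in range(N):
    # Probability that the funded research succeeds at all (assumed).
    p_success = random.uniform(0.02, 0.10)
    # Years by which success would bring LEV closer (assumed).
    years_closer = random.uniform(0.1, 2.0)
    # Aging-related deaths per year, i.e. deaths averted per year of
    # speed-up (assumed order of magnitude).
    deaths_per_year = random.uniform(30e6, 45e6)
    samples.append(p_success * years_closer * deaths_per_year)

print(f"Expected deaths averted: {sum(samples) / N:,.0f}")
```

The same skeleton works for the other models of impact (DALYs averted, long-term effects); only the sampled quantities change, which is what makes it easy to compare interventions side by side.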

EDIT: Another point I should cover here. This is an answer to this comment:

How would you answer the following arguments?

Existential risk reduction is much more important than life extension, since it is possible to solve aging a few generations later, whereas humankind's potential, which could be enormous, is lost after an extinction event.

From a utilitarian perspective, it does not matter whether there are ten generations of people living 70 years or one generation of people living 700 years, as long as they are happy. Therefore the moral value of life extension is neutral.

I am not wholly convinced of the second argument myself, but I do not see where exactly the logic goes wrong. Moreover, I want to play the devil's advocate, and I am curious about your answer.


My answer:

1. Yes, this is probably true. But see the longtermist considerations about the effects of anti-aging research discussed above; they might be in the same ballpark. Or not.

2. There are three ways in which the impact of anti-aging research gets evaluated: DALYs averted and other short-term considerations, LEV (longevity escape velocity) being brought closer in time, and effects relevant to the long-term future. None of the three suffers from this objection.

EDIT 2: One other potentially promising idea, suggested to me by Matthew Barnett in private, is to use prediction platforms such as Metaculus to try to predict clinical trial outcomes (perhaps also predictions on aging itself or on important biomarkers, rather than necessarily on trial endpoints?). Another way to use platforms like Metaculus is to get a sense of what that community thinks about the long-term effects of anti-aging, both before and after getting social science data.

 
