Epistemic status: Relatively confident

Most non-longtermists believe that we should discount the utility of people in the far future. But I think they've failed to consider the implications of special relativity for this worldview.

Consider the fact that the time between two events is not something that all observers can agree upon. Because of relativistic effects like time dilation, time intervals can differ between observers moving on different trajectories. (This is something that GPS satellites must account for as they whiz past.) 

Any good theory of physics (or morality) must be Lorentz covariant, i.e. not dependent on one's orientation or velocity. The physicists' way of defining time intervals consistently is by using proper time:

$$\Delta\tau = \sqrt{\Delta t^2 - \left(\frac{\Delta x}{c}\right)^2}$$

In English, the proper time interval ($\Delta\tau$) between two events depends both on the classical time interval ($\Delta t$) and the time it would take light to cover the spatial separation between the events ($\Delta x/c$). If the start and end of the interval are at the same location, then proper time equals the time a stationary observer at that location would measure; otherwise, it is smaller. Unlike the time measured by your watch, proper time is a Lorentz invariant that all observers agree upon.
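To make the invariance concrete, here is a minimal Python sketch, working in units where c = 1 (times in years, distances in light-years); the function names and the sample numbers are purely illustrative. It computes the proper time between the same pair of events from two different frames:

```python
import math

# Units where c = 1: times in years, distances in light-years.

def proper_time(dt, dx):
    """Proper time between two timelike-separated events,
    given their coordinate time separation dt and spatial separation dx."""
    return math.sqrt(dt**2 - dx**2)

def boost(dt, dx, v):
    """Lorentz-boost the separation (dt, dx) into a frame moving at speed v."""
    gamma = 1.0 / math.sqrt(1.0 - v**2)
    return gamma * (dt - v * dx), gamma * (dx - v * dt)

# Two events 10 years and 6 light-years apart in our frame...
dt, dx = 10.0, 6.0
# ...and the same two events as seen from a ship passing at half the speed of light.
dt_ship, dx_ship = boost(dt, dx, 0.5)

print(proper_time(dt, dx))            # 8.0
print(proper_time(dt_ship, dx_ship))  # 8.0 (up to rounding) -- every observer agrees
```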

Previous work in a classical Newtonian setting (Alexander 2013) concluded that moral weight depends on the inverse square of the distance from an observer. I will show that, once special relativity is accounted for, discounting the far future implies that we must actually care more about the welfare of those distant from us in space.

The minus sign in the formula for proper time means that a discount factor for events distant in time can be cancelled out if those events are also distant in space. The effect is small for distances on Earth, which is much less than one light-second across. But the roughly 440 light-year distance of Polaris means that we should care about events taking place there in the year 2464 just as much as we care about events on Earth today, even if we heavily discount what will happen on Earth hundreds of years from now.
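A back-of-the-envelope version of that claim, as a Python sketch. The 3% annual rate and the exponential form of the discount are arbitrary choices for illustration, and 440 light-years is only an approximate distance to Polaris:

```python
import math

def proper_time(dt_years, dx_lightyears):
    """Proper time in years between two events, with c = 1."""
    return math.sqrt(max(dt_years**2 - dx_lightyears**2, 0.0))

def discount(tau, annual_rate=0.03):
    """Exponential discount factor applied to the proper time tau."""
    return math.exp(-annual_rate * tau)

# Earth in the year 2464: 440 years away in time, nowhere in space.
print(discount(proper_time(440, 0)))    # ~1.9e-06 -- heavily discounted

# Polaris in the year 2464: 440 years away in time AND ~440 light-years away in space.
print(discount(proper_time(440, 440)))  # 1.0 -- no discount at all
```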

This implies that if one rejects the longtermist idea that "future people matter morally just as much as people alive today", then the large majority of moral weight is located not in the future here on Earth but in the far reaches of space. In particular, any sentient aliens on the boundary of our future light cone deserve the same moral consideration as any human alive today.

At this point, you have three options:

  1. Reject Lorentz invariance, angering any nearby physicists and asserting that morality depends not only on where and when you are but also on how fast you're going,
  2. Reject longtermism and accept that our chief civilizational priority should be to send a fleet of starships out at near-light-speed to rescue any drowning aliens, or
  3. Become a longtermist and believe that the goodness of pleasure and the badness of suffering matters the same, whatever the spacetime coordinates.

I await your decision, and I'll see you on Polaris.

Comments



That's a good point. Time discounting (a “time premium” for the past) has also made me very concerned about the welfare of Boltzmann brains in the first moments after the Big Bang. It might seem intractable to improve their welfare now, but if there is any nonzero chance, the EV will be overwhelming!

But I also see the value in longtermism, because if these Boltzmann brains had positive welfare, it will count as even more phenomenally positive from the vantage point of our descendants millions of years from now!

I'm informed @Erik Jenner beat me to this idea! Check out his version as well.
