Epistemic status: Relatively confident
Most non-longtermists believe that we should discount the utility of people in the far future. But I think they've failed to consider the implications of special relativity for this worldview.
Consider the fact that the time between two events is not something that all observers can agree upon. Because of relativistic effects like time dilation, time intervals can differ between observers moving on different trajectories. (This is something that GPS satellites must account for as they whiz past.)
Any good theory of physics (or morality) must be Lorentz covariant, i.e. not dependent on one's orientation or velocity. The physicists' way of defining time intervals consistently is by using proper time:
$$\Delta\tau = \sqrt{\Delta t^2 - \left(\frac{\Delta x}{c}\right)^2}$$

In English, the proper time interval ($\Delta\tau$) between two events depends both on the classical time interval ($\Delta t$) and the time it would take light to cover the spatial separation between the events ($\Delta x / c$). If the start and end of the interval are at the same location, then proper time equals the time a stationary observer at that location would measure; otherwise, it is smaller. Unlike the time measured by your watch, proper time is a Lorentz invariant that all observers agree upon.
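As a minimal sketch of the calculation (working in units where $c = 1$, i.e. times in years and distances in light-years; the helper function is just for illustration):

```python
import math

def proper_time(dt_years: float, dx_lightyears: float) -> float:
    """Proper time (in years) between two events separated by dt_years in time
    and dx_lightyears in space, in units where c = 1.

    Only defined for timelike or lightlike separations (dt >= dx)."""
    if dx_lightyears > dt_years:
        raise ValueError("events are spacelike separated; no proper time interval")
    return math.sqrt(dt_years**2 - dx_lightyears**2)

# Two events at the same place, 100 years apart: proper time is the full 100 years.
print(proper_time(100, 0))   # 100.0
# Same 100-year gap, but 80 light-years away: proper time shrinks to 60 years.
print(proper_time(100, 80))  # 60.0
```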
Previous work in a classical Newtonian setting (Alexander 2013) concluded that moral weight depends on the inverse square of the distance from an observer. I will show that when accounting for special relativity, discounting the far future implies that we must actually care more about the welfare of those distant from us in space.
The minus sign in the formula for proper time means that a discount factor for events distant in time can be cancelled out if those events are also distant in space. The effect is small for distances on Earth, which is much less than one light-second across. But the 440 light-year distance of Polaris means that we should care about events taking place there in the year 2464 just as much as we care about events on Earth today, even if we heavily discount what will happen on Earth hundreds of years from now.
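To spell out the arithmetic for Polaris (440 light-years away, 440 years from now):

$$\Delta\tau = \sqrt{(440\,\text{yr})^2 - \left(\frac{440\,\text{ly}}{c}\right)^2} = \sqrt{440^2 - 440^2}\;\text{yr} = 0,$$

so any discount factor that depends only on proper time leaves these lightlike-separated events entirely undiscounted, exactly as if they were happening here and now.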
This implies that if one rejects the longtermist idea that "future people matter morally just as much as people alive today", then the large majority of moral weight is located not in the future here on Earth but in the far reaches of space. In particular, any sentient aliens on the boundary of our future light cone deserve the same moral consideration as any human alive today.
At this point, you have three options:
- Reject Lorentz invariance, angering any nearby physicists and asserting that morality depends not only on where and when you are but also on how fast you're going,
- Reject longtermism and accept that our chief civilizational priority should be to send a fleet of starships out at near-light-speed to rescue any drowning aliens, or
- Become a longtermist and believe that the goodness of pleasure and the badness of suffering matter the same, whatever their spacetime coordinates.
I await your decision, and I'll see you on Polaris.
That's a good point. Time discounting (which implies a "time premium" for the past) has also made me very concerned about the welfare of Boltzmann brains in the first moments after the Big Bang. It might seem intractable to improve their welfare at this point, but if there is any nonzero chance of doing so, the EV will be overwhelming!
But I also see the value in longtermism, because if these Boltzmann brains had positive welfare, it will look even more phenomenally positive from the vantage point of our descendants millions of years from now!
I'm informed @Erik Jenner beat me to this idea! Check out his version as well.