Intro/summary:
Will MacAskill, arguably the biggest proponent of longtermism, summarises the argument for it as:
1. Future people count.
2. There could be a lot of them.
3. We can make their lives go better.

On the face of it, this is a convincing argument.
However, this post outlines my objections to it, summarised as:
1. Future people count, but less than present people.
2. There might not be that many future people.
3. We might not be able to help future people much.

To this, I will add a fourth: there are trade-offs from this work.
Thank you for the feedback on both the arguments and the writing (something I am aiming to improve through this writing). Sorry for being slow to respond; it's been a busy few weeks!
In response to your points:
I suspect this depends strongly on your view of the overall shape of the future's value. If you assume indefinite exponential growth, you're correct. For what I consider more reasonable shapes of future value, this will probably start to matter. In any case, it weakens the case for future people to some extent, but I agree it is not fatal.
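To make the intuition concrete, here is a toy calculation (the discount factor and growth rate are assumed numbers for illustration, not estimates from either of us): let v_t be the value realised in century t and δ < 1 a per-century discount on future people.

```latex
% Total discounted value of the future, summing over centuries t:
\[
  V \;=\; \sum_{t=0}^{\infty} \delta^{t} v_t .
\]
% Under indefinite exponential growth, v_t = g^{t} v_0 with \delta g \ge 1,
% the sum diverges, so the discount changes almost nothing:
\[
  V \;=\; v_0 \sum_{t=0}^{\infty} (\delta g)^{t} \;=\; \infty .
\]
% Under a flatter shape, v_t \approx v_0, the discount caps the total:
\[
  V \;=\; \frac{v_0}{1 - \delta}, \qquad
  \text{e.g. } \delta = 0.99 \;\Rightarrow\; V = 100\, v_0 .
\]
```

So on the exponential picture the discount is irrelevant, while on flatter pictures it bounds the future at a large but finite multiple of the present, which is why I think it starts to matter.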
Interesting claim. I would be very interested in a cost-effectiveness analysis (even at BOTEC level) to support this. I don't think we can resolve this without being quantitative.
I'm pretty sceptical of the tractability of non-x-risk work and our ability to shape the future in broad terms.
You can, and sometimes (albeit rarely) these arguments are productive, but I still think any numeric estimate you end up with is pretty much just based on intuition and depends heavily on priors.
Yes, we should certainly take them seriously. But "seriously" is too imprecise to tell us how many resources we should be willing to throw at the problem.