We are discussing the debate statement: "On the margin[1], it is better to work on reducing the chance of our[2] extinction than increasing the value of futures where we survive[3]". You can find more information in this post.
When you vote and comment on the debate week banner, your comment will also appear here, along with a note indicating your initial vote, and your most recent vote (if your opinion has changed).
However, you can also comment here any time throughout the week. Use this thread to respond to other people's arguments, and develop your own.
If there are a lot of comments, consider sorting by “New” and interacting with comments that haven’t been voted or commented on yet.
Also, perhaps don’t vote karma below zero for low-effort submissions; we don’t want to discourage people from sharing quick takes on the banner.
[1] ‘on the margin’ = think about where we would get the most value out of directing the next indifferent talented person, or indifferent funder.
[2] ‘our’ and ‘we’ = earth-originating intelligent life (i.e. we aren’t just talking about humans, because most of the value in expected futures is probably in worlds where digital minds matter morally and are flourishing).
[3] Through means other than extinction risk reduction.
Broadly Agree
I might have misunderstood and missed the point of this entire debate, so correct me if that is the case.
I just don't believe that changing the future trajectory 50-100 years from now, in areas like politics, economics, AI welfare etc., is tractable. I think it's a pipe dream. We cannot predict technological, political and economic changes even in the medium-term future, and those changes may well render our current efforts meaningless in 10-20 years. I think the value of future-focused work we do now diminishes exponentially, to the point that in 20 years' time it is probably almost meaningless. I could be convinced otherwise if you could show me interventions from 20 years ago which were specifically future-focused and are still meaningful now.
However, I do believe it may be possible to reduce existential risk through direct, tangible changes now which could prevent a disastrous one-off event like an engineered virus or an AI apocalypse. Stopping a one-off catastrophe, though still extremely difficult, seems much more realistic than influencing an overall global trajectory with so much uncertainty.
This is unless you include general global health and development and animal welfare work, which directly improves the lives of beings today and will also likely improve beings' lives in the future, though probably in a far more minor way than this question is addressing. If you include the effects of standard EA GHD and animal welfare work under the banner "improving the value of futures where we survive", then I would probably swing my vote the other way.