I'm posting this to tie in with the Forum's Draft Amnesty Week (March 11-17) plans, but it is also a question of more general interest. The last time this question was posted, it got some great responses.
This post is a companion to "What posts are you thinking about writing?"
When answering in this thread, I suggest putting each idea in a different answer, so that comment threads don't get too confusing.
If you think someone has already written the answer to a user's question, consider lending a hand and linking it in the comments.
A few suggestions for possible answers:
- A question you would like someone to answer: “How, historically, did AI safety become an EA cause area?”
- A type of experience you would like to hear about: “I’d love to hear about the experience of moving from consulting into biosecurity policy. Does anyone know anyone like this who might want to write about their experience?”
If you find yourself with loads of ideas, consider writing a full "posts I would like someone to write" post.
Draft Amnesty Week
If you see a post idea here that you think you might be well placed to write, Draft Amnesty Week (March 11-17) could be a great time to post it. During Draft Amnesty Week, your posts don't have to be fully thought through, or even fully drafted. Bullet points and missing sections are allowed, so you can hold yourself to a lower bar for posting.
What is your basis for the statement that "most beings would rather continue to live instead of being painlessly killed"? This seems to me to be a huge assumption. Vinding and many others writing from a suffering-focused ethics perspective highlight that non-human animals in the wild experience a large amount of suffering, and there is even greater consensus that non-human animals bred for food experience a large amount of suffering. Is there research suggesting that the majority of beings would actively choose continued life over a painless death if given an informed choice, or is this an assumption?

Even considering only humans, millions of people live in extreme poverty, and an unknown number suffer daily physical and/or sexual abuse. Too often there is both a significant underestimation of the number of beings experiencing extreme suffering and a cursory disregard for their lived experience, with statements like "oh well, if it was that bad they'd kill themselves", which completely ignores that a large proportion of humans follow religions in which they believe they will go to hell for eternity (or similar) if they die by suicide.

I would counter your selfishness claim with this: if we accept the theory that ceasing to live is a painless nothingness, and we say there is a button that would kill all life painlessly, is it not selfish for those who want to continue to live not to push the button, thereby causing the continuation of extreme suffering for other beings?
Oisín Considine's point may well be uncomfortable for many to think about, and therefore unpopular, but I think it's a sound question to raise, and one with potentially very significant implications for s-risks. If death (or non-existence) is neutral while suffering is negative, that might imply we should dedicate more resources to preventing extreme-suffering scenarios than to preventing extinction scenarios, for example.