I'm posting this to tie in with the Forum's Draft Amnesty Week (March 11-17) plans, but it is also a question of more general interest. The last time this question was posted, it got some great responses.
This post is a companion to "What posts are you thinking about writing?"
When answering in this thread, I suggest putting each idea in a different answer, so that comment threads don't get too confusing.
If you think someone has already written the answer to a user's question, consider lending a hand and linking it in the comments.
A few suggestions for possible answers:
- A question you would like someone to answer: “How, historically, did AI safety become an EA cause area?”
- A type of experience you would like to hear about: “I’d love to hear about the experience of moving from consulting into biosecurity policy. Does anyone know anyone like this who might want to write about their experience?”
If you find yourself with loads of ideas, consider writing a full "posts I would like someone to write" post.
Draft Amnesty Week
If you see a post idea here that you think you might be well placed to write, Draft Amnesty Week (March 11-17) might be a great time to post it. During Draft Amnesty Week, your posts don't have to be fully thought through, or even fully drafted. Bullet points and missing sections are allowed, so you can have a lower bar for posting.
Nice point, Pablo! I did not know about the post you linked, but I had noted it was not mentioned (at least not very clearly) in Hilary Greaves' working paper Concepts of existential catastrophe[1] (at least in the September 2023 version).
I guess most people sympathetic to existential risk reduction think the extinction risk from AI is much higher than that from other risks (as I do). In addition, existential risk as a fraction of extinction risk is arguably much higher for AI than for other risks, so the consideration you mentioned will tend to make AI existential risk even more pressing? If so, people may be more interested in either tackling AI risk or assessing its interactions with other risks.
A related and seemingly underexplored question is under which conditions human disempowerment (including extinction) would be bad from an impartial perspective. Humans have arguably played a role in the extinction of many species, maybe including some of the genus Homo (there have been 13!), but that was not an existential catastrophe, given humans are thought to be better steerers of the future. The same might apply to AI under some conditions. Matthew Barnett has a quick take somewhat related to this. Here is the first paragraph:
So I sent her an email a few days ago about this.