I'm posting this in preparation for Draft Amnesty Week (Feb 24 - March 2), but it's also (hopefully) valuable outside of that context. The last time I posted this question, there were some great responses.
When answering in this thread, I suggest putting each idea in a different answer, so that comment threads don't get too confusing and ideas can be voted on separately.
If you see an answer here describing a post you think has already been written, please lend a hand and link it here.
A few suggestions for possible answers:
- A question you would like someone to answer: “How, historically, did AI safety become an EA cause area?”
- A type of experience you would like to hear about: “I’d love to hear about the experience of moving from consulting into biosecurity policy. Does anyone know anyone like this who might want to write about their experience?”
- A gap in an argument that you'd like someone to fill.
If you have loads of ideas, consider writing an entire "posts I would like someone to write" post.
Why put this up before Draft Amnesty Week?
If you see a post idea here that you think you might be positioned to answer, Draft Amnesty Week (Feb 24 - March 2) might be a great time to post it. During Draft Amnesty Week, your posts don't have to be thoroughly thought through, or even fully drafted. Bullet points and missing sections are allowed, so that you can have a lower bar for posting. More details.
I would be really interested in a post that outlines 1-3 different scenarios for post-AGI x-risk based on increasingly strict assumptions. The first scenario would assume that misaligned superintelligent AI emerges almost instantly from AGI, and would describe the x-risks associated with that. Each subsequent scenario would adopt stricter assumptions, e.g. that AGI could only improve itself slowly, that we could align it to our goals, etc.
I think this could be a valuable post to link people to, as a lot of debates about whether AI poses an x-risk seem to hinge on accepting or rejecting potential scenarios, but these debates are usually unproductive because everyone has different assumptions about what AI will be capable of.
So with this post, to argue that AI x-risk isn't real, you would have to do one of the following for each AI development scenario (with increasingly strict assumptions):
- argue that the scenario's assumptions won't hold, or
- argue that none of the x-risks proposed for that scenario would actually occur.
If you can't do either of those, you accept that AI is an x-risk. If you can, you move on to the next scenario, with stricter assumptions. Eventually you reach a scenario whose assumptions you agree with, and you have to reject all of the x-risks proposed in that scenario to say that AI x-risk isn't real.
The post might also help with planning for different scenarios if it's more detailed than I'm anticipating.