I'm posting this in preparation for Draft Amnesty Week (Feb 24 - March 2), but it's also (hopefully) valuable outside of that context. The last time I posted this question, there were some great responses.
When answering in this thread, I suggest putting each idea in a different answer, so that comment threads don't get too confusing and ideas can be voted on separately.
If you see an answer here describing a post you think has already been written, please lend a hand and link it here.
A few suggestions for possible answers:
- A question you would like someone to answer: “How, historically, did AI safety become an EA cause area?”
- A type of experience you would like to hear about: “I’d love to hear about the experience of moving from consulting into biosecurity policy. Does anyone know anyone like this who might want to write about their experience?”
- A gap in an argument that you'd like someone to fill.
If you have loads of ideas, consider writing an entire "posts I would like someone to write" post.
Why put this up before Draft Amnesty Week?
If you see a post idea here that you think you might be positioned to answer, Draft Amnesty Week (Feb 24 - March 2) might be a great time to post it. During Draft Amnesty Week, your posts don't have to be thoroughly thought through, or even fully drafted. Bullet points and missing sections are allowed, so the bar for posting is lower. More details.
Pertinent to this: an idea for a post I'm stuck on.
What follows from conditionalizing the various big anthropic arguments on one another? That is, assuming you think the basic logic behind the simulation hypothesis, grabby aliens, Boltzmann brains, and many worlds all works, how do these interact with one another? Does one of them "win"? Do some of them hold conditional on one another but fail conditional on others? Do ones that are more compatible with one another have some probabilistic dominance (i.e., this is true if we start by assuming it, but also might be true if these others are true)? Essentially, I think this confusion is pertinent enough to my opinions on these styles of argument in general that I'd be satisfied just writing about the confusion itself for my post idea. But I feel unprepared to do the difficult, dirty work of pulling expected conclusions about the world from this consideration, and I would love it if someone much cleverer than me tried to actually take the challenge on.
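To give a rough sense of what I mean by "hold conditional on one another," here's a toy sketch. The joint prior and every number in it are entirely invented, just to illustrate the kind of comparison I have in mind: if conditioning on one hypothesis raises the probability of another, they support each other; if it lowers it, they're in tension.

```python
# Toy sketch of what "holds conditional on another" could mean probabilistically.
# All numbers are made-up priors purely for illustration; nothing here encodes
# the actual content of the simulation or grabby-aliens arguments.

# Hypothetical joint prior over (simulation, grabby_aliens): P(S=s, G=g)
joint = {
    (True, True): 0.05,
    (True, False): 0.35,
    (False, True): 0.20,
    (False, False): 0.40,
}

def marginal(index, value):
    """P(variable == value), summing the joint over the other variable."""
    return sum(p for outcome, p in joint.items() if outcome[index] == value)

def conditional(target_index, target_value, given_index, given_value):
    """P(target == target_value | given == given_value)."""
    numer = sum(p for o, p in joint.items()
                if o[target_index] == target_value and o[given_index] == given_value)
    return numer / marginal(given_index, given_value)

p_sim = marginal(0, True)                            # P(simulation)
p_sim_given_grabby = conditional(0, True, 1, True)   # P(simulation | grabby aliens)

print(f"P(sim) = {p_sim:.2f}, P(sim | grabby) = {p_sim_given_grabby:.2f}")
# If P(sim | grabby) > P(sim), grabby aliens "support" the simulation hypothesis
# under this (entirely invented) prior; if it's lower, they undermine it.
```

The hard part, and the thing I'd love someone to actually write about, is arguing for a real joint prior over these hypotheses rather than making one up like I just did.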