I recently posted that I'd like AI researchers to establish a consensus (≥70%) opinion on this question: What properties would a hypothetical AI system need to demonstrate for you to agree that we should completely halt AI development?
So in the spirit of proactivity, I've created a short Google Form to collect researchers' opinions: https://docs.google.com/forms/d/e/1FAIpQLScD2NbeWT7uF70irTagPsTEzYx7q5yCOy7Qtb0RcgNjX7JZng/viewform
I'd welcome feedback on how to improve the form, and I'd also appreciate it if you'd forward it to an X-risk-skeptical AI researcher in your network. Thanks!
I did write the survey assuming AI researchers have at least been exposed to these ideas, even if they remain completely unconvinced by them, since that matches my personal experience of AI researchers who don't care about alignment. But if my experience doesn't generalize, I agree that more explanation is necessary.