I recently posted that I'd like AI researchers to establish a consensus (>=70%) opinion on this question: What properties would a hypothetical AI system need to demonstrate for you to agree that we should completely halt AI development?
So in the spirit of proactivity, I've created a short Google Form to collect researchers' opinions: https://docs.google.com/forms/d/e/1FAIpQLScD2NbeWT7uF70irTagPsTEzYx7q5yCOy7Qtb0RcgNjX7JZng/viewform
I'd welcome feedback on how to improve the form, and I'd also appreciate it if you'd forward it to an X-risk-skeptical AI researcher in your network. Thanks!
I'd consider adding a few multiple-choice "demographic" questions, such as whether the respondent identifies alignment/safety as a significant focus of their work, the respondent's years of experience in an ML/AI role, etc. I'm not sure which questions would be most valuable, but having some would let you break the results down by subgroup.