Possible, but likely a smaller effect than you might think, because: a) I was very vague about the subject matter until they were taking the survey (e.g. I did not mention AGI, risk, or timelines); b) last time (for the 2016 survey) we checked the demographics of respondents against those of a random subset of non-respondents, and they weren't very different.
Participants were also mostly offered substantial payment for taking the survey (usually $50 for a ~15-minute survey), in part in the hope of making payment a larger motivator than the desire to express some particular view. But I don't think the payment actually made a large difference to the response rate, so it probably failed to have the desired effect on possible response bias.
> I would be very excited to see research by Giving Green into whether their approach of recommending charities which are, by their own analysis, much less cost effective than the best options is indeed justified.
Several confusions I have:
It seems worth distinguishing 'effectiveness' in the sense of personal competence (as I guess is meant in the first case, e.g. 'reasonably sharp') from 'effectiveness' in the sense of trying to choose interventions by cost-effectiveness.
Also, remember that selecting people to encourage in particular directions is a subset of selecting interventions. It may be that 'E not A' people are more likely to be helpful than 'A not E' people, but also that chasing either group is less helpful than doing research on E that is useful to whichever people already care about it. I think I have stronger feelings about E-improving interventions being good overall than about which people are more promising allies.
Note that we didn't describe the topic to them that specifically.
We tried sending them $100 last year, and if anything it lowered the response rate.
If you are inclined to dismiss this based on your premise that "many AI researchers just don’t seem too concerned about the risks posed by AI", I'm curious where you get that view from, and why you think it comes from a less biased source.