YouGov America released a survey of 20,810 American adults. Highlights below. Note that I didn't run any statistical tests, so any claims of group differences are just "eyeballed" (see the sketch after the list for what such a test could look like).
- 46% say that they are "very concerned" or "somewhat concerned" about the possibility that AI will cause the end of the human race on Earth (with 23% "not very concerned," 17% "not concerned at all," and 13% "not sure").
- There do not seem to be meaningful differences by region, gender, or political party.
- Younger people seem more concerned than older people.
- Respondents who identified as Black appear somewhat more concerned than those who identified as White, Hispanic, or Other.
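For what it's worth, here is a minimal sketch of the kind of check I skipped: a margin of error for the headline 46% figure (using the real sample size from the survey), plus a two-proportion z-test one could run on a subgroup split. The subgroup counts below are invented placeholders, not numbers from the survey, and YouGov uses a weighted panel rather than a simple random sample, so the true margins are somewhat wider than this naive calculation suggests.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion, assuming a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

def two_proportion_z(p1: float, n1: int, p2: float, n2: int) -> float:
    """z statistic for H0: the two group proportions are equal."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Headline figure: 46% concerned, out of 20,810 respondents.
print(f"+/- {margin_of_error(0.46, 20810):.2%}")  # roughly +/- 0.7 points

# HYPOTHETICAL subgroup comparison (counts invented for illustration only):
z = two_proportion_z(0.53, 2500, 0.45, 12000)
print(f"z = {z:.2f}")  # |z| > 1.96 would be significant at the 5% level
```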
Furthermore, 69% of Americans appear to support a six-month pause in "some kinds of AI development". There doesn't seem to be a clear effect of age or race for this question, particularly if you lump "strongly support" and "somewhat support" into the same bucket. Note also that the question mentions that 1,000 tech leaders signed an open letter calling for a pause and cites their concern over "profound risks to society and humanity", which may have influenced participants' responses.
In my quick skim, I haven't been able to find details about the survey's methodology (see here for info about YouGov's general methodology) or about the credibility of YouGov (EDIT: Several people I trust have told me that YouGov is credible, well-respected, and widely quoted for US polls).
Covid has become less dangerous, but public concern about it has dropped off at what seems to me an unreasonably steep rate. So all I think I can learn from that is that public concern will not increase proportionally with the underlying risk over the course of years, though it might still increase, and it may spike at times.
I’m surprised, though, that AI progress has made a stop around the human level. GPT-3 seemed a bit less smart than the average human at what it’s doing; GPT-4 seems a bit smarter than the average human. But there has not been a discontinuity where AI completely skips the human level, which seems weird to me. Maybe it’s because of the human training data, or because the AI is trying to imitate human-level intelligence; but perhaps there’s also an actual soft ceiling somewhere in this area, so that progress will stall for long enough for us to collectively react.