It's generally agreed in the EA community that the development of unaligned AGI is the most pressing problem, with some saying we could have AGI within the next 30 years or so. In The Precipice, Toby Ord estimates the existential risk from unaligned AGI at 1 in 10 over the next century. On 80,000 Hours, 'positively shaping the development of artificial intelligence' sits at the top of its list of highest-priority areas.
Yet outside of EA, basically no one is worried about AI. If you talk to strangers about other potential existential risks like pandemics, nuclear war, or climate change, it makes sense to them. If you speak to a stranger about your worries about unaligned AI, they'll think you're insane (and that you watch too many sci-fi films).
On a quick scan of some mainstream news sites, it's hard to find much about existential risk and AI. There are bits here and there about how AI could be discriminatory, but mostly the focus is on useful things AI can do, e.g. 'How rangers are using AI to help protect India's tigers'. In fact (and this is after about 5 minutes of searching, so not a full-blown analysis), the overall sentiment seems generally positive. That is totally at odds with what you see in the EA community (I know there is acknowledgement of how positive AI could be, but the discourse is mainly about how bad it could be). By contrast, if you search for nuclear war, pretty much every mainstream news site is talking about it. It's true we're at a slightly riskier moment right now, but I reckon most EAs would still say the risk from unaligned AGI is higher than the risk of nuclear war, even given the current tensions.
So if it's such a big risk, why is no one talking about it?
Why is it not on the agenda of governments?
Learning about AI, I feel like I should be terrified, but when I speak to people who aren't in EA, I feel like my fears are overblown.
I genuinely want to hear people's perspectives on why it's not talked about, because without mainstream support for the idea that AI is a risk, I feel like it's going to be a lot harder to get to where we want to be.
A few random thoughts I have on this:
I've tried speaking to a few non-EA people (very few, countable on one hand), and I kind of agree: they do think you've watched way too much sci-fi when you bring up AI safety, but they don't think it's too far-fetched. A specific conversation I remember made me realize that one reason might be that a lot of people simply think they cannot do much about it. 'Leave it to the experts' or 'I don't know anything about AI and ML' seems to be a thought non-EA people might have on the issue, preventing them from actively trying to reduce the risk, if it finds a way into their list of important problems at all. There's also the fact that AI safety isn't yet a major field, which leads to misconceptions like needing a compsci PhD and a lot of technical math/CS knowledge to work in AI safety, when there are actually roles that don't require such expertise. This quite obviously stops people from changing their career to work in AI safety, but even more so it discourages them from reading about it at all (this might also be why distillation of AI alignment work is in such high demand), even though we see people reading about international conflicts, nuclear risk, and climate change far more frequently (I'm not sure of the difference in scale, but I can personally vouch for this since I had never heard of AI alignment before joining the EA community).
I hadn't thought of the fact that people may think they have no power so just kind of...don't think about it. I suppose more work needs to be done to show that people can work on it.