Consider me a non-technical, AI-ignorant EA who has specialized in other areas. I want to do the most good, and I keep hearing that funding and the community are now directed towards AI, and that it is the most impactful thing to work on. However, as an EA I want evidence of impact before making any decisions. Can you link me to any research paper (understandable by someone not skilled in ML), forum post, or book (that would be the best option!) that shows evidence that AI is indeed the most impactful thing to work on?
I listened to the Davidson 80k podcast episode, I know the fable of the boy and the wolf by the famous Kat, and I know about the ~5 percent chance of human extinction. I read The Precipice. And yet I still find myself reluctant to make AI my priority despite knowing all this. As an EA raised in the Oxford tradition, I feel an urge to defer, but rationally I am not convinced. Since community epistemics are said to be strong on this subject, I would expect the evidence to be accessible to people who lack the technical background to evaluate it directly.
PS: Sorry for the many reposts; my internet connection is acting up and I thought the question hadn't been sent!
One way out is simply not to make AI your own, personal priority (versus, say, "the wider EA community's priority", which is a separate question altogether). 80,000 Hours' problem profiles page, for instance, explicitly says that their list of the most pressing world problems, where AI risk features at the top, is
which is already an untrue assumption, as they clarify in their problem framework:
Given the reluctance evident in your post, I'm not sure that you yourself should make AI safety work your top priority (although you can still, e.g., donate to the Long-Term Future Fund, one of GWWC's top recommendations in this area, read Holden's writing and discuss it with others, and so on, none of which requires such a drastic re-prioritization).
Also, since other commenters and answerers will likely supply materials in support of prioritizing AI safety, for the sake of good epistemics I think it's worth signal-boosting a good critique, so consider checking out Nuno Sempere's "My highly personal skepticism braindump on existential risk from artificial intelligence".