Consider me a non-technical, AI-ignorant EA who has specialized in other areas. I want to do the most good, and I keep hearing that funding and the community are now directed towards AI and that it is the most impactful thing to work on. However, as an EA I want to see evidence of impact before making any decisions. Can you link any research paper (understandable by someone not skilled in ML), any forum post, or any book (that would be the best option!) showing me evidence that AI is indeed the most impactful thing to work on?
I listened to the Davidson 80,000 Hours podcast episode, I know the famous Kat's fable of the boy who cried wolf, and I know about the 5 percent chance of human extinction. I read The Precipice. Yet I still find myself reluctant to make AI my priority despite knowing all this. As an EA raised in the Oxford tradition, I feel an urge to defer, but rationally I am not convinced. Since community epistemics are supposed to be strong on this subject, I'd expect the evidence to be accessible to people who lack the technical background to evaluate it themselves.
PS: Sorry for the many reposts; my internet is playing up and I thought the question hadn't been sent!
I'm open to there being new evidence on funding, but I'd also want to draw a distinction between existential risk and longtermism as reasons for funding. I could reject the 'Astronomical Waste' argument and still think that preventing the worst impacts of nuclear war or climate change on the current generation holds massive moral value and deserves funding.
As for being a community builder, I don't have experience there, but I'd offer a suggestion/distinction:
I don't think you should have to update or defer your own views in order to be a community builder at all, and I'd encourage you to hold on to that feeling of being unconvinced.
Hope that helps! :)