Consider me a non-technical, AI-ignorant EA who has specialized in other areas. I want to do the most good, and I keep hearing about AI: how all the funding and the community are now directed towards it, and how it is the most impactful thing to work on. However, as an EA, I want to see evidence of impact before making any decisions. Can you link any research paper (understandable by someone who is not skilled in ML), any forum post, or any book (that would be the best option!) that shows me evidence that AI is indeed the most impactful thing to work on?
I listened to the Davidson 80k podcast, I know about the fable of the boy and the wolf by the famous Kat, and I know about the 5 percent chance of human extinction. I read The Precipice. Still, I find myself reluctant to make AI my priority despite knowing these things. As an EA raised in the Oxford tradition, I have an urge to defer, but rationally I am not convinced. Since community epistemics are considered strong on this subject, I'd expect the arguments to be accessible to people who do not have the technical background to evaluate the evidence.
PS: Sorry for the many reposts; my internet connection is acting up and I thought the question hadn't been sent!
I'm not going to answer your question directly, but if you do want a suggestion, I'd recommend Stuart Russell's book Human Compatible. It's very readable, covers AI history as well as the arguments for being concerned about risk, and Russell literally co-wrote the textbook on AI, so he has impeccable credentials.
Can I ask where you heard this? The evidence we have suggests it is not true in terms of funding. AI safety has increasingly come to be seen as impactful, but there's plenty of disagreement in the community about how impactful it actually is.
Don't defer!! If you've done some initial research and reading, and you're not convinced, then it's absolutely fine to not be convinced!
You've said you're a non-technical EA who wants to do the most good but isn't inspired or convinced by AI, so don't force yourself into the field of AI! What field would you like to work in? What are your unique skills and experience? Then look at whether you can apply them to any number of EA cause areas rather than technical AI safety.
I'm open to there being new evidence on funding, but I'd also want to distinguish between existential risk and longtermism as reasons for funding. I could reject the 'Astronomical Waste' argument and still think that preventing the worst impacts of nuclear war or climate change on the current generation holds massive moral value and deserves funding.
As for being a community builder, I don't have experience there, but I guess I'd make some suggestions/distinctions:
- If you have a co-director for the community in question who is more AI-focused,
...