Thanks for facilitating this thread! The answers have been illuminating and a great source of resources.
I'm looking to get into AI safety, particularly strategy for reducing existential risk from AI.
I'm a math tutor and freelance software developer with a bachelor's in Software Development and math coursework through multivariate calc and linear algebra. I'm reasonably well versed in politics and history and good at strategy games. I write flash fiction on the side.
Which sorts of roles work directly on strategy for reducing existential risk from AI? Which work on it indirectly? What's a good way to get guidance on how to proceed?
Thanks so much! 😊
Thanks for the reply Moneer, these are great ideas!
For more context, I live in the Northeast USA. Software-development-wise I'm at the junior dev level, so I'm early career.
AI policy research seems like a good idea to explore. Maybe these questions are answered in some of the reading you suggested, which I haven't had the chance to check out yet. Does that field need more people? In other words, is it more of a zero-sum field, where getting in means someone else doesn't, or would I just be adding to the field? What sort of education is recommended (level, subjects)? Do you have any book recommendations for this field?
I'd love to attend a conference or summit (I applied to EAG NYC but didn't get in), but money is always an issue.