Quick context: I'm a philosophy graduate aiming to transition into AI governance/policy research or AI safety advocacy. Along the way, I plan to work at for-profit companies for experience and financial stability, and I'm seeking advice on which for-profit roles best build relevant skills.
My question is: what kinds of roles (especially outside of obvious research positions) are valuable stepping stones toward AI governance/policy research? I don't yet have direct research experience, so I'm particularly interested in roles that are accessible early on but still help me develop transferable skills, especially roles that might not be obvious at first glance.
My secondary interest is in AI safety advocacy. Are there particular entry-level or for-profit roles that could serve as strong preparation for future advocacy or field-building work?
A bit about me:
– I have a strong analytical and critical thinking background from my philosophy BA, including experience producing structured, clear writing
– I’m deeply engaged with the AI safety space: I’ve completed BlueDot’s AI Governance course, volunteered with AI Safety Türkiye, and regularly read and discuss developments in the field
– I’m curious, organized, and enjoy operations work, in addition to research and strategy
If you've navigated a similar path, have ideas about stepping-stone roles, or just want to connect, I'd be happy to chat over a call as well! Feel free to schedule a 20-min conversation here.
Thanks in advance for any pointers!