Hi everyone,
We’re hosting an Ask Me Anything session to answer questions about Open Philanthropy’s new hiring round (direct link to roles), which involves over 20 new positions across our teams working on global catastrophic risks (GCRs).
You can start sharing questions now, and you’re welcome to keep asking questions through the end of the hiring round (11:59 pm PST on November 9th). We’ll plan to share most of our answers between the morning of Friday, October 20th and EOD on Monday, October 23rd.
Participants include:
- Ajeya Cotra, who leads our work on technical AI safety.
- Julian Hazell, a Program Associate in AI Governance and Policy.
- Jason Schukraft, who leads our GCR cause prioritization team.
- Eli Rose, a Senior Program Associate in GCR Capacity Building (formerly known as the “Effective Altruism Community Growth (Longtermism)” team).
- Chris Bakerlee, a Senior Program Associate in Biosecurity and Pandemic Preparedness.
- Philip Zealley, a member of the recruiting team who can answer general questions about the OP recruiting process (and this round in particular).
They’ll be happy to answer questions about:
- The new roles — the work they involve, the backgrounds a promising candidate might have, and so on.
- The work of our teams — grants we’ve made, aspects of our strategy, and plans for the future.
- Working at Open Philanthropy more broadly — what we like, what we find more difficult, what we’ve learned in the process, etc.
This hiring round is a major event for us; if you’re interested in working at Open Phil, this is a great time to apply (or ask questions here!).
To help us respond, please direct each question to a specific team when possible. If you have questions for different teams, please split them into separate comments.
Thank you for doing this! It's highly helpful and transparent; we need more of this. I have many questions, mostly at a meta level, but the AI safety questions are the ones I'd most like to see answered.
About AI safety:
About the ratio of hires between AI safety and biorisk:
More diverse considerations about GCRs:
About cause-prioritization positions:
Thank you so much for your answers!
On technical AI safety: fundamentally, having more grantmaking and research capacity (junior or senior) will help us make more grants to great projects that we otherwise wouldn't have been able to fund; I wrote about that team's hiring needs in this separate post. For AI safety more broadly (outside of just my team), I'd say the constraint is more severe on people who can mentor junior researchers, but the field could use more strong researchers at all levels of seniority.