Hi everyone,
We’re hosting an Ask Me Anything session to answer questions about Open Philanthropy’s new hiring round (direct link to roles), which includes over 20 new positions across our teams working on global catastrophic risks (GCRs).
You can start sharing questions now, and you’re welcome to keep asking them through the end of the hiring round (11:59 pm PST on November 9th). We plan to share most of our answers between the morning of Friday, October 20th and EOD on Monday, October 23rd.
Participants include:
- Ajeya Cotra, who leads our work on technical AI safety.
- Julian Hazell, a Program Associate in AI Governance and Policy.
- Jason Schukraft, who leads our GCR cause prioritization team.
- Eli Rose, a Senior Program Associate in GCR Capacity Building (formerly known as the “Effective Altruism Community Growth (Longtermism)” team).
- Chris Bakerlee, a Senior Program Associate in Biosecurity and Pandemic Preparedness.
- Philip Zealley, a member of the recruiting team who can answer general questions about the OP recruiting process (and this round in particular).
They’ll be happy to answer questions about:
- The new roles — the work they involve, the backgrounds a promising candidate might have, and so on.
- The work of our teams — grants we’ve made, aspects of our strategy, and plans for the future.
- Working at Open Philanthropy more broadly — what we like, what we find more difficult, what we’ve learned in the process, etc.
This hiring round is a major event for us; if you’re interested in working at Open Phil, this is a great time to apply (or ask questions here!).
To help us respond, please direct your questions to a specific team when possible. If you have questions for different teams, please split them into separate comments.
Generally, we try to compensate people such that compensation is neither the main reason to be at Open Phil nor the main reason to consider leaving. We rely on market data to set compensation for each role, aiming to compete with a candidate’s “reasonable alternatives” (e.g., other foundations, universities, or high-end nonprofits; not fields like finance or tech, where compensation is the main driving factor in recruiting). Specifically, we default to using a salary survey of other large foundations (Croner) and currently target the 75th percentile, with modest upward adjustments on top of the base numbers for staff in SF and DC (where we think there are positive externalities for the org from staff being able to cowork in person, but also a higher cost of living).

I can’t speak to what they’re currently doing, but historically GiveWell has used the same salary survey; I’d guess their Senior Research role is benchmarked to Program Officer, a more senior role than we’re currently posting for in this GCR round, which would explain the higher compensation. I don’t know which BMGF benchmarks you’re looking at, but I’d guess they’re for more senior positions that typically require more experience and, at the higher end, control larger budgets.
That said, your point about technical AI safety researchers at various nonprofit orgs making more than our benchmarks is something we’ve been reflecting on internally, and we think it does represent a relevant “reasonable alternative” for the kinds of folks we’re aiming to hire. We’re therefore planning to create a new comp ladder for technical AI safety roles, and in the meantime we’ve moderately increased the posted comp for the open TAIS associate and senior associate roles.