Hi everyone,
We’re hosting an Ask Me Anything session to answer questions about Open Philanthropy’s new hiring round (direct link to roles), which includes over 20 new positions across our teams working on global catastrophic risks (GCRs).
You can start sharing questions now, and you’re welcome to keep asking questions through the end of the hiring round (11:59 pm PST on November 9th). We’ll plan to share most of our answers between the morning of Friday, October 20th and EOD on Monday, October 23rd.
Participants include:
- Ajeya Cotra, who leads our work on technical AI safety.
- Julian Hazell, a Program Associate in AI Governance and Policy.
- Jason Schukraft, who leads our GCR cause prioritization team.
- Eli Rose, a Senior Program Associate in GCR Capacity Building (formerly known as the “Effective Altruism Community Growth (Longtermism)” team).
- Chris Bakerlee, a Senior Program Associate in Biosecurity and Pandemic Preparedness.
- Philip Zealley, a member of the recruiting team who can answer general questions about the OP recruiting process (and this round in particular).
They’ll be happy to answer questions about:
- The new roles — the work they involve, the backgrounds a promising candidate might have, and so on.
- The work of our teams — grants we’ve made, aspects of our strategy, and plans for the future.
- Working at Open Philanthropy more broadly — what we like, what we find more difficult, what we’ve learned in the process, etc.
This hiring round is a major event for us; if you’re interested in working at Open Phil, this is a great time to apply (or ask questions here!).
To help us respond, please direct your questions to a specific team when possible. If you have questions for multiple teams, please split them into separate comments.
(I began working for OP on the AI governance team in June. I'm commenting in a personal capacity based on my own observations; other team members may disagree with me.)
FWIW I really don’t think OP is in the business of preserving the status quo. People who work on AI at OP have a range of opinions on just about every issue, but I don't think any of us feel good about the status quo! People (including non-grantees) often ask us for our thoughts about a proposed action, and we’ll share if we think some action might be counterproductive, but many things we’d consider “productive” look very different from “preserving the status quo.” For example, I would consider the CAIS statement to be pretty disruptive to the status quo and productive, and people at Open Phil were excited about it and spent a bunch of time finding additional people to sign it before it was published.
I agree that OP has an easier time recruiting than many other orgs, though perhaps a harder time than frontier labs. But at the risk of self-flattery, I think the people we've hired would generally be hard to replace — these roles require a fairly rare combination of traits. People who have them can be huge value-adds relative to the counterfactual!
I basically disagree with this. There are areas where senior staff have strong takes, but they definitely engage with the views of junior staff, and they sometimes change their minds. Also, the AI world is changing fast, and as a result our strategy has been changing fast too; there's plenty of new terrain where a new hire could really shape our strategy. (This is one way in which grantmaker capacity is a serious bottleneck.)