Hi everyone,
We’re hosting an Ask Me Anything session to answer questions about Open Philanthropy’s new hiring round (direct link to roles), which involves over 20 new positions across our teams working on global catastrophic risks (GCRs).
You can start sharing questions now, and you’re welcome to keep asking questions through the end of the hiring round (11:59 pm PST on November 9th). We’ll plan to share most of our answers between the morning of Friday, October 20th and EOD on Monday, October 23rd.
Participants include:
- Ajeya Cotra, who leads our work on technical AI safety.
- Julian Hazell, a Program Associate in AI Governance and Policy.
- Jason Schukraft, who leads our GCR cause prioritization team.
- Eli Rose, a Senior Program Associate in GCR Capacity Building (formerly known as the “Effective Altruism Community Growth (Longtermism)” team).
- Chris Bakerlee, a Senior Program Associate in Biosecurity and Pandemic Preparedness.
- Philip Zealley, a member of the recruiting team who can answer general questions about the OP recruiting process (and this round in particular).
They’ll be happy to answer questions about:
- The new roles — the work they involve, the backgrounds a promising candidate might have, and so on.
- The work of our teams — grants we’ve made, aspects of our strategy, and plans for the future.
- Working at Open Philanthropy more broadly — what we like, what we find more difficult, what we’ve learned in the process, etc.
This hiring round is a major event for us; if you’re interested in working at Open Phil, this is a great time to apply (or ask questions here!).
To help us respond, please direct your questions at a specific team when possible. If you have multiple questions for different teams, please split them up into multiple comments.
I agree that there are several advantages of working at Open Phil, but I also think there are some good answers to "why wouldn't someone want to work at OP?"
Culture, worldview, and relationship with labs
Many people have an (IMO fairly accurate) impression that OpenPhil is conservative, biased toward inaction, generally prefers the status quo, and favors maintaining positive relationships with labs.
As I've gotten more involved in AI policy, I've updated more strongly toward this position. While simple statements always involve a bit of gloss/imprecision, I think characterizations like "OpenPhil has taken a bet on the scaling labs", "OpenPhil is concerned about disrupting relationships with labs", and even "OpenPhil sometimes uses its influence to put pressure on orgs to not do things that would disrupt the status quo" are fairly accurate.
The most extreme version of this critique is that perhaps OpenPhil has been net negative through its explicit funding for labs and implicit contributions to a culture that funnels money and talent toward labs and other organizations that entrench a lab-friendly status quo.
This might change as OpenPhil hires new people and plans to spend more money, but by default, I expect that OpenPhil will continue to play the "be nice with labs / don't disrupt the status quo" role in the space (in contrast to organizations like MIRI, Conjecture, FLI, the Center for AI Policy, and perhaps CAIS).
Lots of people want to work there; replaceability
Given OP's high status, lots of folks want to work there. Some people think the difference between the "best applicant" and the "2nd best applicant" is often pretty large, but this certainly doesn't seem true in all cases.
I think if someone had the opportunity to, e.g., work at OP vs. start their own organization or do something else that requires more agency/entrepreneurship, there might be a strong case for them to do the latter, since it's much less likely to happen by default.
What does the world need?
I think this is somewhat related to the first point, but I'll flesh it out in a different way.
Some people think that we need more "rowing": that is, OP's impact is clearly good, and if we just add more capacity to the grantmakers and make more grants that look pretty similar to previous grants, we're pushing the world in a considerably better direction.
Some people think that the default trajectory is not going so well, and that this is (partially or largely) caused or maintained by the OP ecosystem. Under this worldview, one might think that adding additional capacity to OP is not actually all that helpful in expectation.
Instead, people with this worldview believe that projects that aim to (for example) advocate for strong regulations, engage with the media, make the public more aware of AI risk, and do other forms of direct work focused on folks outside of the core EA community might be more impactful.
Of course, part of this depends on how open OP will be to people "steering" from within. My expectation is that steering OP from within would be pretty hard: my impression is that lots of smart people have tried, folks like Ajeya and Luke have clearly been thinking about these things for a long time, the culture has already been shaped by many core EAs, and there's a lot of inertia, so a random new junior person is pretty unlikely to substantially shift the organization's worldview (though I could of course be wrong).