Hi everyone,
We’re hosting an Ask Me Anything session to answer questions about Open Philanthropy’s new hiring round (direct link to roles), which includes over 20 new positions across our teams working on global catastrophic risks (GCRs).
You can start asking questions now, and you’re welcome to keep submitting them through the end of the hiring round (11:59 pm PST on November 9th). We plan to share most of our answers between the morning of Friday, October 20th and EOD on Monday, October 23rd.
Participants include:
- Ajeya Cotra, who leads our work on technical AI safety.
- Julian Hazell, a Program Associate in AI Governance and Policy.
- Jason Schukraft, who leads our GCR cause prioritization team.
- Eli Rose, a Senior Program Associate in GCR Capacity Building (formerly known as the “Effective Altruism Community Growth (Longtermism)” team).
- Chris Bakerlee, a Senior Program Associate in Biosecurity and Pandemic Preparedness.
- Philip Zealley, a member of the recruiting team who can answer general questions about the OP recruiting process (and this round in particular).
They’ll be happy to answer questions about:
- The new roles — the work they involve, the backgrounds a promising candidate might have, and so on.
- The work of our teams — grants we’ve made, aspects of our strategy, and plans for the future.
- Working at Open Philanthropy more broadly — what we like, what we find more difficult, what we’ve learned in the process, etc.
This hiring round is a major event for us; if you’re interested in working at Open Phil, this is a great time to apply (or ask questions here!).
To help us respond, please direct your questions to a specific team when possible. If you have questions for several different teams, please split them into separate comments.
Disclaimer: I joined OP two weeks ago in the Program Associate role on the Technical AI Safety team. I'm leaving some comments describing questions I wanted answered when I was assessing whether to take the job (which, obviously, I ended up doing).
Is it way easier for researchers to do AI safety research within AI scaling labs (due to more capable and diverse AI models, easier access to them without rate limits or usage caps, better infrastructure for running experiments, possible network effects from the other researchers at those labs, and not having to deal with the logistical hassle that comes with being a professor or independent researcher)?
Does this imply that the research ecosystem OP is funding (which is ~all external to these labs) isn't that important/cutting-edge for AI safety?
I think this is definitely a real dynamic, but a lot of EAs seem to exaggerate it in their minds and inappropriately round the impact of external research down to zero. Here are a few scattered points on this topic:
- Third-party researchers can influence the research that happens at labs through the normal diffusion process by which all research influences all other research. There's definitely some barrier to research insights diffusing from academia to companies (and, e.g., it's unfortunately common for an academic project to have no impact on company practice).