You may have noticed that Open Philanthropy is hiring for several roles in our GCR division: senior generalists across our global catastrophic risks team, and grantmakers for our technical AI safety team. (We're also hiring for recruiting and operations roles! I know very little about either field, so I'm not going to talk about them here.)
I work as a grantmaker on OP's AI governance team, and inspired by Lizka's recent excellent post giving her personal take on working at Forethought, I wanted to share some personal takes on reasons for and against working on the AI teams at Open Philanthropy.
A few things to keep in mind as you read this:
- I'm mostly going to talk about personal fit and the day-to-day experience of working at OP, rather than getting into high-level strategy and possible disagreements people might have with it.
- On strategy: if you have some substantive big picture disagreements with OP's approach, I think that firstly, you're in good company (this describes many OP staff!), and secondly, you'd still enjoy working here. But if you disagree with many of our strategic choices, or have especially foundational or basic disagreements, you'd probably be kind of miserable working here.
- Everything below is just my personal take. I ran a draft past some colleagues and got reactions, but I expect they’d disagree with plenty of what I've written here.
The case for working at Open Philanthropy's AI team
So why OP’s AI team? A couple of reasons:
Impact
Open Philanthropy is the biggest philanthropic funder in the AI safety space, and has been for a long time. We have the benefits of:
- being able to influence a large amount of funding
- having outstanding access to, and coordination abilities across, the AI safety ecosystem
- a very well-oiled grantmaking machine, with significant flexibility over how and to whom we can make grants
- being trusted to spend money ambitiously and at scale, if we think it's high impact to do so
Now is an especially exciting time to work on AI safety and governance
I think now is a particularly interesting and exciting time to be doing AI safety work: we’ve seen some promising updates from things like the EU AI Act and California's SB-53, and we're also finally at the stage where we're getting useful empirical evidence on how concerned we should be about AI capabilities and propensities. I broadly agree with Holden's take that there are now increasingly many well-scoped, useful projects that people concerned about AI safety can work on.
We're currently capacity constrained
For grantmaking specifically, I feel like there are a ton of opportunities lying around that we just don't have the capacity to chase down – not enough time to talk to people, update our views, pitch founders, advise orgs, and scale up our grantmaking to capture all the promising low-hanging fruit.
My colleagues are very cool and very nice
I didn't necessarily expect this—being nice and being smart often aren't that correlated—but I've been surprised by how warm and welcoming OP feels. As well as just being kick-ass and competent, my colleagues across the AI team and across OP as a whole are just genuinely really lovely people! I think it's got a great culture as an organisation, I really enjoy going on team retreats or organisation-wide co-working weeks, and it’s a real privilege to get to work with people who are both very competent and generous with their time, and just fun to hang out with.
Internal disagreement and developing your own views
I think sometimes people model OP as a monolith in terms of AI takes, timelines, or strategy, and that's not the case. There's definitely broad consensus on some things, but it's a good place to develop your own inside views and prioritization. I've found that people's views have a real chance of mattering—people have been interested in my takes and willing to update on them even when I was very junior.
I've also felt pushed to improve here: my manager has given me lots of feedback on not deferring too much, being careful to flag where I'm deferring, and a lot of prompting on giving my own take before anchoring too much on his, or other people's. I think it's helped me improve a lot on this dimension, and is maybe a useful example of how I think OP as a whole approaches internal disagreement and getting input from staff members.
If you're interested in forming and refining your own big-picture views and also having a chance to shape the strategy of a major funder, I think OP is a great place to work.
Support for professional development
I often feel sceptical about organisations saying things like "we support professional development" on job adverts – it feels easy to pay lip service to it in a fake way. So far at OP, though, I have felt pretty empowered to improve professionally: I've had the opportunity to go to conferences, get coaching, and take on more responsibility in new areas.
Sometimes balancing this alongside work can be difficult, and I'll talk a bit more about this later, but the overall impression I have is that OP as a whole cares about its staff's development.
Culture
I find it hard to put this into words precisely, but I think there's some culture of ambition, being clear-eyed and scope-sensitive, owning your responsibilities, and being happy to take bets that I've liked a lot at OP. Within my first year, I got to ideate and write our evals RFP; some of my colleagues at similar career stages have independently owned multiple complicated >$10M grants, or argued for and then developed strategies for new subareas we’re likely to move into, or worked on important projects informing the AI teams’ strategies. (And, correspondingly, I have no idea what mistakes they've made, but I’ve made some really classic blunders that I'm happy to reveal to any new hires we make. I don’t think it’s hurt my standing at OP or anything.)
I think you can contrast this with a culture of tallying up mistakes, or being a bit risk-averse, or doing things the way they've always been done. OP has some elements of this, of course, and I expect bureaucracy and stagnancy to creep in by default as we grow as an org, as forces to be resisted. But my overall sense is that OP is resisting this and trying to maintain its own distinctive culture, and I’ve found that culture very good to work within.
Some downsides/possible reasons not to apply
General considerations about grantmaking (and grantmaking-enabling roles) vs direct work
(H/t Jake Mendel for this frame)
One framing for thinking about careers in GCR work is to consider AI safety/governance work more abstractly as broad fields, with varying amounts of resources allocated to different approaches, subproblems, or angles of attack. One way to compare grantmaking (or grantmaking-enabling) roles with direct work roles is to look at the effect each has on this broader landscape of AIS/AIG work.
I think the main advantage of grantmaking is that you get to influence the shape of the entire field much more broadly, by allocating significant funding, advising key players, identifying neglected areas, and coordinating across many different projects and organizations. The corresponding downside is that you have much less control over the details of how things actually get done, because you're betting on others to execute rather than doing the work yourself.
With direct work – whether that's research, operations, or something else – the pros and cons are flipped: you can choose exactly what you work on and how you do it, giving you much more control over the quality and direction of the work. (Though obviously if you're more junior, you may have less say over exactly what you focus on.) The downside is that you're limited by your own time and effort – fundamentally, you're adding some amount of work to what you think is the most important and neglected problem, rather than shaping the overall landscape of resource allocation more broadly.
I think that if you are extremely opinionated about which one problem is the most important thing to work on, and you think the details matter so much that doing it yourself is much better than finding someone else to do it, and you have reason to think you'd be extremely good at working on it, then you should probably just go and do that direct work. However, to the extent these conditions don’t apply to you – if you're more excited about a range of approaches, or you think you could probably find people who could do important direct work within an order of magnitude or two as well as you could – you should at least consider grantmaking as a potentially higher impact option.
(I think the way I've described these conditions for direct work being more impactful might give the impression that I think nobody fits this description. That’s not the case: there are some people for whom I think direct work is a more impactful choice than grantmaking or grantmaking-enabling roles. But I think people tend to arrive at this conclusion for much worse reasons, e.g. because they haven't really tried out grantmaking-like roles before and think that they would dislike them, or because they think OP would be able to hire a similarly good grantmaker to them if they didn't apply. I think this reasoning is usually not a helpful way to think about grantmaking vs direct work, and I much prefer the framing above.)
If you're very much in "research/figuring things out mode"
I often think of AI/GCR work as being on a spectrum, where one end is thinking really hard about interesting, important questions without necessarily having an eye to practicality, and the other end is very implement-y, in getting stuff done/executing mode. (This isn’t quite the same as explore/exploit, but it feels kind of similar, if that's a more helpful framing.)
In general, OP is somewhere in the middle, with lots of variation between roles and teams. I think working at OP will be a bad fit if you're very much in “research mode”, i.e. wanting to spend almost all of your time figuring things out, following your intellectual interests, forming fully fleshed-out inside views, and not worrying too much about practical applications.
Working at OP does, unsurprisingly, involve having your own views, and thinking about them and changing your mind. But ultimately, our aim is to make high-impact work happen, and our comparative advantage (imo) is in grantmaking, coordination, and starting new projects. So in practice, my experience of OP has been:
- Some amount of forming my own views, mostly via having conversations and arguments with experts, synthesising their views, and coming to some overall conclusion.
- A much greater emphasis on putting these views into practice: so, getting money out the door, writing RFPs, finding promising new grantees, having back-and-forth with applicants about their proposals, pitching people on new projects, and so on.
Risk of your takes atrophying
As well as not involving tons of research/”figuring things out” time, I think working at OP might end up causing your takes to stagnate – you'll probably spend less of your time keeping up with the latest research than you would if you were working as a researcher, and because there's so much we could be doing, it can be difficult to carve out time to keep your takes as sharp as you'd like.
While OP leadership is aware of this and wants to empower grantmakers to avoid their takes atrophying, I think it's still an occupational hazard. This is a trade for impact that you should decide whether you’re happy to make.
Social dynamics and funding relationships
Having indirect access to grant money can complicate social relationships. I think most of the worst cases here can be effectively prevented if you prepare, and OP has policies and advice in place for this, but I personally found the transition to a grantmaking role jarring in part because of this.
(I'm not sure yet how concrete I want to be about this, but the kind of thing I have in mind is: thinking you're having a social walk or conversation when actually someone is trying to pitch you; being very aware that people suddenly now have lots of weird incentives related to you/your work/your impression of their work, or at least perceive themselves to have these incentives; being unsure whether you're in work mode or social mode to an even greater degree; knowing that people now have more interest in, or feel more empowered to have takes on, how you're doing, how good your takes are, how you do at your job; having people potentially hold you responsible for any mistakes of commission or omission OP makes to a degree which might not match your actual responsibility or ability to change things; etc.)
I think it's worth knowing this in advance and preparing for it should you take a role at OP.
Feedback loops and uncertainty
The feedback loops on OP’s AI work are fairly long and uncertain. You’ll get feedback from your team, your manager, and the broader AIS community, and you can look at how your grantees are doing, so it's not like you're operating in complete darkness, but I think there’s still some sense of: we're very uncertain if we're getting things right, even our best bets rely on long, multi-step theories of change, the timelines for knowing if anything pays off are very long, and all our actions are pretty indirect.
I think this is just a feature of AI safety and governance work, but it's something that you'll have to figure out your own comfort with. (And in my head, it feels kind of similar to “getting off the crazy train”-type worries; here there’s lots of ambiguity and uncertainty, but also plausibly, at least according to me, really outstanding opportunities for impact; and if you won’t tolerate this ambiguity, then you’ll miss these opportunities.)
Slowness
Relative to other foundations of a similar size, I think OP moves fast; relative to startups, other AIS founders, and smaller organisations (i.e., almost all other AIS organisations), I think OP moves slowly.
Capacity constraints and difficulty switching off
I think in general the AI teams don’t have enough capacity to do everything we’d like to. There are some pretty obvious upsides to this – you can get a lot of responsibility, and get to do exciting and counterfactual work. But there are also potential downsides depending on how you orient to work: it can be hard to switch off, because you could always be doing more – there's always another potential high-impact grant to investigate, another potential founder to talk to, or another area to form better takes on.
I don’t feel undue pressure to work long hours (I personally work approximately normal hours, and I take a normal amount of holiday etc), and my impression is that OP is aware of this and working on mitigating it, though with differences between teams and at different seniority levels. But that being said, if you're the kind of person who struggles with boundaries in these kinds of situations, you should probably think about how you’d handle this.
Imposter syndrome?
Working at OP might make you feel kind of imposter-y, for at least three different reasons:
- First, people at Open Phil are, I think, very cool. In general, this is a really great thing, but you might feel imposter syndrome if you end up working amongst people who have had more time to develop their takes, or are more obviously excelling in their job, or are otherwise just very impressive and inspiring.
- Second, you're in frequent and close contact with extremely impressive people beyond just your colleagues—grantees, advisors, other key players in the AI safety ecosystem. I think it can be easy to feel pretty small and stupid when you're constantly surrounded by very impressive people doing very impactful and interesting work.
- Third, OP primarily makes things happen in the world via grantmaking and advising. This means much of the work is enabling others' impact, rather than doing the object-level work yourself. I think that there are better and worse ways to orient to this, and some of the worst ways can be psychologically tricky to deal with.
If you're very vulnerable to imposter syndrome, it's worth thinking about how you'd handle this (and a sufficient answer might just be, "I would discuss it with my manager regularly.") I still think you should apply if you're a good fit, but it's something to be aware of going in.
Career progression?
Being a grantmaker is a weird job. I think many of the skills you pick up are more transferable than they may initially seem: you get to see many organisations succeed and fail in non-obvious ways and think about why; you form takes on both high-level strategy and how to actually implement things; you get to argue with some key figures in the AI safety ecosystem; and you learn a bunch of more prosaic, generally useful things, like thinking clearly and communicating well, being someone your colleagues trust to get things done, and being able to prioritise well and execute efficiently.
But still, it's a weird job. And I think that AI safety/governance grantmaking is a sufficiently new kind of role that it's not obvious what career progression looks like, or what your outside options are if you spend a couple years here and decide to move on. My understanding is that OP leadership is aware of this and working on clarifying it, but when compared to many research/policy/technical roles, there's a less well-defined career progression path other than ascending the ranks at OP.
Mostly remote work (more specific to AIGP)
(Several of the GCR teams hiring are almost entirely based in one location, like the Bay, so this section doesn't apply to many of the roles.)
The team I work on, AIGP, is hiring. I really like working on it, and the fact that it's so friendly to working from anywhere is a big plus for me. But if you're the kind of person who needs to be in an office full of people from your team and your organization to thrive, the AIGP role probably won’t be an ideal fit: you'd most likely work from London, and while there are several team members in London, regular OP co-working days, and opportunities to work with other related organisations, there is not currently a London OP office.
So should you apply?
Obviously I'm biased – I think working at OP is great, and I'm excited about the potential of significantly increasing our impact via making great hires. But trying to be more balanced, I think you should strongly consider applying if:
- You want to influence how a lot of money ($500M+/year in GCR spending) gets allocated
- You're comfortable with significant responsibility and ambiguous feedback loops
- You can deal with being capacity-constrained and having to make hard prioritization calls
- You're able to navigate weird social dynamics around funding relationships
- You're good at forming views from talking to experts, even without extensive independent research time
- You want to work somewhere with smart, kind colleagues and a culture that values ownership, calibration, transparency, and a focus on impact
Conversely, you should probably not apply if:
- You want to spend most of your time thinking hard about research questions rather than executing on grants
- You strongly prefer environments with short, well-grounded feedback loops
- You need clear boundaries between work and social life, and would find money-related social dynamics very stressful
- You work best in a fast-moving startup environment and would find a slower pace frustrating
- You're very vulnerable to imposter syndrome and don't have good strategies for managing it
- You have good reason to think you’ll be much more impactful doing direct work
In general, I'm in favour of Michael Aird's advice of "don’t think, just apply". (And, fun fact: I applied for my current job with a similar mindset, thinking it was really unlikely I would get anywhere in the interview process, and mostly just being interested in what the work tests were like and whether I'd enjoy them. And then I got hired!). But hopefully this post is kinda helpful in thinking about some of the pros and cons.
Thanks to Alex Lawsen and Jake Mendel for feedback.

Wait, how much is it? https://www.openphilanthropy.org/grants/page/4/?q&focus-area%5B0%5D=global-catastrophic-risks&yr%5B0%5D=2025&sort=high-to-low&view-list=true lists $240M in 2025 so far.
I pulled the 500M figure from the job posting, and it includes grants we expect to make before the end of the year— I think it’s a more accurate estimate of our spending. Also, like this page says, we don’t publish all our grants (and when we do publish, there’s a delay between making the grant and publishing the page, so the website is a little behind).
Very useful post!
I'm curious what this slowness feels like as a grantmaker. I guess you progress one grant at speed and then it goes off for review and you work on other stuff, and then ages later your first grant comes back from review, and then maybe there are a few rounds of this? Or is it more that you spend more time on each thing than you might prefer? I'm also curious whether this is a negative for your experience or not (maybe slow = having time to really think about each thing rather than rushing?).
I'm also curious if you think OP could move faster or if this is optimal?
(These are just idle curiosities, I'm not wanting to apply but find it very interesting to hear more about grantmaking at OP, thanks!)
Thanks!
Yeah, so I think the best way to think of the slowness is that there are bottlenecks to grants getting made: things need to get signed off on by senior decision-makers, and they're very capacity-constrained (hence, in part, hiring for more senior generalists), so it might take a while for them to get to any particular grant decision you want them to make. Also, as a more junior grantmaker, you're incentivized to make it as easy as possible for these senior decision-makers to engage with your thoughts and not need follow-up information from you, which pushes you towards spending more time on grant investigations.
In terms of the options you listed, I think it's closest to "spending more time on each thing than you might prefer".
(All this being said, I do think leadership is aware of this and working on ways we can move faster, especially for low-risk grants. Recently, we've been able to make low-risk technical grants much faster and with less time invested, which I think has been an exciting development!)