JulianHazell

Senior Program Associate @ Coefficient Giving

Bio

Working on AI governance and policy at Coefficient Giving.

Hater of factory farms, enjoyer of effective charities.

Comments (66)

Thanks for the comment!

As someone not working in the AI safety space, I'm curious about your views on how grantmaking within AI safety is similar to and different from grantmaking in other cause areas, for example animal advocacy and global health and development.

It's hard for me to say what these differences look like outside of CG, but one thing that comes to mind is that GHW and animal welfare grantmaking relies more on quantitative modelling and BOTECs (though we sometimes use BOTECs on the GCR side of things too).

My sense from reading the post is that those areas may be relatively less neglected, with fewer opportunities for outsized returns on investment. Do you think that is a reasonable assumption to be making?

It depends on how you define "neglected". Like, in terms of EA focus and talent, they're probably more neglected than AI safety/catastrophic risks. In terms of total $ spent by society at large, GHW is far less neglected than AI safety, which is in turn far less neglected than FAW.

This is kind of a lame answer, but whether AI safety has more or fewer outsized ROI opportunities really depends on your worldview. IMO, both spaces have a ton of opportunity. If I woke up tomorrow and decided that AI safety was no longer important (or I didn't buy that worldview anymore), I'd be extremely excited about the vast number of opportunities to make global health and farmed animal welfare better.

Currently it seems like each grantmaker is (on average) responsible for ~$10m/y. One question I think about sometimes: how will the # of grantmakers scale as more $ goes towards AI safety funding? If funding is e.g. 3x'ing year-over-year, it's unclear whether we're currently training up grantmakers at anywhere near that rate.
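As a toy illustration of the scaling worry: if each grantmaker handles ~$10m/y and funding 3x's year-over-year, the implied number of grantmakers needed grows very fast. The starting pool below is a purely hypothetical figure chosen for illustration, not a real number from the comment.

```python
# Toy BOTEC: grantmakers needed if AI safety funding 3x's each year
# and each grantmaker deploys ~$10m/y. All inputs are illustrative
# assumptions, not actual funding figures.

DOLLARS_PER_GRANTMAKER = 10_000_000   # ~$10m/y per grantmaker (from the comment)
GROWTH_FACTOR = 3                     # funding 3x's year-over-year (from the comment)
STARTING_FUNDING = 500_000_000        # hypothetical starting pool of $500m/y

def grantmakers_needed(years: int) -> int:
    """Grantmakers needed after `years` of 3x annual funding growth."""
    funding = STARTING_FUNDING * GROWTH_FACTOR ** years
    return round(funding / DOLLARS_PER_GRANTMAKER)

for year in range(4):
    print(f"year {year}: ~{grantmakers_needed(year)} grantmakers")
```

Under these assumed numbers, the required headcount goes from ~50 to ~1,350 in three years, which makes the gap between capital growth and grantmaker training concrete.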

My vibes-based sense is that at least currently, the amount of philanthropic capital that could go towards AI safety projects is growing quite a bit faster than the number of grantmakers. I'm pretty worried about this.

Taking a look at CG:

  • Per this, the number of GCR program staff at CG only grew 2x from 2019 to 2022.
  • Quickly looking at archives of the team list from EOTY 2022 and EOTY 2025, it looks like the growth rate of program staff over that period was roughly 2 to 2.5x.

Another question might be: what is a good ratio of # of grantmakers to # of direct workers? I'd ballpark there to be ~1000 fulltime AIS direct workers; does a 20:1 ratio seem high, low, or just right?

I think the ratio framing is a bit tricky and depends on a lot of other variables (for instance, how mature the field is, how many promising ideas are floating around the memesphere, how good AIs are at doing direct work, how much philanthropic capital there is, etc). The other thing is that the number of direct workers is itself downstream of grantmaker capacity.

Thanks for your work on this, Will and Max! Suffice it to say this is pretty cool.

However, I'm a bit disheartened that you pushed the frontier without formally releasing an RSP ("Responsible-ish Scaling Policy").

So let me make this unambiguous: Do you commit to pausing the scaling and/or delaying the deployment of new automated macrostrategy researchers whenever your scaling ability outstrips your ability to comply with the safety procedures for the corresponding PSL (Philosophical Safety Level)?

Ajeya Cotra writes:

I bet a number of generalist EAs (people who are good at operations, conceptual research / analysis, writing, generally getting shit done) should probably switch from working on AI safety and policy to working on biosecurity on the current margin.

While AI risk is a lot more important overall (on my views there's ~20-30% x-risk from AI vs ~1-3% from bio), it seems like bio is a lot more neglected right now and there's a lot of pretty straightforward object-level work to do that could take a big bite out of the problem (something that's much harder to come by in AI, especially outside of technical safety).

If you're a generalist working on AI because it's the most important thing, I'd seriously consider making the switch. A good place to start could be applying to work with my colleague ASB to help our bio team seed and scale organizations working on stuff like pathogen detection, PPE stockpiling, and sterilization tech. IMO switching should be especially appealing if:

  • You find yourself unsatisfied by how murky the theories of change are in AI world and how hard it is to feel good about whether your work is actually important and net positive
  • You have a hard sciences or engineering background, especially mechanical engineering, materials science, physics, etc (or of course a background in biology, though that's less necessary/relevant than you may assume!)
  • You want a vibe of solving technical problems with strong feedback loops rather than a vibe of doing communications and politics, but you're not a good fit for ML research

To be clear, bio is definitely not my lane and I don't have super deep thinking on this topic beyond what I'm sharing in this quick take (and I'm partly deferring to others on the overall size of bio risk). But from my zoomed-out view, the problem seems both very real and refreshingly tractable.

Like Ajeya, I haven't thought about this a ton. But I do feel quite confident in recommending that generalist EAs — especially the "get shit done" kind — at least strongly consider working on biosecurity if they're looking for their next thing.

People who participate in talent development programs can go on to work in a variety of roles outside of the government and AI companies.

I had the enormous privilege of working at Giving What We Can back in 2021, which was one of my first introductions to the EA community. Needless to say, this experience was formative for my personal journey with effective altruism. I consider Luke an integral part of this.

I can honestly say that I've worked with some incredible and brilliant people during my short career, but Luke has really stood out to me as someone who embodies virtue, grace, kindness, compassion, selflessness, and a relentless drive to have a large positive impact on the world.

Luke: thank you for everything you've done for both GWWC and the world, and for the incredible impact that I'm confident you will continue to have in the future. I'm sad to imagine a GWWC without you at the helm, but I'm excited to see the great things you'll end up doing down the line after you've had some very well-deserved time with your family.

“Pursuing an active campaign” is kind of a weird way to frame someone writing a few tweets and comments about their opinion on something.

Hi Péter, thanks for your comment.

Unfortunately, as you've alluded to, technical AI governance talent pipelines are still quite nascent. I'm working on improving this. But in the meantime, I'd recommend:

  • Speaking with 80,000 Hours (can be useful for connecting you with possible mentors/opportunities)
  • Regularly browsing the 80,000 Hours job board and applying to the few technical AI governance roles that occasionally pop up on it
  • Reading 80,000 Hours' career guide on AI hardware (particularly the bit about how to enter the field) and their write-up on policy skills

Hey Jeff, thanks for writing this!

I'm wondering if you'd be willing to opine on the biggest blockers for mid-career people who are considering switching to more impactful career paths — particularly those who are not already earning to give or working on EA causes?
