JulianHazell

Program Associate @ Open Philanthropy
2189 karma · Working (0-5 years)

Bio

Working on AI governance and policy at Open Philanthropy.

Hater of factory farms, enjoyer of effective charities.

Comments (63)

Ajeya Cotra writes:

I bet a number of generalist EAs (people who are good at operations, conceptual research / analysis, writing, generally getting shit done) should probably switch from working on AI safety and policy to working on biosecurity on the current margin.

While AI risk is a lot more important overall (on my views there's ~20-30% x-risk from AI vs ~1-3% from bio), it seems like bio is a lot more neglected right now and there's a lot of pretty straightforward object-level work to do that could take a big bite out of the problem (something that's much harder to come by in AI, especially outside of technical safety).

If you're a generalist working on AI because it's the most important thing, I'd seriously consider making the switch. A good place to start could be applying to work with my colleague ASB to help our bio team seed and scale organizations working on stuff like pathogen detection, PPE stockpiling, and sterilization tech. IMO switching should be especially appealing if:

  • You find yourself unsatisfied by how murky the theories of change are in AI world and how hard it is to feel good about whether your work is actually important and net positive
  • You have a hard sciences or engineering background, especially mechanical engineering, materials science, physics, etc (or of course a background in biology, though that's less necessary/relevant than you may assume!)
  • You want a vibe of solving technical problems with strong feedback loops rather than a vibe of doing communications and politics, but you're not a good fit for ML research

To be clear, bio is definitely not my lane and I don't have super deep thinking on this topic beyond what I'm sharing in this quick take (and I'm partly deferring to others on the overall size of bio risk). But from my zoomed-out view, the problem seems both very real and refreshingly tractable.

Like Ajeya, I haven't thought about this a ton. But I do feel quite confident in recommending that generalist EAs (especially the "get shit done" kind) at least strongly consider working on biosecurity if they're looking for their next thing.

People who participate in talent development programs can go on to work in a variety of roles outside of government and AI companies.

I had the enormous privilege of working at Giving What We Can back in 2021, which was one of my first introductions to the EA community. Needless to say, this experience was formative for my personal journey with effective altruism, and Luke was an integral part of it.

I can honestly say that I've worked with some incredible and brilliant people during my short career, but Luke has really stood out to me as someone who embodies virtue, grace, kindness, compassion, selflessness, and a relentless drive to have a large positive impact on the world.

Luke: thank you for everything you've done for both GWWC and the world, and for the incredible impact that I'm confident you will continue to have in the future. I'm sad to imagine a GWWC without you at the helm, but I'm excited to see the great things you'll go on to do down the line after you've had some very well-deserved time with your family.

“Pursuing an active campaign” is a strange way to frame someone writing a few tweets and comments about their opinion on something.

Hi Péter, thanks for your comment.

Unfortunately, as you've alluded to, technical AI governance talent pipelines are still quite nascent. I'm working on improving this. But in the meantime, I'd recommend:

  • Speaking with 80,000 Hours (can be useful for connecting you with possible mentors/opportunities)
  • Regularly browsing the 80,000 Hours job board and applying to the few technical AI governance roles that occasionally pop up on it
  • Reading 80,000 Hours' career guide on AI hardware (particularly the section on how to enter the field) and their write-up on policy skills

Hey Jeff, thanks for writing this!

Would you be willing to opine on what the biggest blockers are for mid-career people considering a switch to more impactful career paths, particularly those who aren't already doing things like earning to give or working on EA causes?

Without getting into whether it's reasonable to expect catastrophe as the default under standard business incentives, I think one can coherently hold the view that AI is probably going to be good while still thinking that the risks are unacceptably high.

If you think the odds of catastrophe are 10% — but otherwise think the remaining 90% is going to lead to amazing and abundant worlds for humans — you might still conclude that AI doesn't challenge the general trend of technology being good.

But I think it's also reasonable to conclude that 10% is still far too high given the massive stakes and the difficulty of reversing course, which is disanalogous to most other technologies. IMO, the high stakes plus the difficulty of changing course are sufficient to override the "tech is generally good" heuristic.

This is great! I love the simplicity and how fast and frictionless the experience is.

I think I might be part of the ideal target market, as someone who has long wanted to get more into the habit of concretely writing out his predictions but often lacks the motivation to do so consistently.
