JulianHazell

1934 karma · Joined Dec 2020 · Pursuing a graduate degree (e.g. Master's) · Working (0-5 years)

Bio

Academically/professionally interested in AI governance (research, policy, communications, and strategy), technology policy, longtermism, healthy doses of moral philosophy, the social sciences, and blog writing.

Hater of factory farms, enjoyer of effective charities.

julian[dot]hazell[at]mansfield.ox.ac.uk

How others can help me

Reach out to me if you want to work with me or collaborate in any way.

How I can help others

Reach out to me if you have questions about anything. I'll do my best to answer, and I promise I'll be friendly!

Comments (58)

Hi Péter, thanks for your comment.

Unfortunately, as you've alluded to, technical AI governance talent pipelines are still quite nascent. I'm working on improving this. But in the meantime, I'd recommend:

  • Speaking with 80,000 Hours (can be useful for connecting you with possible mentors/opportunities)
  • Regularly browsing the 80,000 Hours job board and applying to the few technical AI governance roles that occasionally pop up on it
  • Reading 80,000 Hours' career guide on AI hardware (particularly the section on how to enter the field) and their write-up on policy skills

Hey Jeff, thanks for writing this!

I'm wondering if you'd be willing to opine on what the biggest blockers are for mid-career people who are considering switching to more impactful career paths — particularly those who are not doing things like earning to give, or working on EA causes?

Without getting into whether or not it's reasonable to expect catastrophe as the default under standard incentives for businesses, I think it's reasonable to hold the view that AI is probably going to be good while still thinking that the risks are unacceptably high.

If you think the odds of catastrophe are 10% — but otherwise think the remaining 90% is going to lead to amazing and abundant worlds for humans — you might still conclude that AI doesn't challenge the general trend of technology being good.

But I think it's also reasonable to conclude that 10% is still way too high given the massive stakes and the difficulty involved with trying to reverse or change course, which is disanalogous with most other technologies. IMO, the high stakes plus the difficulty of changing course is sufficient to override the "tech is generally good" heuristic.

This is great! I love the simplicity and how fast and frictionless the experience is.

I think I might be part of the ideal target market, as someone who has long wanted to get more into the habit of concretely writing out his predictions but often lacks the motivation to do so consistently.

Does GWWC currently have a funding gap?

How much would you need to fund the activities you’d ideally like to do over the next two years?

(This can include current and former team members)

When are you gonna go on the 80,000 Hours podcast, Luke? :)

Thank you to the CAIS team (and other colleagues) for putting this together. This is such a valuable contribution to the broader AI risk discourse.

This is great, thanks for the change. As someone who aspires to use evidence and careful reasoning to determine how to best use my altruistic resources, I sometimes get uncomfortable when people call me an effective altruist.
