
This is a Draft Amnesty Week draft. It may not be polished up to my usual standards. 

I originally started this post for the EA forum's career week last year, but I missed the deadline. I've used Draft Amnesty Week as a nudge to fix up a few bullets and am just sharing what I got. 

In which: I tentatively conclude I made the right choice by joining CEA instead of doing independent alignment research or starting my own EA community building project.

In December and January last year, I spent a lot of time thinking about what my next career move should be. I was debating roughly four choices:

  1. Joining the CEA Events Team
  2. Beginning independent research in AI strategy and governance
  3. Supporting early stage (relatively scrappy) AI safety field-building efforts
  4. Starting an EA community or infrastructure building project[1]

I decided to join the CEA events team, and I’m glad I did. I’m moderately sure this was the right choice in hindsight (maybe 60%), but counterfactuals are hard and who knows, maybe one of the other paths would have proved even better.

Here are some benefits of working at CEA that I think would have been harder for me to get on the other paths.

  • I get extended contact with—and feedback from—very competent people
    • Example: I helped organize the Meta Coordination Forum and worked closely with Max Dalton and Sophie Thomson as a result. I respect both of them a lot and they both regularly gave me substantive feedback on my idea generation, emails, docs, etc.
  • I learn a lot of small but, in aggregate, important things that would be more effortful to learn on my own
    • Examples: How to organize a Slack workspace, how to communicate efficiently, when and how to engage with lawyers, how to utilize virtual assistants, how to build a good team culture, how to write a GDoc that people can easily skim, when to leave comments and how to do so quickly, how to use decision-making tools like BIRD, how to be realistic about impact evaluations, etc.
  • I have a support system
    • Example: I’ve been dealing with post-concussion symptoms for the past year, and having private healthcare has helped me address those symptoms.
    • Example: Last year I was catastrophizing about a project I was leading on. After telling my manager about how anxious I had been about the project, we met early that week and checked in on the status of all the different work streams and clarified next steps. By the end of the week I felt much better.
  • I think I have a more realistic model of how organizations, in general, work. I bet this helps me predict other orgs’ behavior and engage with them productively. It would probably also help me start my own org.
    • Example: If I want Open Phil to do X, it’s become clear to me that I should probably think about who at OP is most directly responsible for X, write up the case for X in an easy-to-skim way with a lot of reasoning transparency, and then send that doc to the person and express a willingness to meet to talk more about it.
      • And all the while I should be nice and humble, because there’s probably a lot of behind-the-scenes stuff I don’t know about. And the people whose behavior I want to change are probably very busy and have a ton of daily execution work that makes it hard for them to zoom out to the level I’m likely asking them to.
    • Example: I better understand the time/overhead-costs to making certain info transparent and doing public comms well, so I have more realistic expectations of other orgs.
    • Example: If I were to start my own org, I would have a better sense of how to set a vision and how to ship MVPs and test hypotheses, as well as a more intuitive sense of when things are going well vs. poorly.
  • If I want to later work at a non-EA org, my experience would probably be more legible than independent grant-funded experiments
  • I’ve learned a surprising amount about object-level problems, particularly AI safety[2]
    • Example: Working at the Summit on Existential Security gave me an overview of key topics discussed there and a map of stakeholders.
    • Example: I’m privy to more private write-ups because people have a relatively credible signal of my competence and good-faith intentions.
    • Example: I have professional development time and encouragement to do courses like AGI Safety Fundamentals.

(FYI, nobody at CEA told me to write this post.)

A note on changing the world

One career heuristic I heard articulated by a colleague, and that I think about a lot now, is that it’s really hard to change the world.

Changing the world requires an immense, concentrated, and prolonged effort.

  • This is typically the type of effort that one person can’t do alone, so you need a group of people pushing in the same direction (i.e., an org)
  • Also, for many levers that are at ‘change the world’ scale, it may actually take a while to hit diminishing returns, and there can be long increasing returns to scale/systematization.
    • E.g., The cost per EAGx event was high at the start of the program, but now the EAGx lead (Ollie Base) has experience helping local organizers run EAGxs, and the seasoned EAGx handbooks mean he can run 7+ EAGxs a year and still support other CEA events on the side. The quality of these EAGxs hasn’t changed much, but the cost per event is now much lower.
  • While I find independent projects flashy and exciting, I think I need to be honest with myself about where the impact is coming from.
    • Unless they’re testing the fit of a project that could change the world, teaching me the skills to later change the world, or making me a more competitive applicant in a career where I think I can change the world, they’re probably not worth doing.
      • Note that independent projects in my early career were very helpful for me getting the job I currently have.
      • I’m not anti independent projects; I just think most of the value you get from them comes once you’re no longer independent.
  1. ^

     E.g., building something analogous to my previous project eaopps.com

  2. ^

     I probably learned less than I would have on my counterfactual paths, but it’s hard to say.






Comments

Thanks for writing this! Indeed counterfactuals are hard. I have also joined a large EA org (Rethink Priorities) and so far agree it is useful. I think a possible failure mode for me is that I am a bit risk-averse, and also just really like working with EAs, so I'm guessing if in X months/years time I have the option to go off and start/do something by myself or with a small group I might be reluctant to leave a nice, comfortable, convenient, EA org like RP. But I agree there are lots of advantages to working at an established org, at least for a while at the start of my career.

Really well written! The reasoning transparency you practiced on the job was no joke.

I enjoyed reading this post! 

My question is on a small topic, though: what is the BIRD decision-making tool? A Google search turned up very few useful links. 

Thanks! It's something very similar to the 'responsibility assignment matrix' (RACI) popularized by consultants, I think. But in this case BIRD is more about decisions (rather than tasks) and stands for Bound (set guidelines), Input (give advice), Responsible (do the bulk of the work thinking the decision through and laying out reasoning), and Decider (make the decision). 

Thank you! Seems like a valuable tool to learn! 
