We, on behalf of the EV US and EV UK boards, are very glad to share that Zach Robinson has been selected as the new CEO of the Centre for Effective Altruism (CEA).
We can personally attest to his exceptional leadership, judgement, and dedication from having worked with him at Effective Ventures US. These experiences are part of why we unanimously agreed with the hiring committee’s recommendation to offer him the position.[1] We think Zach has the skills and the drive to lead CEA’s very important work.
We are grateful to the search committee (Max Dalton, Claire Zabel, and Michelle Hutchinson) for their thorough process in making the recommendation. They considered hundreds of potential internal and external candidates, including through dozens of blinded work tests. For further details on the search process, please see this Forum post.
As we look forward, we are excited about CEA's future with Zach at the helm, and the future of the EA community.
Zach adds: “I’m thrilled to be joining CEA! I think CEA has an impressive track record of success when it comes to helping others address the world’s most important problems, and I’m excited to build on the foundations created by Max, Ben, and the rest of CEA’s team. I’m looking forward to diving in in 2024 and to sharing more updates with the EA community.”
[1] Technically, the selection is made by the US board, but the UK board unanimously encouraged the US board to extend this offer. Zach was recused throughout the process, including from the final selection.
I strongly agree that being associated with EA in AI policy is increasingly difficult (as many articles and individuals' posts on social media can attest), particularly in Europe, DC, and the Bay Area.
I appreciate Akash's comment, and at the same time I understand that the purpose of this post is not to solicit people's opinions on what CEA's priorities should be, so I won't go into too much detail. I want to highlight that I'm really excited for Zach Robinson to lead CEA!
With my current knowledge of the situation in three different jurisdictions, I'll simply note that there is a significant problem around EA connections in AI policy at the moment. I would support CEA getting strong PR support so that there is a voice defending EA rather than mostly taking punches. I truly appreciate CEA's communication efforts over the last year, and it's very plausible that CEA needs more than one person working on this. One alternative is for most people working in AI policy to cut their former connections to EA, which I think would be a shame given the good epistemics and motivation the community usually brings. (In any case, the AI safety movement should become more independent and "big tent" as soon as possible, and I'm looking forward to more energy being put into PR there.)