
I was reading Will MacAskill’s recent EA Forum post when I came across a list of names. Will said this is a list of people who would definitely be considered “senior figures” within EA at the moment.

I really don’t mind that some of EA’s CEOs call each other to ask for advice or bounce ideas off of each other. I think it’s pretty healthy!

But also, I have never heard of at least half of these people??? I cannot be this out of the know, it’s embarrassing. So I did some digging.

Who are these “senior figures”?

Effective Ventures UK Trustees:

Will MacAskill: I know who this guy is. He was on a poster in my local tube station when his book What We Owe the Future came out. He also wrote Doing Good Better before that. Even before last summer’s media push, he was one of the most recognisable people in EA. He’s said he’ll be stepping down as a trustee after a few additional trustees join… but I assume people will still call him and ask for his opinion on things.

Claire Zabel: Senior Program Officer at Open Philanthropy.

Tasha McCauley: On the board of Effective Ventures UK. Her LinkedIn shows she’s been working on AI stuff since at least 2010, and probably earlier. Also happens to be married to Joseph Gordon-Levitt.

Lincoln Quirk: Founder of Wave Mobile Money. Wave has been closely linked to the EA community since way back. (Edit: I previously incorrectly listed Lincoln under "Other".)

Nick Beckstead: Helped launch the Centre for Effective Altruism. Joined Open Philanthropy in 2014 (it was still part of GiveWell back then). Left OP to run the ill-fated FTX Future Fund.

Effective Ventures US Trustees:

Nick Beckstead: See above.

Eli Rose: Senior Program Officer at Open Philanthropy.

Nicole Ross: Community Health at CEA.

Zach Robinson: Interim CEO of Effective Ventures US, starting January 2023, as well as acting as a trustee. Apparently this is more normal in the US than it is in the UK. Previously Chief of Staff at Open Philanthropy.

CEOs/Directors:

Zach Robinson: See above - CEO and trustee of EV US.

Howie Lempel: Interim CEO of Effective Ventures UK, starting January 2023. Known for his appearance on the 80,000 Hours Podcast, talking about working at Open Philanthropy and 80,000 Hours with severe anxiety and depression.

Alexander Berger: Co-CEO of Open Philanthropy. Well-represented on Twitter (and the only “senior figure” to appear on both this list and its Twitter parody).

Holden Karnofsky: Co-CEO of Open Philanthropy. Went on a three-month sabbatical starting March 8th to explore whether he should move to working on AI safety - no news yet on whether he’s returned to Open Philanthropy.

Ben West: Interim Executive Director of the Centre for Effective Altruism.

Brenton Mayer: Interim CEO of 80,000 Hours. I’ve been to a few parties with him, which I highly recommend - he’s got a great sense of humour!

Other:

Max Dalton: Former Executive Director of CEA. In my view, he was a good one - CEA was a bit of a mess before he took over. Resigned in February 2023 and took on an advisory role instead, for the sake of his mental health. Still very involved, including in the search for his replacement.

Toby Ord: Founder of Giving What We Can. Author of The Precipice. I had heard of him before this too.

James Snowden: Program Officer at Open Philanthropy; previously GiveWell; previously CEA.

Ben Todd: Founder of 80,000 Hours. Currently on sabbatical.

Why does EA have senior figures that lots of us have never heard of?

The short answer is I’d barely heard of them too, so I’m not really qualified to say. But being unqualified has never stopped me from guessing! So my guess is some combination of:

It’s normal to have influential people just doing their daily jobs and no one really paying attention to them. Here in the UK, how many of us had heard of Chris Whitty, the Chief Medical Officer, before the televised Covid broadcasts? Definitely not me. But even though I wasn’t seeing him on TV, he was still having a major influence on UK politics before Covid started.

Some of these people probably didn’t want the spotlight. Maybe they had other career ambitions and didn’t want to advertise their involvement with EA. Maybe they’d seen EAs being mean to Will MacAskill on the Forum and didn’t want that for themselves. Maybe they were just shy. For whatever reason, it’s possible that some of these people were turning down podcast invitations and not giving talks at EAG because they wanted to keep a low profile.

Attention follows power laws. This was a point Will made in the past and I agree completely, so I’ll just quote him here: “I don’t think we’re ever going to be able to get away from a dynamic where a handful of public figures are far more well-known than all others. Amount of public attention (as measured by, e.g. twitter followers) follows a power law.”
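
(An aside from me, not Will: here’s a minimal sketch of what a power law implies in practice. It samples hypothetical follower counts from a Pareto distribution, one standard power-law model; the population size and exponent are invented for illustration, not fitted to real Twitter data.)

```python
import random

# Illustration only: sample made-up "follower counts" for 10,000 people
# from a Pareto (power-law) distribution. The exponent 1.2 is invented.
random.seed(0)
followers = sorted((random.paretovariate(1.2) for _ in range(10_000)), reverse=True)

total = sum(followers)
print(f"Top 10 hold {sum(followers[:10]) / total:.0%} of the attention")
print(f"Top 100 hold {sum(followers[:100]) / total:.0%} of the attention")
```

Under these invented parameters, a tiny slice of the population ends up with a disproportionate share of the total, which is exactly the “handful of public figures” dynamic Will describes.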

EA is still in the “group of friends just trying stuff” mindset even though it has far too much power at this point to be operating that way. And so we don’t always have the level of scrutiny or accountability that we should, because people who have been involved since 2014 are like, “Oh him? That’s just Bob. Everyone knows Bob.” But actually we don’t all know Bob anymore because there are like 10,000 of us now, not 100. The rest of Will’s post is about how EA can be better about this, and I thought it was pretty good.

What’s going on with EA power dynamics over the next few months?

Several projects are going on at once. In roughly the order I predict they’ll happen:

New trustees are being recruited for Effective Ventures UK and US. The application process closed on June 4th. I expect it will take a while to announce the new trustees, but I also think this is a high priority, so my prediction, based on basically nothing, would be an autumn announcement of some new trustees? Will MacAskill also plans to resign from his trusteeship once the new trustees are in post.

Claire Zabel, Max Dalton, and Michelle Hutchinson (not mentioned above - she works at 80,000 Hours) are leading the search for a new Executive Director for the Centre for Effective Altruism. They started a couple of months ago, but EA job applications are famously slow, so who knows? I’m guessing also autumn, maybe closer to Christmas, but I’m hoping it will at least be a 2023 announcement.

Julia Wise, Sam Donald, and Ozzie Gooen are leading a project on reforms at EA organisations. They’re planning to make recommendations to EA organisations about ways they could change to, for example, be more friendly to whistleblowers. They’re planning to give public updates throughout but I wouldn’t be surprised if their final update isn’t until 2024. (None of these people are mentioned above either, but they each give bios in the linked Forum post. Julia is also known for her blog Giving Gladly and Ozzie is known for his Facebook page.)

I don’t know when Zach and Howie will stop being interim CEOs of EV US and UK, respectively. I’m not sure they know either. I wouldn’t be surprised if this was in 2024.

I'm hoping we’ll see more organisations improving their governance structures and diversifying their board members. You can help by registering your interest in serving on a board with the EA Good Governance Project.

This post is excerpted from my weekly newsletter of EA-related miscellany, EA Lifestyles. You can subscribe for free at ealifestyles.substack.com.


Comments (27)



Fwiw I'm on the list and I haven't met everyone else on it even once. Most of the people didn't participate in EA Strategy Fortnight (participation in EA events organized by me being the best marker of seniority, of course).

I think part of the explanation is just that, as Will comments, EA is not that centralized, and so different people interact with different bits of it. My list probably would have been slightly different, as it sounds like yours is.

I don't think that "having more than 20 people" means it isn't centralised.

My claim was that different people interact with different parts of EA, which seems to mean it's decentralized? Perhaps I'm not using that word the same way you are though?

It depends on your conception of how it's centralized, but the person running CEA not having met some of the nominally central figures does push against at least one of "CEA is central" and "all the figures Will listed are central".

It's regrettable that the bio of Ben West doesn't mention his popular yet underrated TikTok account.

I swear I didn't know he was a social media star!!

Thanks for writing this; very valuable, and it's pretty close to the sort of thing I thought of when I saw MacAskill's line about the potential value of an 'intra-EA magazine'!

Thanks James, between EA Lifestyles and Asterisk Magazine I think we're well on our way!

the only “senior figure” to appear on both this list and its Twitter parody

Toby Ord seems to have as well?

Yep, I missed that!

EA is still in the “group of friends just trying stuff” mindset even though it has far too much power at this point to be operating that way. And so we don’t always have the level of scrutiny or accountability that we should, because people who have been involved since 2014 are like, “Oh him? That’s just Bob. Everyone knows Bob.” But actually we don’t all know Bob anymore because there are like 10,000 of us now, not 100. The rest of Will’s post is about how EA can be better about this, and I thought it was pretty good.

Something about this jars with me and I don't know what.

This is a very interesting comment and reaction. 

I know what Kirsten means - it does feel like "friends doing stuff" compared to the way some other big movements are run. I didn't read it as being jarring and I don't think it was intended as a massive criticism. 

BUT "friends doing stuff" is good. We need to be trying stuff. And friends who know and trust each other and have a network and knowledge and understanding and who talk to each other and come up with ideas of things to try and actually try them: that is great. That is what so many large R&D organisations dream of but can never achieve, because they get stuck in formal structures and rigid policies. The EA movement is still very young, we need this mentality. 

The alternative would seem to be to make trying new things harder. I'm not convinced that would be helpful. 

The middle ground is probably where at a certain point in scaling up ideas (e.g. based on spend) there could be more scrutiny. 

HOWEVER, where I don't necessarily agree with Kirsten (based on my very limited experience) is on the questions of scrutiny or accountability. Having spent my career outside the EA environment, I can honestly say I have never before seen a group of people who more actively seek scrutiny, put their ideas out there and ask people to shoot at them.

I see organisations putting their research or action plans on here and saying "guys, this is what we plan to do - before we start, please tell us anything that you disagree with" and then engaging actively and constructively with all the feedback. 

Maybe there are some formal accountability structures missing (because many organisations are like start-ups rather than big companies) - but I don't think you want that to start too early. I can't really comment on this, but I would imagine that most organisations would have some kind of review before investing a lot of money in scaling an idea - but might be happy to give someone $1000 and a few weeks to go and try something. 

The way I've written it doesn't sit right, or the dynamic itself?

I think your representation doesn't seem correct to me, though I can't figure out how it's wrong.

Maybe:

  • It seems like these are talented people with a lot of experience
  • It is not surprising they have often been in the movement for a while - trust takes time to build - hence there are fewer candidates
  • I would like better elites but it's not clear to me how we get them; the process of choosing people who have power is just really difficult.

I think there is also something vaguely similar to the Matthew effect that goes on. I'm not particularly confident in this, and I'm not sure that I fully endorse it.

People who got involved X years ago have gotten a network, specialized skills, and knowledge that is unavailable to others (or at least is harder for others to get). They had the 'first mover advantage.' Over years of attending many conferences (including those that some people get automatic or nearly-automatic admissions to as a result of their employer), retreats, building a reputation, and just random "hallway conversations" an initially small gap has become quite wide.

The simplest example in my mind is to imagine someone who had been involved in EA since 2014-2019 recommending that you upskill by taking a workshop from CFAR, or that you attend an EAG. But CFAR doesn't offer workshops anymore, so that option for training/upskilling/networking is literally not available anymore, and you likely won't get accepted to EAG if you don't look impressive enough (according to particular criteria).

you likely won't get accepted to EAG if you don't look impressive enough

While I think this sort of true, reading the linked article might give you the impression the bar is much higher than it is?

  • I know many people who've recently been accepted to EA conferences with much less impressive or EA-relevant backgrounds. If it were just this I would say that it's hard to make a perfect process and there will always be some false positives and false negatives.

But:

  • There's something important missing from their description of their experience. They wrote, responding to Amy, the head of the CEA Events team, "from our conversation, I came to understand that there is a distinct reason that could be pointed to for my rejection from EAG" but then they don't disclose that reason and, citing privacy, neither will Amy.

Yeah, admissions is complicated. And writing "you likely won't get accepted to EAG if you don't look impressive enough" is a vast simplification. In reality I imagine that it is some nebulous combination of traditional impressiveness, EA-specific impressiveness, and potential future contribution (all from the perspective of the admissions team). But like many things in life, I'm guessing that the decisions often come down to judgement calls, rather than a strict and clear decision tree.

In a vague parallel to university admissions, there isn't a simple standard or algorithm (such as "a function of high school grades and standardized test scores"), and instead it is really a judgement call for each individual applicant. In another parallel to university admissions, sometimes the star trombone player is graduating and the school really needs a good trombone player. I imagine similarly, there are priorities for EA conferences that aren't transparent/visible to the public: maybe the person doing X will be resigning soon, so there is a big push to nurture more talent doing X to find a replacement.


Thank you! I keep pointing out that EVF doesn't have anything on their website about who their trustees are, and this isn't good for transparency.

As much as I appreciate this post, it seems to have followed the same process I have - asked 'ok who actually are these senior figures?', realised that for some, there's not much info out there, and then gathered what little there is on Google search and LinkedIn.

It would be great if the lesser-known EVF trustees could describe, in their own words, who they are, what they do, and how someone could contact them.

[This comment is no longer endorsed by its author]

Maybe I'm misunderstanding what you want EVF to do, but when I go to the page that you linked to I see a list of trustees with bios at the bottom of it. (This doesn't solve the "contact" problem, but it does solve "who they are and what they do".)

My sincere apologies, I had missed that it had been updated! V. embarrassing. Thank you for doing that!
