
(Follow-up to: Want advice on management/organization-building?)

After posting that offer, I've chatted with a few people at different orgs (though still have bandwidth for more if other folks are interested, as I find them pretty fun!) and started to notice some trends in what kinds of management problems different orgs face. These are all low-confidence—I'd like to validate them against a broader set of orgs—but I thought I'd write them up to see if they resonate with people.

Observations

Many orgs skew extremely junior

The EA community skews very young, and highly engaged EAs (like those working at orgs) skew even younger. Anecdotally, I was the same age or older than almost everyone I talked to, many of whom were the most experienced person in their org. By comparison, at Wave almost the entire prod/eng leadership team is my age or older. (Note that this seems to be less true in the most established/large/high-status orgs, e.g. Open Phil.)

This isn't a disaster, especially since the best of the junior people are very talented, but it does lead to a set of typical problems:

  • having to think through how to do everything from first principles rather than copy what's worked from elsewhere
  • managers needing to spend a fair amount of time providing basic "how to be a functioning employee" support to junior hires
  • managers not providing that support and the junior hires ending up being less effective, or growing less quickly, than they would at an org that could provide them more support

(At Wave, we've largely avoided hiring people with <2 years of work experience onto the prod/eng team for this reason. Of course, that's easier for us than for many EA orgs, so I'm not suggesting this as a general solution.)

It also leads to two specific subproblems:

Many managers are first-time managers

Again, this isn't a disaster, since many of these first-time managers are very smart, hardworking and kind. But first-time managers have a few patterns of mistakes they tend to make, mostly related to "seeing around corners" or making decisions whose consequences play out on a long time horizon (understandably, since they often haven't worked as a manager for long enough to have seen long-time-horizon decisions play out!). These often show up as failures in areas like:

  • Providing enough coaching and career growth for their team
  • Giving feedback that's difficult to give
  • Making sure their reports are happy in their roles and not burning out

The last point seems especially underrated in EA, I suspect because people are unusually focused on "doing what's optimal, not what's fun." That's a good idea to a large extent, but even people who are largely motivated by impact can be massively more or less productive depending on how much their day-to-day work resonates with them. I suspect many EAs, like me, are reluctant to admit that this applies to us too and that we're not purely impact-maximizing robots.

Almost all managers-of-managers are first-timers

Managing managers requires a fairly different skillset from managing individual contributors. Most people in EA orgs who are managing managers seem to be "career EAs" for whom it's their first manager-of-managers role. As above, new managers of managers tend to make some predictable mistakes, e.g.:

  • Not planning far enough ahead for what hires they'll need to make to support their org's growth
  • Hiring/promoting the wrong people into management roles (having become a good manager doesn't necessarily mean you'll be good at coaching/evaluating other managers who fail in different ways than you did!)
  • Not noticing team dysfunction, either due to not having systems in place (e.g. skip-level 1:1s) or not knowing that something is a red flag

Many leaders are isolated

Compared to the for-profit world, EA seems to have a much smaller proportion of people working at large organizations, and a larger proportion working at small/mid-sized orgs. This means that many managers / managers-of-managers are the only person in their org with that job, and don't have other peers they can rely on for support, sanity checking or talking through a difficult decision.

Many leaders are reluctant managers

This is a problem in the for-profit world as well—the classic example is promoting your best engineer into management despite the fact that they have no people skills. In EA, it's often more like promoting your best philosopher, but the results are similar.

This sometimes works well, if the person being promoted is motivated to work hard at becoming a great manager / organization-builder, but if they're not, it often ends up with them burning out, and their team having culture or execution problems.

We don't know how to structure teams

In engineering management, the area I'm most familiar with, there's a set of fairly well-known ways to structure organizations and teams to mitigate the "reluctant manager" problem—for example, splitting up engineering leadership between one person who focuses on people management and one who focuses on technical decision-making and strategy.

Unfortunately, most EA teams aren't doing very much software engineering; instead they're doing activities like research or grantmaking where there are fewer large orgs whose example we can learn from. As a result, we have less of a well-developed idea of what these teams should look like, and often end up putting all the leadership responsibility on one person, who either drops the ball on part of it or ends up stretched very thin.

Some ideas for improvement

  • I'm thinking about creating an "EA managers" Slack similar to the Operations one, where people can get advice/support from managers in other orgs. Let me know if you're interested in this!
  • I think the advising I've done so far has been pretty helpful to people. I'm happy to do more of this, and I also suspect that non-EA management/leadership coaches would be valuable for people as well; I don't feel like the fact that I'm familiar with the EA community comes up that much.
  • I suspect more orgs/teams could use a "COO"-type figure who offloads people-management responsibilities from the person who's responsible for vision/strategy/fundraising/etc. This is a difficult role to hire for, but with the right pairing, it can make the team dramatically more effective.

I'm curious whether leaders at other EA orgs resonate with these observations, have other patterns to add that they've noticed, or have ideas about potential mitigations!

Comments



I think the problem of "EA leadership doesn't consider it important to hire experienced people, even for people who are leading a project (who in turn don't consider it important to hire experienced people)" is a root cause of a lot of somewhat negative things going on in EA (which is nobody's fault, but could be useful to improve, I think).

I regret that I have but one strong upvote to give this comment. I think this is a huge problem among EA orgs, to the extent that my impression is that moderate incompetence has basically been the norm at many of them.

It obviously seems like a good idea to try and improve the resources we've got, but I desperately wish EA orgs would spend more effort trying to attract real experience. <Management talent + management experience> is always going to outperform <management talent>.

Why do you think people think it's unimportant (rather than, e.g., important but very difficult to achieve due to the age skew issue mentioned in the post)?

Examples:

  • A funded, reputable, [important in my opinion] EA org that I helped a bit with hiring an engineer for a [in my opinion] key role had, on their first draft, something like "we'd be happy to hire a top graduate from a coding bootcamp"
  • I spoke to 2-3 senior product managers looking for their way into EA, while at the same time:
    • (almost?) no EA org is hiring product people
    • In my opinion, many EA orgs could use serious help from senior product people

(Please don't write here if you can guess what orgs I'm talking about, I left them anonymous on purpose)

 

From these examples I infer the orgs are not even trying. It's not that they're trying and failing due to, for example, an age skew in the community.

 

I also have theories for why this would be the case, but most of my opinion comes from my observations.

I have somewhat of a problem writing such examples publicly since I'm afraid to imply that specific people are not good enough at their job, which I really don't want to do. (And so this problem remains hidden from most of the community, which I think is a shame)

 

Maybe you (Ben, the author) could figure out, for the people/positions where you think it would be better to have someone with a lot of experience, how the hiring process looked. Did they try [reasonably, in your opinion] reaching out to very senior people?

Yeah, a lot of the issues in EA are things I recognise from other fields that disproportionately hire academic high achievers straight out of college, who don't have much real-world experience, and who overestimate the value of native intelligence over experience. But conveying the importance of that difference is difficult as, ironically, it's something you mostly learn from experience.

Very interesting thread. I'm an experienced non-EA manager with a successful team-building company and was looking into how to help EA orgs with team-building, but it turns out I might be more useful as a manager coach?

I started managing teams 15 years ago and eventually left the corporate world to be a tour guide. Covid forced me back into a manager role, and I founded my current startup, woyago, which is almost on autopilot.

My Linkedin Profile: https://www.linkedin.com/in/antomontani/details/experience/

I have free time and would be happy to offer advice for those of you looking for help on management.

Example areas I might have useful input on (copying heavily from your post Ben!):

My Calendly can be found in my bio.

Happy to (finally!) find a way to add impact to my life by helping you.

Consider posting about it!

Post summary (feel free to suggest edits!):
The author’s observations from talking to / offering advice to several EA orgs:

  • Many orgs skew heavily junior, and most managers and managers-of-managers are in that role for the first time.
  • Many leaders are isolated (no peers to check in with) and / or reluctant (would prefer not to do people management).

They suggest solutions of:

  • Creating an EA manager's slack (let them know if you’re interested!)
  • Non-EA management/leadership coaches - they haven't found most questions they get in their coaching are EA-specific.
  • More orgs hire a COO to take over people management from whoever does the vision / strategy / fundraising.
  • More orgs consider splitting management roles into separate people management and technical leadership roles.

(If you'd like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)

Thanks for writing all of this Ben. I agree with everything you have said and like your ideas. I also think that we should:

  • nudge more EAs to get research (and other) management experience in industry before returning to work in new EA organisations.
  • have a management consultancy-type org or network for providing fractional management oversight/advice to new organisations. For instance, 10 hours a week to advise on how to set up a research team or to sit in on supervision meetings and pass on best practices.
  • have a norm that managers at places like Rethink occasionally do placements at other research organisations to pass on their knowledge and best practice
  • have some people who are involved across many established and new research orgs (maybe as funders) and incentivised to understand and speed up the dissemination of collective best practice (e.g., via talking with CEOs/founders or writing up what they have learned)

Thank you so much for sharing Ben! I'm glad to hear the calls have been fun.

What you described fits my observations so far. I also think that management coaching is probably one of the key "interventions" here. As with any skill, a great deal of learning how to be a (good) manager is learning a set of new behaviors. Having someone to reflect on the development of those behaviors and the respective decision-making can be extremely helpful.

Would one possible solution to some of these problems be to hire much more from outside EA? Move "familiarity with EA" to the "bonus" part of the job requirements, and instead look for experienced managers as a main criterion?

Thanks so much for doing this!

Nitpick: the "advice I've given people so far" link is broken.

Whoops! Fixed, it was just supposed to point to the same advice-offer post as the first paragraph, to add context :)

I feel like a lot of this is downstream from people being reluctant to hire experienced people who aren't already associated with EA. Particularly for things like operations roles, experience doing similar work is going to make far more of a difference to effectiveness than deep belief in EA values.

 

When Coke needs to hire new people, they don't look for people who have a deep love of sugary drink brands; they find people in similar roles elsewhere and offer them money. I feel like the reason EA orgs are reluctant to do this is that there's a degree of exceptionalism in EA.

I agree that it's downstream of this, but strongly agree with ideopunk that mission alignment is a reasonable requirement to have.* A (perhaps the) major cause of organizations becoming dysfunctional as they grow is that people within the organization act in ways that are good for them, but bad for the organization overall—for example, fudging numbers to make themselves look more successful, asking for more headcount when they don't really need it, doing things that are short-term good but long-term bad (with the assumption that they'll have moved on before the bad stuff kicks in), etc. (cf. the book Moral Mazes.) Hiring mission-aligned people is one of the best ways to provide a check on that type of behavior.

*I think some orgs maybe should be more open to hiring people who are aligned with the org's particular mission but not part of the EA community—eg that's Wave's main hiring demographic—but for orgs with more "hardcore EA" missions, it's not clear how much that expands their applicant pool.

In Fortune 500 companies, you rarely find people who are exceptional from the get-go. Most of those who have succeeded were allowed to grow within structured, multidisciplinary environments, so they had room to combine ideas.

Can EA develop the EA/longtermist attitude in exceptionally talented people? I believe digging into this question rigorously can point every EA founder and director toward how to develop management talent.

It's pretty common in values-driven organisations to ask for an amount of value-alignment. The other day I helped out a friend with a resume for an organisation which asked for people applying to care about their feminist mission.

In my opinion this is a reasonable thing to ask for and expect. Sharing (overarching) values improves decision-making, and requiring it can help prevent value drift in an org.

What qualifies as 'a (sufficient) amount of value alignment'? I worked with many people who agreed with the premise of moving money to the worst off, and found the actual practices of many self-identifying EAs hard to fathom.

Also, 'it's pretty common' strikes me as an insufficient argument - many practices are common and bad. More data seems needed.

You're correct about all of this. I developed a fellowship program to help orgs specifically with upskilling and having the support they need to do well. I believe that the 3 critical ingredients to running an org successfully are: a) having the right knowledge b) having peer support and c) having a mentor and accountability. My personal mission is to help orgs succeed. You can find out more information on my website, or shoot me a PM / email.
I'm also working on developing an organization to consolidate all the org support resources - I've done this in the small business sector, and am applying the principles to EA. Would love to connect with anyone who wants to be a part of it.

Very good article! Many EA orgs are start-ups, so it's natural for them to start with a relatively inexperienced team. However, it's normal for start-ups to bring in senior people at the top as they grow rather than hiring from the bottom, which is what most EA orgs seem to do. The best CEO for a 5-person organisation is rarely the same as the best CEO for a 100-person organisation. CEOs need to get better at recognising their own weaknesses, and Boards need to get better at transitioning founders out of the CEO role. In my opinion, EA orgs also need to increase the weight of experience and decrease the weight of moral philosophy / being in certain cliques.

I also wonder if commercial Boards understand that growing the business is ultimately more important than looking after the CEO emotionally or similar, so they exert pressure to perform, including potentially 'levelling' senior staff, in a way that nonprofit boards are less likely to do.

Couple that with a lack of supply for these jobs (I don't know how many senior people are clamouring to work for EA orgs, versus people who want to be CEO of a 100-person company) and I think it leads to stagnation.

I've had vaguely similar thoughts about youth and levels of professional experience, but you have articulated this much better than I have. Thanks for writing this.

I'd be very happy to see an "EA managers" Slack (or some other forum/conversation space/community), and I would be very happy to join.

I'd also be happy to join an "EA managers" Slack!

I'd also be interested. Seems pretty necessary at this point. I could also help out with it more generally. 

Thank you for a great post. As another person in the experienced-manager-outside-EA group, but now moving more into this world as a manager, I would definitely be interested in a space to discuss management with other managers. I had a peer network like this in a previous management position, and it was quite helpful.

Epistemic status: This post — the result of a loosely timeboxed ~2-day sprint[1] — is more like “research notes with rough takes” than “report with solid answers.” You should interpret the things we say as best guesses, and not give them much more weight than that. Summary There’s been some discussion of what “transformative AI may arrive soon” might mean for animal advocates. After a very shallow review, we’ve tentatively concluded that radical changes to the animal welfare (AW) field are not yet warranted. In particular: * Some ideas in this space seem fairly promising, but in the “maybe a researcher should look into this” stage, rather than “shovel-ready” * We’re skeptical of the case for most speculative “TAI<>AW” projects * We think the most common version of this argument underrates how radically weird post-“transformative”-AI worlds would be, and how much this harms our ability to predict the longer-run effects of interventions available to us today. Without specific reasons to believe that an intervention is especially robust,[2] we think it’s best to discount its expected value to ~zero. Here’s a brief overview of our (tentative!) actionable takes on this question[3]: ✅ Some things we recommend❌ Some things we don’t recommend * Dedicating some amount of (ongoing) attention to the possibility of “AW lock ins”[4]  * Pursuing other exploratory research on what transformative AI might mean for animals & how to help (we’re unconvinced by most existing proposals, but many of these ideas have received <1 month of research effort from everyone in the space combined — it would be unsurprising if even just a few months of effort turned up better ideas) * Investing in highly “flexible” capacity for advancing animal interests in AI-transformed worlds * Trying to use AI for near-term animal welfare work, and fundraising from donors who have invested in AI * Heavily discounting “normal” interventions that take 10+ years to help animals * “Rowing” on na