
Epistemic status: I lean toward 'yes, in some cases' - but a deeper dive into the question could be a valuable, though potentially large, research project. In this post I’ll just provide some intuitions for why I think it could be an important question to ask.

Introduction

I recently argued that EA organisations should outsource their tech work to nonprofit agencies. This got me wondering if the arguments more generally imply that EA organisations should strive to divide themselves to get as close as possible to having a single responsibility.[1]

Splitting as often as possible

Higher fidelity funding signals

Let’s imagine a hypothetical charity, Artificial Intelligence & Malaria Longtermist Exploration & Shorttermist Solutions (AIMLESS), that spends 50% of its effort on distributing bednets with world-class efficiency and 50% on performing mediocre AI research.

If we believe the EA space benefits from wisdom of the crowd, then that wisdom is purer the easier it is for EAs to donate money to specific causes or projects they believe in.[2] If AIMLESS is the best bednet-distributing organisation, it will be difficult or impossible to tell in what proportion its donors are supporting the bednets or the AI research - and which one puts off non-donors who might have supported one of the programs.

Better inter-organisational comparisons

We tend to think of the competition between for-profits as allowing an organic comparison that isn’t possible between nonprofits, but enthusiastic splitting could turn that on its head. Value-aligned nonprofits can make comparison far clearer if they split themselves enough that what they’re doing is directly comparable.

This needn’t mean they necessarily compete - multiple tech nonprofit agencies could exist in the EA space, each focusing on different cause areas, for example. But the more granular comparability would make it much easier to diagnose when one is outperforming the other, and much easier to fix the problem. The stronger org could share its practices with the weaker one to start with, and if that didn’t work, could start competing directly for funding - allowing underperforming organisations the crucially important function of ceasing to exist. Then if one went to zero funding, the other would aim to split into two as soon as possible so the ecosystem continued with minimal interruption (unless the defunct one’s issue had been that it wasn’t in a viable niche).

If one were inclined toward awful neologisms, one could call this coopetition.

More efficient allocation of time for support staff

In principle, extracting support departments from multiple organisations into a smaller number of consultancies or agencies allows the same number of staff to produce significantly more value - I showed one way one could quantify that benefit here. To give a quick summary: if we assume that the amount of valuable work for any given department fluctuates over time and that their stated goals have similar value, then combining multiple departments into a single external organisation means they can always prioritise the highest priority work from any of their dependent organisations.
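To make that concrete, here is a minimal Monte Carlo sketch of the intuition (my own toy model, not the calculation from the linked post; the org count, task counts and value distribution are arbitrary assumptions). It compares four one-person departments, each stuck with its own org’s backlog, against a four-person pooled agency that always picks the most valuable tasks across all four backlogs:

```python
# A toy Monte Carlo model (my own illustration, not the linked calculation).
# Assumptions: 4 orgs, each generating 0-5 support tasks per week with values
# drawn uniformly from [0, 1]; one worker completes one task per week.
import random

random.seed(0)
N_ORGS, WEEKS = 4, 10_000

siloed_value = 0.0  # each org keeps its own one-person support department
pooled_value = 0.0  # a shared agency of N_ORGS people serves all orgs

for _ in range(WEEKS):
    backlogs = [[random.random() for _ in range(random.randint(0, 5))]
                for _ in range(N_ORGS)]

    # Siloed: each department can only work on its own org's best task,
    # and idles if its org happens to have no tasks that week.
    siloed_value += sum(max(b) for b in backlogs if b)

    # Pooled: the agency always works on the N_ORGS most valuable tasks
    # across every org's backlog.
    all_tasks = sorted((t for b in backlogs for t in b), reverse=True)
    pooled_value += sum(all_tasks[:N_ORGS])

print(f"Siloed departments: {siloed_value / WEEKS:.2f} value per week")
print(f"Pooled agency:      {pooled_value / WEEKS:.2f} value per week")
```

Under these toy assumptions the pooled agency can never do worse than the siloed departments, and does strictly better whenever workloads fluctuate unevenly across orgs.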

These agencies could either be client-funded or donor-funded, in the latter case doing some amount of their own internal prioritisation. I discussed the advantages of each approach here. A third option would be to determine their priorities via regranting orgs who specialise in prioritising among subareas, and who distribute grants among those so as to optimise incentives - basically mimicking the operations and management of a large unitary organisation.[3]

But no oftener

If the titular hypothesis is true, it should start conversations, not end them. In software development, where the single responsibility principle is fairly uncontroversial, understanding how to apply it well is an art form. It’s often better to delay a split, even when you know one should happen, because you expect to learn more about how to split by waiting. So this post is a call for greater scrutiny of organisations’ focus, not for any specific chopping plan. Here are some considerations which might go either way…

Synergies

Combining functions can allow organisations to develop internal synergies; separating functions can create overhead. An organisation could have many different departments, but if those are all in service of a coherent goal, splitting may be unwise. Sometimes a large responsibility might necessitate a large organisation.

Having said that, splitting organisations needn’t remove close cooperation or even physical proximity between their former departments - it just makes it an option rather than a necessity. It also needn’t remove any functions. If people trusted the judgement of a multi-responsibility organisation’s executives enough to keep funding it, those executives could still become, say, one or more regranting/consulting/management organisations - ones that could now be evaluated separately from the direct workers.

[Embedded video: the promotion scene from Starship Troopers, captioned "This kind of philosophy"]

Bureaucratic overhead

Setting up a single charity is a pain. Setting up multiple charities is multiple pains. This could be offset somewhat by having easy access to a specialised EA-focused legal agency, or by just not becoming charities if most of their funding would be granted by donor charities (meaning there wouldn’t be a tax incentive for charitable status).

Restricted donations as an alternative

This is possible and in practice common, but it seems to lose many of the benefits. Suppose AIMLESS considered restricted donations as an alternative to splitting. Restricted donations would have at least two issues that splitting would avoid:

  • Counterfactual fungibility: if Alice gives $x restricted to its malaria program and Bob then gives $x unrestricted, it seems likely the charity will divert all of the latter to its AI program, leading to the same outcome as if both donations had been unrestricted. This is less of a problem the more of its grants are restricted, but the organisation will always be incentivised to seek unrestricted grants or to seek specific ‘restrictions’ that match its intentions.
  • Logistical ambiguity: assuming AIMLESS has any support staff or other workers whose time is naturally split between both projects, even with the most scrupulous intentions it’s going to be very hard to ensure they split their attention between projects in proportion to the donation restrictions.

Classes of division

One could slice up a nonprofit at least two ways - by shared services such as tech, marketing, legal, HR, design and perhaps some types of research, and by focused services or products, such as distributing bednets. The arguments above apply to both, but the logistics of splitting them might vary predictably.

Shared services

  • We’re already familiar with the concept of shared service agencies and consultancies; we seem to be increasingly viewing them as a good thing,[4] and such services are already being set up.
  • These services are often essential - an org often cannot forego legal input, hiring or having a website.

So shared service agencies can be established before orgs split off their service departments, and would need to be - unless the org converted its department into the first such agency.

Products

  • It will rarely seem appealing to create an organisation to do work that an existing organisation already covers. The new organisation won’t be directly comparable to the existing organisation, removing the advantages of competition.
  • As long as a product-oriented org has at least one product, having other distinct products is optional.

So splitting orgs by product doesn’t create new dependencies between separate organisations, though it probably requires more initiative to do.

Real world splitting

Splitting services

Any EA org that has an ‘X-service-department’ of at least 1 person should arguably split. If we want to encourage splitting, we might also want to develop encouraging community norms, e.g. praising those who try it, or more concretely having shared service agencies prioritise newly split organisations.

Splitting products

Product-splitting is more awkward, both because it’s harder to describe without singling out specific orgs, and because it’s harder to identify when multiple products constitute a single ‘responsibility’. A good sign that you might lean towards splitting is that there’s no succinct way of describing the sum of all your products without using the word ‘and’. A good sign that you might lean away from splitting is that, if you were interviewing someone for work on Product B, one of the most practically valuable qualifications they could have is previous work on Product A and vice versa (example, further discussion). Recombining later is always possible, and perhaps less subject to status quo bias.

But if the hypothesis of this post is correct, organisations should err towards splitting when in doubt. I offer the following examples for discussion, but don’t want to commit either way on any of them:

CEA, whose products* include

  • Community building grants
  • Group support
  • The EA forum
  • Other EA websites
  • Running events
  • Supporting EAGx events
  • Community health
  • Media training, in some capacity

80k, whose products* include

  • General careers advice (web content)
  • Bespoke careers advice (1-on-1s)
  • Careers research
  • Podcast
  • Job board

All with a broad longtermist focus.

Founders Pledge, whose products* include

  • Philanthropic advisory
  • Cause area research
  • Multiple DAFs
  • Pledges
  • Events
  • Community building

Rethink Priorities, whose products* include

  • Explicitly longtermist research
  • Explicitly shorttermist research

*Some of these are more like services, but since the case for splitting services off seems both stronger and clearer than products, it’s not important to distinguish here.

Summary

To paint a clear picture, a post-split org would likely start out with mostly the same people working in the same shared office (if they had one) on the same projects, with the same product staff seeking project management from the same managers.

The immediate differences might be a) that for services for which an agency already existed, the relevant service staff would need to apply to join that agency or be retained initially as contractors, with the intention of the org ultimately moving to the agency for the service; b) there would be some setup hassle with registering new organisations etc (but resolving this would be a priority for the rest of the community).

But these new orgs would be decoupled from each other, such that over time they could try working with different orgs, each seeking the optimal relationships for their specific role.

Finally, here’s an illustration of what this future version of the effective altruism ecosystem could look like:

[Image: an illustration of a possible future effective altruism ecosystem]

Acknowledgements

Thanks to Evan Chu, Emrik Garden, Jonas Wagner, Krystal Ha, Siao Si Looi, Dony Christie, Jonas Moss, Tazik Shahjahan and Onni Arne for input on this post. Needless to say, mistakes are entirely my cat's fault.


  1. Much of the argument here maps onto the single responsibility principle in object-oriented software design. This states that each object or class should have a single concern. ↩︎

  2. This holds both ‘vertically’, as in allowing EAs to choose the level of specificity at which they feel comfortable targeting their support (eg shorttermist < global poverty < diseases < neglected tropical diseases < schistosomiasis < deworming project), and ‘horizontally’, as in allowing EAs to choose the focus at their given level of specificity (eg within global poverty, diseases=education=women’s empowerment=mental health). ↩︎

  3. The purported benefits of the single responsibility principle are semi-intentionally analogous to those I describe here:

    • ‘Easier to understand’ - cf section 1
    • ‘Easier to maintain’ - cf section 2 (kinda)
    • ‘More reusable’ - cf section 3

    While I’m somewhat suspicious of my epistemics here, having come from a software development background and just happened to see it this way, I do think there’s some justification for this analogy. To wit, computer programs have a single goal or clearly aligned set of goals, which all of their modules are set up to serve. The same can’t be said of the for-profit space or of non-EA nonprofits, but it’s uniquely true - at least truer - of the EA space. While EAs inevitably have some value disparity, it’s generally smaller, more explicit and more compartmentalisable - as in, animal welfarists, global povertyists and longtermists are quite closely value aligned among themselves, even when not with each other. So even if the EA space as a whole isn’t value-aligned enough for such a model (and I’m not sure it isn’t at the organisational level), it’s easy to identify subsets of it which are. ↩︎

  4. ‘Agency’ vs ‘consultancy’ is a vague distinction, but I’m using the former for consistency with the rest of this post: a consultancy typically provides a superset of the services an agency does, so where those services aren’t intrinsically linked, the argument of the rest of this post would apply to splitting the consultancy. ↩︎



Comments (4)



If we believe the EA space benefits from wisdom of the crowd, then that wisdom is purer the easier it is for EAs to donate money to specific causes or projects they believe in.

I'm really not sure this is true. A market is one way of aggregating knowledge and preferences, but there are others (e.g. democracy). And as in a democracy, we expect many or most decisions to be better handled by a small group of people whose job it is. On the one hand this may not be a problem if one could delegate their voice/donation to others in some or all areas, or if grantmakers could handle a pool of donors, each with different preferences and different levels of granularity. On the other hand, maybe people will be too confident in their preferences (regarding altruism - so, other people's needs and preferences).

Combining functions can allow organisations to develop internal synergies; separating functions can create overhead... Having said that, splitting organisations needn’t remove close cooperation or even physical proximity between their former departments - it just makes it an option, rather than a necessity.

First of all, relevant xkcd. Secondly, this may be true in some aspects but not in others, and I'd still expect overhead to increase, or some things to become much more challenging.

As an example, whenever I fill a CEA form, there are some options to choose from regarding who my data can be shared with. If I don't want my data connected with that from another EA org, or seen by my friends at some software providing org, this could be much harder to accomplish once functions are split out.

I'm really not sure this is true. A market is one way of aggregating knowledge and preferences, but there are others (e.g. democracy). And as in a democracy, we expect many or most decisions to be better handled by a small group of people whose job it is.

This doesn't sound like most people's view on democracy to me. Normally it's more like 'we have to relinquish control over our lives to someone, so it gives slightly better incentives if we have a fractional say in who that someone is'.

I'm reminded of Scott Siskind on prediction markets - while there might be some grantmakers who I happen to trust, EA prioritisation is exceptionally hard, and I think 'have the community have as representative a say in it as they want to have' is a far better Schelling point than 'appoint a handful of gatekeepers and encourage everyone to defer to them'.

First of all, relevant xkcd.

This seems like a cheap shot. What's the equivalent of systemwide security risk in this analogy? Looking at the specific CEA form example, if you fill out a feedback form at an event, do CEA currently need to share it among their forum, community health, and movement building departments? If not, then your privacy would actually increase post-split, since the minimum number of people you could usefully consent to sharing it with would have decreased.

Also, what's the analogy where you end up with an increasing number of sandboxes? The worst case scenario in that respect seems to be 'organisations realise splitting didn't help and recombine to their original state'.

Secondly, this may be true in some aspects but not in others, and I'd still expect overhead to increase, or some things to become much more challenging.

I agree in the sense that overhead would increase in expectation, but a) the gains might outweigh it - IMO higher fidelity comparison is worth a lot and b) it also seems like there's a <50% but plausible chance that movement-wide overhead would actually decrease, since you'd need shared services for helping establish small organisations. And that's before considering things like efficiency of services, which I'm confident would increase for the reasons I gave here.

Not going to make any recommendation about splitting vs not splitting in any practical cases, since there are many tradeoffs here, but I think the arguments are interesting! I like the idea of smaller competitive units being more efficient in terms of finding the best fit for each role.

If you maximise for the sum of two simultaneous dice rolls, it's going to take more rolls on average to reach a sum of at least some target value compared to if you were allowed to roll each die separately. In the latter case, if you roll a high number on the first die, you can move on to rolling the second die. But if you have to roll both at once, you could get a high number on one and a low number on the other, so you'd have to roll both again for a chance of a higher sum. The divergence grows with the number of dice and the range of values.

The point is that if you want to maximise the sum of quality for a set of orgs, it's going to be more efficient if you have smaller competitive units (rolling two dice sequentially rather than having to roll both at once), and splitting orgs could perhaps be a way of achieving that.
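For what it's worth, here's a quick Monte Carlo sketch of that dice intuition (a toy illustration with arbitrary parameters): given the same total number of rolls, being able to keep each die's best roll separately beats having to keep the best simultaneous pair.

```python
# A toy simulation of the dice analogy (parameters are arbitrary).
import random

random.seed(0)
TRIALS, ROLLS_PER_DIE = 100_000, 5

joint_total = 0     # dice are coupled: only simultaneous pairs count
separate_total = 0  # each die can be optimised independently

for _ in range(TRIALS):
    die_a = [random.randint(1, 6) for _ in range(ROLLS_PER_DIE)]
    die_b = [random.randint(1, 6) for _ in range(ROLLS_PER_DIE)]

    # Joint: keep the best sum among the pairs rolled together.
    joint_total += max(a + b for a, b in zip(die_a, die_b))

    # Separate: keep each die's best roll, then sum them.
    separate_total += max(die_a) + max(die_b)

print(f"Best joint sum (average):    {joint_total / TRIALS:.2f}")
print(f"Best separate sum (average): {separate_total / TRIALS:.2f}")
```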

Although you may also argue that it's easier for a larger org to find the best fit for an individual role, due to concentration of expertise and experience, and because orgs hiring internally may have more information about what the best fit for a particular role is compared to 'wisdom of the crowd'-aggregated opinions of potential funders. Hence the trade-offs, and my reluctance to come to a conclusion.

Another frame to put on this is that a good program needs to be modular: its components shouldn't be tightly coupled unless they have to be. That way it's easier to locate bugs, damage is localised, and there's less buildup of technical/design debt.

A community with a purpose is a program. It makes sense to spread out points of failure, to be safer. We also structurally learn faster: the cost of failure is lower, so it's easier to adapt to the new information failure teaches us, and faster to iterate. It also makes sense to be modular, so it's easier to adopt new innovations without having to rewrite/refactor the whole thing from scratch.

You want to have a good ontology of what you're learning in order to be able to swap out ideas more cleanly, and to localise points of failure. For many of the same reasons, you want to have a good "ontology" of organisations in your community (or whatever the components of the community are).

Probably one of the reasons the brain has evolved to learn sparse encodings of representations is so that there's less interdependency, more cohesion, and therefore also less buildup of technical debt.
