
Epistemic status: I lean toward 'yes, in some cases' - but a deeper dive into the question could be a valuable, though potentially large, research project. In this post I’ll just provide some intuitions for why I think it could be an important question to ask.

Introduction

I recently argued that EA organisations should outsource their tech work to nonprofit agencies. This got me wondering if the arguments more generally imply that EA organisations should strive to divide themselves to get as close as possible to having a single responsibility.[1]

Splitting as often as possible

Higher fidelity funding signals

Let’s imagine a hypothetical charity, Artificial Intelligence & Malaria Longtermist Exploration & Shorttermist Solutions (AIMLESS), that spends 50% of its effort on distributing bednets with world-class efficiency and 50% on performing mediocre AI research.

If we believe the EA space benefits from wisdom of the crowd, then that wisdom is purer the easier it is for EAs to donate money to specific causes or projects they believe in.[2] If AIMLESS is the best bednet-distributing organisation, it will be difficult or impossible to tell in what proportion its donors are supporting the bednets or the AI research - and which one puts off non-donors who might have supported one of the programs.

Better inter-organisational comparisons

We tend to think of the competition between for-profits as allowing an organic comparison that isn’t possible between nonprofits, but enthusiastic splitting could turn that on its head. Value-aligned nonprofits can make comparison far clearer if they split themselves enough that what they’re doing is directly comparable.

This needn’t mean they necessarily compete - multiple tech nonprofit agencies could exist in the EA space, each focusing on a different cause area, for example. But the more granular comparability would make it much easier to diagnose when one is outperforming the other, and much easier to fix the problem. The stronger org could share its practices with the weaker one to start with, and if that didn’t work, could start competing directly for funding - allowing underperforming organisations to fulfil the crucially important function of ceasing to exist. Then, if one went to zero funding, the other would aim to split into two as soon as possible so that the ecosystem continued with minimal interruption (unless the defunct one’s issue had been that it wasn’t in a viable niche).

If one were inclined toward awful neologisms, one could call this coopetition.

More efficient allocation of time for support staff

In principle, extracting support departments from multiple organisations into a smaller number of consultancies or agencies allows the same number of staff to produce significantly more value - I showed one way to quantify that benefit here. To give a quick summary: if we assume that the amount of valuable work for any given department fluctuates over time and that the departments’ stated goals have similar value, then combining multiple departments into a single external organisation means that organisation can always prioritise the highest-priority work from any of its dependent organisations.
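As a toy illustration of that intuition (my own sketch, not the quantification linked above - the org count, task counts and value distribution are arbitrary assumptions), here’s a minimal simulation comparing a siloed setup, where each org’s in-house department can only work from its own backlog, with a pooled agency of the same total headcount that can pick the most valuable tasks across all orgs:

```python
import random

random.seed(0)

N_ORGS = 5            # hypothetical number of client organisations
TASKS_PER_ORG = 3     # candidate tasks each org generates per period (assumed)
N_PERIODS = 10_000

siloed_total = 0.0
pooled_total = 0.0

for _ in range(N_PERIODS):
    # Each task's value fluctuates randomly from period to period.
    backlogs = [[random.expovariate(1.0) for _ in range(TASKS_PER_ORG)]
                for _ in range(N_ORGS)]

    # Siloed: each org's in-house department (one person) does the best task
    # available in its own backlog, however low its value happens to be.
    siloed_total += sum(max(backlog) for backlog in backlogs)

    # Pooled: one agency with the same total headcount (N_ORGS people) picks
    # the N_ORGS most valuable tasks from the combined backlog of every org.
    combined = sorted((v for backlog in backlogs for v in backlog), reverse=True)
    pooled_total += sum(combined[:N_ORGS])

print(f"Average value per period, siloed: {siloed_total / N_PERIODS:.2f}")
print(f"Average value per period, pooled: {pooled_total / N_PERIODS:.2f}")
```

Under these stipulated assumptions the pooled agency reliably delivers more value per period, simply because it never has to spend capacity on a low-value task just because that task belongs to ‘its’ organisation.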

These agencies could either be client-funded or donor-funded, in the latter case doing some amount of their own internal prioritisation. I discussed the advantages of each approach here. A third option would be to determine their priorities via regranting orgs who specialise in prioritising among subareas, and who distribute grants among those so as to optimise incentives - basically mimicking the operations and management of a large unitary organisation.[3]

But no oftener

If the titular hypothesis is true, it should start conversations, not end them. In software development, where the single responsibility principle is fairly uncontroversial, understanding how to apply it well is an art form. It’s often better to delay a split even when you know one should happen, because you expect to learn more about how to split by waiting. So this post is a call for greater scrutiny of organisations’ focus, not for any specific chopping plan. Here are some considerations which might go either way…

Synergies

Combining functions can allow organisations to develop internal synergies; separating functions can create overhead. An organisation could have many different departments, but if those are all in service of a coherent goal, splitting may be unwise. Sometimes a large responsibility might necessitate a large organisation.

Having said that, splitting organisations needn’t remove close cooperation or even physical proximity between their former departments - it just makes those an option rather than a necessity. It also needn’t remove any functions. If the judgement of the executives of a multi-responsibility organisation were good enough that people wanted to continue to fund it, those executives could still become, say, one or more regranting/consulting/management organisations - ones that could now be evaluated separately from the direct workers.

[Embedded video: promotion in Starship Troopers - this kind of philosophy]

Setting up a single charity is a pain. Setting up multiple charities is multiple pains. This could be offset somewhat by having easy access to a specialised EA-focused legal agency, or by just not becoming charities if most of their funding would be granted by donor charities (meaning there wouldn’t be a tax incentive for charitable status).

Restricted donations as an alternative

This is possible and in practice common, but seems to lose many of the benefits. Suppose AIMLESS considered restricted donations as an alternative to splitting. Restricted donations would have at least two issues that splitting would avoid:

  • Counterfactual fungibility: if Alice gives $x restricted to its malaria program and Bob then gives $x unrestricted, it seems likely the charity will divert all of the latter to its AI program, leading to the same outcome as if both donations had been unrestricted. This is less of a problem the greater the proportion of its grants that are restricted, but the organisation will always be incentivised to seek unrestricted grants, or to seek specific ‘restrictions’ that match its existing intentions.
  • Logistical ambiguity: assuming AIMLESS has any support staff or other workers whose time is naturally split between both projects, even with the most scrupulous intentions it’s going to be very hard to ensure they split their attention between projects in proportion to the donation restrictions.

Classes of division

One could slice up a nonprofit in at least two ways: by shared services, such as tech, marketing, legal, HR, design and perhaps some types of research; and by focused services or products, such as distributing bednets. The arguments above apply to both, but the logistics of splitting them might vary predictably.

Shared services

  • We’re already familiar with the concept of shared service agencies and consultancies; we seem to be increasingly viewing them as a good thing,[4] and such services are already being set up.
  • These services are often essential - an org often cannot forgo legal input, hiring or having a website.

So shared service agencies can be established before orgs split off their service departments, and would need to be - unless the org converted its department into the first such agency.

Products

  • It will rarely seem appealing to create an organisation to do work that an existing organisation already covers, so a newly founded organisation typically won’t be directly comparable to an existing one - removing the advantages of competition.
  • As long as a product-oriented org has at least one product, having other distinct products is optional.

So splitting orgs by product doesn’t create new dependencies between separate organisations, though it probably requires more initiative to do.

Real world splitting

Splitting services

Any EA org that has an ‘X-service-department’ of at least 1 person should arguably split. If we want to encourage splitting, we might also want to develop encouraging community norms, e.g. praising those who try it, or more concretely having shared service agencies prioritise newly split organisations.

Splitting products

Product-splitting is more awkward, both because it’s harder to describe without singling out specific orgs, and because it’s harder to identify when multiple products constitute a single ‘responsibility’. A good sign that you might lean towards splitting is that there’s no succinct way of describing the sum of all your products without using the word ‘and’. A good sign that you might lean away from splitting is that, if you were interviewing someone for work on Product B, one of the most practically valuable qualifications they could have would be previous work on Product A, and vice versa (example, further discussion). Recombining later is always possible, and perhaps less subject to status quo bias.

But if the hypothesis of this post is correct, organisations should err towards splitting when in doubt. I offer the following examples for discussion, but don’t want to commit either way on any of them:

CEA, whose products* include

  • Community building grants
  • Group support
  • The EA forum
  • Other EA websites
  • Running events
  • Supporting EAGx events
  • Community health
  • Media training, in some capacity

80k, whose products* include

  • General careers advice (web content)
  • Bespoke careers advice (1-on-1s)
  • Careers research
  • Podcast
  • Job board

All with a broad longtermist focus.

Founders Pledge, whose products* include

  • Philanthropic advisory
  • Cause area research
  • Multiple DAFs
  • Pledges
  • Events
  • Community building

Rethink Priorities, whose products* include

  • Explicitly longtermist research
  • Explicitly shorttermist research

*Some of these are more like services, but since the case for splitting off services seems both stronger and clearer than the case for splitting off products, it’s not important to distinguish here.

Summary

To paint a clear picture, a post-split org would likely start out with mostly the same people working in the same shared office (if they had one) on the same projects, with the same product staff seeking project management from the same managers.

The immediate differences might be a) that for services for which an agency already existed, the relevant service staff would need to apply to join that agency or be retained initially as contractors, with the intention of the org ultimately moving to the agency for the service; b) there would be some setup hassle with registering new organisations etc (but resolving this would be a priority for the rest of the community).

But these new orgs would be decoupled from each other, such that over time they could try working with different orgs, each seeking the optimal relationships for their specific role.

Finally, here’s an illustration of what this future version of the effective altruism ecosystem could look like:

[Illustration: future of EA]

Acknowledgements

Thanks to Evan Chu, Emrik Garden, Jonas Wagner, Krystal Ha, Siao Si Looi, Dony Christie, Jonas Moss, Tazik Shahjahan and Onni Arne for input on this post. Needless to say, mistakes are entirely my cat's fault.


  1. Much of the argument here maps onto the single responsibility principle in object-oriented software design, which states that each object or class should have a single concern (a toy code sketch follows these footnotes). ↩︎

  2. This holds both ‘vertically’, as in allowing EAs to choose the level of specificity at which they feel comfortable targeting their support (eg shorttermist < global poverty < diseases < neglected tropical diseases < schistosomiasis < deworming project), and ‘horizontally’, as in allowing EAs to choose the focus at their given level of specificity (eg within global poverty, diseases = education = women’s empowerment = mental health). ↩︎

  3. The purported benefits of the single responsibility principle are semi-intentionally analogous to those I describe here:

    • ‘Easier to Understand’ - cf section 1
    • ‘Easier to maintain’ - cf section 2 (kinda)
    • ‘More reusable’ - cf section 3

    While I’m somewhat suspicious of my epistemics here, having come from a software development background and just happened to see it this way, I do think there’s some justification for this analogy. To wit, computer programs have a single goal or clearly aligned set of goals, which all of their modules are set up to serve. The same can’t be said of the for-profit space or of non-EA nonprofits, but it’s uniquely true - at least truer - of the EA space. While EAs inevitably have some value disparity, it’s generally smaller, more explicit and more compartmentalisable - as in, animal welfarists, global povertyists and longtermists are quite closely value aligned among themselves, even when not with each other. So even if the EA space as a whole isn’t value-aligned enough for such a model (and I’m not sure it isn’t at the organisational level), it’s easy to identify subsets of it which are. ↩︎

  4. ‘Agency’ vs ‘consultancy’ is a vague distinction, but I’m using the former for consistency with the rest of this post: a consultancy typically provides a superset of the services an agency provides, so where those services aren’t intrinsically linked, the argument of the rest of this post would apply to splitting the consultancy. ↩︎
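For readers without a software background, here is a minimal, hypothetical sketch of the single responsibility principle referred to in footnotes 1 and 3 (the class and method names are invented purely for illustration):

```python
# Before: one class with two reasons to change (report content and email delivery),
# roughly analogous to AIMLESS running bednet distribution and AI research under one roof.
class Report:
    def __init__(self, body: str):
        self.body = body

    def send_by_email(self, address: str) -> None:
        print(f"Emailing {len(self.body)} characters to {address}")


# After: each class has a single responsibility, so each can be changed, evaluated,
# or replaced independently - the code-level analogue of splitting an organisation.
class ReportContent:
    def __init__(self, body: str):
        self.body = body


class EmailSender:
    def send(self, content: ReportContent, address: str) -> None:
        print(f"Emailing {len(content.body)} characters to {address}")
```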

Comments

If we believe the EA space benefits from wisdom of the crowd, then that wisdom is purer the easier it is for EAs to donate money to specific causes or projects they believe in.

I'm really not sure this is true. A market is one way of aggregating knowledge and preferences, but there are others (e.g. democracy). And as in a democracy, we expect many or most decisions to be better handled by a small group of people whose job it is. On the one hand this may not be a problem if one could delegate their voice/donation to others in some or all areas, or if grantmakers could handle a pool of donors, each with different preferences and different levels of granularity. On the other hand, maybe people will be too confident in their preferences (regarding altruism - so, other people's needs and preferences).

Combining functions can allow organisations to develop internal synergies; separating functions can create overhead... Having said that, splitting organisations needn’t remove close cooperation or even physical proximity between their former departments - it just makes it an option, rather than a necessity.

First of all, relevant xkcd. Secondly, this may be true in some aspects but not in others, and I'd still expect overhead to increase, or some things to become much more challenging.

As an example, whenever I fill in a CEA form, there are some options to choose from regarding who my data can be shared with. If I don't want my data connected with that from another EA org, or seen by my friends at some software-providing org, this could be much harder to accomplish once functions are split out.

I'm really not sure this is true. A market is one way of aggregating knowledge and preferences, but there are others (e.g. democracy). And as in a democracy, we expect many or most decisions to be better handled by a small group of people whose job it is.

This doesn't sound like most people's view on democracy to me. Normally it's more like 'we have to relinquish control over our lives to someone, so it gives slightly better incentives if we have a fractional say in who that someone is'.

I'm reminded of Scott Siskind on prediction markets - while there might be some grantmakers who I happen to trust, EA prioritisation is exceptionally hard, and I think 'have the community have as representative a say in it as they want to have' is a far better Schelling point than 'appoint a handful of gatekeepers and encourage everyone to defer to them'.

First of all, relevant xkcd.

This seems like a cheap shot. What's the equivalent of systemwide security risk in this analogy? Looking at the specific CEA form example, if you fill out a feedback form at an event, do CEA currently need to share it among their forum, community health, and movement-building departments? If not, then your privacy would actually increase post-split, since the minimum number of people you could usefully consent to sharing it with would have decreased.

Also, what's the analogy where you end up with an increasing number of sandboxes? The worst case scenario in that respect seems to be 'organisations realise splitting didn't help and recombine to their original state'.

Secondly, this may be true in some aspects but not in others, and I'd still expect overhead to increase, or some things to become much more challenging.

I agree in the sense that overhead would increase in expectation, but a) the gains might outweigh it - IMO higher fidelity comparison is worth a lot and b) it also seems like there's a <50% but plausible chance that movement-wide overhead would actually decrease, since you'd need shared services for helping establish small organisations. And that's before considering things like efficiency of services, which I'm confident would increase for the reasons I gave here.

Not going to make any recommendation about splitting vs not splitting in any practical cases, since there are many tradeoffs here,  but I think the arguments are interesting! I like the idea of smaller competitive units being more efficient in terms of finding the best fit for each role.

If you maximise for the sum of two simultaneous dice rolls, it's going to take more rolls on average to reach any given target sum than if you were allowed to roll each die separately. For the latter case, if you roll a high number on the first die, you can move on to rolling the second die. But if you have to roll both at once, you could get a high number on one and a low number on the other, so you'd have to roll both again for a chance of a higher sum. The divergence grows with the number of dice and the range of values.

The point is that if you want to maximise the sum of quality for a set of orgs, it's going to be more efficient if you have smaller competitive units (rolling two dice sequentially rather than having to roll both at once), and splitting orgs could perhaps be a way of achieving that.
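Here's a minimal Monte Carlo sketch of the dice intuition above (the six-sided dice and the target of 11 are arbitrary illustrative choices):

```python
import random

random.seed(0)

SIDES = 6
TARGET = 11        # hypothetical target sum, chosen only for illustration
TRIALS = 100_000


def rolls_simultaneous() -> int:
    """Both dice are rolled together; only a single pair that reaches the target counts."""
    rolls = 0
    while True:
        pair_sum = random.randint(1, SIDES) + random.randint(1, SIDES)
        rolls += 2
        if pair_sum >= TARGET:
            return rolls


def rolls_separate() -> int:
    """Each die can be rolled on its own, keeping the best value seen so far on each."""
    best = [0, 0]
    rolls = 0
    while best[0] + best[1] < TARGET:
        i = 0 if best[0] <= best[1] else 1   # reroll whichever die is the bottleneck
        best[i] = max(best[i], random.randint(1, SIDES))
        rolls += 1
    return rolls


together = sum(rolls_simultaneous() for _ in range(TRIALS)) / TRIALS
separately = sum(rolls_separate() for _ in range(TRIALS)) / TRIALS
print(f"Average die rolls to reach {TARGET}+: together={together:.1f}, separately={separately:.1f}")
```

With these assumptions the 'separate' regime reaches the target in far fewer individual rolls on average, and, as noted above, the divergence grows with the number of dice and the range of values.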

Although you might also argue that it's easier for a larger org to find the best fit for an individual role, due to concentration of expertise and experience, and because orgs hiring internally may have more information about what the best fit for a particular role is compared to 'wisdom of the crowd'-aggregated opinions of potential funders. Hence the trade-offs, and me being reluctant to come to a conclusion.

Another frame to put on this is that a good program needs to be modular; its components shouldn't be tightly coupled unless they have to be. That way it's easier to locate bugs, damage is localised, and there's less buildup of technical/design debt.

A community with a purpose is a program. It makes sense to spread out points of failure, to be safer. We also structurally learn faster: the cost of failure is lower, so it's easier to adapt to the new information failure teaches us, and faster to iterate. It also makes sense to be modular, so it's easier to adopt new innovations without having to rewrite/refactor the whole thing from scratch.

You want to have a good ontology of what you're learning in order to be able to swap out ideas more cleanly, and to localise points of failure. For many of the same reasons, you want to have a good "ontology" of organisations in your community (or whatever the components of the community are).

Probably one of the reasons the brain has evolved to learn sparse encodings of representations is so that there's less interdependency, more cohesion, and therefore also less buildup of technical debt.
