This post is a slightly belated contribution to the Strategy Fortnight. It represents my personal takes only; I’m not speaking on behalf of any organisation I’m involved with. For some context on how I’m now thinking about talking in public, I’ve made a shortform post here. Thanks to the many people who provided comments on a draft of this post.
Intro and Overview
How does decision-making in EA work? How should it work? In particular: to what extent is decision-making in EA centralised, and to what extent should it be centralised?
These are the questions I’m going to address in this post. In what follows, I’ll use “EA” to refer to the actual set of people, practices and institutions in the EA movement, rather than EA as an idea.
My broad view is that EA as a whole is currently in the worst of both worlds with respect to centralisation. We get the downsides of appearing (to some) like a single entity without the benefits of tight coordination and clear decision-making structures that centralised entities have.
It’s hard to know whether the right response to this is to become more centralised or less. In this post, I’m mainly hoping just to start a discussion of this issue, as it affects a wide range of decisions in EA. [1] At a high level, though, I currently think that the balance of considerations tends to push in favour of decentralisation relative to where we are now.
But centralisation isn’t a single spectrum, and we can break it down into sub-components. I’ll talk about this in more depth later in the post, but here are some ways in which I think EA should become more decentralised:
Perception: At the very least, wider perception should reflect reality on how (de)centralised EA is. That means:
- Core organisations and people should communicate clearly (and repeatedly) about their roles and what they do and do not take ownership for. (I agree with Joey Savoie’s post, which he wrote independently of this one.)
- We should, insofar as we can, cultivate a diversity of EA-associated public figures.
- [Maybe] The EA Forum could be renamed. (Note that many decisions relating to CEA will wait until it has a new executive director).
- [Maybe] CEA could be renamed. (This is suggested by Kaleem here.)
Funding: It’s hard to fix, but it would be great to have a greater diversity of funding sources. That means:
- Recruiting more large donors.
- Some significant donor or donors start a regranters program.
- More people pursue earning to give, or donate more (though I expect this “diversity of funding” consideration to have already been baked into most people’s decision-making on this). Luke Freeman has a moving essay about the continued need for funding here.
Decision-making:
- Some projects that are currently housed within EV could spin out and become their own legal entities. The various different projects within EV have each been thinking through whether it makes sense for them to spin out. I expect around half of the projects will ultimately spin out over the coming year or two, which seems positive from my perspective.
- [Maybe] CEA could partly dissolve into sub-projects.
Culture:
- We could try to go further to emphasise that there are many conclusions that one could come to on the grounds of EA values and principles, and celebrate cases where people pursue heterodox paths (as long as their actions are clearly non-harmful).
Here are some ways in which I think EA could, ideally, become more centralised (though these ideas crucially depend on someone taking them on and making them happen):
Information flow:
- Someone could create a guide to what EA is, in practice: all the different projects, and the roles they fill, and how they relate to one another.
- Someone could create something like an intra-EA magazine, providing the latest updates and featuring interviews with core EAs.
- Someone could take on a project of consolidating the best EA content and ideas, for example into a quarterly journal.
Provision of other services that benefit the EA ecosystem as a whole:
- Someone could set up an organisation or a team that’s explicitly taking on the task of assessing, monitoring and mitigating ways in which EA faces major risks, and could thereby fail to provide value to the world, or even cause harm.
- Someone could set up a leadership fast-track program.
And here are a couple of ways in which things are already highly decentralised, and in my view shouldn’t change:
Ownership:
- No-one owns “EA” as a brand, or its core ideas.
Group membership:
- Anyone can call themselves a part of the EA movement.
Thinking through the issue of decentralisation has also led me to plan to make some changes to how I operate in a decentralised direction:
Decision-making:
- I plan to step down from the board of Effective Ventures UK once we have more capacity.
Perception:
- I plan to go further to distance myself from the idea that I’m “the face” of EA, or a spokesperson for all of EA. (This hasn’t been how I’ve ever seen myself, but is how I’m sometimes perceived.)
In a being-helpful-where-I-can way (rather than “taking-ownership-for-this-thing” way), I’m also spending some time trying to bring in new donors, and help support other potential public figures. I’m not doing anything, for now, in the direction of further centralisation.
A final caveat I’ll make on all the above is that this is how I see things for now. The question of centralisation is super hard, and what makes sense will change depending on the circumstances of the time. Early EA (prior to ~2015) was notably less centralised than it was after that point, and I think that at that time increased centralisation was a good thing. In the future, I’m sure there’ll be further changes that will make sense, too, in both decentralised and centralised directions.
The rest of this post is structured as follows:
- First, I give an overview of how decision-making currently works in EA, as it seems to me.
- Second, I give a high-level discussion of the question of where EA should be on the centralisation spectrum, making comparisons with other movements or groups.
- Finally, I get into specifics of things that could or should change.
How decision-making works in EA
A number of people have commented on the Forum that they don’t feel they understand how decision-making works in EA, and I’ve sometimes seen misinformation floating around; this confusion is often about how centralised EA is.
So I’m going to try to clarify things a bit. It’s tough to describe the situation exactly, because the reality is a middle ground between a highly centralised decision-making entity like a company and complete anarchy. And where exactly EA lies between those two extremes often depends on what exactly we’re talking about.
Anyway, here goes. Some ways in which the EA movement is centralised:
- A single funder (Open Philanthropy, “OP”) allocates the large majority (around 70%[2]) of funding that goes to EA movement-building. If you want to do an EA movement-building project with a large budget ($1m/yr or more), you probably need funding from OP, for the time being at least. Vaidehi Agarwalla’s outstandingly helpful recent post gives more information.
- Effective Ventures US and UK (“EV”) currently house the majority of EA movement-building work.
- The senior figures in EA are in fairly regular communication with each other (though there’s probably less UK<>US communication than there should be).
- It’s not totally determinate who is a “senior figure”, and it varies over time, but the current list of people would at least include: Nick Beckstead, Alexander Berger, Max Dalton, Holden Karnofsky, Howie Lempel, Brenton Mayer, Tasha McCauley, Toby Ord, Lincoln Quirk, Nicole Ross, Eli Rose, Zach Robinson, James Snowden, Ben Todd, Ben West, Claire Zabel, and me. All of these people have had or currently have positions at OP or senior positions at EV.
- There’s usually an annual meeting, the Coordination Forum (formerly called the “Leaders’ Forum”), of around 30 people, which is run by CEA largely as an unconference, for senior or core people. This year, there hasn’t been an equivalent so far, but there will probably be one later in the year.
- Normally, before someone embarks on a major project, they get feedback on it from a wide variety of people, and there’s a culture of not taking “unilateralist” action if most other people think that the project is harmful, even if it seems good to the person considering it. (Ideally, in a binary choice and given a number of assumptions, one pursues the action if the median estimate of its expected value, among the people assessing it, is positive. It’s debatable to what extent this rule is followed in practice in EA, or to what extent the simple models in that paper are good guides to reality.)
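The median-estimate rule just described can be sketched in a few lines. This is my own toy illustration, not code from any EA tool, and the estimates are made-up numbers:

```python
# Toy sketch of the median-estimate rule for avoiding the unilateralist's
# curse: rather than acting whenever your own estimate of a project's
# expected value is positive, act only if the median of all assessors'
# estimates is positive.
from statistics import median

def should_proceed(estimates):
    """estimates: each assessor's estimate of the project's expected value."""
    return median(estimates) > 0

# One enthusiast thinks the project is great, but most assessors judge it
# net-negative: the median rule says don't proceed.
print(should_proceed([10.0, -2.0, -1.0, -3.0, -0.5]))  # False

# Broad agreement that the project is mildly positive: proceed.
print(should_proceed([1.0, 0.5, 2.0, -0.5, 1.5]))  # True
```

The point of using the median is that the single most optimistic estimate no longer decides the outcome, which is exactly the failure mode the unilateralist’s curse describes.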
Some ways in which EA is decentralised:
- There’s no one, and no organisation, who conceives of themselves as taking ownership of EA, or as being responsible for EA as a whole.
- CEA doesn’t see itself in this way. For example, here it says, “We do not think of ourselves as having or wanting control over the EA community. We believe that a wide range of ideas and approaches are consistent with the core principles underpinning EA, and encourage others to identify and experiment with filling gaps left by our work."
- EV doesn’t see itself in this way, and it includes projects that don’t consider themselves to be part of the EA movement or engaged in EA movement-building (such as Centre for the Governance of AI, Longview Philanthropy, and Wytham Abbey).
- The partial exception to this is CEA’s community health team, on issues of misconduct in the community, though even there they are well aware of the limited amount of control they have.
- There is no trademark on “effective altruism.” Anyone can start a project that has “effective altruism” in the name.
- There’s no requirement for EA organisations to be affiliated with Effective Ventures, and many aren’t, such as Rethink Priorities, the Global Challenges Project and some country-level organisations such as Effective Altruism UK.
- There are a number of distinct core EA projects (CEA, 80,000 Hours, Giving What We Can, Rethink Priorities, Global Challenges Project, etc.) that make independent strategic plans.
- There’s no CEO or “leadership team” of EA. There aren’t any formal roles that would be equivalent to C-level executives at a company. It’s vague who counts as a “senior EA”.
- Across Effective Ventures US and UK, for example, in practice decision-making is currently shared between two boards, two CEOs, and the CEO or Executive Director of every project within the legal entities (e.g. CEA, 80,000 Hours, Giving What We Can, EA Funds, Centre for the Governance of AI, etc), who develop their projects’ annual plans and strategy, including making many of the most important decisions relevant to the movement as a whole (e.g. how to do marketing, and which target audience to have).
- There are a number of donors who are, in absolute terms, major, as well as a diversity of funding sources, such as EA Funds and the Survival and Flourishing Fund. They are generally very keen to fund things that they think OP is overlooking.
- Generally, I find there’s a very positive attitude among senior EAs towards competition within the EA ecosystem.
- The Global Challenges Project is illustrative. Emma Abele and James Aung thought that CEA was doing a suboptimal job with (some) student groups. So they set up their own project, got funding from Open Philanthropy, and did a great job.
- Similarly, Probably Good was set up as being (in some ways) a competitor to 80,000 Hours, because the founders thought that 80,000 Hours was lacking in some important ways; it has received support from Open Philanthropy and encouragement from 80,000 Hours.
In general, coordination is pretty organic and informal, and happens in one of two ways:
- People or organisations come up with plans, proactively get feedback on their plans, get told the ways in which their plans are good or bad, and they revise them.
- Someone (or some people) have an idea that they think should exist in the world, and then shop it around to see if someone wants to take it on.
Overall, the best analogy I can think of is that EA functions as a “do-ocracy”. Here is a short article on do-ocracy, which is well worth reading. A slogan defining do-ocracy, which I like, is: “If you want something done, do it, but remember to be excellent to each other when doing so.” (Where, within EA, the ‘be excellent’ caveat covers non-unilateralism and taking externalities across the movement seriously.) I think this represents both how EA actually works and how most senior EAs understand it as working.
I think that the main way EA departs from being a do-ocracy is that many people might not perceive it that way (very naturally, because it hasn’t yet been publicly defined that way); there’s a culture where people sometimes feel afraid of unilateralism, even in cases where that fear doesn’t make sense. If that’s true, it means that some people don’t do things because they feel they aren’t “allowed” to, or perhaps because they think that someone else has responsibility, or has figured it all out.
Compared to a highly-centralised entity like a company, the semi-decentralised / do-ocracy nature of EA has a few important upshots. This is the part of the post I feel most nervous about writing, because I’m worried that others will interpret this as me (and other “EA leaders”) disavowing responsibility; I’m already anxiously visualising criticism on this basis. But it seems both important and true to me, so I still want to convey it. The upshots are:
- If something bad happens, it’s natural to look for who is formally responsible for the problem. (And, in a company, there’s always someone who is ultimately formally responsible: responsibility bottoms out with the CEO). But, often, the answer is that there’s no one who was formally responsible, and no one who was formally responsible for making sure that someone was formally responsible.
- It’s difficult for calls along the lines of, “Something should be done about X”, or “EA should do Y” to have traction, unless the call to action is targeted at some particular person or project, because there’s no one who’s ultimately in charge of EA, and who is responsible for generally making the whole thing go well. (See Lizka Vaintrob’s excellent post on this here).
- The reason for something happening or not happening is often less deep than one might expect, boiling down to “someone tried to make it happen” or “no one tried to make it happen”, rather than “this was the result of some carefully considered overarching strategy”. Moreover, the list of things it would be good to do is very long, and the bottleneck is normally there being someone with the desire, ability and spare capacity to take it on.
- Thoughts of “I’m sure this is the way it is because some more well-informed people have figured it out” are often incorrect, especially about things that aren’t happening.
I get the sense that the above points mark a major difference in how many people who work for core EA orgs see decision-making in EA working, and how it's perceived by some in the wider community. I have some speculative hypotheses about why there’s this discrepancy, but it’s a big digression so I’ve put it into a footnote. [3]
When thinking about how centralised or not EA is, or should be, it can be helpful to have in mind concrete potential analogies, and the strengths and weaknesses they have. Here’s a spectrum of organisations, in descending order from more to less centralised (as it seems to me):
- communist dictatorships (e.g. North Korea)
- the US army
- most companies (e.g. Apple)
- highly centralised religious groups (e.g. Mormonism)
- franchises (e.g. McDonald’s)
- the Scouts
- mixed economies (the US, UK)
- registered clubs and sports groups (e.g. The United States Golf Association; USA Basketball)
- intergovernmental decision-making
- fairly decentralised religious groups (e.g. Protestantism, Buddhism)
- most social movements (e.g. British Abolitionism, the American Civil Rights Movement)
- the scientific community
- most intellectual movements (e.g. behaviourism)
- the US startup scene
This is highly subjective, but it seems to me the overall level of centralisation within EA is currently similar to fairly decentralised religious groups, and many social movements.
It can also be helpful to break down “centralisation” into sub-dimensions, such as:
- Decision-making power: To what extent is what the group as a whole does determined by a small group of decision-makers?
- Are these decision-making structures formal or informal?
- Do these decision-makers have control over resources, including financial resources?
- Who is accountable for success or failure? Are these accountability mechanisms formal or informal?
- Ownership: Is there legal ownership of constitutive aspects of the group (e.g. intellectual property, branding)?
- Group membership: How strong is the ability to determine membership in the group? How hard is it for someone in the group to leave, or for someone outside the group to enter? And how tightly-defined is group membership?
- Are there formal mechanisms for doing this, or merely informal?
- Information flow: To what extent does information flow merely from decision-makers down to other group members, and to what extent does it flow back up to decision-makers, or horizontally from one non-decision-maker to another?
- Culture: Do people within the group feel empowered to think and act autonomously, or do they feel they ought to defer to the views of high-status individuals within the group, or to the majority view within the group? [4]
On these dimensions, it seems to me that EA is currently fairly decentralised on group membership and information flow, very decentralised on ownership, and in the middle on decision-making power [5] and culture.
Should EA be more or less centralised?
At the moment, it seems to me we’re in the worst of both worlds, where many people think that EA is highly centralised, whereas really it’s in-between. We get the downsides of appearing (to some) like one entity without the benefits of tight coordination. For many issues, there’s a risk that people generally feel that the “central” groups and people will be in charge of all issues impacting EA and so there’s no need to do anything about any gaps they perceive, even when that’s not the case.
I’ll talk more about specific ways EA could centralise or decentralise in the next section. If we were going broadly in the direction of further centralisation, then, for example, CEA could explicitly consider itself as governing the community, and explicitly take on more roles. Going further in that direction, there could even be a membership system for being part of EA, like the Scouts has. If we were going broadly in the direction of further decentralisation, then CEA could change its name and perhaps separate into several distinct projects, some more projects could spin out of Effective Ventures, and we could all more loudly communicate that EA is a decentralised movement and cultivate a decentralised culture.
I’ll give the broad case both for and against further centralisation or decentralisation, and then get into specifics.
The broad case for further centralisation includes (in no particular order):
- There are some issues or activities that concern the community as a whole, or where there are major positive / negative externalities, or natural monopolies. These include:
- The handling of bad actors within EA, who can cause harm to the whole of the movement.
- Infohazards (e.g. around bio x-risk).
- Issues that impact on EA’s brand. For example, whether to associate with a very public new donor, or whether to run a public EA campaign.
- Given the ubiquity of fat-tailed distributions, semi-centralisation is almost inevitable. Wealth is heavily fat-tailed, so it’s very likely that one or a small number of funders end up accounting for most funding. [6] Similarly, fame (measured by things like number of social media followers, media mentions, or books sold) also seems to be fat-tailed, so it’s likely that one or a small number of people will end up accounting for most of the attention that goes towards specific people. We can try to combat this, but we’ll be fighting against strong forces in the other direction.
- The nonprofit world is very unlike a marketplace. Crucially, there isn’t a price mechanism which can aggregate decentralised information and indicate how the provision of goods and services should be prioritised and thereby incentivise the production of goods and services that are most needed. [7] So common arguments within economics that, under some conditions, favour something like market competition, don’t cleanly port over. [8]
- Centralisation can enable greater control over the movement in potentially-desirable ways. (Somewhat analogously, governments can help control an economy by printing money, setting interest rates, and so on.)
- For example, as movements grow, there’s a risk that their ideals become diluted over time, regressing to the mean of wider society’s views. Centralisation can be a way of preventing or slowing that tendency; perhaps the ideal growth rate for EA is faster or slower than the “organic” growth rate.
- In the absence of coordination, some projects might get started, or continue, for “unilateralist’s curse” type reasons: naturally, there will be a range of assessments of how good a potential or existing project is, and in the absence of coordination (or at least information-sharing), those who think the project is best will go ahead with it, even if it’s overall a bad idea.
- Centralisation can help enforce quality control, preventing low-integrity or low-quality projects from damaging the wider public’s perception of EA. [9]
- Decentralisation risks redundancy, with multiple people working on very similar projects. Centralisation gets benefits from economies of scale: there are certain things you only need to do or figure out once (e.g. setting up a legal entity, or having accounting, legal, and HR departments).
- No matter how the EA movement is structured, onlookers will often treat it as a single entity, interpreting actions from any one person or organisation as representative of the whole.
- It seems harder for a decentralised movement to centralise than it is for a centralised movement to decentralise. So, trying to be as centralised as possible at the moment preserves option value.
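To make the fat-tails point above concrete, here is a small toy illustration (my own, with made-up parameters): when donor wealth follows a heavy-tailed Pareto distribution, a single donor ends up holding a large share of all funds, without anyone designing it that way.

```python
# Toy illustration of why fat-tailed wealth makes funding concentration
# nearly inevitable. We build a deterministic Pareto-distributed sample
# via the inverse CDF, then measure the largest holder's share of the total.
ALPHA = 1.1  # tail index: lower alpha = fatter tail (assumed, illustrative)
N = 100      # number of hypothetical donors

# Inverse-CDF transform: x = (1 - u)^(-1/alpha) at evenly spaced quantiles u.
wealth = [(1 - i / (N + 1)) ** (-1 / ALPHA) for i in range(1, N + 1)]

total = sum(wealth)
top_share = max(wealth) / total
top5_share = sum(sorted(wealth)[-5:]) / total
print(f"Largest of {N} donors holds {top_share:.0%} of total wealth")
print(f"Top 5 donors together hold {top5_share:.0%}")
```

With these (made-up) parameters the single largest donor holds roughly 15–20% of total wealth, and the top five together hold well over a third. The point is just that heavy concentration falls out of the distribution itself, not out of any centralising intent.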
The broad case for further decentralisation includes (in no particular order):
- People in EA are doing a wide variety of things, and it’s hard for one organisation to speak to and satisfy all the different sub-cultures within EA at once. There are very different needs and interests from, for example, student activists, academics, people working in national security, old-time rationalists, major philanthropists, etc, and among people working in different cause areas.
- Relatedly, decentralised decision-making benefits from local knowledge. The way EA should be thought about or communicated across causes and countries will be very different; decisions about how EA should be adapted to those contexts are probably best done by people with the most knowledge about those contexts.
- Even if the nonprofit world is significantly unlike a for-profit marketplace, there are still good arguments for thinking that competition can be highly beneficial, resulting in better organisations and products. This is both because (i) competition means that people can choose the better service; (ii) competition incentivises better service provision among the competitors. In contrast, centrally-planned groups are often slow-moving, bureaucratic, and ineffective.
- Any centralised entity would be very unlike a government. It couldn’t forcibly tax its members, or enforce its policies through its own legal system. So common arguments within economics and political science that, under some conditions, favour something like government action, don’t cleanly port over.
- Most activities within EA don’t concern the community as a whole, or have major positive / negative externalities, or natural monopolies.
- Centralisation can be less empowering. Suppose that there’s some activity X that would be well worth doing, and benefit all of EA, but the central entities haven’t done it (for bad reasons). Then, if the widespread understanding is that the movement is centralised, X just won’t happen: other parties will believe that the central entities have got it covered.
- Centralisation is more fragile in some ways. If, for example, there was only one EA organisation, then the collapse of that one organisation would mean the collapse of EA as a whole.
- There’s a risk that EA ossifies in thought, becoming locked into a certain set of founding beliefs or focuses. In particular, if the views of a set of early, highly influential thinkers become the default, such that it’s much harder for the movement as a whole to reason away from those views, then, in the likely event that those thinkers are mistaken in some important ways, that would be very bad. This risk could be especially likely if people who aren’t sympathetic to those particular beliefs are more likely to bounce off the movement, so that the movement becomes disproportionately populated with people who are sympathetic to them. Centralisation might increase this risk.
- This seems to happen in science. Max Planck famously quipped that science advances “one funeral at a time” and some recent evidence (which I haven’t vetted) suggests that’s correct.[10]
- And it often seems to happen in other social and intellectual movements, too.[11]
- The tractability of further centralisation seems low. This is for a few reasons:
- If there’s some central grand plan for how EA should be, and some people disagree with that plan, there’s not really much in the way of enforcement that a central body can do. At the moment, people can’t get fired or kicked out of EA: they can be disinvited from EA events, denied funding by funders who agree, or removed from the EA Forum, and information about them being a bad actor can be percolated, but that’s not necessarily enough to prevent them continuing. And these actions would seem harsh as a response to someone simply disagreeing with a strategic plan. Ultimately, if some person, organisation or group wants to do something and call it EA, they just can. This means that centralisation efforts risk being toothless.
- One could try to change this, for example by having a “membership” system like many political parties and some advocacy groups (e.g. the Sierra Club, or the NAACP) have. But I think that, even if that seemed desirable, implementing it would be extremely hard.
- It’s hard to see who would lead a centralisation effort. They’d need to have a combination of ability, desire and legitimacy within the movement, without it also being the case that it’s more important for them to work on something else.
Of these, the biggest considerations in favour of centralisation, in my view, are option value and the handling of bad actors. The biggest considerations in favour of decentralisation are worries about ossification and lock-in, the benefits of competition, and, above all, that I think the tractability of further centralisation seems low.
As I mentioned at the outset, there’s not a single spectrum of centralisation to decentralisation, and I’ll get into specifics in the next section. Overall, I think the arguments on average broadly tend towards further decentralisation rather than centralisation. But I’m still very unsure: there are tough tradeoffs here. If centralised, you get fewer bad projects but fewer good projects, too; you get less redundancy but less innovation. So, even though I’m broadly in favour of further decentralisation, if there were, for example, a new Executive Director of CEA or someone at Open Philanthropy who really wanted to take up the mantle, and could build the legitimacy needed to pull it off, I’d be interested to see them experiment with centralisation in some areas.
Going back to the list of comparisons: I feel like the level of decentralisation in the scientific community and intellectual movements is in the vein of what we should aim for. The analogy I like best, at the moment, is with specific scientific/academic communities. I know most about the analytic philosophy community. Here are some notable aspects of that community, where I think the analogy is helpful (feel free to skip the sub-bullets if you aren’t interested in the details; I’m also not claiming that we should emulate the analytic philosophy community, just that it’s an interesting analogue in terms of level of (de)centralisation):
- Centralised bodies tend to take the form of provision of services rather than top-down control. They tend to arise because some person or group has unilaterally offered them and they’ve had widespread adoption. Often, there are different groups offering the same services.
- The closest thing to a centralised body in analytic philosophy is The American Philosophical Association. What they do is limited, though, and as a philosopher you rarely interact with them or think about them; they aren’t a very powerful force within the field of analytic philosophy.
- It runs what I believe are the three largest philosophy conferences. First-round interviews for US tenure-track philosophy jobs are usually held at one of these conferences.
- It provides some grants, fellowships, and funds.
- It provides some online resources, too, although they don’t seem very influential.
- I think it used to host adverts for jobs in philosophy, but PhilJobs did the same thing better, so philosophers now use PhilJobs instead.
- Some other examples of “centralised” services in philosophy:
- Journals. Nowadays, their key role is to act as quality-stamps on philosophical output. The prestige of different journals is generally well-known, and publication in a particular journal is understood as a way of (i) indicating to other philosophers that this piece of work might be worth looking at; (ii) providing evidence of the quality of a philosopher’s work for hiring committees and tenure committees.
- Different journals are run by different groups, traditionally by universities or publishers. More recently, Philosophers’ Imprint was founded by two philosophers who thought they could create an online and open-access journal that was better-run than existing journals, and it’s been very successful.
- The Philosophical Gourmet Report ranks graduate programs in philosophy, by surveying leading philosophers on their impressions of the quality of faculty at the different departments. It’s very influential. It was originally created single-handedly by one philosopher, Brian Leiter.
- It has some competitors, such as the Pluralist’s Guide to Philosophy.
- The Stanford Encyclopedia of Philosophy, which functions as the go-to textbook within philosophy.
- Two philosophers, David Bourget and David Chalmers, created a range of services. PhilJobs is a job board for philosophy positions. PhilPapers is an index and bibliography of philosophy, and also runs a survey of philosophers’ beliefs. PhilEvents is a calendar of conferences and workshops.
- Various surveys of journal rankings.
- DailyNous and Leiter Reports, two blogs which aggregate news in the philosophical world.
- Some fields have some limited amount of top-down control.
- For example, the American Psychiatric Association defines key terms in the Diagnostic and Statistical Manual of Mental Disorders, which are widely accepted. I think it would be great if EA had some key defined terms like this. (I think this to an even greater extent with AI safety.)
- The climate physics and climate economics communities have the Intergovernmental Panel on Climate Change, which attempts to represent consensus views within these fields. I don’t see an obvious plausible analogue within EA. Something similar but massively toned-down, like an encyclopedia, could be very helpful.
- The closest thing to a centralised body in analytic philosophy is The American Philosophical Association. What they do is limited, though, and as a philosopher you rarely interact with them or think about them; they aren’t a very powerful force within the field of analytic philosophy.
- Change in what philosophers work on, or how they operate, generally happens organically, as a result of many individuals’ decisions about what is important or how philosophy should be done.
- There is sometimes explicit commentary on how philosophy should be done or what it should focus on, but when that’s influential, it’s usually because arguments have been made by people with a long established track record of excellent work. (For example, this from John Broome or this from Timothy Williamson.)
- There’s an enormous amount of internal disagreement among philosophers. Analytic philosophy is defined much more by a methodology (clear, rigorous argument), a set of defining questions (free will, the nature of morality, etc), and an intellectual tradition, than by any particular set of views.
- I think this is true in other areas of science, too, although the amount of disagreement is usually lower, and sometimes we simply know things: there’s no way to be a good scientist on the topic while holding heterodox beliefs (e.g. believing in telekinesis, or that the Earth is only 6,000 years old). I think the amount of agreement we should expect within effective altruism is closer to that within philosophy than that within physics (which has a much larger body of very-high-confidence knowledge).
- There aren’t strict membership conditions for being a philosopher. (For example, you don’t need to be employed by a University.)
- Membership criteria exist in other fields, though, like medicine. Medicine also provides a nice distinction between being a researcher and being a clinician or practitioner, which ports over to effective altruism, too.
I’m not claiming that EA should exactly mirror the analytic philosophy community. And it would be a suspicious coincidence if it were the best model! I’m using it as an example for calibration — a concrete analogy of the level of centralisation we might want. In particular, reflection on it makes vivid to me the extent to which we can have community-wide services without centralisation, as a result of individuals noticing that some service isn’t being provided and setting something up to provide it.
On this broad view, what EA should aspire to be is not a club, a social movement, an identity, or an alliance of specific causes. And it should only be a community or a professional network in a broad sense. Instead, it should aspire to be more like a field — like the fields of philosophy, or medicine, or economics. [12]
Getting more specific
Given all the above, what are some more specific upshots? Here are some tentative suggestions.
First, there are some moves in the direction of decentralisation that seem very robustly good, and many of which are happening anyway:
Perception:
- Reflect reality on how centralised we are.
- Inaccurate perceptions on this seem like all downside to me.
- Assuming I’m right that, currently, perception doesn’t match reality, it means the core projects and people in EA should communicate more about what they are and are not taking responsibility for.
- This post is trying to help with that!
- But more generally, now that EA is the size it is, I suspect it means that core projects and people will need to communicate some basic things about themselves many, many times, even though it’ll feel very repetitive to them.
- Encourage a broader range of EA-affiliated public figures
- I’d love there to be a greater diversity of people who are well-known as EA-advocates, reflecting the intellectual, demographic and cultural diversity within the movement.
Funding:
- Get more major donors.
- This would be a very clear win, though it’s hard to achieve.
- There are a handful of EA-aligned potential donors who might possibly become significant donors over the next few years. But there’s no one who I expect to be as major, in particular within EA movement-building, as OP.
- Restart a regranters program
- This would have to be done by OP or some other major donor; it would give more power over funding decisions to more people.
- More people donate more or earn to give
- One way this plays out is that, because OP aims to limit how much it contributes to most organisations, and in some cases has capped the share of a budget it is willing to support, other funders who donate to those organisations can in effect “reallocate” Open Phil funding towards those orgs.
- Of course, increasing funding diversity is only one consideration among very many when making career decisions!
Decision-making:
- Some projects should spin out from EV
- Especially as projects grow in size, I think this makes sense from their perspective: it allows the projects to have greater autonomy. And it’ll have benefits across the EA movement, too.
- The various projects under EV have been thinking this through, and weighing the costs and benefits. My guess is that around half will ultimately spin out over the next year or two. If this happens, it seems like a positive development to me.
Culture:
- Celebrate diversity
- We could try to go further to emphasise that there are many conclusions that one could come to on the grounds of EA values and principles and celebrate cases where people pursue heterodox paths, as long as their actions are clearly non-harmful. This can be tough to do, because it means praising someone for taking what, in your view, is the wrong (in the sense of suboptimal) decision.
Then there are some steps I can personally take in the direction of decentralisation and that seem like clear wins to me. I plan to:
- Step down from the board of Effective Ventures UK once we have more capacity. (I’m not currently sure of the timeline for that. I’ll note that I’ve also been recused from all decision-making relating to EV’s response to the FTX collapse.) I’ve been in the role for 11 years, and now feels like a natural time to move on. I think there are a lot of people who could do this role well, and my stepping down gives an opportunity for someone else to step up.
- I think that this will move EA in a decentralised direction on both the perception and decision-making-power dimensions.
- Distance myself from the idea that I’m “the” face of EA. I’ve never thought of myself this way (let alone as “the leader”) and there have always been many high-profile EA advocates. But others, especially in the media, have sometimes portrayed or viewed me in this way. Trying to correct this will hopefully be a step in the direction of decentralisation on the perception and culture dimensions.
- Implementing this in practice will be tricky: in particular, if a journalist is writing about me, they are incentivised to play up my importance to make their story or interview seem more interesting. But I’ll take the opportunities I can to make it explicit to people that I’m talking with. I’m going to avoid giving opening / closing talks at EAGs for the time being. I’m also going to try to provide even more support to other EA and EA-aligned public figures, and have spent a fair amount of time on that this year so far.
- Prior to the WWOTF launch, I don’t think I’d appreciated the extent to which people saw me as “the” spokesperson, and then the magnitude of coverage around WWOTF made that issue more severe.
- I think that this will be healthier for me, healthier for the movement, and more accurate, too. It doesn’t make sense for there to be a single spokesperson for EA, because EA is not a monolith, and there’s a huge diversity of views within the movement. If you want to read more discussion, I wrote a draft blog post, which I probably won’t publish beyond this, somewhat jokingly titled “Will MacAskill should not be the face of EA” (here), which explains some more of my thinking. [13]
There are some other changes in EA that would move in a decentralised direction and that seem plausible to me, but where the case is less obvious, the idea would need a lot more thought, and/or the decision should be made by the head of the relevant organisation. In particular, the decisions are often clearly something that needs to wait for CEA’s next Executive Director. For example:
- Rename CEA
- The key argument here is that having an organisation called “Centre for Effective Altruism” suggests more top-down control than there is.
- Rename the EA Forum [14]
- At worst, the current name means that some people can (deliberately or unintentionally) claim that some post on the Forum “represents EA”.
- But more generally, the name also suggests that the content on the Forum is more representative of EA than it really is. Really, the content on the Forum forms a biased sample of thought in a whole bunch of ways: it heavily overrepresents people who are Extremely Online or who have strong views, and it also introduces randomness, as it’s pretty stochastic which topics happen to get written about at any particular time.
- I’m also struggling to think of real benefits of having “EA” in the name. If it does get renamed, I want to make a semi-serious pitch for it to be called “MoreGood”.
- Dissolve CEA into sub-projects
- CEA does a lot of different things and it’s not super obvious why they should all operate within the same project.
- Previously, EA Funds spun out from CEA, and that move has seemed pretty successful. Another more complicated example is Giving What We Can, which was separate, then merged with CEA, then separated again.
In the direction of greater centralisation, the things I find myself most excited about are projects that offer services to the wider movement (rather than trying to control the wider movement). These needn’t all be in one organisation, and there are some good reasons for thinking they could be in separate projects, or just run on the side by people. Here are some ideas:
- A guide to what the EA movement is, answering lots of frequently asked questions. (Analogy: guides to festivals.)
- An organisation devoted to assessing, monitoring and reducing major risks to EA — ways in which EA could lose out on most of its value.
- An EA leadership fast-track program, providing mentorship and opportunities to people who could plausibly enter senior positions at EA or EA-adjacent organisations in the future.
- An EA journal or magazine that has an issue every three months for very high-quality content about EA or issues relevant to EA.
- (At the moment, I feel the Forum system and blog culture incentivise large quantities of lower-quality content, rather than essays that have been worked on more intensively and iterated on over the course of months.)
- An organisation that’s squarely and wholly focused on applied cause prioritisation research, with a particular eye to ways that EA might currently be misallocating time or money.
- (Given the nature of EA as a project, it’s remarkable to me how little applied cause prioritisation research is done, in particular compared to how much outreach is done.)
- An ongoing survey of the movement to gauge what other things should be on the above list.
Conclusion
This post has covered a lot of ground. I hope that, at least, the overview of how I see decision-making in EA actually working has been helpful. I’ve offered my thoughts about how decision-making in EA should evolve, but I’ll emphasise again that this issue is really tough: I’m confident I’ll have made errors, missed out important considerations, and I’m not at all confident that the upshots I’ve suggested are correct. But I think it’s an important conversation, at least, to have.
- ^
I also want to emphasise that this post is just the product of some conversations and thinking; it’s not the output of some long research process. I’m sure there’s a ton more that people with relevant experience, or domain experts on institutional design or evidence-based management, could add, and could correct me on.
- ^
This figure is approximate, from here. I looked at the “total funding 2012-2023 by known sources” chart, but subtracted out Future Fund funding, which isn’t relevant for the current state of play.
- ^
A simple explanation for the discrepancy is just: People in core EA haven’t clearly explained, before, how decision-making in EA works. In the past (e.g. prior to 2020), EA was small enough that everyone could pick this sort of thing up just through organic in-person interaction. But then EA grew a lot over 2020-2021, and the COVID-19 pandemic meant that there was a lot less in-person interaction to help information flow. So the people arriving during this time, and during 2022, are having to guess at how things operate; in doing so, it’s natural to think of EA as being more akin to a company than it is, or at least for there to be more overarching strategic planning than there is. If this is right, then, happily, repeated online communication might help address this.
A second, more complex and philosophical, explanation, which has at least some relevance to some aspects of the puzzle, needs us to distinguish between different senses of responsibility:
1. Formal responsibility: You’re formally responsible for X if you’ve signed up to X.
2. Interaction responsibility: You’re interaction-responsible for X if you’ve interacted with X in some way.
3. Negative responsibility: You’re negatively responsible for X if you could alter X with your actions.
To illustrate: You’re formally responsible for saving a child drowning in a shallow pond if you’re a lifeguard at the pond, or if you’ve waded in and said “I’ve got it covered”. You’re interaction-responsible for the child if you waded in and tried to start helping the child. You’re negatively responsible for the child simply if you could help the child in some way — for example, if you could wade in and make things better — even if a lifeguard is looking on, and even if others have already waded in and tried to help.
(There are other generators of responsibility, too. There’s what we could call moral responsibility, for example if you deliberately pushed the child into the pond. Or causal responsibility, for example if you accidentally knocked the child into the pond. These are important, but not as relevant for the main issue I’m identifying.)
I think that many EAs, especially core EAs, are likely to take both formal and negative responsibility unusually seriously. EAs tend to be very scrupulous about promises, which means they take formal responsibility particularly seriously. They also don’t place much weight on the acts/omissions distinction, which means they take negative responsibility particularly seriously.
This alone squeezes out interaction-responsibility: if you place more weight on formal and negative responsibility, you have to place less weight on interaction-responsibility. But I think many EAs are also less likely to see interaction-responsibility as generating special obligations in and of itself, in the way that many in the wider world do. This is discussed at length in a couple of insightful and important posts, The Copenhagen Interpretation of Ethics by Jai and Asymmetry of Justice by Zvi Mowshowitz.
A final hypothesis concerns a notion of responsibility that’s in between formal and interaction responsibility; let’s call it blocking-responsibility. You’re blocking-responsible for X if, in virtue of trying to help with X, you’ve prevented or made it much harder for anyone else to help with X, and other people would be helping with X if you weren’t trying to help with X.
For example, if you wade in and help the child, but in doing so prevent other people from helping the child, and other people would help the child if you didn’t, that generates something much more like formal responsibility than interaction-responsibility.
It’s plausible to me that, often, onlookers perceive some organisation or person as signing up to “own” an issue (formal responsibility) or preventing others from helping on that issue (blocking-responsibility), when the organisation or person just sees themselves as trying to help, where the alternative is that no one helps (so they think they are interaction-responsible but not blocking-responsible).
On either of the last two hypotheses, we end up with a dynamic where:
1. Person Y helps with X, does an ok job.
2. Onlooker is critical and annoyed, like "Why aren't you doing X better in such-and-such a way?"
3. Person Y is like, "Man, I'm just trying to do my best here; you're giving me responsibilities that I never signed up for. The alternative is that no one does anything on X, and these criticisms are making that alternative more likely."
Onlooker feels either like they are trying to help, or that they are simply holding accountable people who’ve adopted positions of power. Person Y feels like not only have they taken on a cost in trying to help with X, but now they’re getting criticised for it, too!
That’s all been pretty abstract, and I’ve been staying abstract because any particular instance will throw up a lot of additional issues. But I feel this dynamic comes up all the time, especially for things around “running the community”, and it doesn’t get called out because Person Y doesn’t want to appear defensive.
I’m really worried about this dynamic: if we don’t address it, it means that Onlooker is unhappy because they feel like people in power aren’t doing a good enough job and they aren’t being listened to; it means that Person Y feels like they are having to pay the tax of dealing with criticism just for trying to help, and it makes them less likely to want to help at all. The article I linked to on do-ocracy has some nice examples of this dynamic, suggesting that this is a widespread phenomenon.
- ^
I added “culture” late on in drafting this post. But the more I reflect on this, the bigger a deal I think it is. Burning Man is centralised in the sense that there’s a single organisation that runs it, but the culture it tries to cultivate at least aspires to be semi-anarchist. In EA, we see both decentralised and centralised cultural elements. It’s a decentralised culture insofar as, relative to many other cultures, it prizes independence of thought, and is open to contrarianism. It’s centralised insofar as people are often highly scrupulous, and can feel like they’re being a “bad EA” in some way if they aren’t acting in line with the wider group, and will be negatively judged. I think the highly critical culture, especially online, contributes to pressures towards conformity as a side-effect; people worry that if they say or do something different, they’ll get attacked. Personally, at least, I think that this latter aspect is one of the threads within EA culture I’d most like to see change.
- ^
We can make “decision-making power” more precise by breaking it down into three sub-types. You can take an action because someone else has told you to do that action, for a number of different reasons, including:
Authority: When you do X because Y has told you to do X and because there’s some power relationship between you and Y (e.g. boss and employee) such that Y could and would inflict bad consequences on you (e.g. docked pay) if you don't do X.
Deference: When you do X because Y thinks you should do X, and you trust their judgement. You might not know or understand Y’s reasons behind wanting X to happen.
Persuasion: When you do X because Y thinks you should do X, and convinces you with compelling reasons why doing X is a good idea.
I think that EA, in practice, is fairly decentralised if we’re looking at Authority (it’s very rare that I see someone giving orders and others following those orders without at least broadly understanding and, at least to some extent, endorsing the reasons behind them), and in the middle on Deference and Persuasion (it’s fairly common for people to work on specific areas because they think that better-informed people think it’s important, even if they don’t wholly understand the reasons). In general, I would like more of a move towards Persuasion over Deference, but that move is not trivial: there are major benefits from the division of intellectual labour, and a significant amount of intellectual division of labour is inevitable.
- ^
Someone on the Forum made this point earlier in the year. I forget who, but thank you!
- ^
This argument for free markets comes originally from The Use of Knowledge in Society by Friedrich Hayek (more here). I don’t know what the best source to learn about this is; a quick google suggests that this is helpful; GPT-4 also gives a reasonable overview.
- ^
For more discussion of the EA marketplace analogy, see Michael Plant’s essay here, and comments.
- ^
This was a significant issue in the earlier days of EA. See for example, this discussion of Intentional Insights.
- ^
When I was getting to grips with climate economics, it was striking to me how long the reliance on integrated assessment models had persisted, despite how inadequate they seemed to be. One explanation I heard was founder effects: Bill Nordhaus was the first serious economist to produce seminal work on climate change, and pioneered integrated assessment models. That resulted in a sort of intellectual lock-in.
- ^
Of course, EA is defined by a particular mindset, set of interests, and moral and methodological views, so it can’t be open to any set of beliefs. (Trivially: if you want to maximise suffering, you don’t have a place in EA.) It’s a hard question what we should lock in as definitional of EA, and what we shouldn’t. I presented my earlier attempt at this in my article on the definition of effective altruism (which received significant help in particular from Julia Wise and Rob Bensinger) and in CEA’s guiding principles, which I helped with.
- ^
For more on what constitutes a field, here’s an edited take from GPT-4, which I think is pretty good: “A "field" can be defined as a specific area of knowledge or expertise that is studied or worked in. It's an area that has its own set of concepts, practices, and methodologies, and often has its own community of scholars or practitioners who contribute to its development.
Fields are often characterised by their methods, by a body of knowledge within them, by a community of scholars or practitioners who contribute to the field, by institutions and organisations that support that community, and by a set of goals and values.”
This thought seems continuous with how CEA’s comms team is thinking about things.
- ^
In footnote 5 I distinguish between different sorts of decision-making influence. What I’m aiming towards is reducing the amount of Authority I have, and discouraging Deference.
- ^
Some people who gave comments thought that this name is actually a way in which EA is decentralised - because anyone can comment and influence how EA is perceived. But it seems to me like it at least increases the extent to which third parties see EA as A Single Thing. In analogy, if either of Leiter Reports or Daily Nous (the two main philosophy blogs) were called “The Analytic Philosophy Forum”, that would seem like a move in the direction of centralisation to me, at least on the Perception dimension. But perhaps this is just a case where it’s not clear what “centralised” vs “decentralised” means.
Thank you! This post says very well a lot of things I had been thinking and feeling in the last year but not able to articulate properly.
I think it's very right to say that EA is a "do-ocracy", and I want to focus in on that a bit. You talked about whether EA should become more or less centralized, but I think it's also interesting to ask "Should EA be a do-ocracy?"
My response is a resounding yes: this aspect of EA feels (to me) deeply linked to an underrated part of the EA spirit. Namely, that the EA community is a community of people who not only identify problems in the world, but take personal action to remedy them.
- I love that we have a community where random community members who feel like an idea is neglected feel empowered to just do the research and write it up.
- I love that we have a community where even those who do not devote much of their time to action take the very powerful action of giving effectively and significantly.
- I love that we have a community where we fund lots of small experimental projects that people just thought should exist.
- I love that most of our "big" orgs started with a couple of people in a basement because they thought it was a...
+1.
I was slow to realise that, over the period of just a few years of growth, this bunch of uncertain, scrappy, loosely coordinated students had come to be seen as a powerful established authority and treated accordingly. I think many others have been rather slow to notice this too and that that's been a big source of confusion and tension as of late.
Thanks for this comment, it’s very inspiring!
One thought I had is that do-ocracy (as opposed to “someone will have got this covered, right?”) describes other areas, as well as EA. On the recent 80k podcast, Lennart Heim describes a similar dynamic within AI governance:
“at some point, I would discover that compute seems really important as an input to these AI systems — so maybe just understanding this seems useful for understanding the development of AI. And I really saw nobody working on this. So I was like, “I guess I must be wrong if nobody’s working on this. All these smart people, they’re on the ball, they got it,” right? But no, they’re not. If you don’t see something covered, my cold take is like, cool, maybe it’s actually not that impactful, maybe it’s not a good idea. But whatever: try to push it, get feedback, put it out there, talk to people and see if this is a useful thing to do.
You should, in general, expect there are more unsolved problems than solved problems, particularly in such a young field, and where we just need so many people to work on this. So yeah, if you have some ideas of how your niche can contribute, or certain things where you don’t think it’s im...
I think the upside is that if it is "generational" people grow up and become more agentic as long as we foster the culture. I was remarking to a friend that it's interesting how people don't want to get up and learn to code to help with AI Safety (given the rates of AI doomerism) but people were willing to go into quant trading at seemingly higher rates to earn to give in early EA.
What cultural and structural features do you think might contribute to the perceived decline in a just-do-it attitude?
While I think there is considerable merit to what you're saying, I think it's also important to acknowledge the existence of challenges for would-be doers in 2023 that weren't necessarily (as) present in 2008 or 2013. Some of these challenges are related to the presence and/or actions of big organizations and funders (e.g., the de-emphasis on earning to give affecting the universe of potential viable funders for upstarts). Others are related to changes in the meta more generally (e.g., a small group birthing a startup in the first wave's signature cause area -- global health -- without outside help or funding is probably easier than doing the same in AI safety).
(this is just personal anecdote, so it shouldn't be interpreted with too much confidence. Like all anecdotes, it may not generalize)
I only started to discover EA in 2020, so I think it is reasonable to say that I am of the newer "EA generation." There are a few things that I've vaguely noticed within myself when I've thought of starting projects. Some are social/prestige/reputational things, some are financial stability things, and some are related to lack of skills. I'll phrase these as "things my brain tells me, whether I agree with them or not:"
- There are organizations with fairly wide-ranging remits that already exist, so I probably don't need to start Project X, because they have more connections/expertise/context and are more well-placed to start it.
- I don't have the skill/knowledge/experience to do Project X well. The people in the EA community have really high standards, so I probably wouldn't get clients for my consulting firm or funding for my charity if I am only able to do it fairly well, because they would want me to do it extremely well.
- I don't want to s...
[Written in a personal capacity, etc. This is the first of two comments: second comment here]
Hello Will. Glad to see you back engaging in public debate and thanks for this post, which was admirably candid and helpful about how things work. I agree with your broad point that EA should be more decentralised and many of your specific suggestions. I'll get straight to one place where I disagree and one suggestion for further decentralisation. I’ll split this into two comments. In this comment, I focus on how centralised EA is. In the other, I consider how centralised it should be.
Given your description of how EA works, I don't understand how you reached the conclusion that it's not that centralised. It seems very centralised - at least, for something portrayed as a social movement.
Why does it matter to determine how 'centralised' EA is? I take it the implicit argument is EA should be "not too centralised, not too decentralised" and so if it's 'very centralised' that's a problem and we consider doing something. Let's try to leave aside whether centralisation is a good thing and focus on the factual claim of how centralised EA is.
You say, in effect, "not that centralised"...
These are two examples, but I generally didn't feel like your reply really engaged with Will's description of the ways in which EA is decentralized, nor his attempt to look for finer distinctions in decentralization. It felt a bit like you just said "no, it is centralised!".
I don't agree with this at all. IMO democracy often has the opposite effect, and many decentralized communities (e.g. the open-source community) have zero democracy. But I think this needs me to write a full post...
This seems false to me. If the only kind of decision you think matters is funding decisions, then sure, those are somewhat centralised. But that's not everything, and it's far from clear to me why you think that's the only thing that matters?
For example, as Will discusses in the post...
This is a tangent, but I thought I'd say a bit more about how we've done things at EA Norway, as some people might not know. This is not meant as an argument in any direction.
Every year, we have a general assembly for members of EA Norway. To be a member, you need to have paid the yearly membership fee (either to EA Norway or one of the approved student groups). The total income from the membership fee covers roughly the costs of organising the general assembly. The importance of the membership fee is mainly that it's a bar of entry to the organisation, makes it clear if you're a member or not, and it's nice and symbolic that the fees can cover the general assembly. However, I think the crucial thing about how we're organised at EA Norway isn't that members pay a fee, but that the general assembly is the supreme body of the organisation.
During the general assembly, the attending members vote on an election committee, board members, and community representatives. During the general assembly, the members can also bring forward and vote on changes to the statutes and resolutions. Resolutions are basically requests members have for the board, that they're asking the board to look into ... (read more)
Thanks for the nudge! Yeah I should have said that I agree with a lot of your comment. There are a few statements that are (IMO) hyperbolic, but if your comment was more moderate I suspect I would agree quite a lot.
I disagree though that this is a "minor correction" – people making (what the criticized person perceives as) uncharitable criticisms on the Forum seems like one of the major reasons why people don't want to engage here, and I would like there to be less of that.
I think Efektivni Altruismus is similar (e.g. their bylaws state that members vote in the general assembly), and it has similarly been supported by a grant from CEA.
I'm glad someone mentioned national membership associations! I haven't done a formal tally but I think Germany and Switzerland are also membership associations. I quite like the idea for EA Netherlands (I'm the co-director but here I'm speaking in a personal capacity).
If we had more national membership associations we could together set up a supranational organisation to replace much of CEA. Like other membership associations, this would have a general assembly, a board, committees, and an executive office. It'd be different from Michael's suggestion in that the fee-paying would be done by the national orgs. I.e., the members would be EA Switzerland, EA Netherlands, etc., and they would send delegates to the General Assembly.
This organisation could then provide relevant public goods, e.g., international networking via the EAG event series and the EA Forum, community-building training via the CBG programme, or anything else its members might consider valuable (e.g., advocacy work). Off the top of my head, an analogous organisation might be the Dutch Association of Municipalities (VNG). You can read about how the VNG is governed here and what they do here.  ... (read more)
I think one large disadvantage of a membership association is that it will usually consist of the most interested people, or the people most interested in the social aspect of EA. This may not always correlate with the people who could have the most impact, and it creates a definitive in-group and out-group.
I'd be worried about members voting for activities that benefit them the most rather than the ultimate beneficiaries (global poor, animals, future beings).
EA isn't a political party but I still think it's an issue if the aims of the keenest members diverge from the original aims of the movement, especially if the barrier to entry to be a member is quite low compared to being in an EA governance position. I would worry that the people who would bother to vote would have much less understanding of what the strategic situation is than the people who are working full time.
Maybe we have had different experiences; I would say that the people who turn up to more events are usually more interested in the social side of EA. Also there are a lot of people in the UK who want to have impact and have a high interest in EA but don't come to events and wouldn't want to pay to be a member (or even sign up as a member if it was free).
I think people can still hold organisations to account and follow the money, even if they aren't members, and this already happens in EA, with lots of critiques of different organisations and individuals.
For better and/or for worse, the membership organization's ability to get stuff done would be heavily constrained by donor receptivity. Taking EA Norway as an example, eirine's comments tell us that (at least as of ~2018-2021), "[t]he total income from the membership fee covers roughly the costs of organising the general assembly," that "board made sure to fundraise enough from private donors for" the ED's salary, but that most "funding came from a community building grant from the Centre for Effective Altruism (CEA)" (which, as I understand it, means Open Phil was the primary ultimate donor).
To me, that constrains both how thoroughly democratic a membership association would be and how far afield from best practices a democratic membership association could go.
I'm not sure yet about my overall take on the piece but I do quibble a bit with this; I think that there are lots of simple steps that CEA/Will/various central actors (possibly including me) could do, if we wished, to push towards centralization. Things like:
I didn't start off writing this comment to be snarky, but I realized that we are, kind of, doing most of these things. Do we intend to? Should we maybe not do them if we think we want to push away from centralization?
Thanks! I agree that we are already (kind of) doing most of these things. So the question is whether further centralisation is tractable (and desirable). Like I say, it seems to me the big thing is whether there's someone, or some group of people, who really wants to make that further centralisation happen. (E.g. I don't think I'd be the right person even if I wanted to do it.)
Some things I didn't understand from your bullet-point list:
By “resources” do you primarily mean funding? (I'll assume yes.)
Here, by “resource” do you mean information (books, etc.)? (I'll assume yes.)
This doesn't clearly map onto "centralised" vs "decentralised" to me?
Of your list, the first two bullet-points seem non-desirable to me in a totally ideal world. But of course having lots of funding from OP is much, much better than not having the funding at all!
The second two bullet points seem good to have, even if EA were more decentralised than it is now.
Yeah, sorry, I wrote the comment quickly and "resources" was overloaded. My first reference to resources was intended to be money; the second was information like career guides and such.
I think the critical-info-in-private thing is actually super impactful towards centralization, because when the info leaks, the "decentralized people" have a high-salience moment where they realize that what's happening privately isn't what they thought was happening publicly; they feel slightly lied to or betrayed, and lose perceived empowerment and engagement.
Reading this post is very uncomfortable in an uncanny valley sort of way. A lot of what is said is true and needs to be said, but the overall feeling of the post is off.
I think most of the problem comes from blurring the line between how EA functions in practice for people who are close to money and the rest of us.
Like, sure, EA is a do-ocracy, and I can do whatever I want, and no one is stopping me. But also, every local community organiser I talk to talks about how CEA is controlling and how their funding comes with lots of strings attached. Which I guess is OK, since it's their money. No one is stopping anyone from getting their own funding and doing their own thing.
Except for the fact that 80k (and other thought leaders? I'm not sure who works where) have told the community for years that funding is solved and no one else should worry about giving to EA, which has stifled all alternative funding in the community.
(Just wanted to add a counter datapoint: I have been a local community organizer for several years and this has not been my experience.)
Talking from my time in EA NTNU, my experience was indeed the complete opposite. Funding and follow up from CEA was excellent, kind and thoughtful. There were virtually zero strings attached and at no point did I feel like they were controlling.
The feelings of other organisers might differ of course, but I've not heard about this from anyone personally, and I did talk to quite a lot of student group leaders around 2017-2019.
Again, this is just my experience.
I wasn't sure about the 'do-ocracy' thing either. Of course, it's true that no one's stopping you from starting whatever project you want - I mean, EA concerns the activities of private citizens. But, unless you have 'buy-in' from one of the listed 'senior EAs', it is very hard to get traction or funding for your project (I speak from experience). In that sense, EA feels quite like a big, conventional organisation.
I think there is a steelman of your argument which seems more plausible to me, but taken at face value this statement just seems clearly false?
E.g. there are >650 group organizers – how many of them do you think have met the people on that "senior EAs" list even once? I haven't even met everyone on the list, despite being on it!
When I think of highly centralized "conventional organizations" I think of Marissa Mayer at Google personally choosing the fonts of new projects and forcing everyone to queue outside her office because even executives weren't allowed to make decisions without her in-person approval. This seems extremely far from how EA works?
Yeah, I guess I mean genuinely new projects, rather than new tokens of the same type of project (e.g. group organisers are running the same thing in different places).
As MacAskill points out, it's pretty hard to run a $1m+/yr project (or even less, tbh) without Open Philanthropy supporting it.
But, no, I'm not thinking about centralisation in terms of micromanagement, so I don't follow your comment. You can have centralised power without micromanagement.
To take one of the top examples in the post's centralization continuum, presumably the US military counts as having a highly centralized power structure despite the President and Secretary of Defense not micromanaging. People lower on the food chain exercise power delegated and re-delegated from those two, but they are the ultimate fount of power.
They have the right to control -- and responsibility to supervise -- the powers they have delegated downward. With some uncommon arguable exceptions like military judges, no one in the military has or exercises power independently of POTUS and SECDEF. And people know that if they use their delegated power in ways that would anger those higher up, they won't have that power for too much longer.
That's how power ordinarily works in larger centralized contexts; big-company CEOs refusing to delegate font-approval authority is very much the exception.
Whether you could get someone nominally 'under' you to do an arbitrary thing is not a good proxy for power.
CEA is a regular hierarchical company, but it would still go very poorly if you decided to, on a power trip, tell one of your employees what to eat for lunch. This mostly doesn't matter, though, because that is a goal you are very unlikely to have.
As a co-organizer of the Boston Meetup, if you sent me an email demanding that we serve potatoes at the next gathering, I would be very confused. But you could get CEA's groups team to come up with guidance on meetup food, heavily influence that process, and I could then receive an email advocating serving potatoes from people I trusted and who I was pretty sure had thought about it a lot more than I had. Which would have a decent chance of resulting in potatoes at the next meetup.
Power is always, in a technical sense, indirect: no one is pulling levers inside other people's heads to get them to do things. There is always some amount of inspiration, persuasion, threat, or other intermediary. Sometimes this is formalized, sometimes "soft", but that mostly only matters for legibility. Maybe a better measurement for power is something l... (read more)
I think there's an important difference to be made between "level of centralization" in general and "level of power centralization." When people are saying "EA is too centralized," I think they are predominately referring to the latter concept.
Moreover, to the extent that the text above is breaking down centralization into sub-dimensions, and then impliedly taking something like the mean score of sub-domains to generate an overall centralization score, I don't think that would be correct. Rather, I think the overall centralization measure is strongly influenced by the sub-dimension with the highest centralization score, especially where that dimension is decision-making / control of resources.
As an ... (read more)
“to the extent that the text above is breaking down centralization into sub-dimensions, and then impliedly taking something like the mean score of sub-domains to generate an overall centralization score.”
Thanks for pointing this out! I didn't intend my post to be taking the mean score across sub-domains; I agree that of the dimensions I list, decision-making power is the most important sub-dimension. (Though the dimensions are interrelated: If you can’t tightly control group membership, or if there isn’t legal ownership, that limits decision-making power in some ways.)
To make sure I understand your view better, on my spectrum (from North Korea to the US startup scene) do you think I placed EA-as-it-currently-is too low on the centralisation spectrum? I said current EA is “similar to fairly decentralised religious groups, and many social movements”.
(Fine if your answer is “this spectrum doesn’t make any sense” -> it’s pretty subjective!)
Hi Will,
Thanks for the post. I think the below statement is inaccurate
Whilst I agree OP is the large majority as you mention and the concentration of decision making within that could be a problem, you could have movement building project with budget over $1m a year not having funding from OP - Longview is an example.
On Vaidehi’s post, I went back to my records and my donation alone is more than 3x the total in the "other donors" category in her post. If other donors are included it could be out by more than 15x. I am working with Vaidehi to get a more accurate total.
I do agree that it is important to diversify the donor base and the many effective giving initiatives are important in that regard.
Thanks for this post! One thought on what you wrote here:
I feel unsure about this. Or like, I think it's true we have those downsides, but we also probably get upsides from being in the middle here, so I'm unsure we're in the worst of both worlds rather than e.g. the best (or probably just in the middle of both worlds)
e.g. We have upsides of fairly tightly knit information/feedback/etc. networks between people/entities, but also the upsides of there being no red tape on people starting new projects and the dynamism that creates.
Or as another example, entities can compete for hires, which incentivises excellence and people doing roles where they have the best fit, but also freely help one another become more excellent by e.g. sharing research and practices (as if they are part of one thing).
Maybe it just feels like we're in the worst of both worlds because we focus on the negatives.
This seems true to me, although I don't have great confidence here.
For some years at times I had thought to myself "Damn, EA is pulling off something interesting - not being an organization, but at the same time being way more harmonious and organized than a movement. Maybe this is why it's so effective and at the same time feels so inclusive." Not much changed recently that would make me update in a different direction. This always stood out to me in EA, so maybe this is one of its core competencies[1] that made it so successful in comparison to so many other similar groups?
It's possible that there is a limit on how long you can pull it off when community grows, but I would be a bit slow to update during turbulent waters - there is for sure valuable signal during these (like "how well are we handling harsh situations?"), but also not so valuable ("is our ship fast?").
Good explanation of core competencies - https://forum.effectivealtruism.org/posts/kz3Czn5ndFxaEofSx/why-cea-online-doesn-t-outsource-more-work-to-non-ea
(General question, not necessarily for Will in particular)
Re getting another regrants program started: has there been a look at how this went with Future Fund's regranting program? I viewed it as pretty experimental, and I don't have much sense of whether someone's looked at the pros and cons of that system. Obviously that project came to a sudden end, so I understand why any planned analysis didn't happen as planned.
I think MoreGood would be a great rebrand for the forum!
Just want to register some disagreement here about the name change, to others in this thread and Will (not just you Gemma!). In rough order of decreasing importance:
I do accept it was just a small draft suggestion though.
Some thoughts from me (as a big fan of MoreGood):
I don't think it would signal this to many people.
To me this is a feature, not a bug. I personally think having a slightly higher barrier to entry (you have to be engaged enough to have found the forum via other means than the first page of Google results) would do this forum good overall.
I think having a very descriptive name is probably not worth the increase in times this forum gets quoted with more apparent authority than it actually has. [Edit: This is quite theoretical. These are the ... (read more)
I like there being a centralised forum which attempts good epistemics.
Let's compare to twitter, where incentives are towards controversy and views, I am glad that there is a nexus of EA comment on this forum.
I don't know that a decentralised set of forums would have been able to reduce the presence of community discourse, and I think that has been healthy for us as a community.
In short, I am not sure that we are well integrated enough as a community (particularly at the speed of growth) to be decentralised fully across digital environments.
Good name though
I can't help but notice that MoreRight is the inverse of LessWrong, even though I like MoreGood far better than MoreRight. 😂
FYI to LW old-timers, "MoreRight" evokes the name of a neo-reactionary blog that grew out of the LW community. But I don't think it's a thing anymore?
In my opinion, the largest effect of rebranding the name of the forum is that newcomers searching for "effective altruism" for the first time would be less likely to find the forum, particularly if alternatives to the forum do some SEO. This has both upsides (people are less likely to be intimidated/skeeved out by weird stuff or community drama, people's first exposure to EA-in-practice wouldn't be filled with Extremely Online people), and downsides (whatever else they see instead may be less good as introductions, eg by being more manufactured to be presentable, rather than having mostly earnest conversations).
I'm not convinced that a name change would be net positive, but if we want to make it clearer that the forum doesn't necessarily represent EA, one option is to have the name be less descriptive and just reference something vaguely positive instead (ideas include: polaris, salon, agora, zephyr, etc). This is akin to how Sierra Club is clearly not representing all of environmentalism, and Leiter Reports is clearly not representing all of philosophy.
I spontaneously thought that the EA forum is actually a decentralizing force for EA, where everyone can participate in central discussions.
So I feel like the opposite, making the forum more central to the broader EA space relative to e.g. CEA's internal discussions, would be great for decentralization. And calling it "Zephyr forum" would just reduce its prominence and relevance.
Yeah, seems helpful to distinguish central functions (something lots of people use) from centralised control (few people have power). The EA forum is a central function, but no one, in effect, controls it (even though CEA owns and could control it). There are mods, but they aren't censors.
I think this is a place where the centralisation vs decentralisation axis is not the right thing to talk about. It sounds like you want more transparency and participation, which you might get by having more centrally controlled communication systems.
IME decentralised groups are not usually more transparent, if anything the opposite as they often have fragmented communication, lots of which is person-to-person.
[Written in a personal capacity, etc. This is the second of two comments, see the first here.]
In this comment, I consider how centralised EA should be. I’m less sure how to think about this. My main, tentative proposal is:
We should distinguish central functions from central control. The more central a function something has, the more decentralised control of it should be. Specifically, I suggest CEA should become a fee-paying members' society that democratically elects its officers - much like the American Philosophical Association does.
I suspect it helps not just to ask “how centralised should EA be” but also “what should be centralised and what shouldn't?”. Some bits are, as you say, natural monopolies in that it's easiest if there's one of them. This seems most true for places where people meet and communicate with each other: a conference is valuable because other relevant people are there. For EA, I guess the central bits are the conferences, the introductory materials, the forum, the name(?), maybe other things. In my post on EA as a marketplace, which you kindly reference but don't seem sympathetic to, I point out you can think of EA on a hub-and-spoke model. Imagine a bi... (read more)
I think it's a mistake to conflate making things more democratic or representative and making them more decentralised - historically the introduction of more representative institutions facilitated the centralisation of states by increasing their ability to tax cities (see e.g. here). In the same way I would expect making CEA/EVF more democratic would increase centralisation by increasing their perceived legitimacy and claim to leadership.
I'm confused about the mathematics of a fee-paying membership society. I'm having a hard time seeing how that would generate more than a modest fraction of current revenues.
It's not clear what the "central convening and coordinating parts" are. Neither Current-CEA nor Reformed-CEA would have a monopoly on tasks like funding community builders, funding/running conferences, and so on. They are just another vendor who the donors can choose to hire for those purposes. There is and would be no democratic mandate that donors who would like to fund X, Y, and Z are obliged to go through CEA.
I think your model is correct insofar as the membership society could assert independent control of certain epistemically critical functions that are relatively less reliant on funding (e.g., the Forum).
The extent to which "convening and coordinating" is effective may depend on whether there is money behind those efforts. Stated more directly, to what extent are CEA's efforts in these areas boosted by the well-known (general yet strong) alignment between CEA and the major funder in the ecosystem? Would Reformed-CEA enjoy the same boost?
I used to work at EA Norway, which is a fee-paying membership society, and thought it might be useful to share more on how our funding worked. This is just meant as an example, and not as an argument for or against membership societies. (Here's a longer comment explaining how we organise things at EA Norway.)
I can't speak to EA Norway's current situation, as I no longer have any position at EA Norway (other than being a paying member). However, I can say what it was like in 2018-2021 when I was Executive Director (ED). The total income from the membership fee roughly covered the cost of the general assembly. Most of our funding came from a community building grant from the Centre for Effective Altruism (CEA). However, the board made sure to fundraise enough from private donors for my salary. The two main reasons for this were to (i) diversify our funding, and (ii) enable us to make longer-term plans than CEA's grant periods allowed.
When the board gave approval to accept the community building grant from CEA, we discussed that if at any point we did not want to follow CEA's guidelines and success metrics, we would pay back the remainder of the grant. This was definitely easier for us to say and ... (read more)
Okay, but the American Philosophical Association "was founded in 1900 to promote the exchange of ideas among philosophers, to encourage creative and scholarly activity in philosophy, to facilitate the professional work and teaching of philosophers, and to represent philosophy as a discipline", with a modern mission as follows: it "promotes the discipline and profession of philosophy, both within the academy and in the public arena. The APA supports the professional development of philosophers at all levels and works to foster greater understanding and appreciation of the value of philosophical inquiry." Seems like a membership structure works well.
If, on the other hand, the APA's mission was to "help solve the greatest philosophical problems of our time by supporting philosophers" or some such, I personally think that a more meritocratic approach seems like a better fit. It's certainly not obvious to me that a democratic membership structure would be superior.
Or if it were a charity that ultimately had a global mission, I'd hardl... (read more)
Fair -- but you probably wouldn't pick EA's structure either.
We like our current main billionaires, but from an ex ante perspective relying on billionaires to discern who the right leaders and technocrats are seems dicey. And of course, from the ex post perspective, we've had one awfully bad billionaire.
How decision making actually works in EA has always been one big question mark to me, so thanks for the transparency!
One thing I still wonder: How do big donors like Moskovitz and Tuna and what they want factor into all this?
This tweet told me a lot:
Not sure about Tuna.
Given that Open Phil is responsible for a large share of EA funding, including, apparently, a large share of movement building funding too, should we consider them largely responsible for EA as a whole, even if not solely responsible?
I'm certainly not an expert in institutional design, but for what it's worth, it feels really non-obvious to me that:
Like, I think projects find it pretty hard to escape the sense that they're "EA" even when they want to (as you point out), and I think it's pretty easy to decide you want to be part of EV or want to take your cues from the relevant OP team and do what they're excited to fund, whereas ignoring consensus around you, taking feedback but doing something else, and so on, seem kind of hard, especially if your inside view is interested in doing something no one wants to fund!
I see EV as an EA organization, historically, by name ("effective"), by its board composition and by some of its own statements, especially its mission.[1] If an org doesn't want to be perceived as part of the EA movement and potentially entangled with it in other ways, should they be housed by EV?
Yes I was very surprised to hear the suggestion that Longview, Wytham or the Gov.ai were not EA projects! This is also contradicted by previous statements from the board of EVF:
and again:
To make it even more clear, many of these projects used to be part of CEA.
I feel like orgs don't get many benefits from being "publicly EA", but they get some costs.
The narrow EA community seems good at knowing which projects are "basically EA".
I think to non-EAs, the EA brand might be more of a liability for many orgs than a plus. (It also can be a liability for EA, in that if the org does poorly, EA could be blamed, like with FTX)
A factor in favour of a more coherent EA that this post misses is the importance of policy advocacy.
I think in almost all of the spheres we care about governments hold most of the levers. For instance, it would be within the power of many countries to unilaterally solve the global insecticide-treated bed net problem if they were sufficiently motivated. I could make this a very long list.
As a public servant and ministerial adviser, I've been on the receiving end of well-coordinated campaigns by global not-for-profits. They're extraordinarily good at getting their way. The example I usually think of is Save the Children. It's hard to know for sure, but StC seems to have fewer people involved than EA and less money. But they have significant access to the leaders of dozens of countries; the ability to drive multilateral agreements through international decision-making bodies (see the UN Declaration on the Rights of the Child); and genuine geostrategic influence.
The EA movement has all the ingredients (global reach; motivated talent; money) necessary to have influence of that kind, or more. But we chose not to (for many of the reasons outlined above), and I think our impact suffers hugely because of that choice. I think we've made a bad deal. I would much rather we paid the price of coordination, managed the risks that result, and used it to be serious players in global policy.
Here is an anonymous poll.
You can see what others think and add your own points (by using the little edit button)
https://viewpoints.xyz/polls/decision-making-and-decentralisation-in-ea
see results: https://viewpoints.xyz/polls/decision-making-and-decentralisation-in-ea/analytics
What the poll looks like
Current agreement:
Current uncertainty:
Very well written and eye-opening post, thanks Will!
Something I'd be really excited to see, and that I think would be really useful for community builders when doing outreach/speaking to people very new to the movement:
Would one solution to the lack of diversity in funders be to break up OpenPhil? And I don't just mean separate their different teams, I mean take some of their assets and make another completely separate and independent organisation, e.g. a longtermist grantmaking organisation, with different staff, culture etc.
Just a note that while forum users might have opinions on this proposal, this is ultimately a question for Cari and Dustin (I think this point is too often forgotten).
Yeah I didn't mean to accuse you of having forgotten that point (the language was a little mercenary but I assumed you weren't being literal), I just think it's worth reminding forum users in general to keep this in mind throughout any further discussion.
Thanks for this post, Will. I believe you've touched on many points that many of us have been pondering. I've translated it into Spanish, as I feel it's relevant to the entire community.
I want to just appreciate the description you’ve given of interaction responsibility, and pointing out the dual tensions.
On the one hand, wanting to act but feeling worried that by merely getting involved you open yourself up to criticism, thereby imposing a tax on acting even when you think you would counterfactually make the situation better (something I think EA as a concept is correctly really opposed to in theory).
On the other hand, consequences matter, and if in fact your actions cause others who would have done a better job not to act, and that’s predictable, it needs to be taken into account. This is all really tough, and it bites for lots of orgs or people trying to do things that get negative feedback, and it also bites for the orgs giving negative feedback, which feels worth bearing in mind.
As FTX was imploding, Will wrote on Twitter "If FTX misused customer funds, then I personally will have much to reflect on." It now seems very clear that FTX did misuse customer funds (1, 2), but to my knowledge Will hasn't shared any of his reflections publicly, beyond that initial Twitter thread. It seems odd to me for him to offer thoughts on the best way forward for the movement without acknowledging or having reckoned in a substantive way with his own role in the largest challenge faced by that movement to date.
If Will has published a post-morte... (read more)
See here for more context.
I think this is a little unfair to Will. If an independent investigation asks you not to discuss something then presumably this is because they worry that you speaking would interfere with their investigation (perhaps they think it's valuable to get independent views of what happened, rather than views informed by dialogue between different parties).
To my mind, if Will refused to heed a request from an independent investigation this would be strong evidence that he hadn't learned the lessons of FTX (that he hadn't learned the importance of good governance norms). The fact that he's heeding the request, despite clearly wanting to speak out, I think is at least weak evidence that Will has learned valuable lessons here. I certainly think it's unfair to call this a cop out.
Most philanthropy is not from billionaires, so the fact that most EA philanthropy is from billionaires means that EA has been unusually successful at recruiting billionaires. This could continue, or it could mean revert. So I do think there is hope for more funding diversification.
How is this on the decentralisation list?
Don't you think there are some minimal values that one must hold to be an Effective Altruist? E.g. Four Ideas You Already Agree With (That Mean You're Probably on Board with Effective Altruism) · Giving What We Can.
It seems to me that there are some core principles of Effective Altruism such that if someone doesn't hold them, I don't think it'd make sense to consider them an Effective Altruist.
To be clear, I don't disagree that anyone can call themselves part of the EA movement. I'm more wondering whethe... (read more)
1. Overall thoughtful and helpful, but one major error which I hope you will be relieved to know about, as I'm sure others will be:
>Assuming I’m right that, currently, perception doesn’t match reality, it means the core projects and people in EA should communicate more about what they are and are not taking responsibility for.
I think this is very unlikely to be successful, and it places a huge unwelcome "should" on a bunch of busy EAs, some of whom won't be good at doing PR/comms/promo work on their own role.
It would be much better, easier and quicker t... (read more)
On diversity, the biggest deficit is in language and geographic diversity across all continents, and with that come both conscious and unconscious limitations. This could be addressed through:
(a) existing and future granting programmes
(b) real commitment to acceleration in Asia-Pacific, Africa, Latin America etc
... maybe micro-offices in those continents?
(c) job ad placements "always in UN languages and Global South before English" to give non-native English speakers a fair chance / time to translate etc
(d) translation of headlines of important news / tweets into UN security council languages
(e) I have more but it's late, call me?