(This is a section from a previous post that combined two ideas into one.  I thought it would be good to separate out the second idea and explore it more.)

 

It may be better to see EA as coordination rather than a marketing funnel with the end goal of working for an 'EA organisation'. 

There is still a funnel where people hear about EA and learn more, but instead of the funnel ending at an EA job, they use the frameworks, questions and EA network to work out what their values are and what that means for cause[1] and career selection.

The left side of the diagram below is similar to the original funnel model from CEA, with people engaging more with EA. Rather than seeing that as the endpoint, people can then be connected to individuals and organisations in the fields they have a good fit for.

Focusing on the right side of the diagram, I've tried to represent some fields that people often consider after looking into EA. The sizes of the boxes aim to represent the different sizes of the fields[2], and how much overlap[3] they have with EA.

 

In this view, EA itself is the meta research, movement building and cross-cutting support, whereas organisations working on a specific cause area belong to a separate field rather than being part of EA (which isn't a bad thing).

It's possible that by focusing on EA as a whole rather than specific causes, we are holding back the growth of these fields. It would be surprising if the best strategies for each field were the same as the best strategy for EA.

 

What would visualising EA in this way mean for movement building?

  • More movement building on the field specific level
    • Support for cause areas to have their own version of the Centre for Effective Altruism and equivalent meta organisations
  • Less emphasis on leading people down a chain of reasoning (for example, effective altruism → longtermism → existential risks → biosecurity), where they may drop off at any point, and more emphasis on connecting people directly to a field
  • More research on how to find, incubate and grow causes
    • This could lead to more meta organisations (the Centre for Effective Centres)

One example: when designing an EA conference, the attendees would mainly be people who are undecided about which cause/career to go into, people who can help them decide, key EA stakeholders from each field, and people in nascent fields. Compare this to a conference with many attendees who have already decided which cause area to focus on; they would probably find more value in a conference tailored to their field, where everyone shares a deeper level of understanding and can dive into higher-level questions.

One key issue is that organisations for specific causes tend to put research first, comms or lobbying second, and community building third or fourth in their priorities. They might occasionally arrange a conference every few years or run some fellowships, but movement building is generally not their top goal. When something is a third priority, it often isn't done well, or doesn't happen at all. Compare this to CEA, which I think has helped grow EA by making movement building its top priority.

There are some projects in these spaces, and I've attempted to list a few of them here, but there are still quite a few gaps, and the organisations that do exist are generally small and face little competition.

 

Field Building Gaps

  • Global Development
  • Longtermism
    • Giving money - Not much for individuals, but foundations are attempting to work out what to fund, and there is the Long Term Future Fund
    • Career - 80,000 Hours
    • Coordination - There doesn't seem to be any one organisation doing this, although there are a variety of projects like this and there is a newsletter
  • AI Alignment
    • Giving money - There is a yearly post by Larks; given that it doesn't seem hard to fund good projects in this space, this probably isn't much of an issue
    • Career - AI Safety Support for technical research and 80,000 Hours for technical and policy
    • Coordination - The Future of Life Institute has organised some small conferences, but their remit is wider than just AI alignment
  • Animal Welfare
    • Giving money - Animal Charity Evaluators
    • Career - Animal Advocacy Careers
    • Coordination - Not much, but there is a new project aiming to cover this gap
  • Alternative Proteins
    • The Good Food Institute seems to help coordinate money, careers and the field as a whole
  • Biosecurity
    • I couldn't think of an organisation for any of the three categories, although there are informal networks and a new hub being set up in Boston
  • Existential Risk
    • Careers - 80,000 Hours
    • Coordination - CSER, FHI, GCR and FLI all do parts of this, but they tend to focus on research rather than having movement building be a top priority
  • Suffering Risks
    • Coordination - Center on Long-Term Risk, although they focus mainly on research
  • Environmentalism
    • Giving money - Giving Green, Founders Pledge
    • Careers - Work On Climate has a very active Slack community but is only tangentially related to EA
    • Coordination - Effective Environmentalism, but it's only volunteer-run

There are lots of other causes that could be added here, and they often have even less field building infrastructure.

With fields that are already large, there are usually organisations doing some of this work, and it may not help to reinvent the wheel. It is still worth considering, though, whether there is value in coordinating the people interested in EA within a larger cause. For example, there are thousands of global development conferences, but none for EA & global development. I think there would be value in organising one, allowing people in EA to tackle the most important questions in the field, and giving people in global development a strong introduction to EA if it is their first such event.

 

If anyone is interested in tackling one of these gaps, I'd love to chat about it and see if there is a way I can help, just send me a message.

  1. ^

    I use field and cause interchangeably throughout.

  2. ^

    If this were done to scale, then the amount of money/people/organisations in global development would probably be hundreds or thousands of times bigger than in the other fields.

    Also, many interventions help in multiple fields, for example alt proteins affecting climate change, land use, etc. I haven't attempted to take that into account.

  3. ^

    This is a rough guess, represented by how much of the green funnel overlaps with the different fields.

  4. ^

    I mean an organisation that does some of the following: conferences and other events, online discussion spaces, supporting subgroups and organisers, outreach, community health, and connecting members of the network with each other.

Comments (3)

I found the concrete implications of distinguishing this more cause-oriented model of EA really useful, thanks!

I also agree, at least based on my own perception of the current cultural shift (away from GHD and farmed animal welfare, and towards longtermist approaches), that the most marginally impactful meta-EA opportunities might increasingly be in field-building.

When I was at EAG London this year, I noticed a fair amount of energy and excitement around AI Safety specific field building. I'm fairly keen on this, since a lot has to go right for AI safety to go well, and I think that is more likely when there are people specifically trying to develop a community that can achieve these goals, rather than having to compromise between satisfying the needs of different cause areas.

One thought I had: if there is an EA conference dedicated to a specific cause area, it might also be worthwhile having some booths related to other EA cause areas in order to address concerns about insularity.

I think this is a good idea. I feel there might be enough EA-adjacent activity around Progress Studies for this to be a field. I think Tom Westgarth was interested here too, and in London there is a small progress cluster.
