TL;DR: we should apply the same standards to movement-building proposals that we apply to scientific theories.

[I've written a lot about this topic previously for the CEA team, so I'll write a more easily digestible series of posts rather than a single over-long one. Also: my team is currently in the midst of inviting speakers to EA Global, so please pardon these posts for being patchy in terms of quality and detail.]

A motivation for mechanistic thinking

Recently, two posts have been written seemingly arguing for two very different types of EA movement. A post by Gleb argues for "celebrating all who are in effective altruism" while a post by Diogenes argues that EA should not shy away from prioritizing elite recruitment. This post series will argue that both Gleb and Diogenes articulate important sentiments, and that EA should probably encompass both. But more importantly, it will argue for the importance of mechanistic thinking for movement design. As a scientifically-minded movement, we should be skeptical of proposals for what kind of movement we ought to build if they lack detailed mechanisms for achieving the aims of EA.

The magician's mechanism

By analogy, imagine that you are approached in the street by a magician. They proceed to pull a rabbit out of a hat.

You say, "I too would like to pull rabbits out of hats. How did you do that?"

They reply, "Magic."

You ask, "But how does magic work?"

They reply, "There is a one-step process: believe."

If your goal is to pull rabbits out of hats, you should be suspicious of the magician's proposal. "Believe" doesn't seem like a complex or plausible enough mechanism to get a rabbit out of a hat. Indeed, the magician's mechanism sounds suspiciously similar to many startup business plans that follow a structure like:

Step one: Create app
Step two: ???
Step three: Profit

Mechanistic movement design

It's easy to fall into debates about EA branding, outreach, and infrastructure, particularly when both sides argue for solutions without addressing the following considerations:
  • What ultimate goal is the EA movement trying to achieve?
  • By what mechanisms can a movement plausibly achieve that ultimate goal?
  • How can we dial the sociological parameters of EA to instantiate those mechanisms?

In our case, the ultimate goal is something like "universal flourishing." So, movement design proposals should include an account of how some mechanism(s) bring us closer to that ultimate goal. If a proposal contains too many implicit pieces between "Step n" and "universal flourishing" that resemble "Step two: ???", then we should lower our confidence in it.

Mechanistic proposals for movement design will often contain descriptions of causal chains for plausibly moving us toward the goal. It's important to keep in mind the conjunction fallacy when evaluating such proposals. Yet, all else equal, we should sooner trust a proposal that includes plausible mechanisms for victory over one that doesn't. (Especially if it includes scenario-planning.)
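The conjunction point can be made concrete with a toy calculation (the step probabilities here are entirely hypothetical): even when every step in a proposed causal chain looks individually likely, the probability that the whole chain succeeds shrinks multiplicatively.

```python
# Toy illustration of the conjunction fallacy risk in causal-chain proposals.
# Each step probability below is hypothetical, chosen only for illustration.
step_probabilities = [0.9, 0.8, 0.85, 0.7, 0.75]  # every step looks "likely"

joint = 1.0
for p in step_probabilities:
    joint *= p  # the victory condition requires ALL steps to succeed

print(f"Each step is at least 70% likely, yet the full chain "
      f"succeeds with probability {joint:.2f}")  # -> 0.32
```

This is why a long, detailed causal chain should not automatically raise our confidence: added steps make a story more vivid but the conjunction less probable, so plausible mechanisms help only when each link is independently well-supported.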

In summary, it's important to realize that implicit in every movement-building proposal is a theory of sociology and human behavior. Given how badly a movement can go wrong (cf. communism), we should apply at least as much careful thought to building a movement as we do to building an airplane.

---

The ideal outcome of this post: When evaluating a proposal for EA movement design, ask, "By what concrete mechanism would this solution move us closer to universal flourishing?"

Below are examples of proposals for which one could ask, "What is the mechanism for getting to universal flourishing?"

"EA should...

  • ...be more like an elite network."
  • ...be more like a mass-movement."
  • ...have higher barriers to entry."
  • ...be more welcoming."
  • ...accelerate growth."
  • ...slow down growth."
  • ...become more diversity-conscious."
  • ...become less diversity-conscious."
  • ...appeal more to emotion."
  • ...appeal less to emotion."
And so on.


Comments (1)

Agreed. I think any proposal for shifting the status quo should ultimately answer the question of how it addresses human flourishing and describe the specific ways it proposes to get there. I would also add that, as a scientifically-minded movement, we should use evidence to support the effectiveness of those steps: research studies, Fermi estimates, comparisons to other movements, etc. Likewise, as a rationality-minded movement, we should take particular care to avoid biases such as confirmation bias, the conjunction fallacy, and in-group bias when evaluating these proposals, using debiasing strategies such as considering the alternative, probabilistic thinking, etc.
