
People doing meta-EA sometimes jokingly frame their work as “I want to indoctrinate people into EA, sort of like what cults do, but don’t worry! Haha! What we do is fine because EA isn’t a cult.”

I think this is a harmful mindset. When thinking about how to get more people to be EA-aligned, I think skill mastery is a better model than cult indoctrination. 

Skill mastery often has these attributes: 

  • Repeated deliberate practice.
  • “Thinking about it in the shower”, i.e. thinking about it without much effort.
  • “Gears-level understanding”, i.e. knowing the foundations of the skill and understanding how all the pieces relate.

Cult indoctrination, by contrast, is about gaslighting people into believing that there can be no other truth than X. Cults do this by repeating talking points and creating closed social circles.

Accordingly, when thinking about how to get more people to be EA-aligned, here are some good questions to ask: 

  • Can we build structures which enable repeated deliberate practice of EA? Good examples are intro fellowships, book recommendations, and club meetings. Are there more?
  • Can we get people to “think about EA in the shower”? One way to improve this could be to provide better-written reading materials which pose questions which are amenable to shower thoughts. 
  • Can we encourage more “gears-level understanding” of EA concepts? For example, emphasize the reasons behind x-risk calculations rather than their conclusions. 

It is also probably a bad idea for EA to resemble a cult, because cults have bad epistemics. Accordingly, here are some paths to avoid going down: 

  • Repeating talking points: when discussing EA topics with a skeptical non-EA, don’t repeat standard EA talking points if they’re not resonating. It is useless to say “AI Safety is a pressing problem because superintelligent AGI may pose existential risk” to someone who does not believe superintelligent AGI could ever exist. Instead, you can have a more intellectually honest conversation by first understanding their current worldview and model of AI, and building from there. In other words, it is important to adopt good pedagogy: build from the student's foundations rather than instructing them to memorize isolated facts.
  • Closed social circles: for example, in the setting of a university group, it is probably a bad idea to create an atmosphere where people new to EA feel out of place. 

The central idea here is that promoting gears-level understanding of EA concepts is important. Gears-level understanding often has repeated deliberate practice and shower thoughts as prerequisites, so skill mastery and gears-level understanding are closely related goals.

I would rather live in a world with people who have their own sound models of x-risk and other pressing problems, even if they substantially differ from the standard EA viewpoint, than a world of people who are fully on board with the standard EA viewpoints but don’t have a complete mastery of the ideas behind them.

Summary: People who try to get more people to be EA-aligned often use techniques associated with cult indoctrination, such as repeating talking points and creating closed social circles. Instead, I think it is more useful to think about EA-alignment as a skill that a person can master. Accordingly, good techniques to employ are repeated deliberate practice, "thinking about it in the shower", and promoting gears-level understanding. 

Comments (8)
Mau

Thanks! Seems like a useful perspective. I'll pick on the one bit I found unintuitive:

Summary: People who try to get more people to be EA-aligned often use techniques associated with cult indoctrination, such as repeating talking points and creating closed social circles.

In the spirit of not repeating talking points, could you back up this claim, if you meant it literally? This would be big if true, so I want to flag that:

  • You state this in the summary, but as far as I can see you don't state/defend it anywhere else in the post. So people just reading the summary might overestimate the extent to which the post argues for this claim.
  • I've seen lots of relevant community building, and I more often see the opposite: people being such nerds that they can't help themselves from descending into friendly debate, people being sufficiently self-aware that they know their unintuitive/unconventional views won't convince people if they're not argued for, and people pouring many hours into running programs and events (e.g. dinners, intro fellowships, and intro-level social events) aimed at creating an open social environment.

(As an aside, people might find it interesting to briefly check out YouTube videos of actual modern cult tactics for comparison.)

When I say "repeating talking points", I am thinking of: 

  1. Using cached phrases and not explaining where they come from. 
  2. Conversations which go like
    • EA: We need to think about expanding our moral circle, because animals may be morally relevant. 
    • Non-EA: I don't think animals are morally relevant though.
    • EA: OK, but if animals are morally relevant, then quadrillions of lives are at stake.

(2) is kind of a caricature as written, but I have witnessed conversations like these in EA spaces. 

My evidence for this claim comes from my personal experience watching EAs talk to non-EAs, and listening to non-EAs talk about their perception of EA. The total number of data points in this pool is ~20. I would say I don't have exceptionally many EA contacts compared to most EAs, but I do make a particular effort to seek out social spaces where non-EAs are looking to learn about EA. Thinking back on these experiences, and on which conversations went well and which didn't, is what inspired me to write this short post.

Ultimately, my anecdotal data can't support any statistical claims about the EA community at large. The purpose of this post is more to describe two mental models of EA alignment and to advocate for the "skill mastery" perspective.

Mau

I think both (1) and (2) are sufficiently mild/non-nefarious versions of "repeating talking points" that they're very different from what people might imagine when they hear "techniques associated with cult indoctrination"--different enough that the latter phrase seems misleading.

(E.g., at least to my ears, the original phrase suggests that the communication techniques you've seen involve intentional manipulation and are rare; in contrast, (1) and (2) sound to me like very commonplace forms of ineffective (rather than intentionally manipulative) communication.)

(As I mentioned, I'm sympathetic to the broader purpose of the post, and my comment is just picking on that one phrase; I agree with and appreciate your points that communication along the lines of (1) and (2) happen, that they can be examples of poor communication / of not building from where others are coming from, and that the "skill mastery" perspective could help with this.)

Many domains that people tend to conceptualize as "skill mastery, not cult indoctrination" also have some cult-like properties like having a charismatic teacher, not being able to question authority (or at least, not being encouraged to think for oneself), and a social environment where it seems like other students unquestioningly accept the teachings. I've personally experienced some of this stuff in martial arts practice, math culture, and music lessons, though I wouldn't call any of those a cult.

Two points this comparison brings up for me:

  • EA seems unusually good compared to these "skill mastery" domains in repeatedly telling people "yes, you should think for yourself and come to your own conclusions", even at the introductory levels, and also just generally being open to discussions like "is EA a cult?".
  • I'm worried this post will be condensed into people's minds as something like "just conceptualize EA as a skill instead of this cult-like thing". But if even skill-like things have cult-like elements, maybe that condensed version won't help people make EA less cult-like. Or maybe it's actually okay for EA to have some cult-like elements!

Hi :) I'm surprised by this post. Doing full-time community building myself, I have a really hard time imagining that any group (or sensible individual) would use these 'cult indoctrination techniques' as strategies to get other people interested in EA.

Was wondering if you could share anything more about specific examples / communities where you have found this happening? I'd find that helpful for knowing how to relate to this content as a community builder myself! :-) 


(To be clear, I could imagine repeating talking points and closed social circles happening as side effects of other things - more specifically, of individuals often not being that good at recognizing what a good argument is and therefore repeating something that seems salient to them, and of people naturally forming social circles with people they get along with. My point is that I find it hard to believe that any of this would be deliberate enough that this kind of criticism really applies! Which is why I'd find examples helpful - to know what we're specifically speaking about :) ) 

I should clarify—I think EAs engaging in this behavior are exhibiting cult indoctrination behavior unintentionally, not intentionally. 

One specific example would be in my comment here.

I also notice that when more experienced EAs talk to new EAs about x-risk from misaligned AI, they tend to present an overly narrow perspective. Sentences like "Some superintelligent AGI is going to grab all the power and then we can do nothing to stop it" are thrown around casually, without stopping to examine the underlying assumptions. Newer EAs then repeat these cached phrases without having carefully formed an inside view, and the movement ends up with worse overall epistemics.

Here is a recent example of an EA group having a closed-off social circle, to the point where a person who actively embraces EA has difficulty fitting in.

Haven't read the whole post yet but the start of Zvi's post here lists 21 EA principles which are not commonly questioned. 

I am not going to name the specific communities where I've observed culty behavior, because this account is pseudonymous.

"creating closed social circles"

Just on this: my impression is that more senior people in the EA community actively recommend not closing your social circle because, among other reasons, it's more robust to have a range of social supports from separate groups of people, and it's better epistemically not to hang out exclusively with people who already share your views on things.

Insofar as people's social circles do shrink, I don't think it's due to guidance from leaders (as it would be in a typical cult), but rather because people naturally find it more fun to socialise with people who share their beliefs and values, even if they think that's not in their long-term best interest.

I like the "skill-mastery" framing, and have both "think about it in the shower" and "gears-level mastery" as orientations in my thinking. I didn't have deliberate practice cached as much, nor the cluster of the three, but I think it's good and reminds me of the way the rationality community talks about the need for rationalist dojos and practice to actually become more rational.
