Introduction
- I have some concerns about the "effective altruism" branding of the community.
- I recently posted them as a comment, and some people encouraged me to share them as a full post instead, which I'm now doing.
- I think this conversation is most likely not particularly useful or important to have right now, but there's some small chance it could be pretty valuable.
- This post is based on my personal intuition and anecdotal evidence. I would put more trust in well-run surveys of the right kinds of people or other more reliable sources of evidence.
"Effective Altruism" sounds self-congratulatory and arrogant to some people:
- Calling yourself an "altruist" is basically claiming moral superiority, and anecdotally, my parents and some of my friends didn't like it for that reason. People tend to dislike it if others are very public with their altruism, perhaps because they perceive them as a threat to their own status (see this article, or do-gooder derogation against vegetarians). Other communities and philosophies, e.g., environmentalism, feminism, consequentialism, atheism, neoliberalism, longtermism don't sound as arrogant in this way to me.
- Similarly, calling yourself "effective" also has an arrogant vibe, perhaps especially among professionals in relevant areas. E.g., during the Zurich ballot initiative, officials at the city of Zurich asked me, unprompted, why I consider them "ineffective", indicating that the EA label basically implied to them that they were doing a bad job. I've also heard other professionals in different contexts react similarly. Sometimes I also get sarcastic "aaaah, you're the effective ones, you figured it all out, I see" reactions.
"Effective altruism" sounds like a strong identity:
- Many people want to keep their identity small, but EA sounds like a particularly strong identity: It's usually perceived as a moral commitment, a set of ideas, and a community all at once. By contrast, terms like "longtermism" are somewhat weaker and more about the ideas per se.
- Perhaps partly because of this, at the Leaders Forum 2019, around half of the participants (including key figures in EA) said that they don’t self-identify as "effective altruists", despite self-identifying, e.g., as feminists, utilitarians, or atheists. I don't think the terminology was the primary concern for everyone, but it may play a role for several individuals.
- In general, it feels weirdly difficult to separate agreement with EA ideas from the EA identity. The way we use the term, being an EA or not is often framed as a binary choice, and it's often unclear whether one identifies as part of the community or agrees with its ideas.
Some further, less important points:
- "Effective altruism" sounds more like a social movement and less like a research/policy project. The community has changed a lot over the past decade, from "a few nerds discussing philosophy on the internet" with a focus on individual action to larger and respected institutions focusing on large-scale policy change, but the name still feels reminiscent of the former.
- A lot of people don't know what "altruism" means.
- "Effective altruism" often sounds pretty awkward when translated to other languages. That said, this issue also affects a lot of the alternatives.
- We actually care about cost-effectiveness or efficiency (i.e., impact per unit of resource input), not just about effectiveness (i.e., whether impact is non-zero). This sometimes leads to confusion among people who first hear about the term.
- Taking action on EA issues doesn't strictly require altruism. While I think it’s important that key decisions in EA are made by people with a strong moral motivation, involvement in EA should be open to a lot of people, even if they don’t strongly self-identify as altruists. Some may be mostly interested in contributing to the intellectual aspects without making large personal sacrifices.
- The name of CEA was determined through a careful process. However, the adoption of the EA label for the entire community happened organically and wasn't really a deliberate decision.
Some thoughts on potential implications:
- The longer-term goal is for the EA community to attract highly skilled students, academics, professionals, policy-makers, etc., and the EA brand might plausibly be unattractive for some of these people. If that's true, the EA brand might act as a cap on EA's long-term growth potential, so we should perhaps aim to de-emphasize it. Or at least do some marketing research on whether this is indeed an issue.
- EA organizations that have "effective altruism" in their name or make it a key part of their messaging might want to consider de-emphasizing the EA brand, and instead emphasize the specific ideas and causes more. I personally feel interested in rebranding "EA Funds" (which I run) to some other name partly for these reasons.
- I personally would feel excited about rebranding "effective altruism" to a less ideological and more ideas-oriented brand (e.g., "global priorities community", or simply "priorities community"), but I realize that others probably wouldn't agree with me on this, it would be a costly change, and it may not even be feasible anymore to make the change at this point. OTOH, given that the community might grow much bigger than it currently is, it's perhaps worth making the change now? I'd love to be proven wrong, of course.
Thanks to Stefan Torges and Tobias Pulver for prompting some of the above thoughts and helping me think about them in more detail.
This sounds very right to me.
Another way of putting this argument is that "global priorities (GP)" community is both more likable and more appropriate than "effective altruism (EA)" community. More likable because it's less self-congratulatory, arrogant, identity-oriented, and ideologically intense.
More appropriate (or descriptive) because it better focuses on large-scale change, rather than individual action, and ideas rather than individual people or their virtues. I'd also say, more controversially, that when introducing EA ideas, I would be more likely to ask the question: "how ought one to decide what to work on?", or "what are the big probl…
I really liked this comment, thanks!
The current discussion in the comments seems quite centered on "effective altruism vs. global priorities". I just wanted to highlight that I spent, like, 3 minutes in total thinking about alternative naming options, and feel pretty confident that there are probably quite a few options that work better than "global priorities". In fact, when renaming CLR, we only came up with the new name after brainstorming many options. So I would really like us to generate a list of >10 great alternatives (i.e. actually viable alternatives) before starting to compare them.
This seems like a really good point.
Off the top of my head, I think how we[1] should proceed is something like:
I think the first three actions can/should be done roughly in parallel, and that the fourth should mostly wait till we've done the first three. Or we might iterate through "first three actions, then fourth action, then first three actions again ..." a few times.
And I'd say this is best done through one or more well-run surveys, as you suggest. Maybe there could first be surveys that ask EAs to generate ideas for labels, goals/criteria, and broader approaches, then ask them to rate given ideas and approaches against given goals/criteria (or maybe that should be split into a followup survey). And then there could be surveys of non-EAs that just skip to that last step (since I imagine it'd be hard for them to come up with useful ideas without context first).
[1] I'm not sure who the relevant "we" is.
I think a name change might be good, but am not very excited about the "Global Priorities" name. I expect it would attract mostly people interested in seeking power and "having lots of influence" and I would generally expect a community with that name to be very focused on achieving political aims, which I think would be quite catastrophic for the community.
I actually considered this specific name in 2015 while I was working at CEA US as a potential alternative name for the community, but we decided against it at the time for reasons in this space (and because changing names seems hard).
While I'm not sure we're using terms like "political" and "power" in the same way, this worry makes a lot of sense to me.
However, I think there is an opposite failure mode: mistakenly believing that because of one's noble goals and attitudes one is immune to the vices of power, and can safely ignore the art of how to navigate a world that contains conflicting interests.
A key assumption from my perspective is that political and power dynamics aren't something one can just opt out of. There is a reason why thinkers from Plato through Machiavelli to Carl Schmitt have insisted that politics is a separate domain that merits special attention (and I'm saying this as someone who is not particularly sympathetic to any of these three on the object level). [ETA: Actually I'm not sure if Plato says that, and I'm confused why I included him originally. In a sense he may suggest the opposite view since he sometimes compares the state to the individual.]
Internally, community members with influence over more financial or social capital have power over those whose projects depend on such capital. There certainly are different views with respect to how this capital is b…
I agree that changing names is hard and costly (you can't do it often), something that definitely should be taken into account.
I'm noticing I don't fully understand the way in which you think "Global Priorities" would attract power-seekers, or what you mean by that. Like, I have a vague sense that you're probably right, but I don't see the direct connection yet. Would be very interested in more elaboration on this.
I mean, I just imagine what kind of person would be interested, and it would mostly be the kind of person who is ambitious, though not necessarily competent, and would seek out whatever opportunities or clubs are associated with the biggest influence over the world, sound highest-status, have the most prestige, or sound like they would be filled with the most powerful people. I have met many of those people, and a large fraction of high-status opportunities that don't also strongly select for merit seem filled with them.
Currently both EA and Rationality are weird in a way that is not immediately interesting to people who follow that algorithm, which strikes me as quite good. In universities, when I've gone to things that sounded like "Global Priorities" seminars, I mostly met lots of people with political science degrees or MBAs, really focusing on how they can acquire more power, with the whole conversation being very status-oriented.
I think this is a good point. That said, I imagine it's quite hard to really tell.
Empirical data could be really useful to get here. Online experimentation in simple cases, or maybe we could even have some University chapters try out different names and see if we can infer any substantial differences.
Interesting.
1) I'm convinced that a "GP" community would attract somewhat more power-seeking people. But they might be more likely to follow (good) social norms than the current consequentialist crowd. Moreover, we would be heading toward a more action-oriented and less communal group, which could reduce the attraction to manipulative people. And today's community is older and more BS-resistant with some legibly-trustworthy leaders. But you seem to think there would be a big and harmful net effect - can you explain?
2) Assuming that "GP" is too intrinsically political, can you think of any alternatives that have some of "GP"'s advantages without that disadvantage?
I'm concerned about people seeking power in order to mistreat, mislead, or manipulate others (cult-like stuff), as seems more likely in a social community, and less likely in a group of people who share interests in actually doing things in the world. I'm in favour of people gaining influence, all things equal!
I think the "global priorities" label fails to escape several of the problems that Jonas argued the EA brand has. In particular, it sounds arrogant for someone to say that they're trying to figure out global priorities. If I heard of a global priorities forum or conference, I'd expect it to have pretty strong links with the people actually responsible for implementing global decisions; if it were actually just organised by a bunch of students, then they'd seem pretty self-aggrandizing.
The "priorities" part may also suggest to others that they're not a priority. I expect "the global priorities movement has decided that X is not a priority" seems just as unpleasant to people pursuing X as "the effective altruism movement has decided that X is not effective".
Lastly, "effective altruism" to me suggests both figuring out what to do, and then doing it. Whereas "global priorities" only has connotations of the former.
Well, my default opinion is that we should keep things as they are; I don't find the arguments against "effective altruism" particularly persuasive, and name changes at this scale are pretty costly.
Insofar as people want to keep their identities small, there are already a bunch of other terms they can use - like longtermist, or environmentalist, or animal rights advocate. So it seems like the point of having a term like EA on top of that is to identify a community. And saying "I'm part of the effective altruism community" softens the term a bit.
This seems like the most important point to think about; relatedly, I remember being surprised when I interned at FHI and learned how many people there don't identify as effective altruists. It seems indicative of some problem, which seems worth pursuing directly. As a first step, it'd be good to hear more from people who have reservations about identifying as an effective altruist. I've just made a top-level question about it, plus an anonymous version - if that describes you, I'd be interested to see your responses!
Great comment. To these points I would also add (or maybe just summarize some of the points you made) that "global priorities" seems to have more empirical/world-focused connotations to me, whereas "effective altruism" sounds a lot more philosophical/ideological to me.
E.g. I agree that "global priorities" suggests questions like "what are the big challenges of our time?", which I like a lot more than e.g. "how altruistic should we be?", "is there something like 'true altruism'?" or whichever other thing "effective altruism" makes people first think of.
Of course, I agree that ultimately the project of doing as much good as we can involves both empirical and philosophical questions. But relative to today, I think we'd be better equipped to execute that project well with a stronger emphasis on empirical and practical questions and less emphasis on abstract philosophy. (Though to be fair to the EA label, the status quo is more due to founder effects rather than due to the name differentially attracting philosophers.)
It seems worth noting that all of those orgs/wikis are focused on producing or collecting research, not on more directly acting on the world. This is of course a key part of EA, but not the whole of it.
In line with that, I think that "global priorities", "global priorities community", or similar terms sound like they're mostly about working out what the global priorities are and less about actually acting on those answers. EA is already often perceived as too research-focused (though I'm not saying I agree with those perceptions myself), so it might be good to avoid things that would exacerbate that.
I like this style of thinking, but I don't think it pushes in the direction that you suggest. EA entities with "priorities" in the name disproportionately work on surveys and policy, whereas those with "EA" in the name tend to be communal or meta, e.g. EA Forum, EA Global, EA Handbook, and CEA. Groups that act in the world tend to have neither, like GWWC, AMF, OpenAI.
On balance, I think "global priorities" connotes more concreteness and action-orientation than "EA", which is more virtue- and identity- oriented. If I was wrong on this, it would partly convince me.
A friend's "names guy" once suggested calling the EA movement "Unfuck the world"...
We can begin here.
I was just reflecting on the term 'global priorities'. I think to me it sounds like it's asking "what should the world do", in contrast to "what should I do". The former is far mode, the latter is near. I think that staying near mode while thinking about improving the world is pretty tough. I think when people fail, they end up making recommendations that could only work in principle if everyone coordinates at the same time, and as a result shape their speech to focus on signaling to achieve these ends, and often walk off a cliff of abstraction. I think when people stay in near mode, they focus on opportunities that do not require coordination, but opportunities they can personally achieve. I think that EAs caring very much about whether they actually helped someone with their donation has been one of the healthier epistemic things for the community. Though I do not mean to argue it should be held as a sacred value.
For example, I think the question "what should the global priority be on helping developing countries" is naturally answered by talking broadly about the West helping Africa build a thriving economy, talk about political revolution to remove corruption in governments, …
I kinda think that "I'm an EA/he's an EA/etc" is mega-cringey (a bad combo of arrogant + opaque acronym + tribal), and that deprecating it is a feature, rather than a bug.
Though you can just say "I'm interested in / I work on global priorities / I'm in the prioritisation community", or anything that you would say about the AI safety community, for example.
It sounds like you think it’s bad that people have identified their lives with trying to help people as much as they can? Like, people like Julia Wise and Toby Ord shouldn’t have made it part of their life identity to do the most good they can do. They shouldn’t have said “I’m that sort of person” but they should have said “This is one of my interests”.
Yeah, I'm much more sympathetic to concerns with "effective altruist" than with "effective altruism", and it doesn't seem like GP does any better in that regard – all the solutions you could apply here ("I'm a member of the global priorities community", "I'm interested in global priorities") also apply to EA.
Maybe the fact that the short forms are so awkward for GP is part of the idea? Like, EA has this very attractive but somewhat problematic personalised form ("effective altruist"); GP's personalised forms are all unattractive, so you avoid the problematic attractor?
But it still seems that, if personalised forms are a big part of the concern (which I think they are), this is a good argument in favour of keeping looking. Which was Jonas's proposal anyway.
(Or, of course, we could cut the arrogance down by just saying "I'm an early-career aspiring global priority.")
I asked my team about this, and Sky provided the following information. This quarter CEA did a small brand test, with Rethink’s help. We asked a sample of US college students if they had heard of “effective altruism.” Some respondents were also asked to give a brief definition of EA and a Likert scale rating of how negative/positive their first impression was of “effective altruism.”
Students who had never heard of “effective altruism” before the survey still had positive associations with it. Comments suggested that they thought it sounded good - effectiveness means doing things well; altruism means kindness and helping people. (IIRC, the average Likert scale score was 4+ out of 5). There were a small number of critiques too, but fewer than we expected. (Sorry that this is just a high-level summary - we don't have a full writeup ready yet.)
Caveats: We didn't test the name “effective altruism” against other possible names. Impressions will probably vary by audience. Maybe "EA" puts off a small-but-important subsection of the audience we tested on (e.g. unusually critical/free-thinking people).
I don't think this is dispositive - I think that testing other brands might still be a good idea. We're currently considering trying to hire someone to test and develop the EA brand, and help field media enquiries. I'm grateful for the work that Rethink and Sky Mayhew have been doing on this.
I wonder if there would be a strong difference between "What do you think of a group/concept called 'effective altruism'", "Would you join a group called 'effective altruism'", "What would you think of someone who calls themselves an 'effective altruist'", "Would you call yourself an 'effective altruist'".
I wonder which of these questions is most important in selecting a name.
I agree there are a lot of things that are nonideal about the term, especially the connotations of arrogance and superiority.
However, I want to defend it a little:
I feel less sure this is more true of EA than of other terms, at least wrt the community aspect. I think the reason some terms don't seem to imply a community is that there isn't [much of] one. But insofar as we want to keep the EA community, and I think it's very valuable and that we should, changing the term won't shrink the identity associated with it along that dimension. I guess what I'm saying is: I'd guess the largeness of the identity associated with EA is not that related to the term.
Empirical research on people's responses to the term (and alternative terms) certainly seems valuable, and important to do before any potential rebrand.
Anecdotally, I find that people hate reference to "priorities" or "prioritising" as much or more than they hate "effective altruism." Referring to specific "global priorities" quite overtly implies that other things are not priorities. And terminology aside, I find that many people outright oppose "prioritisation" in the field of philanthropic or pro-social endeavours for roughly this reason: it's rude/inappropriate to imply that certain good things that people care about are more important than others. (The use of the word "global" just makes this even worse: this implies that you don't even just think that they are local or otherwise particular priorities, but rather that they are the priorities for everyone!)
To some extent, I think that what those who dislike effective altruism dislike isn't that term, but rather the set of ideas it expresses. As such, replacing it with another term that's supposed to express broadly the same set of ideas (like "priorities" or "global priorities") might make less of a difference than one might think at first glance (though it likely makes some difference).
What might make a greater difference, for better or worse, is choosing a term that expresses a quite different set of ideas. E.g. I think that people have substantially different reactions to the term "longtermism".
+1. A short version of my thoughts here is that I’d be interested in changing the EA name if we can find a better alternative, because it does have some downsides, but this particular alternative seems worse from a strict persuasion perspective.
Most of the pushback I feel when talking to otherwise-promising people about EA is not really as much about content as it is about framing: it’s people feeling EA is too cold, too uncaring, too Spock-like, too thoughtless about the impact it might have on those causes deemed ineffective, too naive to realise the impact living this way will have on the people who dive into it. I think you can see this in many critiques.
(Obviously, this isn’t universal; some people embrace the Spock-like-mindset and the quantification. I do, to some extent, or I wouldn’t be here. But I’ve been steadily more convinced over the years that it’s a small minority.)
You can fight this by framing your ideas in warmer terms, but it does seem like starting at ‘Global Priorities community’ makes the battle more uphill. And I find losing this group sad, because I think the actual EA community is relatively warm, but first impressions are tough to overcome.
Low confidence on all of the above, would be happy to see data.
I still think "effective altruism" sounds a bit more like we've already found the correct answer to "what should we prioritize" rather than just being interested in the question, but I agree these are some good points.
It seems like EA could benefit from a dedicated, evidence-based messaging consultancy that served all EA orgs.
Rethink Priorities is pretty close to this! We've done message testing now for many orgs across cause areas: Centre for Effective Altruism, Will MacAskill, Open Phil, the Centre for the Study of Existential Risk, the Humane Society of the United States, The Humane League, Mercy for Animals, and various EA-aligned lobbyists. We have a lot of skills and resources to do this well and already have a well-built pipeline for producing this kind of work.
We'd be happy to consider doing more work for other people in EA and the EA movement as a whole!
Reading this thread, I sort of get the impression that the crux here is between people who want EA to be more institutional (for which purpose the current name is kind of a problem) and people who want it to be more grassroots (for which purpose the current name works pretty okay).
There are other issues with the current name, like the thing where it opens us up to accusations of hypocrisy every time we fail to outperform anyone on anything, but I'm not sure that that's really what's driving the disagreement here. Partly, this is because people have tried to come up with better names over the years (though not always with a view towards driving serious adoption of them; often just as an intellectual exercise), and I don't think any of the candidates have produced widespread reactions of "oh yeah I wish we'd thought of that in 2012", even among people who see problems with the current name. So coming up with a name that's better than "effective altruism", by the lights of what the community currently is, seems like a pretty hard problem. (Obviously this is skewed somewhat by the inertia behind the current name, but I don't think that fully explains what's going on here.) When people …
This is a discussion that has happened a few times. I do think that 'global priorities' has already grown as a brand enough to be seriously considered for wider use, and perhaps even as the main term for the movement.
I'd still be reluctant to ditch 'effective altruism' entirely. There is an important part of the original message of the movement (cf pond analogy) that's about asking people to step up and give more (whether money or time) - questioning personal priorities/altruism. I think we've probably developed a healthier sense of how to balance that ('altruism/life balance') but it feels like 'global priorities' wouldn't cover it.
This is an excellent point. I "joined" EA because of the pond idea. I found the idea of helping a lot of people with the limited funds I could spare really appealing, and it made me feel like I could make a real difference. I didn't get into EA because of its focus on global prioritization research.
Of course, what I happened to join EA because of is not super important, but I wonder how others feel. Like EA as a "donate more to AMF and other effective charities" is a really different message than EA as "research and philosophize about what issues are really important/neglected."
I'm not sure which EA is anymore, and changing the name to global priorities might change the movement from the Doing Good Better movement to the "Case for Strong Longtermism" movement and those are very different. But I'm very uncertain on which EA will/should end up as.
I think I disagree with this given what the community currently looks like. (This might not be the best place to get into this argument, since it's pretty far from the original points you were trying to make, but here we go.)
Two points of disagreement:
i) The EA Survey shows that current donation rates by EAs are extremely low. From this I conclude that there is way too little focus on personal donations within the EA community. That said, if we get some of the many EAs who are donating very little to work on the suggestions you mention, that is plausibly a net improvement, as the donation rates are so low anyway.
Relatedly, personal donations are one of the few things that everyone can do. In the post, you write that "The longer-term goal is for the EA community to attract highly skilled students, academics, professionals, policy-makers, etc.", but as I understand the terms you…
Edit: I think my below comment kind of misses the point – my main response is simply: Some people could probably do a huge amount of good by, e.g., helping increase meat alternatives R&D budgets, this seems a much bigger opportunity than increasing donations and similarly tractable, so we should focus more on that (while continuing to also increase donations).
--
Some quick thoughts:
- I personally think the EA community could plausibly grow 1000-fold compared to its current size, i.e. to 2 million people, which would correspond to ~0.1% of the Western population. I think EA is unlikely to be able to attract >1% of the (Western and non-Western) population primarily because understanding EA ideas (and being into them) typically requires a scientific and prosocial/altruistic mindset, advanced education, and the right age (no younger than ~16, not old enough to be too busy with lots of other life goals). Trying to attract >1% of the population would in my view likely lead to a harmful dilution of the EA community. We should decide whether we want to grow more than 1000-fold once we've grown 100-fold and have more information.
- Low donation rates indeed feel concerning. To me, the l…
Thanks for stating your view on this, as I would guess this will be a crux for some.
FWIW, I'm not sure if I agree with this. I certainly agree that there is a real risk from 'dilution' and other risks from both too rapid growth and a too large total community size.
However, I'm most concerned about these risks if I imagine a community that's kind of "one big blob" without much structure. But that's not the only strategy on the table. There could also be a strategy where the total community is quite large but there is structure and diversity within the community regarding what exactly 'being an EA' means for people, who interacts with whom, who commands how many resources, etc.
I feel like many other professional, academic, or politi…
A small and simple change that CEA could make is to un-bold the 'Effective' in their 'Effective Altruism' logo, which is used on https://www.effectivealtruism.org/ and EAG t-shirts.
To me, the bolding comes across as unnecessarily smug emphasis in Effective Altruism.
It's not just that it has developed in that direction, it has developed in many directions. Could the solution then be to use different brands in different contexts? "Global priorities community" might work better than "Effective Altruism community" when doing research and policy advocacy, but as an organizer of a university group, I feel like "Effective Altruism" is quite good when trying to help (particularly smart and ambitious) individuals do good effectively. For example, I don't think a "Global priorities fellowship" sounds like something that is supposed to be directly useful for making more altruistic life choices.
Outreach efforts focused on donations and aimed at a wider audience could use yet another brand. In practice it seems like Giving What We Can and One for the World already play this role.
Thanks for writing this, Jonas.
For what it's worth:
While I think this post was useful to have shared and this is a topic that is worth discussing, I want to throw out a potential challenge that seems at least worth considering: perhaps the name "effective altruism" is not the true underlying issue here?
My (subjective, anecdotal) experience is that topics like this crop up every so often. By topics "like this", I mean things like:
I wonder if some of what is underpinning these discussions is less the accuracy or branding issues of particular names and more the difficulty of coordinating a growing community?
As the number of people interested in the ideas associated with effective altruism grows, more people enter the space with different values and interpretations of the various ideas. It becomes harder for everyone to get what they wanted from the community and less likely th...
New post that's related to this (just discovered it now): https://forum.effectivealtruism.org/posts/o5ChDMcooDFG8cfPJ/why-i-prefer-effective-altruism-to-global-priorities
Thanks, I think the antipathy some people feel towards the name "Effective Altruism", or worse, "I'm an effective altruist", is difficult to overstate.
Also, somewhat related to what you write, it occurred to me just today that I (and most of us) am just as much an "effective egoist" as an effective altruist: even the holiest of us probably cannot help putting a significantly higher weight on our own welfare than on that of average strangers.
Nevertheless, there is a potential upside to the current term (equally, I'm not sure it matters much at all, but I see a small chance of it being really important): if some people are kept away by the name's somewhat geeky, unfashionable connotations, maybe these are exactly the people who would mostly have been distractors anyway. I think the somewhat narrow EA community has an extraordinary vibe along a few really important dimensions, and that seems invaluable. (In that sense, while RyanCarey mentions we may not attract the core audience with different names, I suspect the problem might be the other way round: we might simply dilute the core.)
Maybe I'm completely overestimating this, and maybe it doesn't outweigh the downside of attracting and appealing to fewer people. But in a world where a lack of fruitful communication threatens entire social systems, maybe having a particularly strong core in that regard is highly valuable.
Small note that this could also be counterevidence -
these are folks who are doing a good job of "keeping their identity small" yet are also interested in gathering under the "effective altruism" banner. (Edit: never mind, it seems they identified with other -isms.) Somehow the EA brand is threading the needle of being a banner while also not mind-killing people... I think.
Would EA be much worse if we removed the 'banner' aspect of it? I don't know... it feels like we're running an experiment on whether it's possible to nurture and grow global-prioritist qualities in the world (in people who might not otherwise have done much global prioritism, without a banner/community to help them get started). It's not clear that we're done with that experiment; if anything, initial results look promising from where I'm sitting. So my initial thought is that I don't quite want to remove the banner variable yet (though maybe "Global Priorities" could keep that variable).
Your comments in this section suggest to me there might be something going on where EA is only appealing within some particular social context. Maybe it's appealing within WEIRD culture, and the further you get from peak WEIRD the more objections there are. Alternatively maybe there's something specific to northern European or even just Anglo culture that makes it work there and not work as well elsewhere, translation issues aside.
A friend (edit: Ruairi Donnelly) raised the following point, which rings true to me:
If you mention EA in a conversation with people who don't know about it yet, it often derails the conversation in unfruitful ways, such as discussing the person's favorite pet theory/project for changing the world, or discussing whether it's possible to be truly altruistic. It seems 'effective altruism' causes people to ask the wrong questions.
In contrast, concepts like 'consequentialism', 'utilitarianism', 'global priorities', or 'longtermism' seem to lead to more fruitful conversations, and the complexity feels more baked into the framing.
This makes a lot of sense to me if branding is capping donations, especially for the neartermist funds (and, if you create a legible LTF fund, for that one as well).
How big of a priority is it for the EA Funds plan to grow the donor base to non-EA donors, and on what time scale?
Great post.
Has this debate evolved since? Did anyone try proposing the 10 names?
I like "efficient altruism"; it drops the smugness a bit.
"Neoutilitarianism" could also make sense, but maybe someone who understands EA better than I do could point out the differences between what EA has been and utilitarianism.
Changing the name now, after 10 years, could be really difficult, but the best time is as soon as possible. It's also difficult because EA isn't a single organization or an exact philosophy with one person behind it.
I usually say "I admire/follow the Effective Altruism community" rather than saying I am an Effective Altruist.