“Expected Value Altruism” might be a less catchy name than Effective Altruism, but it would be a more precise and less antagonizing name for a movement whose most distinguishing characteristic is not effectiveness broadly, but an expected-value-calculating approach to making the world better.

Most people want to be effective at achieving their goals, even when those goals are to help others. So it doesn’t seem that a professed interest in effectiveness is what divides EAs from altruistic non-EAs. Part of the divide could be over how much emphasis to place on effectiveness, yet there are altruists who care a great deal about effectiveness and impact and still do not identify as EAs. The most apparent general disagreement between them and EAs would be, I think, over the methods for determining what altruistic actions to take. EAs and EA orgs claim to use expected value methodology occasionally or often to determine the course of their altruism, while non-EAs and non-EA orgs either do not use it at all or put much less emphasis on it. If this is what primarily distinguishes EAs from non-EA altruists, putting it in the name of the movement could help in a few ways.

Effective Altruists do attempt to be effective in their pursuit of doing good, so EA is not a total misnomer, but EAs have a specific expected-value-centric theory of effectiveness, and it might not be the only defensible one. Even if it were, plenty of people think there are other potentially effective approaches to altruism, and the name “effective altruism” begs the question against them by implicitly asserting what needs to be argued for: that expected value methodology is the most effective, or the only possibly effective, approach to altruism.

“Expected Value Altruism” doesn’t do this. It directly points to the group’s methodology without implicitly asserting that it is the best or only effective way to be altruistic. An Expected Value Altruist certainly could believe that expected value methodology is the only possibly effective approach to doing good, but the name itself doesn’t imply anything like that. And, because “Expected Value Altruism” immediately flags this potential source of disagreement between EVAs and their detractors, the self-identified Expected Value Altruists automatically put themselves in the position of having to defend the advantages of expected value approaches to altruism. Along with inspiring more reflection about—and a more honed ability to articulate—the advantages and disadvantages of applying expected value methodology to our effects on the world, this could also inspire more altruistic non-EVAs to think about and explain why they don’t see the need for expected value methodology.

In contrast, the name “effective altruism” potentially obscures the key methodology in dispute by burying it under the broader concept of “effectiveness.” That invites unnecessary confusion and even hostility, because pretty much everyone endorses some form of effectiveness in their pursuit of the good, and it can be offensive to think your opponent is calling you a champion of ineffectiveness.

Another advantage “Expected Value Altruism” has over “Effective Altruism” is that it is more accurate, and not only because it is more precise: all EAs should (and plenty do) recognize that even if expected value methodology were the best tool we have for trying to be effective, we can’t be sure it will actually lead to the most effective altruism. Effectiveness can be achieved without consciously pursuing it, and a conscious pursuit of effectiveness can have worse results than acting more intuitively. It could turn out that, by some unforeseeable chain of events, donating to the Make-a-Wish Foundation would have led to a better world than donating to the Against Malaria Foundation did, even though no plausible expected value calculation would ever suggest this. Perhaps expected value methodology is the best strategy we currently know of for trying to be effective, but this doesn’t mean that those who employ it will necessarily be the most effective or that those who don’t will necessarily be the least effective.
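
For readers unfamiliar with what such a calculation looks like, here is a minimal illustrative sketch in Python. The probabilities and “units of good” are invented placeholders, not real cost-effectiveness estimates; the two charities are named only because they appear in the paragraph above.

```python
# Toy expected value comparison of two donation options.
# All probabilities and values are invented placeholders, not real estimates.

def expected_value(outcomes):
    """Sum of probability * value over a list of (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

# Hypothetical outcome distributions for a $1,000 donation,
# expressed as (probability, units of good done).
against_malaria = [(0.9, 0.25), (0.1, 0.0)]  # program works as hoped, or doesn't
make_a_wish = [(1.0, 0.01)]                  # one wish partially funded

print(expected_value(against_malaria))  # 0.225
print(expected_value(make_a_wish))      # 0.01

# Note: a calculation like this cannot capture unforeseeable downstream effects.
```

The point of the sketch is only to show the shape of the methodology: multiply outcomes by their probabilities and compare the sums. Nothing in such a calculation captures the unforeseeable chains of events described above.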

Above-average effectiveness can be achieved accidentally; employing expected value methodology, however, cannot. “Effective Altruism” could be a misnomer if the movement backfires and destroys the world, or even if it improves the world less effectively than other approaches. It’s hard to see how “Expected Value Altruism” could ever be a misnomer unless EAs exaggerate how much they rely on expected value methodology or unless the majority of EAs who indeed use it later shift away from it. In short, the effectiveness of Effective Altruism is debatable. Much less debatable is that EAs and EA orgs (at least purport to) use expected value methodology to guide their altruistic actions.

Are there any possible downsides to rebranding Effective Altruism as Expected Value Altruism? I can think of a few. 

One is that it might boost commitment to expected value methodology despite whatever pitfalls that methodology might have, and it could also encourage performative expected value calculations that serve no purpose other than demonstrating one’s rightful place in Expected Value Altruism. However, I suspect there’s already a performative aspect to some of the expected value appeals in effective altruism. Plus, putting “Expected Value” in the name of the movement might prompt a more serious and recurring debate over how to determine the expected value of different possible actions in an increasingly chaotic world of black swans, unknown unknowns, and generally just too many potentially relevant factors to feasibly cram into such calculations. The renaming could lead to more humility about the reliability of applying expected value methodology to altruism rather than inflating its importance.
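
As a hedged illustration of that difficulty, the sketch below (again Python, with entirely made-up distributions) shows how wide the range of a cost-effectiveness estimate becomes once even a couple of uncertain inputs are modeled as distributions rather than point estimates.

```python
# Illustrative sketch of how input uncertainty propagates into an
# expected value estimate. The distributions are invented for illustration.
import random

def simulate_units_of_good(n=100_000, seed=0):
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        cost_per_unit = rng.lognormvariate(1.0, 0.75)  # uncertain cost per unit of good
        prob_success = rng.betavariate(2, 5)           # uncertain chance the program works
        samples.append(prob_success * 1_000 / cost_per_unit)  # units of good per $1,000
    return sorted(samples)

samples = simulate_units_of_good()
print("median:", round(samples[len(samples) // 2], 1))
print("5th-95th percentile:",
      round(samples[int(0.05 * len(samples))], 1), "to",
      round(samples[int(0.95 * len(samples))], 1))
```

The specific numbers don’t matter; what matters is that even a toy model with two uncertain inputs yields a broad range of answers, which is the kind of result that could encourage the humility described above.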

A second potential problem is that everyone has some concept of what effective means, but that’s not true of expected value, so the concept of “Expected Value Altruism” would tend to demand more up-front explanation than “Effective Altruism” does. But this is another advantage in disguise. The familiarity of “effective” is part of the problem with “Effective Altruism,” because it can promote a false sense of mutual understanding over what Effective Altruism entails and what it implies about non-EA approaches to altruism, which can lead to confusion and conflict.

The most serious problem I’ve thought of is the practical one: all the paperwork, and convincing everyone who is used to calling it “Effective Altruism” to call it “Expected Value Altruism” instead. I admit it’s a challenge. I still call CEEALAR “The EA Hotel.” My suggestion is for people who prefer “Expected Value Altruism” to call it that informally. Maybe it will eventually make sense to do an official renaming.

Comments

FYI: There has been extensive discussion on renaming EA before, quite a few people don't think the current name is ideal, but no-one else found a more convincing name either, so at this point it seems very unlikely to me that EA will be renamed.

Re "expected value", I agree that it might sound less antagonistic / arrogant, but even fewer people would have any idea what this is about, and while it may appeal to some (nerdy) people more, I think more people would find it less appealing. Open to be convinced otherwise. 
