
The Effective Altruism website defines EA as: "We use evidence and careful analysis to find the very best causes to work on." The Introduction to Effective Altruism post in our forum also says: "It is a research field which uses high-quality evidence and careful reasoning to work out how to help others as much as possible."

So I guess this is more or less considered the definition of EA. But as I read more about EA, I am beginning to feel like this definition may be insufficient. It looks like EA's focus splits across two schools of thought: evidence-based giving and hits-based giving. But this definition seems like it is all about evidence-based giving. It feels like the 'GiveWell-ness' of it all is represented, but what about the 'OpenPhil-ness'?

This exclusion of hits-based giving from the definition seems problematic, since 80000hours.org (one of the top five ways through which people actually find EA) considers expected value thinking (the foundation of hits-based giving, if I understand it correctly) one of the key ideas of EA. But then you look at the definition and it is not really there. In addition, the incompleteness of the definition could also make it difficult for someone to see why EA does global catastrophic risk (GCR) work, in my opinion. Please correct me if I am wrong, but it feels like GCRs don't necessarily have high-quality evidence for why we should work on them; expected value thinking is what really makes them worth it.

UPDATE:

I had only mentioned two sources of definitions above, but there could be more that I have missed. If you know of others, please mention them in the comments/answers and I will add them to this list:

  1. Defining Effective Altruism by William_MacAskill. Thanks to Davidmanheim for bringing this up in the answer here. The definition given in Will's post is:

Effective altruism is: (i) the use of evidence and careful reasoning to work out how to maximize the good with a given unit of resources, tentatively understanding ‘the good’ in impartial welfarist terms, and (ii) the use of the findings from (i) to try to improve the world.



3 Answers

One, I'd argue that hits-based giving is a natural consequence of working through what using "high-quality evidence and careful reasoning to work out how to help others as much as possible" really means, since that statement doesn't say anything about excluding high-variance strategies. For example, many would say there's high-quality evidence about AI risk, lots of careful reasoning has been done to assess its impact on the long-term future, and many have concluded that working on such things is likely to help others as much as possible, though we may not be able to measure that help for a long time and we may make mistakes.

Two, it's likely a strategic choice not to be in-your-face about high-variance giving strategies, since they seem pretty weird to most people. EA orgs have chosen to develop a public brand that is broadly appealing and not controversial on the surface (even if EA ends up courting controversy anyway because of its consequences for opportunities we judge to be relatively less effective than others). The definitions of EA you point to seem in line with this.

Googling, I primarily find the term "high-quality evidence" in association with randomised controlled trials. I think many would say there isn't any high-quality evidence regarding, e.g. AI risk.

Davidmanheim: Agreed - see my answer which notes that Will suggested a phrasing that omits "high-quality."
  1. The point about "working through what it really means" is very interesting (more on this below). But when I read "high-quality evidence and careful reasoning", it doesn't really engage the curious part of my brain to work out what that really means. All of those are words I have already heard, and it feels like standard phrasing. When one isn't encouraged to actually work through that definition, it does feel like it is excluding high-variance strategies. I am not sure if you feel this way, but "high-quality evidence" to my brain just says empirical evidence

...

First, I don't think that's the best "current" definition. More recently (two years ago), Will proposed the following:
 

Effective altruism is:

(i) the use of evidence and careful reasoning to work out how to maximize the good with a given unit of resources, tentatively understanding ‘the good’ in impartial welfarist terms, and

(ii) the use of the findings from (i) to try to improve the world.


But Will said he's "making CEA’s definition a little more rigorous," rather than replacing it. I think the key reason to allow hits-based giving in both cases is the word "and" in the phrase "...evidence and careful reasoning." (Note that Will omits "high-quality" before "evidence", I suspect for the reason you suggested. I would argue that for a Bayesian, high-quality evidence doesn't require an RCT, but that's not the colloquial usage, so I agree Will's phrasing is less likely to mislead.)

And to be fair to the original definition, careful reasoning is exactly the justification for expected value thinking. Specifically, careful reasoning leads to favoring 20 "hits-based" donations to high-risk-of-failure causes, where in expectation 10% of them end up with a cost per QALY of $5 and the rest end up useless, over a single donation 20x as large to an organization we are nearly certain has a cost per QALY of $200.
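To make that expected value comparison concrete, here is a minimal sketch in Python using the hypothetical figures above (the absolute donation size is an arbitrary assumption; only the ratios matter):

```python
# Illustrative expected-value comparison using the hypothetical figures above.
# The absolute donation size is an arbitrary assumption; only the ratios matter.

donation = 1_000             # size of each hits-based donation (in dollars)
n_donations = 20             # number of high-risk donations
p_hit = 0.10                 # chance any one donation "hits"
cost_per_qaly_hit = 5        # $/QALY for a successful hit
cost_per_qaly_certain = 200  # $/QALY for the near-certain organization

total_spend = donation * n_donations  # same total for both strategies

# Hits-based portfolio: in expectation 10% of the money lands at $5/QALY.
expected_qalys_hits = (p_hit * total_spend) / cost_per_qaly_hit

# Single "safe" donation 20x as large, with a known $200/QALY.
expected_qalys_certain = total_spend / cost_per_qaly_certain

print(expected_qalys_hits)     # 400.0
print(expected_qalys_certain)  # 100.0 -> the risky portfolio wins ~4:1 in expectation
```

The numbers are made up, of course; the point is just that careful reasoning over expected value can favor the high-variance portfolio even though any individual donation in it will probably fail.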

Thanks for bringing up Will's post! I have now updated the question's description to link to that.

I actually like Will's definition more. The reason is two-fold:

  1. Will's definition adds a bit more mystery, which makes me curious to actually work out what all the words mean. In fact, I would add this to the list of "principal desiderata for the definition" the post mentions: The definition should encourage people to think about EA a bit deeply. It should be a good starting point for research.
  2. Will's definition is not radically different from what is already
...
Davidmanheim: I actually disagree with your definition. Will's definition allows for debate about what counts as evidence and careful reasoning, and whether hits-based giving or focusing on RCTs is a better path. That ambiguity seems critical for capturing what EA is: a project still somewhat in flux and one that allows for refinement, rather than claiming there are two specific different things.

A concrete example* of why we should be OK with leaving things ambiguous is considering ideas like the mathematical universe hypothesis (MUH). Someone can ask: "Should the MUH be considered as a potential path towards non-causal trade with other universes?" Is that question part of EA? I think there's a case to make that the answer is yes (in my view, correctly), because it is relevant to the question of revisiting the "tentatively understanding" part of Will's definition.

*In the strangest sense of "concrete" I think I've ever used.
Venkatesh: I both agree and disagree with you.

Agreements:

  * I agree that the ambiguity over whether giving in a hits-based way or an evidence-based way is better is an important aspect of current EA understanding. In fact, I think this could be a potential 4th point (I mentioned a third one earlier) to add to the definition desiderata: the definition should hint at the uncertainty in current EA understanding.
  * I also agree that my definition doesn't bring out this ambiguity. I am afraid it might even be doing the opposite! The general consensus is that both the experimental and theoretical parts of the natural sciences are equally important and must be done. But I guess EAs are actually unsure whether evidence-based giving and careful-reasoning-based (hits-based) giving should both be done, or whether we would be doing more good by just focusing on one. I should possibly read up more on this. (I would appreciate it if any of you can DM me any resources you have found on this.) I just assumed EAs believed both must be done. My bad!

Disagreement: I don't see how Will's definition allows for debating said ambiguity, though. As I mentioned in my earlier comment, I don't think the definition distinguishes between the two schools of thought enough. As a consequence, I also don't think it shows the ambiguity between them. I believe a conflict (aka ambiguity) requires at least two things, but the definition doesn't convincingly show there are two things in the first place, in my opinion.

I think this excerpt from the Ben Todd episode on the core of effective altruism (80k podcast) sort of answers your question:

Ben Todd: Well yeah, just quickly on the definition, my definition didn’t have “Using evidence and reason” actually as part of the fundamental definition. I’m just saying we should seek the best ways of helping others through whatever means are best to find those things. And obviously, I’m pretty keen on using evidence and reason, but I wouldn’t foreground it.

Arden Koehler: If it turns out that we should consult a crystal ball in order to find out if that’s the best way, then we should do that?

Ben Todd: Yeah.

Arden Koehler: Okay. Yeah. So again, very abstract: whatever it is that turns out to be the best way of figuring out how to do the most good.

Ben Todd: Yeah. I mean, in general, you have this just big question of how narrow or broad to make the definition of effective altruism and it is a difficult thing to say.

I don't think this is an "official definition" (for example, endorsed by CEA), but I think (or at least hope!) that CEA is working out a more complete definition for EA.

Thanks for linking to the podcast! I hadn't listened to this one before and ended up listening to the whole thing and learnt quite a bit.

I just wonder if Ben actually had some other means in mind, other than evidence and reasoning. Do we happen to know what he might be referencing here? I recognize it could just be him being humble and feeling that future generations could come up with something better (like awesome crystal balls :-p). But in case something else other than evidence and reason is actually already out there, I find it really important to know.

Prabhat Soni: Yeah, I agree. I don't have anything in mind as such. I think only Ben can answer this :P
3 Comments

You could run a survey on which school of thought people associate those phrases with. And you could do the same for alternative phrases.

For evaluating the definition of EA, we would only want people who don't know much about EA. So we would need a focus group of EA newcomers and ask them what the definition means to them. Does that sound right?

Yeah or just ask people on Mechanical Turk or similar. (You could ask if people have already heard about EA and see if that makes a difference.)
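If someone did run such a survey, the "see if that makes a difference" check could be as simple as comparing proportions between the two groups. A minimal sketch in Python; all counts below are hypothetical placeholders, not real data:

```python
# Hypothetical survey tallies: how many respondents in each group read the
# definition as being only about evidence-based (GiveWell-style) giving.
# All counts are made-up placeholders for illustration.

groups = {
    "heard of EA before": {"reads_as_evidence_only": 12, "total": 40},
    "never heard of EA":  {"reads_as_evidence_only": 31, "total": 60},
}

for name, tally in groups.items():
    share = tally["reads_as_evidence_only"] / tally["total"]
    print(f"{name}: {share:.0%} read the definition as excluding hits-based giving")
```

Comparing those shares (and, with enough respondents, testing whether the gap is statistically meaningful) would show whether prior exposure to EA changes how the definition is read.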
