
In this post I am not providing any argument that EA should become much larger and more welcoming. Others have made that argument far better than I could, in some of this forum's most upvoted posts.[1] As an outsider to EA, I simply observe that the movement is trapped in certain patterns of thinking, and that these patterns limit the vision many EAs have of what the movement could become.

This is just one high-level vision of what a much larger, more welcoming EA could look like, and what it could accomplish.

There are many ways EA could go about experimenting with a more welcoming movement without changing the core of what EA is now. The example below is not a specific recommendation, but a thought experiment to make the points concrete.

Thought Experiment: A Fictional Large and Welcoming EA Organization

The EA leadership has decided to experiment with a more welcoming EA movement in order to learn and gather data. To do this, they form the Center to Advance Altruism (CAA). CAA’s charter is to increase positive impact by engaging with, and improving, the larger altruism ecosystem.

CAA retains the core EA philosophy: doing good more effectively, measuring impact wherever possible, considering the wellbeing of future people, and so on. But apart from this core philosophy, CAA is given full license to depart from the established culture, norms, and practices of the current EA movement.

Core to CAA’s mission are three assumptions:

  1. There is enormous impact to be had by engaging with the overall altruism ecosystem, learning from it, and helping it become more effective.[2]
  2. There are a huge number of individuals around the world who want to have a positive impact but who are a poor fit for current EA culture and practices. There is enormous impact to be had by supporting, influencing, and mobilizing these altruistically-minded individuals.
  3. A focus on simply doing good more effectively, rather than maximizing it, welcomes a much larger group of people and avoids counterproductive optimization debates.[3]

What Could Be Accomplished Through Our Fictional Organization

EA could take a leadership position in the philanthropic world

It is often pointed out that, even with its influx of funds, EA remains small within the overall philanthropic ecosystem. What is less discussed is how much of a force multiplier it would be if EA assumed a leadership position within that ecosystem and helped steer it. With the right mentality and strategy, EA could be the rudder that helps turn the battleship.

  • EA could influence other philanthropic efforts as an ally, not an adversary - When EA simply stands in judgement of other philanthropic efforts, it robs the movement of influence with those efforts. If instead EA took the posture of "we're on your side; we just want to provide value and help you operate more effectively" toward efforts that share its values, its influence would be far greater.
  • EA could be a force that helps bring order to a fractured philanthropic world - The philanthropic world is famously decentralized and uncoordinated. EA could be the place that coordinates this world and the first resource that both altruistic individuals and organizations turn to. It could be an organizing force that matches organizations with funders, acts as a clearinghouse for best practices, and more.
  • EA could lend its expertise in measuring hard-to-measure things - Measuring impact is an enormous challenge for many impact organizations. EA has special expertise in this area that it could lend to the overall impact community.
  • EA could help the entire philanthropic sector become more effective - By taking a leadership role, providing real value, and acting as an exemplar, EA could exert its influence to help the entire philanthropic sector become more effective.

EA could massively scale the impact that it is having

  • EA could provide invaluable support beyond funding - I've spoken to several social impact leaders who were rejected by EA for grant funding in a manner that left a very poor impression on them. These are extremely impressive and accomplished altruistic leaders whom EA should very much want to support in some fashion. CAA could help people like this find other funding opportunities, connect them with collaborators and mentors, and encourage and support them in many other cost-effective ways.
  • EA could attract and mobilize millions of people - There are many ways to scale a movement cost-effectively (Alcoholics Anonymous is one example), and a lack of funding is not what is preventing the movement from growing much larger. A more welcoming EA could conceivably grow its membership into the millions over time.
  • EA could support hundreds of causes and thousands of organizations - By providing scalable, cost-effective support beyond direct funding, EA could move beyond a scarcity mindset to support thousands of individuals and organizations that are working on a myriad of causes.
  • EA could have a much greater impact on the overall culture - EA leadership has started to understand the importance of culture and institutions.[4] A much larger EA would have a much greater impact on both.

Conclusion

EA has the opportunity to step into a leadership role and to have enormous influence. But for that to happen, at least some portion of the movement needs to start thinking in these terms. If I were to frame this as a criticism of EA, it would be this: stop thinking small and start thinking big; the opportunity in front of you is massive if you choose to lead.

One possible way to do that is to establish, and learn from, one or more initiatives within EA that explicitly have this mission and orientation. It's much better to conduct experiments and collect data than to get stuck in theoretical debates about what might happen.[5]

 

 

  1. ^
  2. ^
  3. ^

    From Will MacAskill on Conversations with Tyler

    TYLER COWEN: Let me make a sociological observation of my own. If I think about making the world a better place, I think so much about so many things being downstream from culture, that we need to think about culture. This is quite a messy topic. It’s not easily amenable to what you might call optimization kinds of reasoning. Then, when I hear EA discussions, they seem very often to be about optimization — so many chats online or in person, like how many chickens are worth a cow, the bed net versus the anti-malaria program.

    I often think that this is maybe my biggest difference with EA — that EA has the wrong emphasis, pushing people into the optimization discussions when it should be more about improving the quality of institutions and management everywhere in a way that depends on culture, which is this harder thing to manage. This may even get back to subsidizing Mozart’s Magic Flute. There’s something about the sociology of EA that strongly encourages, especially online, what I would call the optimization mindset.

  4. ^

    From Will MacAskill on Conversations with Tyler

    WILL MACASKILL:  I think I’m going to surprise you and agree with you, Tyler. I’m not sure it’s about optimization, but here’s a certain critique that one could make of EA, in general or traditionally. It’s like, hey, you have a bunch of nerds. You have a bunch of STEM people. The way your brains work will be inclined to focus on technology or technological fixes and not on mushy things, like institutions and culture, but they’re super important. I, at least, think that that criticism has a lot going for that.

    . . .

    I do think that culture is just enormously important. That’s something I’ve changed my view on and appreciated a lot over the last few years, just as I started to learn more about history, about the cultural evolution literature, about Joseph Henrich’s work and our understanding of humanity as a species. Actually, one of my favorite and most underrated articles is by Nathan Nunn. It’s called “History as Evolution,” which I think is extremely good.

    . . .

    That is a way in which I think effective altruism could have a big impact, in the same way as the scientific revolution was primarily a cultural revolution — I shouldn’t use that term — primarily a revolution in culture, where people suddenly started innovating, and they started to think in a certain way. It was like, “Oh, we can do experiments, and we can test things, and we can tinker.” I actually see effective altruism as a cultural innovation that could drive great moral progress in the future.

  5. ^

    From EA and the current funding situation by Will MacAskill

    It’s much easier, and more reliable, to assess a project once it's already been tried.



Comments



This is such a great and underappreciated point. I really hope that EA funders reflect and consider supporting a fellowship for EA impact ambassadors, or something like that, to work with other orgs and altruistic movements to measure impact:

"EA could lend its expertise in measuring hard-to-measure things - Measuring impact is an enormous challenge for many impact organizations. EA has special expertise in this area that it could lend to the overall impact community."

Nice! Thanks for sharing these thoughts. As I mentioned, I think this is a useful framing, though it'd also be worth exploring whether a new org is necessary or whether there are other creative means to accomplish the same goal. Maybe it's a new fellowship funding EAs (and folks new to the movement yet broadly aligned) to go serve as ambassadors to other do-gooder organizations.
