
[I've been thinking a lot lately about EA as an R&D project. While collecting my thoughts, I wrote down some notes on other ways of thinking about EA which might be valuable. This is not meant to be precise or complete.]

Intro

There is a constant debate about how we should define EA and what it entails. In this post, I present several modes of thinking about what EA is, each of which might be useful in some contexts. My goal in writing this is to present and combine several old and new ideas, hopefully sparking new ways of thinking. This should also help clarify that there are many ways of looking at EA, although the following is not intended as a rigorous taxonomy.

I find the distinction between EA as an individual project and EA as a community project particularly interesting; the two seem to me to be conflated frequently. I think there is much more room for clarification and deliberation on this.

Modes of thinking about EA

EA as a Question

EA should be thought of as a question: “How can I do the most good with the resources available to me?”

Useful -

  • for being cause-impartial.
  • for maintaining flexibility and seeking new information.
  • as a way of communicating openness.

EA as an Ideology

Effective altruism is an ideology, meaning that it has a particular (if somewhat vaguely defined) set of core principles and beliefs, and associated ways of viewing the world and interpreting evidence.

Useful -

  • to critically consider which viewpoints, questions, answers and frameworks are actually privileged in EA discussion.
  • when thinking in terms of principles and identity.

EA as a Movement

This mode of thinking concerns how the EA community revolves around a set of ideas, norms and identities.

Useful -

EA as a Community

Who are the people involved? How are they connected? What do they need?

Useful -

EA is an Opportunity

We are in a unique position to do a lot of good.

Useful -

  • for enjoying the process of doing good better.

EA as a Moral Obligation

If there is a way to do a lot of good, we ought to do it. If we can do more good, we ought to do that instead. This can depend heavily on the cost to oneself.

Useful -

  • for considering how much one should sacrifice.
  • when pondering the exact normative stance. What is good?

EA as a Worldview

This post specifically mentions that a crucial assumption of EA is that we can discover ways to do more good. Another basic assumption is that some ways of doing good are much better than others.

Useful -

  • for articulating the community's underlying assumptions and engaging with criticism.
  • to systematically analyze what is still not known and what we need to research further.

EA is a Commitment to Epistemology

In this post, Stefan argues that EA is not about factual beliefs, but rather about epistemology and morality. In EA, facts are discovered through the use of evidence and reason.

Useful -

  • when making personal or professional decisions and wanting to make sure we get them right.
  • for setting a standard for the community's processes.

EA is an Individual Mission

People in EA should seek to do as much good as their limited resources allow, while analyzing their own worldview and moral stance and acting accordingly.

Useful -

  • for considering career/life options.
  • when bargaining in a moral trade.
  • when analyzing one's own marginal value (see [this response] from Hilary Greaves to the "collectivist critique").

EA is a Partnership

People with somewhat different moral perspectives and worldviews agree to work together.

Useful -

  • when thinking about how (and why) to contribute to each other's goals.
  • when help is needed from people we trust.

EA is smarter than me

Many decisions can be delegated to the EA body of ideas and its leadership. I do not need to figure out exactly why, say, longtermism is correct, because a lot of work has already been done to convince a lot of people. This allows me to work on what I believe is the most important thing to do without fully understanding why.

Useful -

  • to efficiently accept a worldview based on some simple and plausible assumptions.
  • when thinking about how we present our claims to the general community.
  • when we are wary of being cultish.

EA is a set of memes

There is a rapidly growing set of ideas and insights arising from EA.

Useful -

EA is a set of Organisations and Community Leaders

EA is somewhat centralized and is influenced by a set of key individuals and organisations.

Useful -

  • when trying to affect the community and looking for points of influence.
  • when considering other dynamics in the community.
  • when seeking help or collaboration with a specific project.

EA is an inspiring community and social network

EA is awesome!

Useful -

  • when considering whether to attend EAG or not 😊

Comments

Nice! I like these kinds of synthesis posts, especially when they try to be comprehensive. One could also add:

EA as a "gap-filling gel" within the context of existing society and its altruistic tendencies (I think I heard this general idea, though not the name, in MacAskill's EAG London closing remarks, but the video isn't up yet so I'm not sure and don't want to put words in his mouth). The idea is that there's already lots of work in:

  • Making people healthier
  • Reducing poverty
  • Animal welfare
  • National/international security and diplomacy (incl. nukes, bioweapons)

And if none of these existed, "doing the most good in the world" would be an even more massive undertaking than it might already seem, e.g. we'd likely "start" with inventing the field of medicine from scratch.

But a large amount of altruistic effort does exist; it's just not optimally directed when viewed globally, because it's mostly shaped by people who only think about their local region of it. Consequently, altruism as a whole has several blind spots:

  • Making people healthier and/or reducing poverty in the developing world through certain interventions (e.g. bednets, direct cash transfers) that turn out to work really well
  • Animal welfare for factory-farmed and/or wild animals
  • Global security from technologies whose long-term risks are neglected (e.g. AI)

And the role of EA is to fill those gaps within the altruistic portfolio.


As an antithesis to that mode of thinking, we could also view:

EA as foundational rethinking of our altruistic priorities, to the extent we view those priorities as misdirected. Examples:

  • Some interventions which were proposed with altruistic goals in mind turn out to be useless or even net-negative when scrutinized (e.g. Scared Straight)
  • Many broader trends which look "obviously good", such as economic growth or technological progress, seem neutral, uncertain, or even net-negative in light of certain longtermist thinking

One I was very glad not to see in this list was "EA as Utilitarianism". Although utilitarian ethics are popular among EAs, I think we leave out many people who would "do good better" but from a different meta-ethical perspective. One of the greatest challenges I've seen in my own conversations about EA is with those who reject its ideas because they associate them with Singer-style moral arguments and living a life of subsistence until not one person is in poverty. This sadly seems to turn them off of ways they might think about better allocating resources, because they think their only options are either to do what they feel good about or to be a Singer-esque maximizer. Obviously this is not the case; there's a lot of room for gradation and different perspectives. But it does create a situation where people see themselves in an adversarial relationship to EA, and so they reject all of its ideas rather than just the subset they actually disagree with, because they got the impression that one part of EA was the whole thing.

Even though I agree that presenting EA as Utilitarianism is alienating and misleading, I think that it is a useful mode of thinking about EA in some contexts. Many practices in EA are rooted in utilitarianism, and many people in EA (about half of survey respondents, if I recall correctly) consider themselves utilitarian. So, while EA is not the same as utilitarianism, I think outsiders' confusion is sometimes justified.
