
A new framing of EA’s collective impact

There is a current theme in the EA community of focusing on how to maximize the collective impact of the movement, rather than just the impact of each individual.

But what does it mean to maximize the impact of the movement? I suggest that the mission to ‘do the most good’, in the context of our collective potential, closely approximates ‘optimizing the Earth’.

 

Why ‘do the most good’ is roughly equivalent to ‘optimize the Earth’:

- Optimize – make as good as possible, based on evidence and reason, acknowledging the potential for ongoing improvement.

- Earth – home to every human with capacity for impact and to every known sentient being; the seat of our collective impact on the world beyond Earth.

 

Why would aiming to optimize the Earth enable us to do more good?

1) The Earth optimization framing of the EA mission gives focus to what it means to maximize collective impact. It points to a unified outcome of our collective work, in a way that 'do the most good' does not. With an Earth optimization mindset, we can rationally consider:

  • What are the meta-outcomes of an optimized Earth?
  • What lead metrics should we measure to track progress towards an optimized Earth?
  • What are the most impactful causes for optimizing the complex system of Earth?
  • How should we prioritize the best combination of causes, given that progress on some causes affects the expected impact of others?

2) Optimizing the Earth hints at what may be possible decades from now, beyond the current reality of the nascent EA movement. It may help the EA movement to set very long-term goals, and to back-cast a roadmap for achieving them.

3) Optimizing the Earth may include increasing or maximizing the marginal impact of every individual, not just those who currently self-identify as EAs. It invites us to consider a more expansive vision of our potential collective impact.

4) Aiming to optimize the Earth will help us to identify new priority cause areas that have strategic relevance in bringing about the best world possible, but which may have limited immediate/direct impact in and of themselves.

5) An Earth optimization methodology will give context to current priority cause areas, and help us to evaluate their strategic relevance and relative urgency in the mission of maximizing our collective impact. This may result in non-trivial adjustments to which causes are deemed highest priority.

6) The Earth optimization framing of the EA mission requires us to consider not only our marginal impact but also our collective, total, global impact – something that has not been much of a focus in the EA community up until now. This challenges us to develop our cause prioritization methodology to account for how different causes, when pursued in concert, can be more than the sum of their parts.

7) Earth optimization can be approached with methodological rigor, using complexity theory and systems science. By modeling the Earth as a complex system, we may be able to develop a 'general theory of cause prioritization', not only to prioritize top cause areas, but also to evaluate and optimize the impact of any actor in the system of Earth.
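To make point 7 concrete (along with the 'more than the sum of their parts' idea in point 6), here is a minimal toy sketch in Python. It is not the author's methodology, and every cause name and number in it is invented purely for illustration; it simply shows how adding pairwise interaction terms between causes can change which combination of causes looks best, compared with ranking causes in isolation.

```python
# Toy sketch: cause prioritization when impacts interact.
# All causes, impact values, and synergies below are made up for illustration.
from itertools import combinations

# Hypothetical standalone impact of pursuing each cause at a fixed level of effort.
standalone = {
    "global_health": 10.0,
    "ai_safety": 8.0,
    "institutional_reform": 4.0,
    "movement_building": 3.0,
}

# Hypothetical pairwise synergies: extra (or lost) impact when two causes are
# pursued together. A purely additive model would set all of these to zero.
synergy = {
    ("movement_building", "ai_safety"): 5.0,
    ("movement_building", "global_health"): 2.0,
    ("institutional_reform", "ai_safety"): 3.0,
    ("global_health", "ai_safety"): -3.0,  # e.g. competing for the same talent pool
}

def portfolio_impact(causes):
    """Total impact of a set of causes, including pairwise interaction terms."""
    total = sum(standalone[c] for c in causes)
    for a, b in combinations(sorted(causes), 2):
        total += synergy.get((a, b), 0.0) + synergy.get((b, a), 0.0)
    return total

def best_portfolio(size):
    """Exhaustively find the highest-impact combination of `size` causes."""
    options = (frozenset(c) for c in combinations(standalone, size))
    return max(((p, portfolio_impact(p)) for p in options), key=lambda pair: pair[1])

if __name__ == "__main__":
    ranked_alone = sorted(standalone, key=standalone.get, reverse=True)
    print("Top 2 causes ranked in isolation:", ranked_alone[:2])
    portfolio, impact = best_portfolio(size=2)
    print("Best 2-cause portfolio with interactions:", sorted(portfolio), impact)
```

In this made-up example, the two causes that score highest in isolation are not the best two-cause portfolio once interactions are counted, which is the kind of non-trivial adjustment points 5 and 6 anticipate. A real 'general theory of cause prioritization' would of course need far richer models than fixed pairwise synergies.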

 

This is post 1 of 3.
Post 2: "Effective Altruism Paradigm vs Systems Change Paradigm"
Post 3: "5 Types of Systems Change Causes with the Potential for Exceptionally High Impact"

Comments



It's obviously the case that "do the most good" is equivalent to "optimize the Earth". HPMOR readers will remember: "World domination is such an ugly phrase. I prefer to call it world optimisation."

But given that they're equivalent, I don't see that changing the label offers any benefits. For example, the theoretical framework linked to "do the most good" already gives us a way to think about how to choose causes while taking into account inter-cause spillovers (corresponding to 1(iv)).

the theoretical framework linked to "do the most good" already gives us a way to think about how to choose causes while taking into account inter-cause spillovers

I think impact 'spill-overs' between causes are a good representation of how most EAs mentally think about the relationship between causes and impact. However, I see this as an inaccurate representation of what's actually going on, and I suspect it leads to a substantial misallocation of resources.

I suspect that long term flow-through effects typically outweigh the immediate observable impact of working on any given cause (because flow-through effects accumulate indefinitely over time). 'Spill-over' suggests that impact can be neatly attributed to one cause or another, but in the context of complex systems (i.e. the world we live in), impact is often more accurately understood as resulting from many factors, including the interplay of a messy web of causes pursued over many decades.
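As a rough illustration with made-up numbers: suppose a cause produces 1 unit of direct benefit now, plus flow-through benefits of 0.05 units per year indefinitely. With a 3% annual discount rate, that flow-through stream alone has a present value of roughly 0.05 / 0.03 ≈ 1.7 units, already outweighing the direct benefit; with little or no discounting it dominates completely.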

I see 'Earth optimization' as a useful concept to help us develop our cause prioritisation methodology to better account for the inherent complexity of the world we aim to improve, better account for long-run flow-through effects, and thus help us to allocate our resources more effectively as individuals and as a movement.

'Spillover' is a common term in economics, and I'm using it interchangeably with externalities/'how causes affect other causes'.

'Spill-over' suggests that impact can be neatly attributed to one cause or another, but in the context of complex systems (i.e. the world we live in), impact is often more accurately understood as resulting from many factors, including the interplay of a messy web of causes pursued over many decades.

Spillovers can be simple or complex; nothing in the definition says they have to be "neatly attributed". But you're right, long-term flow-through effects can be massive. They're also incredibly difficult to estimate. If you're able to improve on our ability to estimate them, using complexity theory, then more power to you.

It could be a useful framing. "Optimize" to some people may imply making something already good great, such as making the countries with the highest HDI even better, or helping emerging economies to become high income, rather than helping the countries with the most suffering to catch up to the happier ones. It could be viewed as helping a happy person become super happy rather than helping a sad person become happy. I know this narrow form of altruism isn't your intention; I'm just saying that "optimize" does have this connotation. I personally prefer "maximally benefit/improve the world." It's almost the same as your expression, but without the make-good-even-better connotation.

I think EAs have always thought about the impact of collective action, but it's just really hard, or even impossible, to estimate how your personal efforts will further collective action and to compare that to more predictable forms of altruism.

I like this phrasing, but maybe not for the reasons you propose.

"Doing the most good" leaves implicit what is good, but still uses a referent ("good") that everyone thinks they know what it means. I think this issue is made even clearer if we talk about "optimizing Earth" instead since optimization must always be optimizing for something. That is, optimization is inherently measured and is about maximization/minimization of some measure. Even when we try to have a generic notion of optimal we still really mean something like effective or efficient as in optimizing for effectiveness or optimizing for efficiency.

But if EA is about optimizing Earth or doing the most good, we must still tackle the problem of what is worth optimizing for and what is good. You mention impact, which also sounds a lot to me like some combination of effectiveness and productivity multiplied by effect size, yet when we are this vague, it makes EA more of a productivity movement and less of a good-doing movement, whatever we may think good is. The trouble is that, by exposing the hollowness of the ethical content in the message, it becomes unclear what things would not benefit from being part of EA.

To take a repugnant example, if I thought maximizing suffering were good, would I still be part of EA since I want to optimize the Earth (for suffering)?

The best attempt at dealing with this issue has, for me, been Brian Tomasik's writing on dealing with moral multiplicity and compromise.
