
Recently, a growing group of EAs, EA-adjacents, post-EAs and EA-curious folk have been gathering and organising around a new term - integral altruism (int/a). The central claims of int/a are that the EA toolkit is powerful but incomplete, and that EA can learn from other movements that are trying to improve the world.

The goal of int/a is to find a broader approach to altruism by integrating EA with epistemics/ontologies/world models/language/culture from outside of the EA/rationalist bubble. More specifically, we reckon EA can be most constructively complemented by learning from movements, communities and thinkers who emphasise wisdom. Accordingly, our intellectual lineage is a combination of EA/rationalism and the liminal/metamodern/metacrisis world.

We’ve run a whole bunch of events of various flavours and have plans for more. These include residential retreats, reading/discussion groups (e.g.), a speaker series (e.g.), deliberative technology experiments (e.g.), workshops (e.g.), hackathons (e.g.), and more. In the future we’d love to try running more ambitious projects like conferences, incubators, or fellowships. We’re currently looking for funding.

What is this for?

There are many of us who want to improve the world but feel that EA in its current form is unable to sustainably support us in cultivating and practicing altruism. Some common reasons for this are

  • An exclusive focus on a narrow range of cause areas (e.g. the coalescing around AI safety), in tension with radical uncertainty & cluelessness,[1]
  • A lack of trust in epistemic tools outside of formal rationality (like intuition, metarationality, ecological rationality, Vervaeke’s 4Ps, or The Heart™),
  • An action bias that may be leading to negative effects (like exacerbating the AI race or the whole FTX drama),
  • An underemphasis on systems change as a cause area,
  • A culture that can result in unsustainable personal sacrifices leading to burnout,
  • A shadow side of the movement (e.g. status-seeking, power-seeking, guilt-escaping) that may be messing with collective epistemics.

A common theme linking all of these issues is a desire for more wisdom. By wisdom we mean[2] recognising the limits of our own knowledge, awareness of context, perspective-taking, and careful consideration of how to balance or integrate different viewpoints and interests.

A core motivation for more wisdom is recognising that we are radically uncertain[3] about the problems we’re trying to solve and the broader question of how to do good. Such radical uncertainty calls for a more careful, robust, and flexible portfolio of frames/tools/approaches.[4]

In the EA context, wisdom cashes out as seeing EA not as the one-and-only framework for changemaking, but as one of a number of frameworks that can be integrated to produce something more robust. This means letting go of the search for a scientific “view from nowhere” on how to solve the problem of altruism, and being aware of the cultural conditioning that lies behind any framework for doing good. It also means making space for other values besides “maximize impact”.[5]

int/a’s goal is to create a new network, culture, and formal(ish) framework that supports those of us who take radical uncertainty seriously and want this broader view of altruism - helping us become the best versions of ourselves and do our part in bringing about a flourishing future.

The (tentative) integral altruism principles

At the first int/a summit in summer 2025, we ran a workshop aimed at defining integral altruism more clearly. The result was a set of principles that we would like to embody and that we hope will guide us in doing good.

Thanks to Aaron Halpern, Ben R. Smith, Brayden Beckius, Christine Tan, Elisa Paka, Finn Clancy, Gamithra Marga, Georgie Nightingall, Jon Hall, Katie Calvert, Luke Fortmann, Matilda du Rui, Patrick Gruban, Plex, Tildy Stokes and Toby Jolly for their Very Sensible And Quite Profound contributions.

We intend the presentation here to be descriptive rather than persuasive - arguing for the merits of these principles is beyond the scope of this post (we may publish arguments in a future post!).

The principles are not final; we expect our understanding of this space to evolve over time. They are also currently somewhat abstract; in the future we hope to translate them into something more concrete & action-guiding. With those caveats out of the way, here is what we came up with.

1. Full-Spectrum Knowing

We want to integrate EA’s rigorous, grounded, rational epistemics with other valuable ways of knowing like embodied intuition, ecological rationality, or Vervaeke’s 4Ps.

This comes from recognising the limits of formal rationality in taking effective action in the real world, and seeing that reason & evidence are not sufficient for attuning to what is most important. It means taking other forms of knowing seriously but also knowing when to use them.[6] It means listening to all parts of ourselves, resulting in action that is internally aligned and authentic.

In practice, this could mean

  • Experimenting with including the 4Ps in discussions,
  • Augmenting decision-making with meditative (e.g. mindfulness), contemplative (e.g. journaling), embodied (e.g. Focusing), relational (e.g. collective insight), or dialogical (e.g. Socratic questioning) practices,
  • Applying integration practices (like IFS or core transformation) to our altruistic goals.

There’s a thread you follow. It goes among
things that change. But it doesn’t change.
People wonder about what you are pursuing.
You have to explain about the thread.
But it is hard for others to see.
While you hold it you can’t get lost.
Tragedies happen; people get hurt
or die; and you suffer and get old.
Nothing you do can stop time’s unfolding.
You don’t ever let go of the thread.
(William Stafford)

2. Moving at the Speed of Wisdom

We want to integrate EA’s action-oriented energy with discernment of when to take high-impact actions and when to wait until the next graceful move reveals itself.

In other words, this means integrating the yin and the yang: letting go of the need to control everything and transcending the frame that we are in conflict with the natural unfolding of the universe. This also means emphasising collective action over individual heroism.

In practice, this could mean

  • Generally seeking stakeholder input before taking high-impact actions,
  • Avoiding unnecessarily power-seeking moves on both a personal level (e.g. climbing to the top of orgs) and a collective level (e.g. founding AI labs and racing to the front),
  • Emphasising process-orientation over goal-orientation.

You thought, as a boy, that a mage is one who can do anything.
So I thought, once. So did we all.
And the truth is that as a man’s real power grows and his
knowledge widens, ever the way he can follow grows narrower:
until at last he chooses nothing,
but does only and wholly what he must do…
(Ursula K. Le Guin)

3. Decoupling & Recoupling

We want to embrace EA’s analytical & decoupling approach of isolating the most important problems while also attending to the larger system & our place within it.

Different cause areas and x-risks are highly interconnected. While decoupling problems from their context can be useful for making progress, it can also make us blind to this entanglement. We want to adopt both decoupled frames and contextualizing frames (like the metacrisis).[7]

This also means seeing our place within the system: Maintaining awareness of the assumptions underpinning the cultural paradigm we are operating in (e.g. capitalism, colonialism, techno-solutionism, or victim/oppressor narratives).

In practice, this could mean

  • Using tools from systems thinking & complexity science,
  • Taking systems change & cultural change seriously as cause areas,
  • Creating cross-cultural fellowships in epistemically distant communities.

There is no such thing as a single-issue struggle
because we do not live single-issue lives.
(Audre Lorde)

4. Practicing Fractal Altruism

We want to balance EA’s scope-sensitive ambition to work towards the largest positive impact with intrinsic values at the local scale like friendship, love, beauty, family and the sacred.

This means being good to ourselves and the people around us as well as the rest of the world. It doesn’t mean forgetting about impact, but rather finding ways to cooperatively integrate scope-sensitive altruism with other ends in one’s life by imaginatively searching for win-wins between these ends.

In practice, this could mean

  • An empathetic approach to career paths that takes into account not only effectiveness but how that work can enhance one’s own life,
  • Running events that simultaneously nourish individuals, cultivate deep connections, and lead to impact at scale,
  • Explicitly & honestly examining which tradeoffs between personal, community and global goods we are willing to make.

Start close in,
don’t take the second step
or the third,
start with the first
thing
close in,
the step
you don’t want to take.
(David Whyte)

5. Inner Work, Outer Change

We want to integrate EA’s culture of supporting one’s intellectual, productive and career growth with support for psychological growth as a foundation for impact.

Psychological, emotional and spiritual development can help us cultivate a genuine desire for the wellbeing of others, resulting in altruism grounded in truth rather than being driven by guilt or pride. Such growth can also improve our epistemics by shining light on What’s Going On For Us and inspire action by deeply connecting us to the value we’re fighting for.

In practice, this could look like

  • Using practices like metta meditation to cultivate our altruistic drive,
  • Collective shadow work on the topic of altruism (like this event),
  • Tracking our growth using frameworks like the Inner Development Goals.

I slept and dreamt that life was joy.
I awoke and saw that life was service.
I acted and behold, service was joy.
(Rabindranath Tagore)

 

Putting this all together, integral altruism is a community for those who want to help improve the world in a way that is effective, wise, and sustainable - by integrating reason with embodiment, agency with patience, decoupling with contextualizing, impartial values with local values, and inner work with outer change.

What’s happening?

We’re experimenting with a number of flavours of events in order to cultivate the community and create a space for the int/a framework to develop. Our main physical hub is London, with nascent communities springing up in Berlin and Paris.

Integral Altruism Summit #1, July ‘25, Kent, UK

Some of the experiments we’ve run so far are described above, and we have a big list of other ideas (like conferences and intro courses) we’d like to put into motion.

The conceptual development of the int/a framework is slowly happening but is still in its early days. We recently ran a frameworks hackathon; you can check out some of the ideas that came out of it here.

We have a core of engaged people running the show: six “core stewards” (currently Christine Tan, Patrick Gruban, Tildy Stokes, Toby Jolly, Finn Clancy, Euan McLean). We’ve implemented some governance structure that we’re slowly testing and evolving.

Our intended relationship with effective altruism

We’d love int/a to have a symbiotic relationship with EA. We reckon int/a’s goals are win-win with EA’s goals in a number of ways:

  • Being a place to help those who have drifted away from EA to reconnect with their altruistic nature and put that into practice once more,[8]
  • Generating constructive dialogue with the EA philosophy, and red-teaming EA as a movement,
  • Creating a bridge between EA and other movements, resulting in useful knowledge exchange, especially introducing more wisdom to EA.

That being said, we’re also aware of the danger of potential zero-sum dynamics between int/a and EA, and would like to avoid them as much as possible. One thing we are afraid of is int/a gravitating towards the “just bitching about EA” attractor state, which is definitely not the vibe we’re going for. Another concern is “taking people away from EA”. We don’t intend to dissuade people from doing impactful work by EA lights; in fact, many of us in the movement are doing incredibly canonical EA jobs.

Wanna get involved?

If you’re intrigued or excited about this general direction, you can register your interest for getting involved here, keep an eye out for future events, and subscribe to our substack.

We’re also currently looking for funding since we’re severely funding-constrained. If you'd like to support int/a to grow, or know someone who might, you can find our funding page here.


Thanks to Chris Pang, Christine Tan, Elisa Paka, Gamithra Marga, Georgie Nightingall, Guillaume Corlouer, Hunter Muir, Jack Kock, Jonah Wilberg, Patrick Gruban, Simon Haberfellner, and Toby Jolly for feedback on early drafts.


  1. Which can lead to those who are drawn to other cause areas becoming alienated from the community. ↩︎
  2. Wisdom is a highly nebulous concept and is used in a number of different ways. To gain some precision, we used a definition of wisdom above based on the work of Igor Grossmann, one of the leading wisdom scientists. Grossmann identified a central component of wisdom to be perspectival metacognition - which cashes out as the definition we give here. ↩︎
  3. In the book Radical Uncertainty, Mervyn King & John Kay define radical uncertainty as a situation that cannot in principle be resolved by further research, where we cannot enumerate the range of possible options or futures, and where previously inconceivable events can emerge. ↩︎
  4. See Jonah Wilberg’s excellent article for more on how radical uncertainty calls for wisdom. ↩︎
  5. According to Logan Strohl’s model of EA burnout: “EA burnout usually results from prolonged dedication to satisfying the values you think you should have, while neglecting the values you actually have.” By creating a space that supports the integration of different values, int/a can support people to sustainably engage with EA work. ↩︎
  6. For example, we don’t want this to be an excuse to throw all science & rationality away in favour of just going with our emotions - we want to understand the strengths of both and know what kinds of questions call for one over the other. ↩︎
  7. While “zoomed out” frames can lead to compromise on tractability, we would like to make explicit the tradeoff between tractability and better epistemics (via seeing more of the system) rather than just automatically attending to the decoupled frame. ↩︎
  8. See footnote 5. ↩︎

Comments

I appreciate the efforts to try and bridge two projects you think are valuable. A few thoughts/comments/disagreements:

1. One way to read this seems to me like it could boil down to: if you like EA, but also want some more metacrisis/sensemaking/systems thinking than what EA typically offers, then that's us. Come say hi.

2. I feel like there's some irony here where EA conversation norms tend towards very direct communication, and sensemaking folks tend to speak in a more indirect way. In pitching integral altruism I can't help but get the feeling it is framed in fairly indirect language at times. It's hard to name the exact dynamic but I found myself working hard to understand parts of what this paragraph is trying to say (maybe that's just me):

Psychological, emotional and spiritual development can help us cultivate a genuine desire for the wellbeing of others, resulting in altruism grounded in truth rather than being driven by guilt or pride. Such growth can also improve our epistemics by shining light on What’s Going On For Us and inspire action by deeply connecting us to the value we’re fighting for.  

3. Some of these points seem surprising to include as what is added by integral altruism as they seem to me as a regular part of EA discourse. I'm thinking about the sections that discuss valuing other things in life besides impact, and that inner work can lead to more impact.

4. I think a big decision point here is whether or not the merits of integral altruism will be argued on the territory of EA assumptions or not, and this post seems to move between the two. For example, you make the claim that there are real downsides to seeing x-risk in isolation rather than in the way it is interconnected with other problems. This seems big and important if true, and seems like something that could be argued comfortably within the framework of EA norms. I appreciate that puts the burden on you, but if you persuade folks here, I imagine that would be a big win for everyone. FWIW whenever I've listened to folks talk about the metacrisis I've literally not been able to understand the arguments. Could be a huge service to try and make the case for the metacrisis in EA friendly language. 

Cool project :) There's definitely something very important in the rough direction you're pointing. Some thoughts on how to gain more clarity on it:

  • I suspect that it'd be worth your time to think a bunch about the relationship between altruism and ethics. In some sense, I think of ethics (and particularly virtue ethics) as already a kind of "integral altruism"—i.e. ethics as a set of principles and heuristics by which we can remain in integrity with ourselves and others, thereby allowing our compassion to actually make the world better.
  • I think that the hippie/metamodern/etc communities are very good at some aspects of ethics, but quite bad at others. In general they tend to err on the side of agreeableness, rather than e.g. being honest about unpleasant truths. It feels valuable to take this broad worldview and then try to add a bunch of moral courage that it's currently missing (analogous to how you can think of EA as adding moral courage to econ-brained thinking).
    • However, I feel pretty confused about how to actually help people aim their moral courage towards being ethical, since IMO neither EA nor most inner work helps much with this. One litmus test that I use to evaluate whether inner work is actually making people braver is whether they're more willing to break political taboos afterwards (e.g. for people in the UK, by making a fuss about the Pakistani rape gangs); however, this seldom comes up positive. Another litmus test is whether they're more willing to face the possibility of physical violence when appropriate (e.g. when a crazy person is being a bit menacing in public, do they still just look away?). These are just illustrative examples but hopefully they point at what I think is missing by default.
  • The stuff on cluelessness feels like it's conceding a little too much to the EA/bayesian frame. It's implying that you should have a model of the entire future in order to make decisions. But what I think you actually want to claim is that it's sensible and even "rational" to make non-model-based decisions (e.g. via heuristics, intuitions, etc). Some other terms that might be better: bounded rationality, group agency, Knightian uncertainty. I sometimes use "distributed agency" or "coalitional agency", but I think they won't make sense to most of your readers.
  • The problem with stuff like systems thinking & complexity science is that it's not really aiming to make the same kind of scientific progress as sciences like physics or evolutionary biology have made. More generally, it seems easy for movements like integral altruism to fall into the trap of not pinning down core ideas and claims. But insofar as integral altruism is true, it suggests that something important about the expected utility maximization paradigm is false, which someone should pin down. In other words, imagine that someone from the 22nd century comes back and tells you that something like integral altruism was actually scientifically/mathematically correct. What's the version of integral altruism that actually leads to you figuring that out?

I filled in your form, and am excited to see where you take this!

The stuff on cluelessness feels like it's conceding a little too much to the EA/bayesian frame. It's implying that you should have a model of the entire future in order to make decisions. But what I think you actually want to claim is that it's sensible and even "rational" to make non-model-based decisions (e.g. via heuristics, intuitions, etc).

I'd be interested in hearing more on what exactly you mean by this. Insofar as someone wants to make decisions based on impartially altruistic values, I think cluelessness is their problem, even if they don't make decisions by explicitly optimizing w.r.t. a model of the entire future. If such a person appeals to some heuristics or intuitions as justification for their decisions, then (as argued here) they need to say why those heuristics or intuitions reliably track impact on the impartial good. And the case for that looks pretty dubious to me.

(If you're rejecting the "make decisions based on impartially altruistic values" step, fair enough, though I think we'd do well to be explicit about that.)

I strongly disliked this post for reasons that I'm not sure how to articulate. It seems to be advocating for a sort of lack of grounding in cost-effectiveness that is the thing that makes EA good. Or maybe my issue is that this post advocates for things that are difficult to disagree with ("full-spectrum knowing"; "wisdom"), without acknowledging tradeoffs (why do EAs allegedly not put enough priority on full-spectrum knowing?) or saying anything concrete about how EAs could do more good.

[edited to be more polite]

I don't think they are trying to convert the EA community into something else - they are pretty clearly creating separate spaces for their movement/community. [1]

Describing their post as using "applause lights" seems at best uncharitable, and "absolute nonsense" is just rude. There are several well-received posts on the forum around "[a]ugmenting decision-making with meditative (e.g. mindfulness) [practices]" like this one and this one. It's fine to dislike their principles, but I think it's worth making an effort to be encouraging when fellow altruists try to build on the "project" of Effective Altruism.

  1. ^

    e.g. they say "That being said, we’re also aware of the danger of potential zero-sum dynamics between int/a and EA, and would like to avoid them as much as possible. One thing we are afraid of is int/a gravitating towards the “just bitching about EA” attractor state, which is definitely not the vibe we’re going for. Another concern is “taking people away from EA”. We don’t intend to dissuade people from doing impactful work by EA lights, in fact many of us in the movement are doing incredibly canonical EA jobs." and have run many events themselves under their own banner.

     

You're right, I was unnecessarily hostile. I edited the comment to tone it down.

Thanks for writing this!

You're describing integral altruism as broader than EA, but if I understand you correctly, it's also narrower in many ways. Some examples:

Letting go of the need to control everything and transcending the frame that we are in conflict with the natural unfolding of the universe. This also means emphasising collective action over individual heroism.

–> Effective altruism doesn't take a position on whether we are in conflict with the natural unfolding of the universe. EAs emphasise collective action vs. individual heroism to varying degrees.

take radical uncertainty seriously

–> EAs already do this to various degrees. If integral altruists take this really seriously, they are a subset of EAs in this regard.

altruism grounded in truth rather than being driven by guilt or pride

–> EA doesn't say what your altruistic motivation should be grounded in. All of the reasons you list are considered viable (although people of course disagree about the degree to which they are conducive/to be encouraged).


Some of the things you describe (especially the 'different ways of knowing') seem to sit more outside of what is common within EA. There it seems more like integral altruism actually is broader.

Overall I'm not completely sure whether integral altruism is a way of doing effective altruism differently, or a competing (though often overlapping) world view.

Thanks Tobias, some good threads to pull here!

Yes, the question of whether int/a is a subset of EA, overlapping, or something totally different has been a big point of discussion, and we haven't found a clean answer.

You are right that EA in some sense already contains a lot of the things int/a is excited about (especially in terms of the official written principles being quite broad), but perhaps the real difference is what is emphasized in practice.

For example:

Effective altruism doesn't take a position on whether we are in conflict with the natural unfolding of the universe.

Yea, EA doesn't explicitly say anything about that, but what we're pointing at is perhaps a cultural or semi-conscious current that pervades a lot of EA work (possibly this is more relevant to rationalism than EA). This line was inspired in part by Joe Carlsmith's An Even Deeper Atheism, which points out a current underlying a lot of EA/rat/AI safety that is born out of a deep mistrust of everything (might not be doing the essay justice but that's the general direction).

I'm not necessarily saying this current is bad, rather that we should have an awareness of it and be able to step outside of that frame of mind when it is not helping us, and integrate different frames. The hope is that int/a can more explicitly/consciously find the right balance between the yang-y mistrusting the universe vibe and the yin-y trusting the universe vibe.
