I haven't done philosophy in a while, so I might be missing something, but I wanted to highlight what I think is the strongest objection to the view[1] in a way that may be more salient than the framing in section 6. It's probably a reason why many might prefer a total view.
To be clear, I do think the Saturation View improves on other non-total views I know of, and I appreciate that they flag some of its hard-to-stomach implications. But I still think the post understates how bad the separability issue is. So here are two short points:
Non-separability is really bad.
The core problem is that facts about/experiences of wholly unaffected people can change the value of the affected person's experiences. If there are already sufficiently many people elsewhere with sufficiently similar experiences, then an additional person having an extremely deep, meaningful, happy life adds near-zero marginal value. That seems very hard to accept.
And for negative experiences the implication is potentially even less intuitive. An additional torturous experience can add almost no marginal disvalue if enough sufficiently similar torture already exists. They discuss this under the “cheap suffering” problem & call it the strongest argument against the view, but I think it is worth emphasizing just how unintuitive a conclusion this is. From the victim’s perspective, the torture is not any less bad because other similar torture already occurred. But the Saturation View says that, from the point of view of population value, their torturous experience would matter hardly at all.
ETA: Relatedly, the view assigns value to our experiences depending on empirically inaccessible facts. Whether sufficiently distant aliens have sufficiently similar experiences is something we probably can't know, but it would radically change how our actions matter. That seems strange.
I don't think the 'tameness' of the view recovers that much?
My understanding is that the Saturation View does better because violations of separability are localized. Ancient Egyptians or distant aliens only affect the marginal value of new lives if their experiences are sufficiently similar. So in many "normal situations", the view behaves roughly separably.
But the separability worry still holds with sufficiently large numbers. If enough sufficiently similar unaffected lives exist elsewhere, they can radically change the marginal value of what we do here.
And population ethics is full of large-number objections. The Repugnant Conclusion itself gets its core intuitive force from considering sufficiently enormous populations, and is also not a “normal situation.” So if the Saturation View is partly motivated by avoiding the very bad large-number implications of total views, then its own large-number implications seem fair game too.
The authors agree with this, afaict.
Thanks for writing this!
You're describing integral altruism as broader than EA, but if I understand you correctly, it's also narrower in many ways. Some examples:
Letting go of the need to control everything and transcending the frame that we are in conflict with the natural unfolding of the universe. This also means emphasising collective action over individual heroism.
–> Effective altruism doesn't take a position on whether we are in conflict with the natural unfolding of the universe. And EAs already emphasise collective action over individual heroism to varying degrees.
take radical uncertainty seriously
–> EAs already do this to various degrees. If integral altruists take this really seriously, they are a subset of EAs in this regard.
altruism grounded in truth rather than being driven by guilt or pride
–> EA doesn't say what your altruistic motivation should be grounded in. All of the motivations you list are considered viable (although people of course disagree about the degree to which each is conducive/to be encouraged).
Some of the things you describe (especially the 'different ways of knowing') do seem to sit outside of what is common within EA. In those respects, integral altruism actually does seem broader.
Overall I'm not completely sure whether integral altruism is a way of doing effective altruism differently, or a competing (though often overlapping) world view.
Good points, thank you!
They have incredibly short AGI timelines, so per their own views, they can't afford to move slowly. If they are giving less than 5% of assets while already claiming AGI is imminent, that's a huge failure.
Do we know whether this is true for the OAF board?[1] Sam Altman is on it, and he definitely believes something along these lines, but it's less clear for the others. Here are a ChatGPT and a Claude answer on this, which point towards the others being less bullish & concerned (but also towards a lack of information about what they believe). I expect there to be a range of views on timelines & the transformativeness of AGI among the board members – which probably makes it more likely that their spending targets are compatible with the foundation's mission.
Bret Taylor (Chair), Adam D’Angelo, Dr. Sue Desmond-Hellmann, Dr. Zico Kolter, Retired U.S. Army General Paul M. Nakasone, Adebayo Ogunlesi, Nicole Seligman, Sam Altman
It looks much nicer than the original imo. If I didn't have context, I'd probably be confused though.
Why 80,000 hours? And what is the pie chart / watch face analogy about? On first glance I’m not sure whether it’s about career choice, time management, life balance, or some '5pm' metaphor.
I looked at it in this order: (1) “80,000 hours”, (2) pie chart / watch face, trying to figure it out, (3) subtitle, (4) endorsement. But the subtitle and endorsement are doing most of the work of telling me what the book is actually about and whether it’s for me.
Maybe some of this is intended, to make people pick up the book and try to find answers. :)
I agree it would be bad if the OpenAI Foundation were still giving under 5% per year several years from now. But I don’t think 'they should spend 5%+ in year one' follows.
Directing billions well is really hard, especially for a new foundation. Coefficient Giving says it directed over $4 billion from 2014 to mid-2025, and that 2025 was the first year it directed more than $1 billion. Their 'endowment' is much smaller (~10x smaller?) than OAF’s, but this still suggests that allocating money well at that scale is genuinely hard. I wouldn't call a new foundation planning to deploy $1 billion in its first year "conservative".
What I'd most like to see is OAF committing to aggressive, public ramp-up targets, maybe something like reaching 5% of assets by 2028.
I think there are a few plausible reasons that don't require "undemocratic power-seeking" as the primary explanation:
I expect that if your ideas are resonating with policymakers and people are getting appointed to relevant roles because they're competent, bad faith opponents will target you roughly the same as if you'd been pulling strings behind the scenes in dubious ways.
Maybe I'm missing something. SB 1047 seemed like a relatively transparent action that followed the democratic process. Is your point that undemocratic power-seeking actions prior or unrelated to SB 1047 likely explain the stronger opposition to SB 1047?