
tootlife

33 karma · In high school

Comments: 8

My post is a philosophical critique of EA's epistemic culture. The core argument is that our community has a built-in preference for the easily measurable, and I'm exploring the downsides of that bias.

On your points about proof (1 & 3):

The demand for pre-existing, legible proof for a class of interventions highlights the exact paradox I'm concerned about. If we require a high standard of quantitative evidence before trying something new, we may never run the experiments needed to generate that evidence. This creates a catch-22 that can lock us out of potentially transformative work.

On your point about existing orgs (2):

You're right, these organizations do this work. My argument isn't that systemic change is absent from EA, but that it's on the periphery. It's not central to the movement's core narrative or funding strategy in the way direct interventions are. The question is why this type of work isn't more foundational to our approach across all cause areas.

On your final question (the "root cause"):

Good catch on my "root cause" phrasing—it was imprecise. A better term is "foundational systems" or "underlying structures." My hypothesis isn't about a single cause, but that improving these foundational systems acts as an "impact multiplier" that makes all direct interventions more effective. The core problem remains that the impact of strengthening these systems is diffuse and hard to measure.

So let's look at this from the individual's perspective. I think that perspective reveals a fundamental difference in the types of value we're comparing. The analogy breaks down because it incorrectly assumes that the value of money and the value of lives are structured in the same way.

1. The Non-Linear Utility of Money (The Insurance Case)

For an individual, money has non-linear utility. The first $1000 you earn is life-changing, while an extra $1000 when you're a millionaire is not.

Losing your last dollar is catastrophic—a state of ruin. Therefore, it is perfectly rational to pay a small premium, which represents a tiny certain loss of your least valuable money, to prevent a small chance of a catastrophic loss of your most valuable money. The insurance decision is rational precisely because of this non-linear value.
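To make this concrete, here is a minimal sketch in Python of why insurance can be rational under a concave utility function even though the premium costs more than the expected loss. The specific wealth, loss, and premium figures are illustrative assumptions, not numbers from the discussion:

```python
import math

# Illustrative numbers (assumptions chosen for this example):
wealth = 50_000          # current wealth
loss = 40_000            # size of the potential catastrophic loss
p_loss = 0.001           # 0.1% chance the loss occurs
premium = 60             # insurance premium (more than the $40 expected loss)

def utility(w):
    """Concave (log) utility: each extra dollar is worth less than the last."""
    return math.log(w)

# Expected utility without insurance: face the 0.1% chance of near-ruin.
eu_uninsured = (1 - p_loss) * utility(wealth) + p_loss * utility(wealth - loss)

# Expected utility with insurance: pay a small certain premium, loss is covered.
eu_insured = utility(wealth - premium)

print(f"Uninsured expected utility: {eu_uninsured:.6f}")
print(f"Insured expected utility:   {eu_insured:.6f}")
# Even though the premium ($60) exceeds the expected loss ($40),
# the insured option has higher expected *utility*, because the
# concave curve penalizes the near-ruin outcome heavily.
```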

2. The Linear Value of Saving Lives (The EA Dilemma)

In contrast, the moral value of saving lives is treated as linear in these calculations. The first life saved is just as valuable as the 1000th. There is no "diminishing return" on a human life.

Because the value is linear, we don't need to worry about utility curves. We can compare the outcomes directly. My argument is that when comparing these linear values, a guaranteed outcome (saving 1 life) is rationally preferable to an action with a 99.9% chance of achieving nothing.
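To lay out the arithmetic being compared, here is a small sketch. The 10,000-life payoff for the gamble is my own illustrative assumption, since the dilemma as stated only specifies the 99.9% failure rate:

```python
# Illustrative comparison under a linear value of lives (payoff assumed).
p_success = 0.001            # 0.1% chance the speculative intervention works
lives_if_success = 10_000    # assumed payoff if it works
certain_lives = 1            # the guaranteed alternative

expected_lives_gamble = p_success * lives_if_success   # = 10.0
expected_lives_certain = certain_lives                 # = 1

print(f"Expected lives (gamble):  {expected_lives_gamble}")
print(f"Expected lives (certain): {expected_lives_certain}")
print(f"Chance the gamble saves nobody: {1 - p_success:.1%}")  # 99.9%
# Plain expected-value maximization favors the gamble (10 > 1),
# yet in 99.9% of worlds it saves no one -- which is exactly the
# tension being pointed at here.
```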

The insurance analogy relies on the non-linear utility of money to be persuasive. Since that feature doesn't exist in our dilemma about saving lives, the analogy is flawed and doesn't challenge the original point.

That's an interesting point about the portfolio of actions. You're suggesting that while the existential risk itself is a one-shot scenario for humanity, we might be able to take multiple 'shots' at reducing it. In essence, you're proposing that if a single, highly uncertain intervention isn't reliable, we can average out the risk by attempting many such interventions.

However, this shifts the problem rather than solving it. Even if we could, in theory, take a 'thousand shots' at reducing x-risk, we still lack a robust framework for comparing the aggregated expected value of these highly speculative, low-probability interventions against more certain, smaller-scale interventions. My original argument is precisely that our current EV tools are inadequate for such comparisons, especially under epistemic uncertainty and in truly one-shot, high-stakes scenarios, where a 'portfolio' approach may still be an insufficient or unproven solution.
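A short sketch of why 'many shots' averages out per-shot chance but not epistemic uncertainty; the per-shot probabilities and the two hypotheses below are illustrative assumptions of mine:

```python
# P(at least one success) out of n independent shots, each with probability p.
def p_any_success(p, n):
    return 1 - (1 - p) ** n

n_shots = 1000

# If we *knew* each shot had a 0.1% chance, many shots would help a lot:
print(f"p = 0.001, 1000 shots -> {p_any_success(0.001, n_shots):.1%}")   # ~63.2%

# But under epistemic uncertainty, the true p might be essentially zero
# (e.g. the whole class of interventions rests on a mistaken model):
print(f"p = 1e-7, 1000 shots -> {p_any_success(1e-7, n_shots):.4%}")     # ~0.01%

# Repetition washes out per-shot randomness, but it cannot wash out our
# uncertainty about which of these worlds we are actually in -- the gap
# that, on my argument, current EV tools do not handle well.
```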

That analogy is flawed because it equates a pooled risk portfolio with a singular risk.

An insurance company isn't taking a 0.1% gamble. It's managing a statistical certainty across a portfolio of millions, where the law of large numbers guarantees that the 0.1% rate is predictable and profitable.

An existential risk is a one-shot scenario for humanity. We are the single participant. There is no larger portfolio to average out the 99.9% chance of failure.

The analogy fails because joining a predictable risk pool is not logically equivalent to taking on an unpooled, singular gamble.
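A minimal sketch of the difference between a pooled book of risks and a single unpooled gamble; the policy counts are illustrative:

```python
import math

p = 0.001  # 0.1% chance of the bad outcome per policy / per 'participant'

def relative_sd_of_claim_rate(n_policies):
    """Standard deviation of the observed claim rate, relative to its mean,
    for n independent policies (binomial: sd of rate = sqrt(p(1-p)/n))."""
    return math.sqrt(p * (1 - p) / n_policies) / p

# Insurer pooling many policies: the 0.1% rate becomes highly predictable.
for n in (1_000, 1_000_000, 10_000_000):
    print(f"{n:>10,} policies -> claim rate fluctuates by ~{relative_sd_of_claim_rate(n):.1%} of its mean")

# A single participant has no pool: the outcome is simply all-or-nothing,
# so there is no rate to converge -- the disanalogy drawn above.
```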

I am not here to ask whether Earth can support 20 billion people; I'm here to raise the dilemma that saving lives might not always be good. If you want, raise the number to 200 billion and it's clear Earth can't handle it. At some point, you have to say that saving more people is no longer a good thing, it's a bad thing. And now comes the dilemma: if you say that saving people is not for the sake of saving people, you might be justifying murder (or preventing births) as “the most effective thing to do.”

Thank you for laying out the "Malthusian effects" vs. "benefits of scale" framework so clearly. As someone engaging with the Effective Altruism movement's ideas from an outside perspective, I find this a helpful way to structure the problem.

However, from my viewpoint, I have a deep concern with the "benefits of scale" argument as it's presented. When we justify adding a million people by the chance that one of them might be the next Norman Borlaug, we are judging human lives based on their instrumental value—what they might produce for the rest of us.

This framing implicitly creates a hierarchy of human worth. The value of the 999,999 non-innovators becomes secondary, justified only by the possibility of the one "genius" they might help produce. They become a means to an end, not ends in themselves.

This is precisely the logic that opens the door to eugenics. To be clear on what I mean by this, eugenics is the belief that humanity can and should be improved by controlling reproduction—encouraging births among those with "desirable" traits while discouraging or preventing births among those with "undesirable" traits. At its very core, it is a system of valuing people instrumentally based on their perceived biological or social fitness.

If we accept the principle that a person's worth is tied to their potential intelligence or creativity, what logical principle stops us from concluding that lives with less of that potential are less worth creating or protecting?

This brings me back to my central concern. For an observer like myself, it seems the only robust defense against these incredibly dangerous historical ideas is to build an ethical framework based on the inherent value of every single life, regardless of whether that person becomes a saviour or lives a simple, ordinary existence.

I didn't fully understand your opinion, but I noticed you're not against the principle of stopping births. My concern is that this line of thinking can open the gates to very dangerous ideas.

If you accept that saving people isn't always good, or that preventing certain births can be morally justified, then it becomes possible to argue that we should stop people with genetic defects from having children—also for the "greater good." Just as one might argue we should prevent births to avoid suffering, one could now argue that we should prevent certain types of people from being born in order to "improve humanity."

This is a very slippery slope. It might start with the idea of doing good, but it risks justifying things like eugenics. Once you accept that some lives shouldn’t be saved or born, you risk treating some people as worth less than others.

You might be right that the world doesn’t have to be twice as good just because it has twice as many people. But if you agree that a world with more people is better than one with fewer people, then what happens when I increase the population to 20B? At some point, you don’t get a better world—you get a world where everyone is suffering and starving due to lack of resources. The paradox remains: if "more lives = better world" leads to unbearable conditions, can we really say it's better?