I graduated from Georgetown University in December 2021 with degrees in economics and mathematics and a minor in philosophy. There, I founded and helped to lead Georgetown Effective Altruism. Over the last few years, I've interned at the Department of the Interior, the Federal Deposit Insurance Corporation, and Nonlinear.
Blog: aaronbergman.net
Thanks!
(Ideal): annihilation is ideally desirable in the sense that it's better (in expectation) than any other remotely realistic alternative, including <detail broadly utopian vision here>. (After all, continued existence always has some chance of resulting in some uncompensable suffering at some point.)
Yeah, I mean, on the first one: I acknowledge that this seems pretty counterintuitive to me, but again I just don't think it is overwhelming evidence against the truth of the view.
Perhaps a reframing is "would this still seem like a ~reductio conditional on a long reflection type scenario that results in literally everyone agreeing that it's desirable/good?"
And I don't mean this in the sense of just "assume that the conclusion is ground truth" - I mean it in the sense of "does this look as bad when it doesn't involve anyone doing anything involuntary?" to try to tease apart whether intuitions around annihilation per se are to any extent "just" a proxy for guarding against the use of force/coercion/lack of consent.
Another way to flip the 'force' issue would be: "suppose a society concludes unanimously, via some extremely deliberative process (one that predicts and includes the preferences of potential future people), that annihilation is good and desired. Should some outside observer forcibly prevent them from taking action to this end (assume that the observer is interested purely in ethics and doesn't care about their own existence or have valenced experience)?"
I'll note that I can easily dream up scenarios where we should force people, even a whole society, to do something against their will. I know some will disagree, but I think we should (at least in principle; implementation is messy) forcibly prevent people from totally voluntarily being tortured (assume away masochism - like suppose the person just has a preference for suffering that results in pure suffering, with no 'secretly liking it' along for the ride).
(Uncompensable Monster): one being suffering uncompensable suffering at any point in history suffices to render the entire universe net-negative or undesirable on net, no matter what else happens to anyone else. We must all (when judging from an impartial point of view) regret the totality of existence.
These strike me as extremely incredible claims, and I don't think that most of the proposed "moderating factors" do much to soften the blow.
This one I more eagerly bite the bullet on; it just straightforwardly seems true to me that this is possible in principle (i.e., such a world could/would be genuinely very bad). And relevantly, orthodox utilitarianism also endorses this in principle, some of the time (i.e., just add up the utils; in principle one suffering monster can have enough negative utility)!
Moral uncertainty might help if it resulted in the verdict that you all things considered should prefer positive-utilitarian futures (no matter their uncompensable suffering) over annihilation. But I'm not quite sure how moral uncertainty could deliver that verdict if you really regard the suffering as uncompensable. How could a lower degree of credence in ordinary positive goods rationally outweigh a higher degree of credence in uncompensable bads? It seems like you'd instead need to give enough credence to something even worse: e.g. violating an extreme deontic constraint against annihilation. But that's very hard to credit, given the above-quoted case where annihilation is "obviously preferable".)
I don't have an answer to this (yet) because my sense is just that figuring out how to make overall assessments of probability distributions on various moral views is just extremely hard in general and not "solved".
This actually reminds me of a shortform post I wrote a while back. Let me just drop a screenshot to make my life a bit easier in terms of formatting nested quotes:
I think this^ brief discussion of how the two sides might look at the same issue gets at the fundamental problem/non-obviousness of the matter pretty well.
I think a more promising form of suffering-focused ethics would explore some form of "variable value" approach, which avoids annihilationism in principle by allowing harms to be compensated (by sufficient benefits) when the alternative is no population at all, but introduces variable thresholds for various harms being specifically uncompensable by extra benefits beyond those basic thresholds. I'm not sure whether a view of this structure could be made to work, but it seems more worth exploring than pro-annihilationist principles.
I think we may just have very different background stances on like how to do ethics. I think that we should more strongly decouple the project of abstract object level truth seeking from the project of figuring out a code of norms/rules/de facto ethics that satisfies all our many constraints and preferences today. The thing you propose seems promising to me as like a coalitional bargaining proposal for guiding action in the near future, but not especially promising as a candidate for abstract moral truth.
Thanks, yeah I may have gotten slightly confused when writing.
Wikipedia screenshot:
Let P be the thing I said in the post:
If A ≻ B ≻ C, there's some probability p ∈ (0, 1) where a guaranteed state of the world B is ex ante morally equivalent to "lottery p·A + (1-p)·C"
or, symbolically:
A ≻ B ≻ C ⟹ ∃ p ∈ (0, 1) such that B ~ p·A + (1-p)·C
and let Q be the VNM continuity axiom as stated in the Wikipedia screenshot above.
I think Q, together with the other VNM axioms, implies P, but Q alone does not in general.
So my writing was sloppy. Super good catch (not caught by any of the various LLMs iirc!)
But for the purposes of the argument everything holds together, because you need the independence axiom for VNM to hold anyway. But still, sloppy.
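For concreteness, here's a minimal sketch of why P drops out once the full theorem applies (standard VNM bookkeeping, not anything specific to the post): a utility representation u with u(A) > u(B) > u(C) pins down the indifference probability, and it's strictly between 0 and 1.

```latex
\[
  p^{*}\,u(A) + (1 - p^{*})\,u(C) = u(B)
  \quad\Longleftrightarrow\quad
  p^{*} = \frac{u(B) - u(C)}{u(A) - u(C)} \in (0, 1).
\]
```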
Me: "This arbitrariness diminishes somewhat (though, again, not entirely) when viewed through the asymptotic structure. Once we accept that compensation requirements grow without bound as suffering intensifies, some threshold becomes inevitable. The asymptote must diverge somewhere; debates about exactly where are secondary to recognizing the underlying pattern."
You:
"Grow without bound" just means that for any M, we have f(X) > M for sufficiently large X. This is different from there being a vertical asymptote so a threshold is not inevitable. For instance one could have f(X) = X or f(X) = X^2.
Straightforward error by me; I will change the wording. Not sure how that happened.
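To make the distinction concrete (my notation, purely illustrative functions): a compensation schedule can grow without bound without ever diverging at any finite intensity, whereas the structure I actually intend is a vertical asymptote at some finite threshold t.

```latex
\[
  f(X) = X^{2} \to \infty \ \text{only as}\ X \to \infty,
  \qquad\text{vs.}\qquad
  g(X) = \frac{1}{\,t - X\,} \to \infty \ \text{as}\ X \to t^{-}.
\]
```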
It would be confusing to call this behavior continuous, because (a) the VNM axiom you reject is called continuity and (b) we are not using any other properties of the extended reals, but we are using real-valued probabilities and x values.
Yeah, idk, English and math only provide so many words. I could have spent more words driving home and clarifying this point, or invented and defined additional terms. My intuition is that it's clear enough as is (evidently we disagree about this), but if a couple other people say "yeah this is misleading and confusing" then I'll concede that I made a bad choice about clarity vs brevity as a writing decision.
Ngl I am pretty confused about everything starting here. I think I'm just reading you wrong somehow. Like the difference in those magnitudes is huge, point taken, but I don't see why that matters for my argument.
Moving from 10^10^10 to infinity, we would then believe that suffering has a threshold t where t + epsilon intensity suffering cannot be offset by removing t - epsilon intensity suffering
Confused here because yeah clearly adding t+epsilon and removing t-epsilon gives you a net change below zero. But I sense you might be getting at the (very substantive and important) cluster of critiques I respond to in this comment (?)
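To spell out the net change I have in mind (my own accounting, with d(x) the disvalue of suffering at intensity x, finite for x < t and infinite at or above t, working in the extended reals):

```latex
\[
  \Delta \;=\; -\,d(t + \varepsilon) \;+\; d(t - \varepsilon)
         \;=\; -\infty \;+\; (\text{finite})
         \;=\; -\infty \;<\; 0.
\]
```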
also need to propose some other mechanism like lexicographic order for how to deal with suffering above the infinite badness threshold.
Yeah I'm ~totally agnostic about this in the post. There are many substantively different possibilities about what the moral world might be like above that threshold, I agree! Could be distinct levels of lexicality - perhaps some literal integer like 13 levels, or perhaps arbitrarily many. Probably other solutions/models as well.
Maybe I should just remove/modify the "write down", "compute", and "physically instantiate" bit of rhetorical flourish because it might be doing more harm than good.
(Note that it may take me some time to update the post to reflect sections 1 and 2 in this comment)
Again, sharp eye, thanks for the comment!
I am getting really excellent + thoughtful comments on this (not just saying that) - I will mention here that I have highly variable and overall reduced capacity for personal reasons at the moment, so please forgive me if it takes a little while for me to respond (and in the meantime note that I have read the comments, so they're not being ignored) 🙂
Great points both, and I agree that the kind of tradeoff/scenario described by @EJT and by @bruce in his comment is the strongest/best/most important objection to my view (and the thing most likely to make me change my mind)
Let me just quote Bruce to get the relevant info in one place and so this comment can serve as a dual response/update. I think the fundamentals are pretty similar (between EJT and Bruce's examples) even though the exact wording/implementation is not:
A) 70 years of non-offsettable suffering, followed by 1 trillion happy human lives and 1 trillion happy pig lives, or
B) [70 years minus 1 hour of non-offsettable suffering (NOS)], followed by 1 trillion unhappy humans who are living at barely offsettable suffering (BOS), followed by 1 trillion pig lives that are living at the BOS,
You would prefer option B here. And it's not at all obvious to me that we should find this deal more acceptable or intuitive than what I understand is basically an extreme form of the Very Repugnant Conclusion, and I'm not sure you've made a compelling case for this, or that world B contains less relevant suffering.
to which I replied:
Yeah not going to lie this is an important point, I have three semi-competing responses:
- I'm much more confident about the (positive wellbeing + suffering) vs neither trade than intra-suffering trades. It sounds right that something like the tradeoff you describe follows from the most intuitive version of my model, but I'm not actually certain of this; like maybe there is a system that fits within the bounds of the thing I'm arguing for that chooses A instead of B (with no money pumps/very implausible conclusions following)
- Well the question again is "what would the IHE under experiential totalization do?" Insofar as the answer is "A", I endorse that. I want to lean on this type of thinking much more strongly than hyper-systematic quasi-formal inferences about what indirectly follows from my thesis.
I think it's possible that the answer is just B because BOS is just radically qualitatively different from NOS.
- Maybe most importantly, I (tentatively?) object to the term "barely" here, because under the asymptotic model I suggest, subtracting an arbitrarily small amount of the suffering instrument from the NOS state results in no change in moral value at all, because (to quote myself again) "Working in the extended reals, this is left-continuous: $\lim_{x \to t^-} f(x) = f(t) = \infty$"
- So in order to get BOS, we need to remove something larger than an arbitrarily small amount of the instrument, and now it's a quasi-empirical question of how different that actually feels from the inside. Plausibly the answer is that "BOS" (scare quotes) doesn't actually feel "barely" different - it feels extremely and categorically different (see the sketch just below)
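A minimal sketch of what I mean, using an illustrative schedule (the specific functional form is mine, not something the post commits to): take f(x) = 1/(t − x) for x < t and f(x) = ∞ for x ≥ t. Then

```latex
\[
  \lim_{x \to t^{-}} \frac{1}{t - x} = \infty = f(t),
  \qquad
  f(t - \delta) = \frac{1}{\delta}\ \text{(finite, though huge for small } \delta\text{)},
\]
```

so only removing a non-negligible δ of the instrument moves you from "uncompensable" to "merely astronomically expensive to compensate."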
Consider "which of these responses if any is correct" a bit of an open question for me.
Plausibly I should have figured this out before writing/publishing my piece but I've updated nontrivially (though certainly not all the way) towards just being wrong on the metaphysical claim.
This is in part because after thinking some more since my reply to Bruce (and chatting with some LLMs), I've updated away from my points (1) and (2) above.
I am still struggling with (3) both at:
Mostly (2) though, I should add. I think (uncertain/tentative etc etc) that this is conceptually on the table.
So to respond to Ben:
I interpret OP's point about asymptotes to mean that he indeed bites this bullet and believes that the "compensation schedule" is massively higher even when the "instrument" only feels slightly worse?
I don’t bite the bullet on the most natural reading of this, where very small changes in i_s really do only result in very small changes in suffering from a subjective, qualitative POV. Insofar as that is conceptually and empirically correct, I (tentatively) think it’s a counterexample that more or less disproves my metaphysical claim (if true/legit).
But I feel pretty conflicted right now about whether the small but not infinitesimal change in i_s -> subjectively small difference is true (again, mostly because of quasi-empirical uncertainty).
This is hard to think about largely because my model/view leaves the actual shape of the asymptote unspecified (here’s a new version of the second pic in my post), and that includes all the uncertainty associated with what instrument we are literally or conceptually talking about (since the sole criterion is that it’s monotonic)[1]
I will add that one reason I think this might be a correct “way out” is that it would just be very strange to me if “IHE preference is to refuse 70 year torture and happiness trade mentioned in post” logically entails (maybe with some extremely basic additional assumptions like transitivity) “IHE gives up divine bliss for a very small subjective amount of suffering mitigation”
I know that this could just be a failure of cognition and/or imagination on my part. Tbh this is really the thing that I’m trying to grok/wrestle with (as of now, like for the last day or so, not in the post)
I also know this is ~motivated reasoning, but idk I just do think it has some evidential weight. Hard to justify in explicit terms though.
I’m curious if others have different intuitions about how weird/plausible this [2] is from a very abstract POV
Mostly for fun I vibecoded an API to easily parse EA Forum posts as markdown with full comment details based on post URL (I think helpful mostly for complex/nested comment sections where basic copy and paste doesn't work great)
I have tested it on about three posts and every possible disclaimer applies
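For anyone curious what using it might look like, here's a hypothetical usage sketch; the base URL, endpoint path, query parameter, and response field below are placeholders I made up for illustration, not the actual interface.

```python
import requests

# Hypothetical example only: the URL, endpoint, parameter, and response field
# are placeholders, not the real interface of the vibecoded API.
API_BASE = "https://example-ea-forum-parser.example.com"  # placeholder URL

def fetch_post_as_markdown(post_url: str) -> str:
    """Ask the (hypothetical) parser service for a post plus its nested comments as markdown."""
    resp = requests.get(f"{API_BASE}/parse", params={"url": post_url}, timeout=30)
    resp.raise_for_status()
    return resp.json()["markdown"]  # assumed response field

if __name__ == "__main__":
    md = fetch_post_as_markdown("https://forum.effectivealtruism.org/posts/<post-id>/<slug>")
    print(md[:500])  # preview the first few hundred characters
```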
Again I appreciate your serious engagement!
The positive argument for the metaphysical claim and the title of this piece relies (IMO) too heavily on a single thought experiment, that I don't think supports the topline claim as written.
Not sure what you mean by the last clause, and to quote myself from above:
I don't expect to convince all readers, but I'd be largely satisfied if someone reads this and says: "You're right about the logic, right about the hidden premise, right about the bridge from IHE preferences to moral facts, but I would personally, both in real life and as an IHE, accept literally anything, including a lifetime of being boiled alive, for sufficient happiness afterward."
Yeah it's possible I should (have) emphasized this specific thesis ("IHE thought experiment, I claim, is an especially epistemically productive way of exploring that territory, and indeed for doing moral philosophy more broadly") more as an explicit claim, distinct from the two I highlight as the organizing/motivating claims corresponding to each section. Maybe I will add a note or something about this.
I don't have a rock solid response to the "too heavily" thing because, idk, I think the thought experiment is actually what matters and what corresponds to the true answer. And I'll add that a background stance I have is that I'm trying to convey what I think is the right answer, not only in terms of explicit conclusions but in terms of what evidence matters and such.
A) 70 years of non-offsettable suffering, followed by 1 trillion happy human lives and 1 trillion happy pig lives, or
B) [70 years minus 1 hour of non-offsettable suffering (NOS)], followed by 1 trillion unhappy humans who are living at barely offsettable suffering (BOS), followed by 1 trillion pig lives that are living at the BOS
You would prefer option B here. And it's not at all obvious to me that we should find this deal more acceptable or intuitive than what I understand is basically an extreme form of the Very Repugnant Conclusion, and I'm not sure you've made a compelling case for this.
Yeah not going to lie this is an important point, I have three semi-competing responses:
I think it's possible that the answer is just B because BOS is just radically qualitatively different from NOS.
Consider "which of these responses if any is correct" a bit of an open question for me.
And I'll add that insofar as the answer is (2) and NOT (3), I'm pretty inclined to update towards "I just haven't developed an explicit formalization that handles both the happiness trade case and the intra-suffering trade case yet" more strongly than towards "the whole thing is wrong, suffering is offsettable by positive wellbeing" - after all, I don't think it directly follows from "IHE chooses A" that "IHE would choose the 70 years of torture." But I could be wrong about this! I 100% genuinely think I'm literally not smart enough to intuit super confidently whether or not a formalization that chooses both A and no torture exists. I will think about this more!
Thought experiment variations:
People's intuitions about the suffering/bliss trade might reasonably change based on factors like:
- Duration of suffering (70 minutes vs. 70 years vs. 70 billion years)
- Whether experiences happen in series or parallel
- Whether you can transfer the bliss to others
I agree (1) offers interesting variations. I do have a vague, vibey sense that one human lifetime seems like a pretty "fair" central case to start from but this is not well-justified.
I more strongly want to push back on (2) and (3), in the sense that I think parallel experience, while probably conceptually fine in principle, greatly degrades the epistemic virtue of the thought experiment because this literally isn't something human brains were/are designed to do or simulate. And likewise with (3), the self-interest bit seems pretty epistemically important.
- Threshold problem:
Formalizing where the lexical threshold sits is IMO pretty important, because there are reasonable pushbacks to both, but they feel like meaningfully different views
- High threshold (e.g.,"worst torture") leads to unintuitive package deals where you'd accept vast amounts of barely-offsettable suffering (BOS) to avoid small amounts of suffering that does cross the threshold
- Low threshold (e.g., "broken hip" or "shrimp suffering") seems like it functionally becomes negative utilitarianism
I agree it is important! Someone should figure out the right answer! Also, in terms of practical implementation, it's probably better to model the threshold as a probability distribution than as a single certain line (toy sketch below).
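Here's a toy sketch of that framing (my own illustration, not something from the post): treat the threshold t as uncertain, then ask how likely a given suffering intensity is to land in the non-offsettable regime. The scale, distribution, and numbers are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical: threshold t on some 0-100 "suffering instrument" scale, with
# uncertainty represented by a normal distribution (choice of distribution is arbitrary).
t_samples = rng.normal(loc=80.0, scale=10.0, size=100_000)

def p_non_offsettable(intensity: float) -> float:
    """Probability that this intensity meets or exceeds the (uncertain) threshold."""
    return float(np.mean(intensity >= t_samples))

for x in (50, 70, 80, 90):
    print(f"intensity {x}: P(non-offsettable) ≈ {p_non_offsettable(x):.2f}")
```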
Asymptotic compensation schedule:
The claim that compensation requirements grow asymptotically (rather than linearly, or some other way) isn't well-justified, and doesn't seem to meaningfully change the unintuitive nature of the tradeoffs your view is willing to endorse.
I disagree that it isn't well-justified in principle, but maybe I should have argued this more thoroughly. It just makes a ton of intuitive sense to me, but possibly I am typical-minding. And I'm pretty sure you're wrong about the second thing - see point 3 a few bullets up. It seems radically less plausible to me that the true nature of ethics involves discontinuous i_s vs i_h compensation schedules.
Ok lol your comment is pretty long so I think I will need to revisit the rest of it! Some vibes are likely to include:
I suspect this is intended to be illustrative, but I would be surprised if there were many, if any, standard utilitarians who would actually say that you need TREE(3)[5] flourishing human life years to offset a cluster headache lasting 1 hour, so this seems like a strawman?
I made an audio version:
Also: Copy to clipboard as markdown link for LLM stuff
I disagree, but I think I know what you're getting at and am sympathetic. I made the following to try to illustrate, and might add it to the post if it seems clarifying.
I made it on a whim just now without thinking too hard, so don't necessarily consider the graphical representation to be on as solid footing as the stuff in the post.