Standard expected utility theory (EUT) embeds epistemic/ontological uncertainty about which state of the world will result from our actions, but it assumes moral certainty. Harsanyi expected utility theory (HEUT) allows us to assign probabilities to our potential moral viewpoints, and thus gives us a mechanism for handling moral uncertainty.

Unfortunately, there are several problems with EUT and HEUT. First, the St. Petersburg paradox shows that unbounded utility valuations can justify almost any action, even when the probability of a good outcome is nearly zero. For example, a banker may face a probability of a bank run that is close to one, yet because the potential returns from being overleveraged in the near-zero-probability world without a run are so high, the banker may still foolishly choose to be overleveraged in order to maximize expected utility. Second, diminishing returns typically force us to produce or consume more to realize the same amount of utility, which is usually a recipe for unsustainable production and consumption. Third, as Herbert Simon noted, optimizing expected utility is often computationally intractable.

An early response from effective altruism research to these problems was maxipok (i.e., maximizing the probability of an okay outcome). Under this construct, the constraints that define an okay outcome are identified, each action is assigned a probability of satisfying those constraints, and the action that maximizes that probability is adopted.

The problem with maxipok is that it assumes moral certainty about the constraints that constitute an okay outcome. For example, if we believe a trolley problem is inevitable, we might infer that someone dying is an okay outcome, given its unavoidability. If, on the other hand, the trolley problem is avoidable, we may infer that someone dying is not okay. In that scenario, what constitutes an okay outcome is contingent on the probability we assign to the trolley problem being inevitable.

Success maximization is a mechanism by which to generalize maxipok for moral uncertainty. Let a_i be an action i from the set of actions A = {a_1, a_2, …, a_m}. Let s_x be a definition of moral success, namely x, from S = {s_1, s_2, …, s_n}. The probability that action a_i satisfies the constraints of s_x is π_i(s_x), where 0 ≤ π_i(s_x) ≤ 1. Let p(s_x) be the estimated probability that s_x is the correct definition of moral success, where p(s_1) + p(s_2) + … + p(s_n) = 1. The expected success of action a_i is then π_i(s_1)p(s_1) + π_i(s_2)p(s_2) + … + π_i(s_n)p(s_n), which lies between 0 and 1. A success-maximizing agent will choose an action a_j ∈ A such that π_j(s_1)p(s_1) + π_j(s_2)p(s_2) + … + π_j(s_n)p(s_n) ≥ π_i(s_1)p(s_1) + π_i(s_2)p(s_2) + … + π_i(s_n)p(s_n) for all a_i ∈ A where i ≠ j.
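A minimal sketch of this decision rule in Python (the actions, success definitions, and probability values here are hypothetical, purely for illustration):

```python
# Hypothetical example of success maximization under moral uncertainty.
# p[s] is the credence that success definition s is correct (the values sum to 1).
# pi[a][s] is the probability that action a satisfies the constraints of s.

p = {"s1": 0.6, "s2": 0.3, "s3": 0.1}

pi = {
    "a1": {"s1": 0.9, "s2": 0.2, "s3": 0.5},
    "a2": {"s1": 0.5, "s2": 0.8, "s3": 0.7},
    "a3": {"s1": 0.4, "s2": 0.4, "s3": 0.9},
}

def expected_success(action):
    """Sum over success definitions of (probability the action satisfies s_x) * p(s_x)."""
    return sum(pi[action][s] * p[s] for s in p)

# The success-maximizing agent picks the action with the highest expected success.
best = max(pi, key=expected_success)
print(best, round(expected_success(best), 3))  # -> a1 0.65 under these made-up numbers
```

Because every π_i(s_x) and every p(s_x) lies in [0, 1] and the credences sum to one, the resulting expected success score is itself bounded between 0 and 1.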

Success maximization resolves many of the problems of von Neumann-Morgenstern and Harsanyi expected utility theories. First, because success valuations are bounded between 0 and 1, we are much less likely to encounter St. Petersburg paradox situations in which any action can be justified by extremely high utility valuations despite near-zero probabilities of occurrence. Second, unsustainable behaviors produced by chasing diminishing returns are much less likely when maximizing probabilities of constraint satisfaction than when maximizing unbounded expected utilities. Third, because each π_i(s_x) is at most one, a term of the linear combination with relatively low p(s_x) contributes at most p(s_x) and can often be ignored, making the calculation quicker and more tractable.
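As a sketch of this third point, one can reuse the hypothetical `pi` and `p` dictionaries from the example above and simply skip low-credence terms; the cutoff value is illustrative, not something from the post:

```python
# Since each pi[a][s] is at most 1, dropping the term for s changes the sum by
# at most p[s], so ignoring low-credence success definitions has bounded error.
CUTOFF = 0.05  # illustrative threshold

def approx_expected_success(action, cutoff=CUTOFF):
    """Expected success over only the success definitions whose credence exceeds
    the cutoff; the approximation error is at most the total credence dropped."""
    return sum(pi[action][s] * p[s] for s in p if p[s] >= cutoff)
```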

Comments

If I understand you correctly, what you're proposing is essentially a subset of classical decision theory with bounded utility functions. Recall that, under classical decision theory, we choose our action according to a* = argmax_{a ∈ A} E[u(a, θ)], where θ is a random state of nature and A an action space.

Suppose there are n (infinitely many works too) moral theories T_1, T_2, …, T_n, each with probability p_i and associated utility function u_i. Then we can define u(a, θ) = p_1 u_1(a, θ) + p_2 u_2(a, θ) + … + p_n u_n(a, θ). This step gives us (moral) uncertainty in our utility function.

Then, as far as I understand you, you want to define the component utility functions as indicators: u_i(a, θ) = 1 if the outcome of a in state θ is acceptable under T_i, and 0 otherwise. As u_i only takes the values 0 and 1, E[u_i(a, θ)] is then the probability of an acceptable outcome under T_i. And since we're taking the expected value of these bounded component utilities to construct u, we're in classical bounded utility function land.
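A small sketch of this reduction under made-up assumptions (hypothetical actions, thresholds, and credences; not the commenter's code): with 0/1 indicator component utilities, the expected utility of an action coincides with the expected success defined in the post.

```python
import random

# Hypothetical illustration: theta is a state of nature sampled uniformly from [0, 1),
# and each component utility u_x(a, theta) is an indicator of an acceptable outcome
# under moral theory T_x. All names, thresholds, and credences here are made up.
random.seed(0)
states = [random.random() for _ in range(10_000)]

def u1(action, theta):
    # T_1: the outcome of the action is acceptable if theta falls below its threshold.
    return 1 if theta < {"a1": 0.9, "a2": 0.5}[action] else 0

def u2(action, theta):
    # T_2: the outcome of the action is acceptable if theta falls above its threshold.
    return 1 if theta > {"a1": 0.8, "a2": 0.2}[action] else 0

credence = {"T1": 0.7, "T2": 0.3}  # p(T_x): credence that theory T_x is correct

def expected_utility(action):
    # E[u(a, theta)] with u = p(T1)*u1 + p(T2)*u2, estimated over the sampled states.
    # Because u1 and u2 are 0/1 indicators, E[u_x] is the probability of an acceptable
    # outcome under T_x, so this matches the post's notion of expected success.
    total = sum(credence["T1"] * u1(action, t) + credence["T2"] * u2(action, t)
                for t in states)
    return total / len(states)

best = max(["a1", "a2"], key=expected_utility)
print(best, round(expected_utility(best), 3))
```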

That said, I believe that

  1. This post would benefit from a rewrite of the paragraph starting with "Success maximization is a mechanism by which to generalize maxipok". It states "Let a_i be an action i from the set of actions A." Is a_i an action, i an action, or both? I also don't understand what s_x is. Are there states of nature in this framework? You say that s_x is a moral theory, so it cannot be a state of nature?
  2. You should add concrete examples. If you add one or two it might become easier to understand what you're doing despite the formal definition not being 100% clear.

Speaking as a non-expert: This is an interesting idea, but I'm confused as to how seriously I should take it. I'd be curious to hear:

  1. Your epistemic status on this formalism. My guess is you're at "seems like a cool idea; others should explore this more", but maybe you want to make a stronger statement, in which case I'd want to see...
  2. Examples! Either a) examples of this approach working well, especially handling weird cases that other approaches would fail at. Or, conversely, b) examples of this approach leading to unfortunate edge cases that suggest directions for further work.

I'm also curious if you've thought about the parliamentary approach to moral uncertainty, as proposed by some FHI folks. I'm guessing there are good reasons they've pushed in that direction rather than more straightforward "maxipok with p(theory is true)", which makes me think (outside-view) that there are probably some snarls one would run into here.

Inside-view, some possible tangles this model could run into:

  • Some theories care about the morality of actions rather than states. But I guess you can incorporate that into 'states' if the history of your actions is included in the world-state -- it just makes things a bit harder to compute in practice, and means you need to track "which actions I've taken that might be morally meaningful-in-themselves according to some of my moral theories." (Which doesn't sound crazy, actually!)
  • The obvious one: setting boundaries on "okay" states is non-obvious, and is basically arbitrary for some moral theories. And depending on where the boundaries are set for each theory, theories could increase or decrease in influence on one's actions. How should we think about okayness boundaries?
    • One potential desideratum is something like "honest bargaining." Imagine each moral theory as an agent that sets its "okayness level" independently of the others, and acts to maximize good from its POV. Then our formalism should lead to each agent being incentivized to report its true views. (I think this is a useful goal in practice, since I often do something like weighing considerations by taking turns inhabiting different moral views.)
      • I think this kind of thinking naturally leads to moral parliament models -- I haven't actually read the relevant FHI work, but I imagine it says a bunch of useful things, e.g. about using some equivalent of quadratic voting between theories. 
    • I think there's an unfortunate tradeoff here, where you either have arbitrary okayness levels or all the complexity of nuanced evaluations. But in practice maybe success maximization could function as the lower level heuristic (or middle level, between easier heuristics and pure act-utilitarianism) of a multi-level utilitarianism approach.