
Disclaimer: I am not a professional moral philosopher. I have only taken a number of philosophy courses in undergrad, read a bunch of books, and thought a lot about these questions. My ideas are just ideas, and I don’t claim at all to have discovered any version of the One True Morality. I also assume moral realism and moral universalism for reasons that should become obvious in the next sections.


While in recent years the CEA and the leadership of EA have emphasized that they do not endorse any particular moral philosophy over any other, the reality is that, according to the most recent EA Survey to ask, a large majority of EAs lean towards Utilitarianism as their guiding morality.

Between that and the recent concerns about the “naïve” Utilitarianism of SBF, I thought it might be worthwhile to offer some of my philosophical hacks, or modifications, to Utilitarianism that I think make it more intuitive and practical, and less prone to many of the apparent problems people have with the classical implementation.

This consists of two primary modifications: setting utility to be Eudaimonia, and using Kantian priors. Note that these modifications are essentially independent of each other, and so you can incorporate one or the other separately rather than taking them together.

Eudaimonic Utilitarianism

The notion of Eudaimonia is an old one that stems from the Greek philosophical tradition. In particular, it was popularized by Aristotle, who formulated it as a kind of “human flourishing” (though I think it applies to animals as well) and associated it with happiness and the ultimate good (the “summum bonum”). It’s also commonly thought of as objective well-being.

Compared with subjective happiness, Eudaimonia attempts to capture a more objective state of existence. I tend to think of it as the happiness you would feel about yourself if you had perfect information and knew what was actually going on in the world. It is similar to the concept of Coherent Extrapolated Volition that Eliezer Yudkowsky once frequently espoused. The state of Eudaimonia is like reaching your full potential as a sentient being with agency, rather than a passive emotional experience like happiness.

So, why Eudaimonia? The logic of using Eudaimonia rather than mere happiness as the utility to be optimized is that it connects more directly with authentic truth, which helps avoid the following intuitively problematic scenarios:

  • The Experience Machine – Plugging into a machine that causes you to experience the life you most desire, but it’s all a simulation and you aren’t actually doing anything meaningful.
  • Wireheading – Continuous direct electrical stimulation of the pleasure centres of the brain.
  • The Utilitronium Shockwave – Converting all matter in the universe into densely packed computational matter that simulates many, many sentient beings in unimaginable bliss.

Essentially, these are all scenarios where happiness is seemingly maximized, but at the expense of something else we also value, like truth or agency. Eudaimonia, by capturing some of this more complex value alongside happiness, allows us to escape these intellectual traps.

I’ll further elaborate with an example. Imagine a mathematician who is brilliant, but also gets by far the most enjoyment out of life from counting blades of grass. But, by doing so, they are objectively wasting their potential as a mathematician to discover interesting things. A hedonistic or preference Utilitarian view would likely argue that their happiness from counting the blades of grass is what matters. A Eudaimonic Utilitarian on the other hand would see this as a waste of potential compared to the flourishing life that they could otherwise have lived.

Another example, again with our mathematician friend, is where there are two scenarios:

  • They discover a great mathematical theorem, but do not ever realize this, such that it is only discovered by others after their death. They die sad, but in effect, a beautiful tragedy.
  • They believe they have discovered a great mathematical theorem, but in reality it is false, and they never learn the truth of the matter. They die happy, but in a state of delusion.

Again, classical Utilitarianism would generally prefer the latter, while Eudaimonic Utilitarianism prefers the former.

Yet another example might be the case of Secret Adultery. A naïve classical Utilitarian might argue that committing adultery in secret, assuming it can never be found out, adds more hedons to the world than doing nothing, and so is good. A Eudaimonic Utilitarian argues that what you don’t know can still hurt your Eudaimonia: if the partner had perfect information and knew about the adultery, they would feel greatly betrayed, and so, objectively, Eudaimonic utility is not maximized.

A final example is that of the Surprise Birthday Lie. Given that Eudaimonic Utilitarianism places such a high value on truth, you might assume it would forbid lying to protect the surprise of a surprise birthday party. However, if the target of the surprise knew that people were lying in order to bring about a wonderful surprise for them, they would likely consent to these lies and prefer them to discovering the secret too soon and ruining the surprise. Thus, in this case Eudaimonic Utilitarianism implies that certain white lies can still be good.

Kantian Priors

This talk of truth and lies brings me to my other modification, Kantian Priors. Kant himself argued that truth telling was always right and lying was always wrong, so you might think that Kantianism would be completely incompatible with Utilitarianism. But even if you think Kant was wrong about morality overall, he contributed some useful ideas that we can borrow. In particular, the categorical imperative, in its formulation of acting only in ways that can be universalized, is an interesting way to establish priors.

By priors, I refer to the Bayesian probabilistic notion of prior beliefs, based on our previous experience and understanding of the world. When we make decisions with new information in a Bayesian framework, we update our priors with the new evidence to form our posterior beliefs, on which the final decision is based.

Kant argued that we cannot know the consequences of our actions, so we should not bother trying to figure them out. This was admittedly rather simplistic, but the reality is that there is frequently grave uncertainty about the actual consequences of our actions, and predictions made even with the best knowledge are often wrong. In that sense, it is useful to adopt a Bayesian methodology in our moral practice, to help us deal with this practical uncertainty.

Thus, we establish priors for our moral policies: essentially, default positions that we start from whenever we make moral decisions. For instance, lying, if universalized, would lead to a total breakdown in trust and is thus self-contradictory. This implies a strong prior towards truth telling in most circumstances.

This truth telling prior is not an absolute rule. If there is strong enough evidence to suggest that truth telling is not the best course of action, the Kantian Priors approach gives us the flexibility to override the default. For instance, if we know the Nazis at our door asking whether we are hiding Jews in our basement are up to no good, we can safely decide that lying to them is a justified exception.
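As a loose illustration, the prior-and-override pattern can be sketched as a simple Bayesian update. All the numbers here are invented for the sake of the example; real moral evidence is of course not so neatly quantifiable:

```python
def posterior(prior: float, likelihood_if_h: float, likelihood_if_not_h: float) -> float:
    """Bayes' rule: P(H | E) from P(H), P(E | H), and P(E | ~H)."""
    numerator = likelihood_if_h * prior
    return numerator / (numerator + likelihood_if_not_h * (1 - prior))

# Kantian prior: truth telling is almost always the right policy.
p_truth_best = 0.95

# New evidence: hostile people at the door asking about hidden refugees.
# Hypothetical likelihoods: this evidence is very unlikely in worlds where
# honesty is still the best action, and very likely in worlds where it isn't.
p_truth_best = posterior(p_truth_best, likelihood_if_h=0.02, likelihood_if_not_h=0.9)

# The posterior has collapsed well below 50%, so we override the default
# policy and treat lying as a justified exception in this particular case.
print(round(p_truth_best, 3))  # → 0.297
```

The point of the sketch is only that the default survives weak or ambiguous evidence and yields only to strong evidence, which is what distinguishes a prior from both an absolute rule and a case-by-case calculation.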

Note that we do not have to necessarily base our priors on Kantian reasoning. We could also potentially choose some other roughly deontological system. Christian Priors, or Virtue Ethical Character Priors, are also possible if you are more partial to those systems of thought. The point is to have principled default positions as our baseline. I use Kantian Priors because I find the universalizability criterion to be an especially logical and consistent method for constructing sensible priors.

An interesting benefit of having priors in our morality is that we gain some of the advantages of deontology without the usual tradeoffs. Many people tend to trust those with more deontological moralities because they behave reliably and consistently with their rules. Someone who never lies is quite trustworthy, while someone who frequently lies because they think the ends justify the means is not. Someone with deontic priors, on the other hand, isn’t so rigid as to be blind to changing circumstances, but also isn’t so slippery that you have to worry they’re trying to manipulate you into doing what they think is good.

This idea of priors is similar to Two-Level Utilitarianism, but formulated differently. In Two-Level Utilitarianism, most of the time you follow rules, and sometimes, when the rules conflict or a peculiar situation suggests you shouldn’t follow them, you calculate the actual consequences. With priors, the question is instead whether you have received evidence strong enough to shift your posterior beliefs and move you to temporarily break from your normal policies.


Classical Utilitarianism is a good system that captures a lot of valuable moral insights, but its practice by naïve Utilitarians can leave something to be desired, owing to perplexing edge cases and a tendency to justify just about anything. I offer two possible practical modifications that I hope integrate some of the insights of deontology and virtue ethics, and create a form of Utilitarianism that is more robust to the complexities of the real world.

I thus offer these ideas with the particular hope that such things as Kantian Priors can act as guardrails for your Utilitarianism against the temptations that appear to have been the ultimate downfall of people like SBF (assuming he was a Utilitarian in good faith of course).

Ultimately, it is up to you how you end up building your worldview and your moral centre. The challenge to behave morally in a world like ours is not easy, given vast uncertainties about what is right and the perverse incentives working against us. Nevertheless, I think it’s commendable to want to do the right thing and be moral, and so I have suggested ways in which one might be able to pursue such practical ethics.







Nice post! A eudaimonic focus pairs nicely with a capabilities approach to human welfare, where we might conceive of global health and development as enabling individuals’ substantive freedom to lead the lives they wish to. Ryan Briggs gives a great intro here.

Executive summary: The post proposes modifications to utilitarianism to make it more intuitive and robust in practice, specifically by using eudaimonia (human flourishing) as the utility to maximize and Kantian priors as default rules that can be overridden given sufficient evidence.

Key points:

  1. Eudaimonia, meaning objective well-being or human flourishing, is proposed as the utility to maximize rather than just happiness. This avoids problematic scenarios like experience machines.
  2. Kantian priors, or principled default positions, are suggested to provide a framework for decision making amidst uncertainty. These can be overridden given enough evidence.
  3. The modifications integrate insights from virtue ethics and deontology to make utilitarianism more consistent and trustworthy in practice.
  4. Examples are given for how eudaimonic utilitarianism and Kantian priors give different and arguably more intuitive conclusions in situations like secret adultery or lying.
  5. The ideas are meant as practical ways to pursue ethics given the complexities of the real world and avoid pitfalls like those affecting Sam Bankman-Fried.



This comment was auto-generated by the EA Forum Team.
