Total vs average utilitarianism

Say you have one utility (think of utility as happiness points) and you have to choose between either creating another person that has two utility, or increasing your own utility to two.

Circles are people. Numbers are the amount of utility. Arrows are options.

A total utilitarian would choose the first option. It's the one that generates the highest amount of total utility in the next tick (the next moment in time). An average utilitarian would choose the second option since it generates the highest amount of average utility in the next tick.
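As a minimal sketch of the two rules (Python, using the numbers from the example above):

```python
# Each option lists everyone's utility at the next tick.
option_create_person = [1, 2]  # keep your 1 utility, add a person with 2
option_boost_self = [2]        # increase your own utility to 2

def total(utilities):
    """Total utilitarianism: sum utility across people."""
    return sum(utilities)

def average(utilities):
    """Average utilitarianism: mean utility across people."""
    return sum(utilities) / len(utilities)

print(total(option_create_person), total(option_boost_self))      # 3 2   -> create the person
print(average(option_create_person), average(option_boost_self))  # 1.5 2.0 -> boost yourself
```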
Here the two theories disagree about how we should aggregate utility at a single moment in time. But how should we aggregate utility over a period of time?
 

Introducing timeline utilitarianism

Say you are about to die and you have two utility. You have to choose between either just dying, or dying while creating another person that has one utility.

Grey cross is non-existence

Let's look at how we can aggregate the amount of utility in a timeline.
Timeline A has two ticks. You could aggregate the total amount of utility of this timeline by simply adding the two ticks together (2+1). Let's call this method of aggregating "total timeline utilitarianism". By this measure timeline A (three utility) is a better choice than timeline B, which has only one tick and therefore only two utility.

You could also aggregate the utility of timeline A by taking the average of the two ticks ((2+1)÷2). Let's call this method of aggregating "average timeline utilitarianism". By this measure timeline B is the better choice: its single tick gives it an average of two utility, while timeline A averages only 1.5.
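The same two functions can be reused over ticks instead of people; a minimal sketch, treating a timeline as a list of per-tick utility totals:

```python
timeline_a = [2, 1]  # tick 1: you with 2 utility; tick 2: the new person with 1
timeline_b = [2]     # tick 1: you with 2 utility; then non-existence

def total_timeline(ticks):
    """Total timeline utilitarianism: sum utility across ticks."""
    return sum(ticks)

def average_timeline(ticks):
    """Average timeline utilitarianism: mean utility across ticks."""
    return sum(ticks) / len(ticks)

print(total_timeline(timeline_a), total_timeline(timeline_b))      # 3 2     -> A wins
print(average_timeline(timeline_a), average_timeline(timeline_b))  # 1.5 2.0 -> B wins
```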
 

Combining moment and timeline utilitarianism

So we have "moment utilitarianism" to look at moments in time and "timeline utilitarianism" to look at the entire timeline. What happens if we combine them? Let's introduce some terms. In "total average utilitarianism" the "total" refers to how we should aggregate the entire timeline. The "average" refers to how we should aggregate the individual moments. I will always mention the timeline aggregation first and the moment aggregation second. There are four different combinations that all make different claims about how we should act.

If we want to maximize total total utility we should choose timeline A. If we want to maximize average total utility we should choose timeline B. If we want to maximize total average utility we should choose timeline C. If we want to maximize average average utility we should choose timeline D.
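One way to sketch the mechanics (with a hypothetical timeline; the figure's exact numbers aren't reproduced here): represent a timeline as a list of ticks, each tick a list of individual utilities, and compose the two aggregation steps, timeline aggregation applied to the per-moment aggregates:

```python
from statistics import mean

# A timeline is a list of ticks; each tick is a list of individual utilities.
def aggregate(timeline, over_timeline, over_moment):
    """Aggregate each tick with over_moment, then those results with over_timeline."""
    return over_timeline([over_moment(tick) for tick in timeline])

example = [[1, 2], [3]]  # hypothetical: two people at tick 1, one person at tick 2

print(aggregate(example, sum, sum))    # total total:     (1+2) + 3       = 6
print(aggregate(example, mean, sum))   # average total:   ((1+2) + 3) / 2 = 3.0
print(aggregate(example, sum, mean))   # total average:   1.5 + 3         = 4.5
print(aggregate(example, mean, mean))  # average average: (1.5 + 3) / 2   = 2.25
```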
Usually when people talk about different types of utilitarianism they automatically presuppose "total timeline utilitarianism". In fact, the current debate between total and average utilitarianism is actually a debate between "total total utilitarianism" and "total average utilitarianism". I hope this post has shown that this isn't the only option.
 

Wrapping up

In reality we have many more options to choose from, and we will have to do complicated probability calculations under uncertainty instead of following a simple decision tree. Some might argue that non-existence should count as zero utility. Some might argue for more exotic forms of utilitarianism, like median or mode utilitarianism (I hope you don't spend too much time fretting over which of these options is the "correct" form of utilitarianism, and instead adopt something like meta-preference utilitarianism). This is just a simplified model to introduce the concept of timeline utilitarianism. In future posts I will expand on this concept and explore how it interacts with things like hingeyness and choice under uncertainty.
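For what it's worth, those exotic variants slot into the same scheme: just swap in a different aggregation function (hypothetical numbers):

```python
from statistics import median, mode

per_tick_totals = [2, 2, 5, 1]  # hypothetical timeline of per-tick utility totals

print(median(per_tick_totals))  # 2.0 -> median timeline utilitarianism
print(mode(per_tick_totals))    # 2   -> mode timeline utilitarianism
```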

Comments

Interesting idea!

In light of the relativity of simultaneity (whether A happens before or after B can depend on your reference frame), you might have to just choose a reference frame or somehow aggregate over multiple reference frames (and there may be no principled way to do so). If you just choose your own reference frame, your theory becomes agent-relative and may lead to disagreement about what's right between people who share the same moral and empirical beliefs but are choosing different reference frames.

Maybe the point was mostly illustrative, but I'd lean against using any kind of average (including mean, median, etc.) without special care for negative cases. If the average is negative, you can improve it by adding negative lives, as long as they're better than the average.
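A quick illustration of that failure mode, with made-up numbers:

```python
from statistics import mean

print(mean([-10]))      # -10
print(mean([-10, -5]))  # -7.5: the average "improved" by adding a life not worth living
```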

Thanks for the post. Coincidentally, when I saw it I was thinking about how I have a strong moral preference for a longer timeline.
I feel attracted by total total utilitarianism, but suppose we have N individuals, each living 80 years, with the same constant utility U. These individuals can either live more concentrated in time (say, within 100 years) or more scattered (say, across 10,000 years); I strongly prefer the latter (I'd pay some utility for it), even though that runs against any notion of (pure) temporal discounting. My intuition (though I don't trust it) is that, from the "point of view of nowhere", at some point length may trump population; but maybe that's just some ad hoc influence of a strong bias against extinction.
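A rough back-of-the-envelope sketch of this (hypothetical figures, N = 100): total total utility is the same under either arrangement, while average timeline utilitarianism would actually favor the concentrated one:

```python
N, years_lived, U = 100, 80, 1        # hypothetical values
total_utility = N * years_lived * U   # 8000 person-years of utility, same either way

print(total_utility / 100)    # 80.0 average per year if packed into 100 years
print(total_utility / 10000)  # 0.8 average per year if spread over 10,000 years
```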
Please, let me know about any source discussing this (I admit I didn't search enough for it).

Please, let me know about any source discussing this.

If with "this" you mean timeline utilitarianism, then there isn't one unfortunately (I haven't published this idea anywhere else yet). Once I've finished university I hope some EA institution will hire me to do research into descriptive population ethics, so hopefully I can provide you with some data on our intuitions about timelines in a couple of years.

I suspect that people more concerned with the quality of life will tend to favor average timeline utilitarianism, and all the people in this community who are so focused on x-risk and life-extension might be a minority with their stronger preference for the quantity of life (anti-deathism is the natural consequence of being a strong total timeline utilitarian).
If you want to read something similar to this then you could always check out the wider literature surrounding population ethics in general.

Personally, I've always understood total utilitarianism to already be across both time and space, as it is often contrasted not just with average utilitarianism but with person-affecting/prior existence views.

Yes, (total) total utilitarianism aggregates across both time and space, but you can aggregate across time and space in many different ways. E.g. median total utilitarianism also aggregates across both time and space, but does so very differently.

Right, I guess what I mean is that in an EA context, I've historically understood total utilitarianism to be total (an integral) across both time and space, rather than total in one dimension but not the other.

I think so too, because you can't really talk about ethics without a timeframe. I wasn't trying to argue that people don't use timeframes, but rather that people automatically use total timeline utilitarianism without realizing that other options are even possible. This was what I was trying to get at by saying:

Usually when people talk about different types of utilitarianism they automatically presuppose "total timeline utilitarianism". In fact, the current debate between total and average utilitarianism is actually a debate between "total total utilitarianism" and "total average utilitarianism".

Got it, I must have just misread your post then! :) Thanks for your patience in the clarification!

The question of how to aggregate over time may even have important consequences for population ethics paradoxes. You might be interested in reading Vanessa Kosoy's theory here, in which she sums an individual's utility over time with an increasing penalty over life-span. Although I'm not clear on the justification for these choices, the consequences may be appealing to many: Vanessa herself emphasizes the implications for evaluating astronomical waste and factory farming.

Hey Bob, good post. I've had the same thought (i.e. the unit of moral analysis is timelines, or probability distributions of timelines) with a different formalism.

The trolley problem gives you a choice between two timelines (T₁ and T₂). Each timeline can be represented as the set containing all statements that are true within that timeline. This representation can neatly state whether something is true within a given timeline or not: "You pull the lever" ∈ T₁, and "You pull the lever" ∉ T₂. Timelines contain statements that are combined as well as statements that are atomized. For example, since "You pull the lever", "The five live", and "The one dies" are all elements of T₁, you can string these into a larger statement that is also in T₁: "You pull the lever, and the five live, and the one dies". Therefore, each timeline contains a very large statement that uniquely identifies it within any finite set of timelines. However, timelines won't be our unit of analysis because the statements they contain have no subjective empirical uncertainty.

This uncertainty can be incorporated by using a probability distribution of timelines, which we'll call a forecast (F). Though there is no uncertainty in the trolley problem, we could still represent it as a choice between two forecasts: F₁ guarantees T₁ (the pull-the-lever timeline) and F₂ guarantees T₂ (the no-action timeline). Since each timeline contains a statement that uniquely identifies it, each forecast can, like timelines, be represented as a set of statements. Each statement within a forecast is an empirical prediction. For example, F₁ would contain "The five live with a credence of 1". So, the trolley problem reveals that you either morally prefer F₁ (denoted as F₁ ≻ F₂), prefer F₂ (denoted as F₂ ≻ F₁), or you believe that both forecasts are morally equivalent (denoted as F₁ ~ F₂).
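A minimal sketch of this formalism (the types and names here are my own reading of the comment, not the commenter's):

```python
from typing import Dict, FrozenSet

# A timeline is the set of all statements true within it.
Timeline = FrozenSet[str]

t1: Timeline = frozenset({"You pull the lever", "The five live", "The one dies"})
t2: Timeline = frozenset({"You don't pull the lever", "The five die", "The one lives"})

# A forecast is a probability distribution over timelines.
Forecast = Dict[Timeline, float]

f1: Forecast = {t1: 1.0}  # guarantees the pull-the-lever timeline
f2: Forecast = {t2: 1.0}  # guarantees the no-action timeline

print("You pull the lever" in t1)  # True
print("You pull the lever" in t2)  # False
```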

(In light/practice of advice I've read to just go ahead and comment without always trying to write something super substantive/eloquent, I'll say that) I'm definitely interested in this idea and in evaluating it further, especially since I'm not sure I really thought about this in an explicit way before (since I generally just think "average per each person/entity's aggregate [over time] vs. sum aggregate of all entities," without focusing that much on a distinction between an entity's aggregate over time and that same entity's average over time). Such an approach might have particular relevance under models that take a less unitary/consistent view of human consciousness. I'll have to leave this open and come back to it with a fresh/rested mind, but for now I think it's worth an upvote for at least making me recognize that I may not have considered a question like this before.
