
This post is a bit rougher than I would otherwise have posted, but I figured it was worth getting it out this week while more people are thinking about the “$100m to animal welfare vs. global health” question. I’m not taking an overall stand on the question here, but the argument of this post is relevant to it. Feedback of all kinds very welcome.

Introduction

Let’s call an experience the bundle of qualia felt by one subject at once. That is, in the typical human case, an experience is a bundle that includes a field of vision, a field of feelings of pressure on the body, an internal monologue, and so on, all lasting for one moment.[1]

Let’s say a hedonic theory of welfare maintains something like the following:

i) that each experience comes with some (positive, negative, or zero) hedonic intensity,[2] capturing how intensely good or bad it feels;
ii) that each experience comes with some amount (positive, negative, or zero) of welfare; and
iii) that, at least some other things being equal, the welfare is proportional to the hedonic intensity.

Comparing the (hedonic) welfare of different experiences can be difficult when the experiences are very different from each other, as for instance when they belong to different species, but the premise is that it is possible in principle.

It seems to me that people often take a hedonic theory of welfare to mean a theory that maintains, instead of (iii),

iii’): that welfare is proportional to hedonic intensity full stop.

I agree that, for a theory of welfare to count as hedonic, it must not hold that the welfare of an experience depends on features of it that have nothing to do with “how much good feeling it contains”. Certainly we don’t want to call “hedonic” the theory that the welfare of an experience is, say, the product of (a) its hedonic intensity and (b) the amount of respect future generations will have for its subject. Nevertheless, I would argue that there is a dimension along which experiences can differ, other than hedonic intensity, which a thoroughly hedonic theory of welfare can take into account, and indeed which any sensible such theory must take into account.

I call this dimension size. The claim will be that even if two experiences consist of equally intense pleasure or pain, there is a natural sense in which one can be bigger than the other, in which case the welfare of the bigger one is more positive or more negative. I’ll argue that the sense in which one experience can have more welfare than another due to its size, even if the two have equal hedonic intensities, is closely analogous to the sense in which (if welfare is aggregable at all) one population can have more welfare than another due to its size, even if the two consist of people all of whom are feeling the same thing. I’ll then argue that the tendency to neglect size and (insofar as we are entertaining hedonism) to think of an experience’s welfare as proportional only to its hedonic intensity is a common tendency in both EA and academic philosophy circles, and has probably led a lot of people astray on the question of interspecies welfare comparisons, as well as some other less significant questions.

I’m not nearly as familiar with the literatures on neuroscience, philosophy of mind, or theories of welfare as would be ideal here. I’m writing this up anyway because to my surprise, most of the philosophers (primarily philosophers of mind) with whom I discussed the idea seemed to think it had a reasonable chance of containing something worthwhile and novel.

Defining experience size

Analogy to the visual field

The “visual field” is the space in which we see the things we see. I think we can get an intuition for what it would mean to have a smaller visual field by closing one eye. The experiment isn’t perfect, but it is suggestive. On the one hand, at least to some extent, what happens when I close an eye is that part of the visual field is replaced by darkness—the same shimmering darkness I see when I close both eyes—instead of truly vanishing. On the other hand, it is so hard to devote much attention to that darkness while the other eye is open that it seems reasonable to say that to some extent the visual field really shrinks, or at least gives us a clue as to how it would feel if it had.

I don’t think it should be too controversial that one subject can have a larger visual field than another. If nothing else, someone with only one seeing eye presumably has a smaller visual field than someone with two. If a surgery (say, performed with her eyes open) brings sight to her blind eye, it seems safe to say her field of vision expands. The moment after the relevant nerves are reconnected, she can see everything she could see a moment before and then some. To be sure, her visual field does not double: the things each eye can see overlap, which is why having two eyes also helps with depth perception. But it still expands. I’m not aware of any reason to insist that in any sense the things she had seen all along, via the first seeing eye, “shrink” so that the “area of the visual field stays constant”. It doesn’t seem that the things we see with one open eye shrink when we open the other eye.

And in any event, blind people, or conscious creatures of species altogether incapable of sight, presumably have no visual field. I have no analog to the “sonar field” of a bat.

Some, e.g. Bayne (2010), have used the term “phenomenal field” to refer to the space in which we experience all we experience. It is analogous to the visual field, but extended to include sounds, sensations throughout the body, and so on. If the visual field is a part (or dimension) of the phenomenal field, and if we accept that one subject can have a larger visual field than another and an identical phenomenal field in other respects, it is almost a matter of logic that one subject can have a larger phenomenal field than another. Less incontrovertibly, but I think still pretty compellingly, it also seems reasonable to expect by analogy that one subject can have a larger phenomenal field than another in dimensions other than sight.

Analogy to the field of bodily sensations

Gaining fat or muscle increases the surface area of one’s body, but it doesn’t change one’s neurology. It doesn’t change the number of sensory neurons or the brain architecture that processes the signals they send. So it seems most natural to suppose that, at least after controlling for any possible changes in, say, distance of sensory neurons from the surface of the skin, subjecting one’s arms to a moderately unpleasant experience, like an ice bath, produces about as much discomfort after the gain in fat or muscle as it would have done before.

But consider someone with an amputated arm. Suppose that the arm has recently been amputated, so that there has not been time for the brain to adjust to the loss of the limb in any way, and suppose that the amputee doesn’t suffer from phantom limb syndrome. Like the one-eyed person with respect to the visual field, it seems fair to say that the amputee has a smaller phenomenal field when it comes to bodily sensations.

Split brain cases

The corpus callosum is the bundle of nerve fibers connecting the two hemispheres of the brain. If someone has a corpus callosotomy—if their corpus callosum is cut—there is strong evidence that the hemispheres go on to produce two streams of experiences. One stream feels only what goes on in the left half of the body, sees only the left half of the field of vision, determines only what the left hand does, and so on. The other one feels and controls only the right.

The discussions above about the visual field and field of bodily sensations suggest to me that, to a first approximation, the phenomenal field associated with each severed hemisphere extends half as far as that associated with an entire brain with its corpus callosum intact. Or, if you like, that the experiences in the two streams are about half as big.[3]

Welfare as an aggregate within an experience

The welfare of the whole experience depends on the parts

In the introduction, I said that a hedonic theory of welfare maintains something like

i) that each experience comes with some (positive, negative, or zero) hedonic intensity, capturing how intensely good or bad it feels, and
iii) that, at least [size] being equal, the welfare of each experience is [(ii): defined and] proportional to its hedonic intensity.

We can now be a bit more precise. It seems to me that a sensible hedonic theory of welfare should maintain

i*) that, in some sense or another, various parts of the phenomenal field each come with some distinct hedonic intensity; and
iii*) that the welfare of a whole experience is, or at least monotonically incorporates, some monotonic aggregation of the hedonic intensities felt in these different parts of the phenomenal field.[4]
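For readers who like a formula, here is one minimal way of writing down (i*) and (iii*). The notation is an illustrative gloss of mine, not something the argument depends on:

```latex
% A minimal formalization of (i*) and (iii*); the notation is an illustrative gloss, not the post's.
% (i*):  each part x of the phenomenal field P carries a hedonic intensity h(x), positive, negative, or zero.
% (iii*): the welfare of the whole experience monotonically incorporates some aggregation F of these intensities:
W(e) \;=\; F\bigl(\{\, h(x) : x \in P \,\}\bigr),
\qquad \text{where raising any one } h(x), \text{ holding the rest fixed, raises } W(e).
% The "total hedonic view" discussed later is the special case in which F is a sum (or integral) over P.
```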

To defend (i*), observe that it would at minimum be a big break from natural language to say that hedonic intensity is only ever a property of an experience as a whole. This would mean that when you have an ache in both arms, the presence of the second ache makes things feel worse only by making “overall pain more intense”. But if a doctor asks “how intense is the pain?”, you don’t have to respond with some sort of average capturing how bad you feel overall. It is natural to say things like “the pain is intense here but mild there”.

To defend the aggregation part of (iii*), again, compare an experience with two aching arms to an otherwise identical experience with an aching left arm and a right arm that feels fine. The first feels worse overall, and the reason why is about the “size” of the pain, not the intensity. We can stipulate that the maximal intensity of the discomfort across the phenomenal field in the first experience equals that in the second experience, but the first is still worse. When one ache disappears, we can tell the doctor “I feel a bit better now because the pain is gone here, though it’s just as bad there as before”.

Finally, the monotonicity part of (iii*) doesn’t seem very controversial either. It doesn’t establish that the right (or the intuitive) way to do the aggregation necessarily looks anything like a sum or integral across parts of the field. Maybe we want to say that having two equally bad arm aches lowers a person’s welfare only 1.5x as much as having one, relative to the baseline in which there is no positive or negative feeling in either arm. But surely we want to say that having two arm aches is worse than having one.
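To make the 1.5x example concrete, here is a toy sketch; the concave functional form and its exponent are illustrative assumptions of mine, chosen only to show that an aggregation can be monotone without being a straight sum.

```python
# Toy monotone-but-sublinear aggregation of hedonic intensities across the phenomenal field.
# The power-function form and the exponent are illustrative assumptions, not claims from the post.
import math

ALPHA = math.log(1.5) / math.log(2)  # ~0.585, chosen so two equal aches are 1.5x as bad as one

def welfare(intensities, alpha=ALPHA):
    """Aggregate hedonic intensities; alpha = 1 would recover a straight sum."""
    badness = sum(-h for h in intensities if h < 0)
    goodness = sum(h for h in intensities if h > 0)
    return goodness ** alpha - badness ** alpha

print(welfare([-1.0]))        # -1.0: one aching arm
print(welfare([-1.0, -1.0]))  # ~-1.5: two aching arms are worse than one, but not twice as bad
```

Whatever the exact shape of the aggregation, the monotonicity claim only requires that the second ache makes the total worse.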

Put aside whether the whole depends only on the parts

To be clear, I’m not proposing that the hedonic intensity associated with one part of the phenomenal field is independent of what’s going on in the rest of the field. For instance, I’m not claiming that the discomfort of a given nerve stimulation in one arm is independent of what’s going on with the other arm. Maybe the presence of an ache in one arm dulls or sharpens our ability to feel the ache in the other arm. If so, then to go from feeling an ache in one arm to feeling an equally intense ache in two arms, the nerves in both will have to be stimulated more or less than the nerves in the single aching arm were being stimulated. On introspection, it seems to me that this isn’t much of an issue when the aches are mild—we can perceive each one clearly enough that one’s intensity isn’t affected much by the other—but that it may well be an important complication when the aches are severe.

A more extreme case of interactions across the field would be the perception of visual beauty. I certainly understand that some sight may be beautiful not because any particular small part of the field of vision is beautiful but because of how the parts come together. Nevertheless, I don’t think this necessarily contradicts the view that, in some fundamental sense, it is the points on the visual field that come with various amounts of positive hedonic intensity. We just need to remember that these amounts can be strongly influenced by the contents of the rest of the visual field.

By analogy, consider a song. At each instant you hear only one chord, but how good it feels to hear that chord strongly depends on what chords you remember, at that instant, having heard recently. It at least doesn’t seem crazy (and I expect it’s true) to say that the overall welfare I attain from listening to a song is a monotonic aggregation of the welfare of all my experiences while listening to the song, and maybe some experiences thinking about it afterward.

All that said, even if you think that whole experiences can feel good or bad in some ways that don’t reduce to good or bad feelings in particular parts of the phenomenal field, you don’t have to reject (i*) or (iii*). Unless you go so far as to say that no two parts of the field can come with distinct hedonic intensities, so that even in the case of the aching arms your responses to the doctor are mistaken, you accept (i*). Unless you think that the welfare of an experience overall has nothing to do with the hedonic intensities of any parts which do happen to have hedonic intensities (or that the former can go up when the latter go down, all else equal), you accept (iii*).

Adding and removing parts

The “monotonicity” claim is essentially that if you make one part of the phenomenal field feel better and don’t make anything worse, the welfare of the experience goes up. This is analogous to a Pareto criterion at the population level, which in the context of a hedonic theory of population welfare would be the proposition that if you make someone feel better and don’t make anyone feel worse, the welfare of the population goes up.

This leaves open the question of what happens when you expand or shrink the phenomenal field. If we accept that this is possible, as argued in the previous section, one plausible position would be that if you expand someone’s phenomenal field to include new parts with positive hedonic intensity, you increase his welfare. This would be analogous to the benign addition principle at the population level. It likewise seems natural to say that if you shrink someone’s field to exclude parts which had had negative hedonic intensity—e.g. if you amputate or anesthetize an agonizing limb, as is often done—you increase his welfare.

Ethical implications

Immediate ethical implications

Some ethical implications follow more or less immediately from (a) a view about aggregation within an experience in conjunction with (b) a view about aggregation across experiences. (Or, to break the second kind of aggregation in two: (b1) a view about aggregation across experiences within a single being’s life and (b2) a view about aggregation across beings.) To take a simple view and the one I’m most sympathetic to, consider

  • utilitarianism, the view that we ought to take the act that maximizes welfare;
  • the totalist view of population welfare, i.e. that the welfare of a (finite) population (over some finite period of time) is the sum of the welfare of each experience in it (or perhaps integrating instead of summing the welfare of experience streams across time, if the flow of experience is continuous); and
  • what might be called the “total hedonic view of an experience’s welfare”, i.e. that the welfare of an experience reduces to the sum of the hedonic intensities of individual atomic parts of the experience that are in some sense equally sized (or perhaps to, in some sense, the integral of hedonic intensity across the experience).

Then:

  • If the experiences associated with split brains are half as big but twice as numerous as the experiences associated with intact brains, we ought to treat them the same in many cases. For instance, we should be indifferent between (a) putting two arms attached to an intact brain in an uncomfortable ice bath and (b) putting two arms attached to a split brain in the same ice bath (see the toy calculation after this list).[5]
  • Likewise, if the experiences of amputees are some proportion smaller than those of non-amputees (with respect to the field of bodily sensations), then they warrant proportionally less moral concern (when it comes to actions that affect the whole field of bodily sensations but nothing else). We should prefer putting the whole body of an amputee in the ice bath to putting the whole body of a non-amputee in it.[6]
  • At least in principle, different species may all be conscious, and all have the same range of capacities for hedonic intensity, but have very differently sized experiences. If so, they ought to be weighted accordingly. We should be indifferent between putting two individuals of a given species in the ice bath and putting one individual of a species that is very similar to the first but whose experiences are twice as large.
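Here is a toy calculation showing how these three views combine in the ice-bath cases above. The sizes and intensities, and the decision to weight by size in the first place, are illustrative assumptions rather than anything empirical.

```python
# Toy accounting: totalist aggregation across experiences of (size x hedonic intensity).
# All numbers are arbitrary illustrative units.

def total_welfare(experiences):
    """Sum of size-weighted hedonic intensities, one term per experience."""
    return sum(size * intensity for size, intensity in experiences)

ICE_BATH = -1.0  # discomfort per unit of phenomenal field submerged

# (a) Two arms attached to an intact brain: one experience, two arms' worth of field affected.
intact = [(2.0, ICE_BATH)]
# (b) Two arms attached to a split brain: two half-sized experiences, one arm each.
split = [(1.0, ICE_BATH), (1.0, ICE_BATH)]
# (c) One individual with twice-as-large experiences vs. two individuals of a similar smaller species.
big_species = [(2.0, ICE_BATH)]
two_small_individuals = [(1.0, ICE_BATH), (1.0, ICE_BATH)]

print(total_welfare(intact), total_welfare(split))                      # -2.0 -2.0 -> indifference
print(total_welfare(big_species), total_welfare(two_small_individuals)) # -2.0 -2.0 -> indifference
```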

How these implications are revisionary

When people adopt a hedonic theory of welfare but don’t allow for the possibility that some conscious creatures can have larger experiences than others—more precisely, when they accept (i), (ii), and (iii’) instead of (i*), (ii), and (iii*)—they have to attribute any differences in the creatures’ capacities for welfare to differences in their capacities for hedonic intensity. A common pattern is to find that species seem to have surprisingly similar capacities for hedonic intensity, and to conclude from this that they probably have similar capacities for welfare.

Bob Fischer and his team at Rethink Priorities offer an especially clear case of this pattern. Their moral weight project sequence (the first half of its entries came earlier and were written by Jason Schukraft) assumes that hedonism is true, and opens with the claim that “According to most plausible views, differences in capacity for welfare and moral status are determined by some subset of differences in things like: intensity of valenced experiences, self-awareness, general intelligence, autonomy, long-term planning, communicative ability, affective complexity, self-governance, abstract thought, creativity, sociability, and normative evaluation.” The entries then explore only differences in welfare capacity attributable to differences in hedonic intensity (where the discussion seems clearly not to be about size), the subjective experience of time (1, 2, 3) (perhaps related to the number of experiences a given brain produces per unit time), and the number of distinct experiences associated with a single neurological system at once (1, 2).[7]

Regarding hedonic intensity, their approach is essentially to make a list of ways in which humans can feel good or bad, at least somewhat related to the list of capacities above, and then to look into how many items on the list a given nonhuman species seems to check off. Fischer argued recently on the 80,000 Hours podcast that even very small creatures check off a lot of them. Apparently fruit flies exhibit what look like symptoms of depression, for instance. If we humans have greater capacities for (positive and negative) welfare than fruit flies only insofar as, on top of humans’ and fruit flies’ shared capacities for basic feelings like hunger, humans can also generate more cognitively sophisticated flavors of good and bad feeling, then the evidence on fruit flies and depression suggests that humans’ capacities for welfare are not much greater after all.

You can predict my response. Even if humans and flies had precisely the same qualitative kinds of sensory and cognitive experiences, and could experience everything just as intensely, we might differ in our capacities for welfare because our experiences are of different sizes. I know what it’s like to feel a pit of depression or a burst of elation “from head to toe”. Maybe a fly does too, but there’s just less room “from its head to its toe”, phenomenally speaking.

You might suspect that my disagreement with Fischer is just linguistic: that what he means when he says something like I think the experiences of creatures A and B have the same capacities for hedonic intensity is just what I would mean if I said I think the experiences of creatures A and B have the same [size × capacity for hedonic intensity at each point]. But on talking with him extensively about this, I’m quite sure the disagreement is substantive.

The disagreement comes out most clearly when we consider split brain cases. We share the view that the result of a corpus callosotomy is two distinct streams of experiences, one of which only feels one side of the body and the other of which only feels the other side. Indeed, we are both willing to say the result is two people. But Fischer maintains that each person associated with the split brain has roughly the same list of experiential capacities as the person had before the corpus callosotomy (sight, depression, and so on); that the positive or negative hedonic intensity each person can experience is about as great as before; and that therefore the pair of people that exist following a corpus callosotomy have, collectively, about twice the capacity for welfare as the single person had before. He accepts that welfare capacity doesn’t quite double, on the grounds that some cognitively sophisticated kinds of pleasure or pain might require both hemispheres. But he holds that each post-surgery person has, say, 0.99x the welfare capacity as the pre-surgery person, so that, say, putting a body with a split brain in an ice bath is 1.98x as bad from a utilitarian perspective as putting a body with an intact brain in the same ice bath.[8] My guess is that putting the former in the ice bath is, say, 0.99x as bad: the negative welfare inflicted on each person is half as bad due to the simple halving of the experiences, minus perhaps a little bit due to the disappearance of some cognitively sophisticated kinds of discomfort.
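Spelling out the arithmetic of the two accounts, using only the numbers already given in the paragraph above:

```latex
% Fischer's "size-free" accounting of the split-brain ice bath, relative to the intact-brain case:
2 \text{ people} \times 0.99 \;=\; 1.98\times \text{ as bad.}
% The size-weighted accounting suggested in this post:
2 \text{ people} \times \tfrac{1}{2} \text{ (size)} \times 0.99 \;=\; 0.99\times \text{ as bad.}
```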

Mill’s “higher pleasures”?

Jeremy Bentham’s original conception of utilitarianism was simple: the thing to maximize is pleasure minus pain. John Stuart Mill defended a more complex version of utilitarianism on which some pleasures (like poetry) are “higher” than others (like push-pin), and more valuable as a result, even if they are no more intense.[9]

Mill argued that the higher pleasures were lexicographically superior to the lower ones. That is, he thought that no number of instances of fun playing push-pin could ever improve the world by more than one instance of well appreciated poetry. That’s nuts, but decomposing the welfare of an experience into an aggregation of hedonic intensities in different parts of the phenomenal field might help to explain at least a little bit of Mill’s intuition. Hearing a chord in a pop song might come with a jolt of good feeling more intense than the enjoyment you can get out of a more complex piece of music, but the latter might add more to your welfare, and so be more morally important from a utilitarian perspective, because it occupies a larger share of your attention. Depression might be worse than localized sharp pain for the same reason.

Intuitions for superlinearity vs. sublinearity

To recap, my view is that (a) there is a natural sense in which some experiences are larger than others, and (b) that hedonic impacts across larger experiences are more morally important than equally intense hedonic impacts across smaller experiences in a way analogous to the way hedonic impacts across larger populations are more morally important than equally intense hedonic impacts across smaller populations.

My main source of intuition for this is the one I’ve outlined above: reflecting on cases of split brain patients, anesthetized painful limbs, what happens when I close one eye, and so on. This line of thought is centered on experience itself. But another source of intuition that pushes me toward the view above, weaker than the first source but still compelling to me, comes directly from reflecting on how it seems reasonable to expect morally relevant experience to scale with the size or complexity of a neurological system.

On Fischer’s “size-free” account, splitting a brain while it is undergoing some overall pleasurable or painful experience seems to create a quantity of value or disvalue from thin air. Whatever physical property it is that makes a living brain special, so that it can produce value or disvalue as a rock or dead brain can’t, it sure seems like the brain has more of that property (or has the property to a greater degree) the more fully it constitutes a well-functioning, well-behaving, well-integrated whole system. Cutting it in two breaks some functions, behaviors, and information processes; partially disintegrates it; and generally brings it one step closer to being a valueless pile of neurons in a box.

The view that expanding a mind from one hemisphere to two (if we could undo a corpus callosotomy) would increase its welfare capacity by much less than 100%—indeed, only by something like 1%—is an extreme case of a more general view: that

  • welfare capacity, “moral weight”, and other conceptual variables along these lines

tend to grow sublinearly in

  • neuron count and other variables along those lines.

A “size-free” account has led Fischer and many others to the more general view, in the context of interspecies comparisons. See e.g. Tomasik (2019) (discussed more in the last section below).

At face value, putting aside any other reasons we might have to believe in this sublinearity, it is suspicious to me for the same reason as “value doubling by snip” is suspicious to me in the split brain case. If taking my brain apart and recomposing it into the brains of several smaller animals multiplies the capacity for welfare or moral weight across all the matter that used to constitute my brain, there is a funny non-monotonicity. The first few of these decompositions and recompositions supposedly increase aggregate moral weight, but take a step too far and the unstructured goop at the end—the pile of neurons on the ground—has no welfare capacity at all. Even if, like Tomasik (2014), we entertain some panpsychist view that says that individual atoms have some tiny glimmerings of experience or welfare capacity or moral weight, I think few panpsychists would deny that whole brains are more, not less, than the sum of their parts.
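To put the non-monotonicity worry in a stylized form: suppose, purely for illustration, that moral weight scaled with neuron count n as n^α for some α < 1. Splitting one brain of n neurons into k functioning brains of n/k neurons each would then keep raising aggregate moral weight as k grows, even though the limiting case of maximal decomposition, the pile of neurons, is supposed to have none:

```latex
% Stylized sublinear scaling: weight(n) = n^{\alpha} with 0 < \alpha < 1 (an illustrative assumption).
k \cdot \left(\tfrac{n}{k}\right)^{\alpha} \;=\; k^{\,1-\alpha}\, n^{\alpha} \;>\; n^{\alpha}
\quad \text{whenever } k > 1 .
```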

So it seems more straightforward to me to suppose that moral weight tends to scale superlinearly with inputs like neuron count. I think that most people have this intuition, judging from how much less most people seem to care about tiny animals like insects than about larger animals like birds. When trying to explore the basis for this intuition, a framework that allows moral weight to vary only in line with listed “capacities” or hedonic intensity is prone to concluding that there is no basis for the intuition, and that we should give smaller animals much more concern relative to larger animals than our intuitions tell us to. If we think such frameworks have been neglecting a morally important dimension of experience—especially one that, like size, is relatively easy to grasp on some common-sense level (“how could it really be having a whole big world of feeling when it could fit under my toenail?”) even if it hasn’t been very well articulated—then maybe we should return to something closer to the common-sense conclusion.

On the other hand, I grant that people’s intuitions on this question could just be driven by a bias in favor of giving more weight to minds similar to their own. Supporting the conclusion that this is the main driver of the extreme difference in common-sense concern for birds over insects, rather than any truth-tracking intuitions about superlinearity, note that another natural implication of superlinearity is that even slightly “bigger” artificial minds could have much greater welfare capacities than humans, with the differences coming both from raw size and from any enhancements in capacity for hedonic intensity. I expect that people will generally not find this intuitive. But I guess we’ll see what happens.

(Even more speculative) epistemic implications

Without getting into the weeds of anthropics, it’s relatively uncontroversial to say that, in some sense or another, a helpful way to reason about your place in the world is to imagine that the “data generating process” that gave rise to your experience involved placing you at random among some set of (perhaps all) possible experiences. Discussions about this sort of thing seem to assume that you were equally likely to turn out to be any of the experiences in the eligible set.[10] But maybe it’s more like a dart is thrown at a big board including all the phenomenal fields, so that you’re more likely to land on a big one than a small one. If so, understanding this can change your conclusions about some important questions.

For instance, you might have the thought that if all the world’s countless insects and tiny fish and so on were conscious, it would be really suspicious that you find yourself to be a human (or a human experience, if you like), at the top of the heap. You might then infer from this that most animals are probably not conscious, even if you would have thought they were conscious just from looking at the neuroscience and so on.[11]

Putting aside whether this sort of reasoning makes sense in the first place, hopefully it’s clear how allowing for the possibility that species have different sized experiences, with larger ones more likely a priori to be you, can be relevant here. Maybe the right update to draw from the fact that you’re “suspiciously not an ant”, if any, isn’t that ants aren’t conscious but just that their experiences are very small.
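For concreteness, here is a toy version of the dart-board picture. The population counts and the "size" figure for ants are invented purely to illustrate how the two sampling rules come apart.

```python
# Toy anthropic comparison: how surprising is it to find yourself a human,
# under uniform sampling over experiences vs. sampling weighted by experience size?
# The counts and sizes below are invented for illustration only.

HUMANS = {"count": 8e9,  "size": 1.0}
ANTS   = {"count": 1e19, "size": 1e-10}  # hypothetically tiny experiences

def p_human(weight_by_size: bool) -> float:
    def weight(group):
        return group["count"] * (group["size"] if weight_by_size else 1.0)
    return weight(HUMANS) / (weight(HUMANS) + weight(ANTS))

print(p_human(weight_by_size=False))  # ~8e-10: being human looks wildly surprising
print(p_human(weight_by_size=True))   # ~0.89: much less surprising if ant experiences are tiny
```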

Related existing ideas

I sometimes hear people, talking about trading off the concerns of different species from a utilitarian perspective, refer separately to (a) the hedonic intensity of the experiences some species has and (b) the species’s “moral weight”. But there are many reasons why one species might have more moral weight than another, not just differing experience size. For example:

  • If experiences are discrete in time, one species might have a higher “frame rate” than another, cranking out more experiences per unit time. (This is discussed e.g. in Rethink Priorities’s moral weights series, in the posts on the subjective experience of time, cited above.)
  • Lee (2023) argues that different species might have different “degrees of consciousness” in one sense or another. One way this might work is that some species at its most alert might lie somewhere on the spectrum we travel every night as we’re drifting off to sleep. If so, this might mean that its experiences come with less positive or negative welfare than ours because of how they score on a dimension other than hedonic intensity or size.

So it’s not clear just from a utilitarian discussion of “moral weight”, as distinct from hedonic intensity, that people are incorporating the idea that some creatures have larger experiences than others in a way closely analogous to how the visual field seems to get larger when we open a lone closed eye.

Across Lee’s (2023) discussion of ways in which consciousness might come in degrees, I believe the way that most closely resembles what I mean by size is his discussion of “atomism”. He defines atomism as the view that one “total experience” (what I have been calling an experience) is composed of many “atomic experiences”, which are indivisible, and says

It's natural for atomists to hold that if x’s total experience is composed from a greater number of atomic experiences than y’s total experience, then x is more conscious than y.

Perhaps this essentially encompasses the idea I’m expressing in this post--I find it a little hard to tell from his list of illustrations of how atomism might work. In any event, one difference is that the point I mean to make doesn’t rely on there being indivisible parts of experience. I think it’s possible to understand a notion of experience size even if experience is truly continuous.

In the 2006 paper “Quantity of experience: brain-duplication and degrees of consciousness”, Bostrom considers a hypothetical process of duplicating a brain or digital mind. He argues that between the beginning of the process, when there would be only one experience, and the end, when there would be two, there is probably a time when there would be, say, one and a half. He discusses how we might make sense of the idea of there being a non-integer number of experiences on a computationalist theory of consciousness. His only statements about what this might mean phenomenologically, however, are negative:

  1. He is not imagining a scenario in which the “half-experience” is half-faded, with the reds looking like pinks and so on.
  2. “Nothing changes, except the quantity of experience. The difference in what experience there is, is of the same kind as the difference between a case where only one brain is having an experience and one in which two identical brains are having that same experience.” (pp. 197-8)

So he doesn’t seem to be referring to the possibility that a single experience A can count for 1.5x as much as a qualitatively different experience B, where the qualitative difference consists of a larger field of vision, bodily sensation, and so on.

Brian Tomasik’s 2019 post “Is Brain Size Morally Relevant?” seems to take a step in the direction of the idea of experience size:

If we attribute moral importance to a collection of various cognitive processes, then if we figuratively squint at the situation, we can see that in a bigger animal, there's some sense in which the cognitive processes ‘happen more’ than in smaller animals, even if all that’s being added are additional sensory detail, muscle-fiber contractions, etc., rather than qualitatively new abilities.

But his explanation of what he has in mind by “happening more” doesn’t seem to capture the idea I have in mind here (unless it’s buried in the “etc.”); adding “sensory detail” sounds like a generalization of laser eye surgery increasing the sharpness of one’s vision, not a generalization of surgery on a blind eye widening one’s visual field. I believe because of this, Tomasik goes on to argue that this “happening more” is probably not very morally relevant, on the grounds that in some sense “signal strengths are relative, not absolute”:

It may be that the amount of neural tissue that responds when I stub my toe is more than the amount that responds when a fruit fly is shocked to death. (I don't know if this is true, but I have almost a million times more neurons than a fruit fly.) However, the toe stubbing is basically trivial to me, but to the fruit fly, the pain that it endures before death comes is the most overwhelming thing in the world.

These ideas are all slippery enough that it’s difficult to know whether the reader and the author have the same notion in mind, but my impression is that the idea of “signal strengths being relative, not absolute” would correspond to the idea that, for someone with only one seeing eye, each object in her field of vision in some important sense appears larger than it appears after the surgery in which her other eye is restored. As noted above, I think that is mistaken. I likewise think it is less bad to impose “the most overwhelming thing in the world” on a phenomenal field with less to overwhelm.

Thanks to Bob Fischer, Brad Saad, Patrick Butlin, Rob Long, Mattia Cecchinato, Riley Harris, Arvo Muñoz Morán, Teru Thomas, Adam Bales, and Tomi Francis for feedback on these ideas and how to present them. 

  1. ^

    Rob Long tells me that philosophers of mind sometimes use the term experience this way, but often enough use it to refer either to what I would call a quale, or to a bundle of qualia that form part of an experience (e.g. “the experience of seeing red” or “...of seeing a painting”), that it would be clearer to use the term total experience here. I will stick with experience because it is shorter and I will be referring to this object often. What I am calling an experience is also sometimes called a person-stage, especially in population ethics, but since I’ll spend a lot of this document discussing animals, I think it could be confusing and perhaps incorrect to incorporate the word person.

  2. ^

    I was on the fence about using the term valence instead—I think most people use the two terms to refer either to the same thing, or to something close enough for our purposes—but it will be helpful to be able to use the word “intensity”.

  3. ^

    Not necessarily exactly half. For one thing, the left hemisphere typically controls speech on its own. But still, about half.

  4. ^

    “Monotonic” in the sense that if you make some part of the field feel better/worse, the welfare of the overall experience rises/falls.

  5. ^

    As long as we think the welfare of someone with an intact brain falls twice as much when both arms are submerged as when one is.  

  6. ^

    Again, as long as the amputee’s brain hasn’t had time to rewire in any way.

  7. ^

    I believe the closest the sequence comes to considering the possibility of differences in experience size is this post and the associated report, which both state that there are reasons against believing that “additional neurons result in ‘more consciousness’ or ‘more valenced consciousness’”. But, as the report puts it, the author feels that on a utilitarian view “the idea that neuron counts will contribute to moral weight requires an assumption that neuron counts influence the intensity of conscious experiences of positive and negative states” (emphasis added); and both pieces go on to rebut only the view that additional neurons systematically result in more valenced consciousness.

  8. ^

    The example we discussed was burning alive, but that seems unnecessarily gruesome to bring up. I’m taking the liberty of assuming that his intuitions in the burning and ice bath cases are analogous. We both accepted (at least for the sake of argument) the premise that, when the brain is intact, the feeling of burning throughout the body is about twice as bad as the feeling of burning on only half the body; and in any event, as noted above, this sort of simple adding up seems less plausible the more extreme the pains are.

  9. ^
  10. ^

    But I wouldn’t be at all surprised if this had been generalized. If someone can point me to where it has, please do!

  11. ^

    Standish (2013) formalizes a thought like this in an application of SSA. Thanks to Alex Norman for introducing me to this thought way back in 2017, and to Rob Long for helping me find the source.

Comments (14)

I’m not nearly as familiar with the literatures on neuroscience, philosophy of mind, or theories of welfare as would be ideal here. I’m writing this up anyway because to my surprise, most of the philosophers (primarily philosophers of mind) with whom I discussed the idea seemed to think it had a reasonable chance of containing something worthwhile and novel

As one of the philosophers in question, I will now say there's a very high chance this contains something worthwhile! And even if it's not entirely novel (I'm not sure), I'm having trouble finding any papers that are obviously about this topic / concept, so it's still very worth laying out.

And another literature pointer: Integrated Information Theory (IIT) specifies an "amount" of consciousness that a given system has. Adam Pautz criticizes IIT's notion of "amount" as being ambiguous and potentially incoherent. Interestingly, Pautz's list of potential ways in which experiences can be degreed does not (as far as I can tell) contain anything corresponding to your "size" notion.

Thank you!

And thanks for the IIT / Pautz reference, that does seem relevant. Especially to my comment on the "superlinearity" intuition that experience should probably be lost, or at least not gained, as the brain is "disintegrated" via corpus callosotomy... let me know (you or anyone else reading this) if you know whether IIT, or some reasonable precisification of it, says that the "amount" of experience associated with two split brain hemispheres is more or less than with an intact brain.

Very nice post.

Do you think that size and intensity are reducible to a common factor? Somewhat metaphorically, one could say that, ultimately, there are only atoms of pleasantness and unpleasantness, which may be more or less concentrated in phenomenal space. When the atoms are concentrated, we call it ‘intensity’; when they are dispersed, we call it ‘size’. But when all is said and done, the value of a state of affairs is entirely determined by its net hedonic “quantity” (i.e., the number of pleasantness atoms minus the number of unpleasantness atoms).

Thanks! That's a really interesting thought. I hadn't thought of that possibility--I've been working on the assumption that they're not reducible--but now that you mention it, I don't have very strong intuitions about whether it seems more or less likely than there being two dimensions "at bottom".

One intuition against is that it seems a bit weirdly discrete to suppose that a "hedonic atom" can just be +1, 0, or -1. But I guess there's some discreteness at bottom with literal atoms (or perhaps a better analogy would be electrical charge) as well...

While I was at RP, we wrote about a similar hypothesis here.

This excerpt is the one I'd personally highlight as reason for skepticism:

Abstracting away from fruit fly brains, it’s likely that some functions required for consciousness or valence—or realized along the way to generate conscious valence—are fairly high-order, top-down, highly integrative, bottlenecking, or approximately unitary, and some of these are very unlikely to be realized thousands of times in any given brain. Some candidate functions are selective attention,[11] a model of attention,[12] various executive functions, optimism and pessimism bias, and (non-reflexive) appetitive and avoidance behaviors. Some kinds of valenced experiences, like empathic pains and social pains from rejection, exclusion, or loss, depend on high-order representations of stimuli, and these representations seem likely to be accessible or relatively few in number at a time, so we expect the same to hold for the negative valence that depends on them. Physical pain and even negative valence generally may also turn out to depend on high-order representations, and there’s some evidence they depend on brain regions similar to those on which empathic pains and social pains depend (Singer et al., 2004; Eisenberger, 2015). On the other hand, if some kinds of valenced experiences occur simultaneously in huge numbers in the human brain, but social pains don’t, then, unless these many valenced experiences have tiny average value relative to social pains, they would morally dominate the individual’s social pains in aggregate, which would at least be morally counterintuitive, although possibly an inevitable conclusion of Conscious Subsystems.

And I expanded a bit more here.

I cited your post (at the end of the 2nd paragraph of "How these implications are revisionary") as an exploration of a different idea from mine, namely that one brain might have more moral weight than another because it contains more experiences at once. Your excerpt seems to highlight this different idea.

Are you saying your post should be read as also exploring the idea that one brain might have more moral weight than another even if they each contain one experience, because one experience is larger than the other? If so, can you point me to the relevant bit?

I think some of the same arguments in our post, including my quoted excerpt, apply if you instead think of counting multiple valenced (pleasurable, unpleasant) components (or "sub-experiences") of one experience. I had thought of having more valenced components like having a visual field with more detail, but that didn't make it into publication.

Sensations are (often) "location-specific". Your visual field, for example, has many different sensations simultaneously, organized spatially.

To add to what I already wrote, I think the case for there being many, many accessible valenced components simultaneously is weak:

  1. I don’t think there's any scientific evidence for it.
  2. It would be resource-costly to not use the same structures that generate valence in a location-independent way. We don’t need to re-represent location information already captured by the sensory components.
  3. There is some evidence that we do use these structures in location-independent ways, because the same structures are involved in physical pains, empathic pains (without painful sensation) and social pains, which can involve totally different mapped locations and maybe no location mapping at all.

If this is right, then I don't see "experience size" varying much hedonically across animals.

(If you were instead thinking of one valenced component associated with many non-valenced sensory (or otherwise experiential) components, then I doubt that this would matter more on hedonism. There isn't more pleasure or suffering or whatever just because there are more inputs.)

Ah wait, did your first comment always say “similar”? No worries if not (I often edit stuff just after posting!) but if so, I must have missed it--apologies for just pointing out that they were different points and not addressing whether they are sufficiently similar.

But they do seem like significantly different hypotheses to me. The reason is that it seems like the arguments presented against many experiences in a single brain can convince me that there is probably (something like) a single, highly "integrative" field of hedonic intensities, just as I don't doubt that there is a single visual processing system behind my single visual field, and yet leave me fully convinced that both fields can come in different sizes, so that one brain can have higher welfare capacity than another for size reasons.

Thanks for the second comment though! It's interesting, and to my mind more directly relevant, in that it offers reasons to doubt the idea that hedonic intensities are spread across locations at all. They move me a bit, but I'm still mostly left thinking
- Re 1, we don't need to appeal to scientific evidence about whether it’s possible to have different amounts of, say, pain in different parts of the phenomenal field. It happens all the time that we feel pain in one hand but not the other. If that's somehow an illusion, it's the illusion that needs a lot of scientific evidence to debunk.
- Re 2, it's not clear why we would have evolved to create valence (or experience) at all in the first place, so in some sense the fact that it would evidently be more efficient to have less of it doesn't help here. But assuming that valence evolved to motivate us in adaptive ways, it doesn't seem like such a stretch to me to say that forming the feeling "my hand is on fire and it in particular hurts" shapes our motivations in the right direction more effectively than forming the feeling "my hand is on fire and I've just started feeling bad overall for some reason", and that this is worth whatever costs come with producing a field of valences.
- Re 3, the proposal I call (iii*) and try to defend is "that the welfare of a whole experience is, or at least monotonically incorporates, some monotonic aggregation of the hedonic intensities felt in these different parts of the phenomenal field" (emphasis added). I put in the "incorporates" because I don't mean to take a stand on whether there are also things that contribute to welfare that don't correspond to particular locations in the phenomenal field, like perhaps the social pains you mention. I just find it hard to deny from first-hand experience that there are some "location-dependent" pains; and if so, I would think that these can scale with "size".

Ah wait, did your first comment always say “similar”? No worries if not (I often edit stuff just after posting!) but if so, I must have missed it--apologies for just pointing out that they were different points and not addressing whether they are sufficiently similar.

There's a good chance I edited that in, but I don't remember for sure.

Re 1, we don't need to appeal to scientific evidence about whether it’s possible to have different amounts of, say, pain in different parts of the phenomenal field. It happens all the time that we feel pain in one hand but not the other. If that's somehow an illusion, it's the illusion that needs a lot of scientific evidence to debunk.

I don't think this is an illusion. However, my understanding of the literature is that pain has 3 components: sensory, affective (unpleasantness) and motivational (aversive desire, motivational salience, how it pulls attention). The sensory component is location-specific and like a field. The affective component seems not like a field, imo, but this is not settled, AFAIK. The motivational component is (in part) the pull of your attention to the motivationally salient parts of your sensory field. It selects and amplifies signals from your sensory field.

it doesn't seem like such a stretch to me to say that forming the feeling "my hand is on fire and it in particular hurts" shapes our motivations in the right direction more effectively than forming the feeling "my hand is on fire and I've just started feeling bad overall for some reason", and that this is worth whatever costs come with producing a field of valences.

I think the mechanism of motivational salience could already account for this. You don't need a field of valences, just for your attention to be pulled to the right parts of your sensory field.

Executive summary: The author argues that experiences can vary in "size" in addition to hedonic intensity, and that this size dimension should be incorporated into hedonic theories of welfare and interspecies welfare comparisons.

Key points:

  1. Experiences can vary in "size" (e.g. visual field, bodily sensations), analogous to how populations can vary in size.
  2. Hedonic theories of welfare should consider both intensity and size when aggregating welfare across an experience.
  3. This view implies that creatures with larger experiences (e.g. humans vs insects) may have greater capacity for welfare, even if hedonic intensities are similar.
  4. Considering experience size may resolve some counterintuitive implications of other approaches to interspecies welfare comparisons.
  5. This perspective could impact anthropic reasoning and views on consciousness in different species.
  6. The author acknowledges this is a novel and speculative idea that requires further development and scrutiny.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

Here's another related hypothesis I'm more sympathetic to, copying from this comment:

The only measures of subjective welfare that seem to me like they could ground interpersonal comparisons are based on attention (and alertness), e.g. how hard attention is pulled towards something important (motivational salience) or "how much" attention is used. I could imagine the "size" of attention, e.g. the number of distinguishable items in it, to scale with neuron counts, maybe even proportionally, which could favour global health on the margin.

But probably with decreasing marginal returns to additional neurons, and I give substantial weight to the number of neurons not really mattering at all, once you have the right kind of attention.

Thanks for noting this possibility--I think it's the same as, or at least very similar to, an intuition Luisa Rodriguez had when we were chatting about this the other day actually. To paraphrase the idea there, even if we have a phenomenal field that's analogous to our field of vision and one being's can be bigger than another's, attention may be sort of like a spotlight that is smaller than the field. Inflicting pains on parts of the body lowers welfare up to a point, like adding red dots to a wall in our field of vision with a spotlight on it adds redness to our field of vision, but once the area under the spotlight is full, not much (perhaps not any) more redness is perceived by adding red dots to the shadowy wall outside the spotlight. If in the human case the spotlight is smaller than "the whole body except for one arm", then it is about equally bad to put the amputee and the non-amputee in an ice bath, or for that matter to put all but one arm of a non-amputee and the whole of a non-amputee in an ice bath.

Something like this seems like a reasonable possibility to me as well. It still doesn't seem as intuitive to me as the idea that, to continue the metaphor, the spotlight lights the whole field of vision to some extent, even if some parts are brighter than others at any given moment; if all of me except one arm were in an ice bath, I don't think I'd be close to indifferent about putting the last arm in. But it does seem hard to be sure about these things.

Even if "scope of attention" is the thing that really matters in the way I'm proposing "size" does, though, I think most of what I'm suggesting in this post can be maintained, since presumably "scope" can't be bigger than "size", and both can in principle vary across species. And as for how either of those variables scales with neuron count, I get that there are intuitions in both directions, but I think the intuitions I put down on the side of superlinearity apply similarly to "scope".

At least in principle, different species may all be conscious, and all have the same range of capacities for hedonic intensity, but have very differently sized experiences. If so, they ought to be weighted accordingly. We should be indifferent between putting two individuals of a given species in the ice bath and putting one individual of a species that is very similar to the first but whose experiences are twice as large.

(Trigger warning: scenario involving non-hearing humans)
-If I think about a fish vs a fly, this makes some sense.
-If I think about a deaf person vs a hearing person, this starts to make less sense: empirically, I'd wager that there's no difference.
-If I think about a deafblind person vs a hearing-and-sighted person, then my intuition is opposite: I actually care about the deafblind person slightly more, because their tactile phenomenal space has much higher definition than that of the hearing-and-sighted person.

All else being equal, the only thing that matters is the aggregated intensity, no matter the size.

Expanding on this, and less on-topic:
-I've met a lot of people who had preferences over their size of experience (typically, deaf people who want to stay deaf, hearing people who wanted to be deaf, etc)
-Humans with a restricted field of experience seem to experience the rest more intensely. This intensity seems to matter to me.
-I also think that someone who is human-like except for having additional senses does not necessarily merit more moral consideration: such senses matter only if they lead them to suffer, but in terms of potential happiness, it does not move me.
-I also feel that people with fewer modalities and a preference over them should be included in an inclusive society, not forced to get the "missing" modalities, much like I'm not interested, at the moment, in additional modalities such as feeling sexually attracted to animals (it is, after all, something I have truly never felt).

I'm confused about how this fares under your perspective, and maybe your answer could help me better grasp the main distinctions you were trying to draw in this article?

Please note that I'm not accusing you of discriminating over modal fields among humans, I'm genuinely curious about the implications of your view. I already wrote a post on something related (my views might have changed on this) and I understand that we disagree, but I'm not sure.

Thanks for sharing this. (Thank you very much as well for letting me start exploring a tricky idea like this without assuming this is all just an excuse for discriminating against those with disabilities!) I definitely agree that a risk of trying to account for differences in "experience size", even if the consideration is warranted, is that it could lead us to quickly dismiss experiences different from our own as smaller even if they aren't.

I am no expert on deafness or most of the other topics relevant here, but my understanding is that often, if someone loses a sensory faculty or body part but doesn't suffer damage to the relevant part of the brain, the brain in some sense rewires to give more attention (i.e., I would guess, more hedonic intensity and/or more "size") to the remaining sensory faculties. This is why, when bringing up the case of an amputee, I only consider the case of someone whose brain has not had time to make this adjustment. I think it could totally be the case that deaf people, at least post-adjustment (or throughout life, if they have been deaf from birth), have so much richer experiences on other dimensions that their welfare capacities tend to be greater overall than those of non-deaf people.
