MichaelStJules

Ya, we probably roughly agree about meta-ethics, too. But I wouldn't say I "understand" consciousness or ethics, except maybe at a high level, because I'm not settled on what I care about and how. The details matter, and can have important implications. I would want to defer to my more informed views.

For example, the evidence in this paper was informative to me, even assuming strong illusionism:

https://www.frontiersin.org/journals/veterinary-science/articles/10.3389/fvets.2022.788289/full

And considerations here also seem important to me:

https://forum.effectivealtruism.org/posts/AvubGwD2xkCD4tGtd/only-mammals-and-birds-are-sentient-according-to?commentId=yiqJTLfyPwCcdkTzn

FWIW, I lean towards (strong) illusionism, but I think this still leaves a lot of room for questions about what things (which capacities and states) matter, how and how much. I expect much of this will be largely subjective and have no objective fact of the matter, but it can be better informed by both empirical and philosophical research.

I wonder if the right or most respectful way to create moral patients (of any kind) is to leave many or most of their particular preferences and psychology mostly up to chance, and some to further change. We can eliminate some things, like being overly selfish, sadistic, or unhappy, or having preferences that are overly difficult to satisfy, but we shouldn’t decide too much ahead of time what kind of person any individual will be. That seems likely to mean treating them too much as means to ends. Selecting for servitude or submission would go even further in this wrong direction.

We want to give them the chance to self-discover, grow and change as individuals, and the autonomy to choose what kind of people to be. If we plan out their precise psychologies and preferences, we would deny them this opportunity.

Perhaps we can tweak the probability distribution of psychologies and preferences based on society's needs, but this might also treat them too much like means. Then again, economic incentives could push them in the same directions anyway, so maybe it's better for them to be happier with the options they'll face.

Well, it might also reduce chicken and egg consumption, because those are also more carbon-intensive than plant-based foods. And then there could also be symbolic effects, e.g. people might come to believe that animal agriculture in general is bad for the environment, not just ruminant farming. It could also support plant-based and cultured food R&D, which could then be good for chickens.

I don't have a strong view either way. I don't think such symbolic effects matter much for the vast majority of people, and I'd guess the price effects push towards increased chicken and egg consumption. How plant-based and cultured food R&D is affected seems super speculative to me, and this doesn't seem like a reliable way to increase it anyway.

Regardless of which way my best guess would go, I wouldn't have pushed for this to happen, because I definitely wouldn't have been confident it was robustly positive for animals, and I'd guess there were more effective and reliable ways to achieve the same upsides, without the potential downsides (or similarly large ones, per unit of resources or work).

Coming back to this, since I've recently become more sympathetic to (asymmetric) narrow person-affecting views, because of this and my sympathies to actualism.

5.1. A trilemma for narrow views

Here’s a problem for narrow views. Consider:

Expanded Non-Identity

(1) Amy at welfare level 1

(2) Bobby at welfare level 100

(3) Amy at welfare level 10, Bobby at welfare level 10

(...)

Only option (2) is permissible.

Now we can complete the trilemma for narrow views. If neither of (1) and (3) is permissible in Expanded Non-Identity, it must be that only (2) is permissible. But if only (2) is permissible, then narrow views imply:

Losers Can Dislodge Winners:

Adding some option X to an option set can make it wrong to choose a previously-permissible option Y, even though choosing X is itself wrong in the resulting option set.[10]

That’s because narrow views imply that each of (1) and (2) is permissible in One-Shot Non-Identity. So if only (2) is permissible in Expanded Non-Identity, then adding (3) to our option set has made it wrong to choose (1) even though choosing (3) is itself wrong in Expanded Non-Identity.

That’s a peculiar implication. It’s a deontic version of an old anecdote about the philosopher Sidney Morgenbesser. Here’s how that story goes. Morgenbesser is offered a choice between apple pie and blueberry pie, and he orders the apple. Shortly after, the waiter returns to say that cherry pie is also an option, to which Morgenbesser replies, ‘In that case, I’ll have the blueberry.’

I suspect this is a misleading analogy. In the case of pies, you haven't given any reason why Morgenbesser would change his mind, and it's hard to imagine one to which anyone would be sympathetic (but maybe someone could have reasons, and then it's not my place to judge them!). That could explain its apparent peculiarity. It's just not very psychologically plausible, because people don't think of pies or food in that way in practice.

But we have an argument for why we would change our mind in the expanded non-identity case: we follow the logic of narrow person-affecting views (with those implications), to which we are sympathetic. If the reasons for such a narrow person-affecting view seem to someone to be good, then the implications shouldn't seem peculiar.
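To make that logic concrete, here's a minimal sketch of one simple complaint-based narrow rule (my own toy formalization, not the paper's, with the welfare numbers taken from the example above): an option is ruled out only if someone who exists in it is worse off there than in an available alternative in which they also exist, and merely possible people generate no complaints.

```python
# A toy complaint-based narrow person-affecting rule (my own simplification,
# not the paper's own formalization). An option is impermissible iff some
# person who exists in it is worse off there than in an alternative in which
# they also exist; merely possible people generate no complaints.

def permissible(option, option_set):
    for alternative in option_set:
        for person, welfare in option.items():
            # A narrow complaint: this person also exists in the alternative
            # and would have been better off there.
            if person in alternative and alternative[person] > welfare:
                return False
    return True

one_shot = [{"Amy": 1}, {"Bobby": 100}]
expanded = [{"Amy": 1}, {"Bobby": 100}, {"Amy": 10, "Bobby": 10}]

print([permissible(o, one_shot) for o in one_shot])   # [True, True]: (1) and (2) both permissible
print([permissible(o, expanded) for o in expanded])   # [False, True, False]: only (2) permissible
```

On this toy rule, adding (3) is precisely what gives Amy a complaint against (1), while (3) itself is ruled out by Bobby's complaint against it, which is just the Losers Can Dislodge Winners pattern.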

 

The pattern is even stranger in our deontic case.

I'd say it's less strange, because we already have a more psychologically plausible explanation, i.e. person-affecting intuitions. Why do you think it's stranger?

 

Imagine instead that the waiter is offering Morgenbesser the options in Expanded Non-Identity.[11] Initially the choice is between (1) and (2), and Morgenbesser permissibly opts for (1). Then the waiter returns to say that (3) is also an option, to which Morgenbesser replies, ‘In that case, I’m morally required to switch to (2).’ The upshot is that the waiter can force Morgenbesser’s hand by adding options that are wrong to choose in the resulting option set. And turning the case around, the waiter could expand Morgenbesser’s menu of permissible options by taking wrong options off the table. That seems implausible.

I think this is too quick, and, from my perspective, i.e. with my intuitions, a mistake.

  1. I don't find the implications implausible or very counterintuitive (perhaps for the reasons below).
  2. A different way of framing this is that the waiter is revealing information about which options are permissible. The waiter has private information, i.e. whether or not a given option will be available, which decides which ones are permissible. In general, when someone has private information about your options (or their consequences), they can force you to reevaluate your options and force your hand by revealing the info. The narrow person-affecting response is a special case of that. So, your argument would prove too much: it would say it's implausible to have your hand forced by the revelation of private information, which is obviously not true. (And I think there's no Dutch book or money pump with foreseeable loss here; you just have to be a sophisticated reasoner and anticipate what the waiter will do, and recognize what your actual option set will be.)
  3. Another framing is basically the one by Lukas, or the object version of preferentialism/participant model of Rabinowicz & Österberg (1996). You're changing the perspectives or normative stances you take, depending on who comes to exist. It's not surprising that you would violate the independence of irrelevant alternatives in certain ways when you have to shift perspectives like this, and it just follows from specific views.
  4. In general, I think it's somewhat problematic/uncharitable to call something implausible, or to say it "seems implausible", and end the discussion there, because people vary substantially in what they find implausible, counterintuitive, etc. When someone does this, I get the impression that they take their arguments to be more universally appealing (or "objective") than they actually are, unless they make clear they're speaking only for themselves. Maybe "seems" should normally be understood as speaking only for yourself and your own intuitions, but I'd find this less frustrating if it were made explicit.

 

I do wonder if your example suggests that in practice you should often or usually act like you hold a wide view, though. If you're indifferent between (1) Amy at 1 and (2) Bobby at 100 when they are (so far) the only two options, you should anticipate that (3) or similar options might become available, and so opt for (2) just in case.

What instances do you have in mind by "strong revealed preference for irrationality"?

Could be bad for animals, causing a shift from cattle and other ruminants to chickens. Chickens are killed in much greater numbers per kg of meat and I expect have worse lives on average.

EDIT: And it could increase the consumption of eggs, fish and shrimp, too.

(Not speaking for this group.)

I’ve never been able to understand how any serious consideration of insect welfare doesn’t immediately lead to the unacceptable conclusion that any cause other than the welfare of demodex mites or nematodes is almost meaningless.

Adding to what others have said already, you could also have moral/normative uncertainty about decision theory and aim to do well across multiple attitudes to risk and uncertainty/ambiguity; different attitudes will give more or less priority to animals that seem less likely to be conscious, and some may severely discount invertebrates. You can also be morally uncertain about moral aggregation (aggregation by addition, in particular), and then helping humans might look better on non-aggregative (or only partially aggregative) views.

You can also be morally uncertain about the moral weights of animals in other ways, although I've recently argued against it being very important here, so for me, it's mostly attitudes towards risk and uncertainty/ambiguity and aggregation, and, of course, the particular probabilities and other numbers involved.
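As a purely illustrative sketch of how attitudes to risk and uncertainty can matter here (all numbers are hypothetical, and ignoring probabilities below a threshold is just one crude stand-in for such an attitude, not a view I'm endorsing): two evaluators can rank the same pair of interventions oppositely depending on how they treat a low probability of sentience.

```python
# Purely illustrative: hypothetical numbers, and a crude model of one
# attitude to risk/uncertainty (ignoring low probabilities of sentience),
# just to show how attitudes can flip a prioritization.

def expected_value(prob_sentient, welfare_gain_if_sentient):
    return prob_sentient * welfare_gain_if_sentient

def discounting_value(prob_sentient, welfare_gain_if_sentient, threshold=0.05):
    # Severely discount (here: fully ignore) outcomes whose probability of
    # mattering falls below the threshold.
    return 0.0 if prob_sentient < threshold else prob_sentient * welfare_gain_if_sentient

# Hypothetical interventions: (probability the beneficiaries are sentient,
# welfare gain per dollar conditional on their sentience).
human_intervention = (1.0, 1.0)
insect_intervention = (0.01, 500.0)

for name, rule in [("expected value", expected_value), ("discounting", discounting_value)]:
    print(name, {"humans": rule(*human_intervention), "insects": rule(*insect_intervention)})
# expected value: insects (5.0) > humans (1.0)
# discounting:    humans (1.0) > insects (0.0)
```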

I'm personally inclined to focus on arthropods using a decent share of my altruistic budget, but not most of it. I'm fairly concerned about mites, but not specifically demodex mites. I don't care much about nematodes (which are not arthropods, and seem particularly unlikely to matter much to me).

Ya, someone might argue that the average person contributes to economic growth and technological development, and so accelerates and increases x-risk. So, saving lives and increasing incomes could increase x-risk. Some subgroups of people may be exceptions, like EAs/x-risk people or poor people in low-income countries (who are far from the frontier of technological development), but even those could be questionable.

There’s no good reason to think that GiveWell’s top charities are net harmful.

The effects on farmed animals and wild animals could make GiveWell top charities net harmful in the near term. See "Comparison between the hedonic utility of human life and poultry living time" and "Finding bugs in GiveWell's top charities" by Vasco Grilo.

My own best guess is that they're net good for wild animals based on my suffering-focused views and the resulting reductions of wild arthropod populations. I also endorse hedging in portfolios of interventions.
