MichaelStJules
Would you consider making retroactive grants? I saw that the LTFF did a few. If you did, how would you evaluate them differently from the usual grants for future work?

I'm personally interested in retroactive grants for cause prioritization research.

I suppose I'm more inclined to accept that decisions about which metaprinciples to apply will be context-sensitive, vague, and unlikely to be capturable by any simple, idealized decision theory. A non-ideal agent deciding when to round down has to juggle lots of different factors: their epistemic limitations, asymmetries in evidence, costs of being right or wrong, past track records, etc. I doubt that there's any decision theory that is both stateable and clear on this point.

Couldn't the decision theory just do exactly the same, and follow the same procedures? It could also just be context-sensitive, vague and complex.

How do we draw the line between which parts are epistemic vs decision-theoretic here? Maybe it's kind of arbitrary? Maybe they can't be cleanly separated?

I'm inclined to say that when we consider the stakes in deciding which credences to use, that's decision-theoretic, not epistemic, because it seems like motivated reasoning if it's epistemic. It just seems very wrong to me to say that an outcome is more likely just because it would be worse (or more important) if it happened. If, instead, the epistemic approach isn't saying the outcome is actually more likely, just that it's something we shouldn't round down in practical decision-making if it's morally significant enough, then why is this epistemic rather than decision-theoretic? It seems like a matter of deciding what to do with our credences, a decision procedure, and that's typically the domain of decision theory.

Maybe it's harder to defend something on decision-theoretic grounds if it leads to Dutch books or money pumps? The procedure would lead to the same results regardless of which parts we call epistemic or decision-theoretic, but we could avoid blaming the decision theory for the apparent failures of instrumental rationality. That said, I'm also not sold on treating such money pump and Dutch book arguments as proof of failures of instrumental rationality at all.
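To make the Dutch book worry concrete, here's a toy sketch (my own illustration, with made-up numbers, not anything from this exchange): if credences below some threshold are rounded down to 0 and bets are priced at the rounded credences, the prices on an event and its complement no longer sum to 1, and a bookie can lock in a sure profit.

```python
# Toy Dutch book against rounded-down credences (illustrative assumptions:
# credences below 0.01 are rounded to 0, and bets are priced at the rounded credences).

THRESHOLD = 0.01

def rounded(credence):
    """Round credences below the threshold down to 0."""
    return 0.0 if credence < THRESHOLD else credence

p_A = 0.004        # credence in event A, which gets rounded down to 0
p_not_A = 1 - p_A  # credence in not-A, left alone at 0.996

# The agent sells a $1 bet on A for rounded(p_A) = $0.000
# and a $1 bet on not-A for rounded(p_not_A) = $0.996.
bookie_cost = rounded(p_A) + rounded(p_not_A)

# Exactly one of A and not-A occurs, so the bookie collects $1 for sure.
sure_profit = 1.0 - bookie_cost
print(f"Bookie pays {bookie_cost:.3f} and collects 1.000: sure profit {sure_profit:.3f}")
```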

(Edited.)

The problem of arbitrariness has been pushed back from having no external standard for our rounding down value to having some arbitrariness about when that external standard applies. Some progress has been made.

It seems like we've just moved the same problem somewhere else? Let S be "that external standard" to which you refer. What external standard do we use to decide when S applies? It's hard to know whether this is progress until/unless we can actually define and justify that additional external standard. Maybe we're heading into a dead end, or it's just external standards all the way down.

Ultimately, if there's a precise number that looks arbitrary, like the threshold here, we're going to have to rely on some precise and, I'd guess, arbitrary-seeming direct intuition about some number.

Second, the epistemic defense does not hold that the normative laws change at some arbitrary threshold, at least when it comes to first-order principles of rational decision.

Doesn't it still mean the normative laws — as epistemology is also normative — change at some arbitrary threshold? Seems like basically the same problem to me, and equally objectionable.

 

Likewise, at first glance (and I'm an expert in neither decision theory nor epistemology), your other responses to the objections in your epistemic defense seem usable for decision-theoretic rounding down. One of your defenses of epistemic rounding down is stakes-sensitive, but then it doesn't seem so different from risk aversion, ambiguity aversion and their difference-making versions, which are decision-theoretic stances.

In particular,

Suppose we adopt Moss’s account on which we are permitted to identify with any of the credences in our interval and that our reasons for picking a particular credence will be extra-evidential (pragmatic, ethical, etc.). In this case, we have strong reasons for accepting a higher credence for the purposes of action.

sounds like an explicit endorsement of motivated reasoning to me. What we believe about what will happen, i.e. the credences we pick, shouldn't depend on ethical considerations, i.e. our (ethical) preferences. If we're talking about picking credences from a set of imprecise credences to use in practice, then this seems to fall well within decision-theoretic procedures, like ambiguity aversion. So such a procedure seems better justified to me as decision-theoretic.
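For concreteness, here's a minimal sketch (my own, with made-up numbers and a hypothetical credence interval) of treating an imprecise credence set with an ambiguity-averse decision rule like Gamma-maximin, rather than picking a single "true" credence on ethical grounds:

```python
# Minimal sketch: handling imprecise credences with Gamma-maximin
# (an ambiguity-averse decision rule). All numbers are illustrative assumptions.

import numpy as np

# Imprecise credence that the beneficiaries are sentient: anywhere in [0.01, 0.2].
credence_interval = np.linspace(0.01, 0.2, 20)

def ev_help(p):
    # Big payoff only if the beneficiaries are sentient.
    return p * 100.0

def ev_other(p):
    # Modest payoff that doesn't depend on their sentience.
    return 3.0

# Gamma-maximin: score each option by its worst-case expected value over the
# whole credence interval, then choose the option with the best worst case.
options = {"help uncertain-sentience animals": ev_help, "other intervention": ev_other}
worst_case = {name: min(f(p) for p in credence_interval) for name, f in options.items()}
best_option = max(worst_case, key=worst_case.get)
print(worst_case)
print("Gamma-maximin picks:", best_option)
```

The credences themselves are never adjusted for the stakes here; the stakes only enter through the decision rule.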

Similarly, I don't see why this wouldn't be at least as plausible for decision theory:

Suppose you assign a probability of 0 to state s1 for a particular decision. Later, you are faced with a decision with a state s2 that your evidence says has a lower probability than s1 (even though we don’t know what their precise values are). In this context, you might want to un-zero s1 so as to compare the two states.

One response to these objections to rounding down is that similar objections could be raised against treating consciousness, pleasure, unpleasantness and desires sharply if it turns out to be vague whether some systems are capable of them. We wouldn't stop caring about consciousness, pleasure, unpleasantness or desires just because they turn out to be vague.

And one potential "fix" to avoid these objections is to just put a probability distribution over the threshold, and use something like a (non-fanatical) method for handling normative uncertainty, like a moral parliament, over the resulting views. Maybe the threshold is distributed uniformly over some interval.

Now, you might say that this is just a probability distribution over views to which the objections apply, so we can still just object to each view separately as before. However, someone could instead consider the normative view that is (extensionally) equivalent to a moral parliament over the views across different thresholds. It's one view. If we take the interval to include arbitrarily low thresholds, then the view doesn't ignore important outcomes, it doesn't neglect decisions under any threshold, and the normative laws don't change sharply at some arbitrary point.
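Here's a minimal sketch of that combined view (my own, with made-up numbers and a hypothetical uniform distribution over thresholds): each threshold defines a "round down probabilities below t" view, and the combined view just averages their verdicts, a crude stand-in for an actual moral parliament.

```python
# Sketch: one view (extensionally) equivalent to averaging over rounding-down
# thresholds. The prospect, the threshold range and the uniform distribution
# are illustrative assumptions; a real moral parliament would be more involved.

import numpy as np

def ev_with_rounding(prospect, threshold):
    """Expected value after rounding probabilities below `threshold` down to 0
    (without renormalizing the remaining probabilities)."""
    return sum(v * (0.0 if p < threshold else p) for p, v in prospect)

# A tiny chance of a huge payoff plus a near-certain small payoff.
prospect = [(1e-6, 1e9), (0.999999, 1.0)]

# Distribute the threshold uniformly over candidate values and average the
# verdicts: thresholds near 0 get some weight, so the tiny-probability outcome
# is never ignored entirely, and nothing changes sharply at one arbitrary point.
thresholds = np.linspace(0.0, 0.1, 1001)
combined_ev = np.mean([ev_with_rounding(prospect, t) for t in thresholds])

print(f"Plain expected value:     {sum(p * v for p, v in prospect):.2f}")
print(f"Threshold-averaged value: {combined_ev:.2f}")
```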

The specific choice of distribution for the threshold may still seem arbitrary. But this seems like a much weaker objection, because it's much harder to avoid in general, e.g. precise cardinal tradeoffs between pleasures, between displeasures, between desires and between different kinds of interests could be similarly arbitrary.

This view may seem somewhat ad hoc. However, I do think treating vagueness/imprecision like normative uncertainty is independently plausible. At any rate, if some of the things we care about turn out to be vague but we want to keep caring about them anyway, we'll want a way to deal with vagueness, and whatever that is could be applied here. Treating vagueness like normative uncertainty is just one possibility, which I happen to like.

DMRA (difference-making risk aversion) could actually favour helping animals of uncertain sentience over helping humans or animals of more probable sentience, if and because helping humans can backfire badly for other animals if other animals matter a lot (through the meat eater problem and effects on wild animals), and helping vertebrates can also backfire badly for wild invertebrates if wild invertebrates matter a lot (especially through population effects via land use and fishing). Helping other animals seems less prone to backfiring so badly for humans, although it can. And helping farmed shrimp and insects seems less prone to backfiring so badly (relative to potential benefits) for other animals (vertebrates, invertebrates, farmed and wild).

I suppose you might prefer human-helping interventions with very little impact on animals. Maybe mental health? Or you might combine human-helping interventions to try to mostly cancel out impacts on animals, like life-saving charities + family planning charities, which may have roughly opposite-sign effects on animals. And maybe also hedge with some animal-helping interventions to make up for any remaining downside risk for animals. Their combination could be better under DMRA than primarily animal-targeted interventions, or at least than interventions aimed at helping animals unlikely to matter much.

Maybe chicken welfare reforms still look good enough on their own, though, if chickens are likely enough to matter enough, as I think RP showed in the CURVE sequence.

Another motivation I think is worth mentioning is simply objecting to fanaticism. As Tarsney showed, respecting stochastic dominance with statistically independent background value can force a total utilitarian to be pretty fanatical, although exactly how fanatical will depend on how wide the distribution of the background value is. Someone could still find that objectionably fanatical, even to the extent of rejecting stochastic dominance as a guide. They could still respect statewise dominance.

That being said, DMRA could also be "fanatical" about the risk of causing net harm, leading to paralysis and never doing anything or always sticking with the "default", so maybe the thing to do is to give less than proportional weight to both net positive impacts and net negative impacts, e.g. a sigmoid function of the difference.
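As a rough illustration of that last suggestion (my own sketch, with an arbitrary logistic function, scale and scenario numbers): weigh the net difference an action makes through a sigmoid, so both very large net benefits and very large net harms get less-than-proportional weight.

```python
# Sketch: less-than-proportional weight on both net positive and net negative
# differences via a sigmoid (logistic) transform. The scale parameter and the
# scenario numbers are illustrative assumptions, not a worked-out proposal.

import math

def sigmoid_weight(difference, scale=100.0):
    """Map the net difference made (vs. the default) to a bounded score in (-1, 1)."""
    return 2.0 / (1.0 + math.exp(-difference / scale)) - 1.0

# Differences made by an action in three equally likely scenarios:
# a large benefit, a small benefit, and a large backfire.
scenarios = [(1/3, 500.0), (1/3, 20.0), (1/3, -400.0)]

plain_ev = sum(p * d for p, d in scenarios)
weighted = sum(p * sigmoid_weight(d) for p, d in scenarios)

print(f"Plain expected difference:       {plain_ev:.2f}")
print(f"Sigmoid-weighted expected score: {weighted:.3f}")
```

The bounded transform keeps both the huge potential upside and the huge potential downside from dominating.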

I'm sympathetic to functionalism, and the attention, urgency or priority given to something seems likely defining of its intensity to me, at least for pain, and possibly generally. I don’t know what other effects would ground intensity in a way that’s not overly particular to specific physical/behavioural capacities or non-brain physiological responses (heart rate, stress hormones, etc.). (I don't think reinforcement strength is defining.)

There are some attempts at functional definitions of pain and pleasure intensities here, and they seem fairly symmetric:

https://welfarefootprint.org/technical-definitions/

and some more discussion here:

https://welfarefootprint.org/2024/03/12/positive-animal-welfare/

I'm afraid I don't know of anywhere else where these arguments are fleshed out in more detail than what I shared in my first comment (https://link.springer.com/article/10.1007/s13164-013-0171-2).

I'll add that our understanding of pleasure and suffering and the moral value we assign to them may be necessarily human-relative, so if those phenomena turn out to be functionally asymmetric in humans (e.g. one defined by the necessity of a certain function with no sufficiently similar/symmetric counterpart in the other), then our concepts of pleasure and suffering will also be functionally asymmetric. I make some similar/related arguments in https://forum.effectivealtruism.org/posts/L4Cv8hvuun6vNL8rm/solution-to-the-two-envelopes-problem-for-moral-weights

I lean towards functionalism and illusionism, but am quite skeptical of computationalism and computational functionalism, and I think it's important to distinguish them. Functionalism is, AFAIK, a fairly popular position among relevant experts, but computationalism much less so.

Under my favoured version of functionalism, the "functions" we should worry about are functional/causal roles with effects on things like attention and (dispositional or augmented hypothetical) externally directed behaviours, like approach, avoidance, beliefs, things we say (and how they are grounded through associations with real world states). These seem much less up to interpretation than computed mathematical "functions" like "0001, 0001 → 0010". However, you can find simple versions of these functional/causal roles in many places if you squint, hence fuzziness.

Functionalism understood this way is still compatible with digital consciousness.

And I think we can use debunking arguments to support functionalism of some kind, but it could end up being a very fine-grained view, even the kind of view you propose here, with the necessary functional/causal roles at the level of fundamental physics. I doubt we need such fine-grained roles, though, and suspect similar debunking arguments can rule out their necessity. And I think those roles would be digitally simulatable in principle anyway.

It seems unlikely that a large share of our AI will be fine-grained simulations of biological brains like this, given their inefficiency and the direction of AI development, but the absolute number could still be large.

Or we could end up with a version of functionalism where nonphysical properties or nonphysical substances actually play parts in some necessary functional/causal roles. But again, I'm skeptical, and those roles may also be digitally (and purely physically) simulatable.

It seems worth mentioning the possibility that progress can also be bottlenecked by events external to our civilization. Maybe we need to wait for some star to explode for some experiment, or for it to reach some state so we can exploit it. Or maybe we will wait for the universe to cool before doing something (as in the aestivation hypothesis for aliens). Or maybe we need to wait for an alien civilization to mature or reach us before doing something.

And even if we don't "wait" for such events, our advancement can be slowed, because we can't take advantage of them sooner or as effectively alongside our internal advancement. Cumulatively, such events could mean advancement isn't lasting and doesn't carry through to our end point.

But I suppose there's a substantial probability that none of this makes much difference, so that uniform internal advancement really does bring everything that matters forward roughly uniformly (ex ante), too.

And maybe we miss some important/useful events if we don't advance. For example, the expansion of the universe puts some stars permanently out of reach sooner if we don’t advance.

Another possible endogenous end point that could be advanced is meeting (or being detected by) an alien (or alien AI) civilization earlier and having our civilization destroyed by them earlier as a result.

Or maybe we enter an astronomical suffering or hyperexistential catastrophe due to conflict or stable totalitarianism earlier (internally or due to aliens we encounter earlier) and it lasts longer, until an exogenous end point. So, we replace some good with bad, or otherwise replace some value with worse value.
