
Hayley Clatterbuck


Comments (11)

I agree that the plausibility of some DMRA decision theory will depend on how we actually formalize it (something I don't do here, though Laura Duffy has done some of that work here). Thanks for the suggestion.

Hi Richard,

That is indeed a very difficult objection for the "being an actual cause is always valuable" view. We could amend that principle in various ways. One amendment is agent-neutral: it is valuable that someone makes a difference (rather than the world just turning out well), but it's not valuable that I make a difference. Another adds conditions to actual causation: you get credit only if you raise the probability of the outcome, and not if you lower it (in which case it's unclear whether you'd be an actual cause at all).

Things get tricky here with the metaphysics of causation and how it interacts with agency-based ethical principles. There's stuff here I'm aware I haven't quite grasped!

Thank you, Michael! 

To your first point, that we have replaced arbitrariness over the threshold of probabilities with arbitrariness about how uncertain we must be before rounding down: I suppose I'm more inclined to accept that decisions about which metaprinciples to apply will be context-sensitive, vague, and unlikely to be capturable by any simple, idealized decision theory. A non-ideal agent deciding when to round down has to juggle lots of different factors: their epistemic limitations, asymmetries in evidence, the costs of being right or wrong, past track records, etc. I doubt that there's any decision theory that is both stateable and clear on this point. Even if there is a non-arbitrary threshold, I have trouble saying what it is. That is probably not a very satisfying response! I did enjoy Weatherson's latest, which touches on this point.

You suggest that the defenses of rounding down given here would also bolster decision-theoretic defenses of ambiguity aversion. It's worth thinking about what a defense of ambiguity aversion would look like. Indeed, it might turn out to be the same as the epistemic defense given here. I don't have a favorite formal model of ambiguity aversion, so I'm all ears if you do!

Hi David,

Thanks for the comment. I agree that Wilkinson makes a lot of other (really persuasive) points against drawing some threshold of probability. As you point out, one reason is that the normative principle (Minimal Tradeoffs) seems to be independently justified, regardless of the probabilities involved. If you agree with that, then the arbitrariness point seems secondary. I'm suggesting that the uncertainty that accompanies very low probabilities might mean that applying Minimal Tradeoffs to very low probabilities is a bad idea, and there's some non-arbitrary way to say when that will be. I should also note that one doesn't need to reject Minimal Tradeoffs. You might think that if we did have precise knowledge of the low probabilities (say, in Pascal's wager), then we should trade them off for greater payoffs. 

It's possible that invertebrate sentience is harder to investigate because invertebrates' behaviors and nervous systems differ from ours more than those of cows and pigs do. Fortunately, there's been a lot more work on sentience in invertebrates and other less-studied animals over the past few years, and I do think that this work has moved a lot of people toward taking invertebrate sentience seriously. If I'm right about that, then the lack of basic research might be responsible for quite a bit of our uncertainty.

Hi weeatquince,

This is a great question. As I see it, there are at least three approaches to ambiguity out there (which are not mutually exclusive).

a. Ambiguity aversion reduces to risk aversion about outcomes. 
You might think uncertainty is bad because it leaves open the possibility of bad outcomes. One approach is to consider the range of probabilities consistent with your uncertainty, and then assume the worst, or at least put more weight on the probabilities that would be worse for EV. For example, Pat thinks the probability of heads could be anywhere from 0 to 1. If it's 0, then she's guaranteed to lose $5 by taking the gamble. If it's 1, then she's guaranteed to win $10. If she's risk-averse, she should put more weight on the possibility that Pr(heads) = 0. In the extreme, she should assume that Pr(heads) = 0 and maximin. (A sketch of this approach appears after this list.)

b. Ambiguity aversion should lead you to adjust your probabilities
The Bayesian adjustment outlined above says that when your evidence leaves a lot of uncertainty, your posterior should revert to your prior. As you note, this is completely consistent with EV maximization. It's about what you should believe given your evidence, not what you should do. (This, too, is sketched after the list.)

c. Ambiguity aversion means you should avoid bets with uncertain probabilities
You might think uncertainty is bad because it's irrational to take bets when you don't know the chances. It's not that you're afraid of the possible bad outcomes within the range of things you're uncertain about; rather, there's something intrinsically bad about these bets themselves.
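To make (a) concrete, here's a minimal sketch of Pat's gamble under one standard formalization from the ambiguity literature (a Hurwicz-style alpha-maxmin rule; the rule and function names are illustrative, not something argued for above):

```python
def ev(p_heads, win=10.0, lose=-5.0):
    """Expected value of Pat's gamble at a given chance of heads."""
    return p_heads * win + (1 - p_heads) * lose

def alpha_maxmin(p_low, p_high, alpha):
    """Hurwicz-style alpha-maxmin: weight the worst-case EV by alpha
    and the best-case EV by (1 - alpha); alpha = 1 is pure maximin."""
    worst = min(ev(p_low), ev(p_high))  # EV is linear in p, so the extremes suffice
    best = max(ev(p_low), ev(p_high))
    return alpha * worst + (1 - alpha) * best

# Pat's range: Pr(heads) could be anywhere from 0 to 1.
print(alpha_maxmin(0.0, 1.0, alpha=0.5))  # ambiguity-neutral: 2.5
print(alpha_maxmin(0.0, 1.0, alpha=1.0))  # maximin: -5.0, so decline the gamble
```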
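And for (b), a minimal sketch of the kind of shrinkage involved, assuming a standard normal-normal update (the numbers are purely illustrative):

```python
def posterior_mean(prior_mean, prior_var, estimate, estimate_var):
    """Conjugate normal update: the noisier the evidence (the larger
    estimate_var), the more the posterior reverts to the prior."""
    w = prior_var / (prior_var + estimate_var)  # weight given to the evidence
    return w * estimate + (1 - w) * prior_mean

# Strong evidence stays near the estimate; weak evidence reverts to the prior.
print(posterior_mean(0.5, 0.01, 0.9, 0.001))  # ~0.86
print(posterior_mean(0.5, 0.01, 0.9, 0.1))    # ~0.54
```

Note that the output of (b) is still a single probability that gets fed into ordinary EV maximization, which is exactly why it leaves EV maximization untouched.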

Hi Edo,

There are indeed some problems that arise from adding risk weighting as a function of probabilities. Check out Bottomley and Williamson (2023) for an alternative model that introduces risk as a function of value, as you suggest. We discuss the contrast between REV and WLU a bit more here. I went with REV here in part because it's better established, and we're still working out some of the kinks in applying WLU.
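For readers who want the contrast concretely, here's a minimal sketch of the probability-weighting side, assuming a Buchak-style risk-weighted expected utility (the risk function and numbers are illustrative; this sketches the general REV idea rather than our exact model):

```python
def reu(outcomes, risk=lambda p: p ** 2):
    """Buchak-style risk-weighted expected utility.
    outcomes: (utility, probability) pairs.
    risk: a weighting applied to decumulative probabilities;
    a convex choice like p**2 encodes risk aversion."""
    outs = sorted(outcomes)                # order outcomes from worst to best
    utils = [u for u, _ in outs]
    probs = [p for _, p in outs]
    total = utils[0]                       # you get at least the worst outcome
    for i in range(1, len(outs)):
        p_at_least = sum(probs[i:])        # Pr(doing at least this well)
        total += risk(p_at_least) * (utils[i] - utils[i - 1])
    return total

# A fair coin for +10 / -5: plain EV is 2.5, but the risk-averse
# weighting discounts the chance of the better outcome.
print(reu([(-5.0, 0.5), (10.0, 0.5)]))    # -5 + 0.5**2 * 15 = -1.25
```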

Thanks for your comment, Michael. Our team started working through your super helpful recent post last week! We discuss some of these issues (including the last point you mention) in a document where we summarize some of the philosophical background issues. However, we only mention bounded utility very briefly and don't discuss infinite cases at all. We focus instead on rounding down low probabilities, for two reasons: first, we think that's what people are probably actually doing in practice, and second, it avoids the seeming conflict between bounded utility and theories of value. I'm sure you have answers to that problem, so let us know!
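For concreteness, here's a minimal sketch of what rounding down low probabilities might look like in practice (the threshold is illustrative, and picking it non-arbitrarily is of course the hard part; this version simply drops sub-threshold outcomes without renormalizing):

```python
def rounded_ev(prospects, threshold=1e-6):
    """Expected value after 'rounding down': probabilities below the
    threshold are treated as zero before taking expectations."""
    return sum(u * p for u, p in prospects if p >= threshold)

# A Pascalian prospect: a tiny chance of an astronomically good outcome.
print(rounded_ev([(10.0, 0.5), (1e12, 1e-9)]))  # 5.0: the long shot is ignored
```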

Thank you so much for this comment! How to formulate hierarchicalism - and whether there's a formulation that's plausible - is something our team has been kicking around, and this is very helpful. Indeed, your first suggestion is something we take seriously. For example, suffering in humans feeds into a lot of higher-order cognitive processes; it can lead to despair when reflected upon, pain when remembered, hopelessness when projected into the future, etc. Of course, this isn't to say that human suffering matters more in virtue of its being human, but rather in virtue of other properties that correlate with being human.

I agree that we presented a fairly naive hierarchicalism here: take whatever is of value, and then say that it's more important if and because it is possessed by a human. I'll need to think more about whether your second suggestion can be dispatched in the same way as the naive view.
