Good question! Yeah, I can’t think of a real-world process about which I’d want to have maximally imprecise beliefs. (The point of choosing a “demon” in the example is that we would have good reason to worry the process is adversarial if we’re talking about a demon…)
(Is this supposed to be part of an argument against imprecision in general / sufficient imprecision to imply consequentialist cluelessness? Because I don’t think you need anywhere near maximally imprecise beliefs for that. The examples in the paper just use the range [0,1] for simplicity.)
It sounds like you reject this kind of thinking:
> cluelessness about some effects (like those in the far future) doesn’t override the obligations given to us by the benefits we’re not clueless about, such as the immediate benefits of our donations to the global poor
I don't think that's unreasonable. Personally, I strongly share the intuition expressed in that quote, though I'm definitely not certain I'll endorse it on reflection.
> Wouldn't the better response be to find things we aren't clueless about
The background assumption in this post is that there are no such interventions.
> We start from a place of cluelessness about the effects of our actions on aggregate, cosmos-wide value. Our uncertainty is so deep that we can’t even say whether we expect one action to be better than, worse than, or just as good as another, in terms of its effects on aggregate utility. (See Section 2 of the paper and resources here for arguments as to why we ought to regard ourselves as such.)
Thanks, Ben!
It depends on what X is. In most real-world cases I don’t think our imprecision ought to be that extreme. (It will also be vague: not “[0,1]” or “(0.01, 0.99)” but “eh, lots of different precise beliefs seem defensible as long as they’re not super close to 1 or 0”, and in that state it will feel reasonable to say that we should strictly prefer such an extreme bet.)
But FWIW I do think there are hypothetical cases where incomparability looks correct. Suppose a demon appears to me and says “The F of every X is between 0 and 1. What’s the probability that the F of the next X is less than ½?” I have no clue what X and F mean. In particular, I have no idea if F is in “natural” units that would compel me to put a uniform prior over F-values. Why not a uniform prior over F^2 or F^-100? So it does seem sensible to have maximally imprecise beliefs here, and to say it’s indeterminate whether we should take bets like yours.
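To make the parameterization worry concrete, here’s a quick worked illustration of my own (just treating F as a continuous quantity on [0, 1]; the particular powers are arbitrary):

$$F \sim \mathrm{Unif}(0,1) \;\Rightarrow\; P(F < \tfrac{1}{2}) = \tfrac{1}{2}, \qquad F^2 \sim \mathrm{Unif}(0,1) \;\Rightarrow\; P(F < \tfrac{1}{2}) = P(F^2 < \tfrac{1}{4}) = \tfrac{1}{4},$$

and a uniform prior over a high power like $F^{100}$ pushes the answer down to $2^{-100}$, while a uniform prior over $F^{1/100}$ pushes it up to about 0.993. Nothing about the problem privileges any one of these parameterizations.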
Yes, it feels bad not to strictly prefer a bet which pays 10^10 if F < ½. But adopting a precise prior would commit me to turning down other bets that look extremely good on other arbitrarily-chosen priors, which also feels bad.
Some reasons why animal welfare work seems better:
Thanks for this! IMO thinking about what it even means to do good under extreme uncertainty is still underrated.
I don’t see how this post addresses the concern about cluelessness, though.
My problem with the construction analogy: our situation is more like one where, whenever we place a brick, we might also be knocking bricks out of other parts of the house, or placing bricks in ways that preclude good building later. So we don’t know whether we’re actually contributing to the construction of the house on net.
Your takeaway at the bottom seems to be: “if someone doing A is a necessary condition for a particular good outcome X, that’s a reason for you to do A”. Granted. But the whole problem is that I don’t know how to weigh this reason against the reasons favoring my doing not-A. Why do you think we ought to privilege the particular reason you point to?
We at CLR are now using a different definition of s-risks.
New definition:
> S-risks are risks of events that bring about suffering in cosmically significant amounts. By “significant”, we mean significant relative to expected future suffering.
>
> Note that it may turn out that the amount of suffering that we can influence is dwarfed by suffering that we can’t influence. By “expectation of suffering in the future” we mean “expectation of action-relevant suffering in the future”.
I found it surprising that you wrote: …
Because to me this is exactly the heart of the asymmetry. It’s uncontroversial that creating a person with a bad life inflicts on them a serious moral wrong. Those of us who endorse the asymmetry don’t see such a moral wrong involved in not creating a happy life.
+1. I think many who have asymmetric sympathies might say that there is a strong aesthetic pull to bringing about a life like Michael’s, but that there is an overriding moral responsibility not to create intense suffering.
Very late here, but a brainstormy thought: maybe one way to start making a rigorous case for RDM (robust decision making) is to suppose that there is a “true” model and prior that you would write down if you had as much time as you needed to integrate all of the relevant considerations you have access to. You would like to make decisions in a fully Bayesian way with respect to this model, but you’re computationally limited, so you can’t. You can only write down a much simpler model and use that to make a decision.
We want to pick a policy which, in some sense, has low regret with respect to the Bayes-optimal policy under the true model. If we regard our simpler model as a random draw from a space of possible simplified models that we could’ve written down, then we can ask about the frequentist properties of the regret incurred by different decision rules applied to the simple models. And it may be that non-optimizing decision rules like RDM have a favorable bias-variance tradeoff, because they don’t overfit to the oversimplified model. Basically they help mitigate a certain kind of optimizer’s curse.
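To gesture at what a toy version of this could look like, here’s a minimal sketch of my own (not anything from the post): the “true model” is just a vector of expected payoffs, the “simplified model” is a noisy estimate of it with action-specific noise, and the “robust” rule is a crude pessimistic lower-bound stand-in rather than full RDM.

```python
import numpy as np

# Toy frequentist comparison of decision rules applied to oversimplified models.
# "True model": the expected payoff of each action. "Simplified model": one random
# draw of a noisy estimate of those payoffs, with per-action noise the agent knows.
rng = np.random.default_rng(0)
n_actions, n_trials = 10, 20_000

regret = {"optimize the simple model": [], "pessimistic lower-bound rule": []}

for _ in range(n_trials):
    true_payoffs = rng.normal(0.0, 1.0, n_actions)        # the "true" model
    sigma = rng.uniform(0.2, 2.0, n_actions)               # per-action estimation noise
    simple_model = true_payoffs + rng.normal(0.0, sigma)   # one simplified model we might write down

    # Rule 1: act as if the simplified model were the truth (naive optimizing).
    a_naive = int(np.argmax(simple_model))
    # Rule 2: a non-optimizing stand-in for robustness: judge each action by a
    # pessimistic lower bound, so noisier estimates get less benefit of the doubt.
    a_robust = int(np.argmax(simple_model - sigma))

    best = true_payoffs.max()
    regret["optimize the simple model"].append(best - true_payoffs[a_naive])
    regret["pessimistic lower-bound rule"].append(best - true_payoffs[a_robust])

for name, r in regret.items():
    print(f"{name}: mean regret {np.mean(r):.3f}, 95th pct {np.quantile(r, 0.95):.3f}")
```

The point isn’t this particular robust rule. It’s that once you fix a distribution over simplified models, “how much regret does rule X tend to incur relative to acting on the true model?” becomes a well-posed frequentist question, and over-trusting the simple model’s point estimates is exactly where the optimizer’s-curse-style overfitting shows up.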
In principle the proposal in that post is supposed to encompass a larger set of bracketing-ish things than the proposal in this post, e.g., bracketing out reasons that are qualitatively weaker in some sense. But the latter kind of thing isn't properly worked out.