The stuff on cluelessness feels like it's conceding a little too much to the EA/Bayesian frame. It implies that you should have a model of the entire future in order to make decisions. But what I think you actually want to claim is that it's sensible and even "rational" to make non-model-based decisions (e.g. via heuristics, intuitions, etc.).
I'd be interested in hearing more on what exactly you mean by this. Insofar as someone wants to make decisions based on impartially altruistic values, I think cluelessness is their problem, even if they don't make decisions by explicitly optimizing w.r.t. a model of the entire future. If such a person appeals to some heuristics or intuitions as justification for their decisions, then (as argued here) they need to say why those heuristics or intuitions reliably track impact on the impartial good. And the case for that looks pretty dubious to me.
(If you're rejecting the "make decisions based on impartially altruistic values" step, fair enough, though I think we'd do well to be explicit about that.)
My best guess about which of 2 identical objects has the larger mass in expectation will be arbitrary if their masses only differ by 10^-6 kg, and I have no way of assessing this small difference. However, this does not mean the expected masses of the 2 objects are fundamentally incomparable.
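To make this concrete, here is a toy simulation (the true difference and noise level are my own illustrative assumptions, not anything from the post): when measurement noise dwarfs a 10^-6 kg gap, the best guess about which object is heavier is barely better than a coin flip, even though the masses themselves are perfectly comparable.

```python
import random

# Hypothetical setup: two ~1 kg objects whose true masses differ by 1e-6 kg,
# measured with a scale whose noise (sigma = 1e-3 kg) swamps that difference.
TRUE_DIFF = 1e-6   # kg; object A is heavier than object B by this much
SIGMA = 1e-3       # kg; standard deviation of measurement noise

def guess_heavier():
    """Guess which object is heavier from one noisy measurement of each."""
    a = 1.0 + TRUE_DIFF + random.gauss(0, SIGMA)
    b = 1.0 + random.gauss(0, SIGMA)
    return "A" if a > b else "B"

trials = 100_000
share_a = sum(guess_heavier() == "A" for _ in range(trials)) / trials
print(f"Guessed A heavier in {share_a:.1%} of trials")  # ~50%: close to a coin flip
```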
I worry you're reifying "expectations" as something objective here. The relative actual masses of the objects are clearly comparable. But if you subjectively can't compare them, then they're indeed incomparable "in expectation" in the relevant sense.
However, the same goes for comparisons among the expected masses of seemingly identical objects of similar mass if I can only assess their mass using my hands; this does not mean their masses are incomparable.
I don't exactly understand what argument you're making here.
My core argument in the post is: Take any intervention X. We want to weigh up its impact for all sentient beings across the cosmos, where this "weighing up" is aggregation over all hypotheses. Now suppose we want to force ourselves to compare X with inaction, i.e., say either UEV(do X) > UEV(don't do X) or vice versa. We have such an extremely coarse-grained understanding (if any) of these hypotheses[1] that, when we do the weighing-up, whether we say UEV(do X) > UEV(don't do X) or vice versa seems to depend on an arbitrary choice.
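To illustrate the arbitrariness with toy numbers of my own (not anything from the post): two hypothesis-weightings that look equally defensible, given how coarse-grained our understanding is, can flip the sign of UEV(do X) - UEV(don't do X).

```python
# Toy illustration: aggregating over a coarse-grained set of hypotheses
# about X's long-run effects. The payoffs and weights are made up.

# Hypothetical long-run payoff of "do X" minus "don't do X" under each hypothesis
payoff_diff = {"H1": +10.0, "H2": -8.0, "H3": +1.0}

# Two probability assignments over the hypotheses; nothing in our coarse
# evidence clearly privileges one over the other.
weights_1 = {"H1": 0.30, "H2": 0.45, "H3": 0.25}
weights_2 = {"H1": 0.40, "H2": 0.35, "H3": 0.25}

def uev_diff(weights):
    """UEV(do X) - UEV(don't do X) under a given hypothesis weighting."""
    return sum(weights[h] * payoff_diff[h] for h in payoff_diff)

print(uev_diff(weights_1))  # -0.35 < 0: "don't do X" looks better
print(uev_diff(weights_2))  # +1.45 > 0: "do X" looks better
```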
Can you say how your argument relates to mine?
Relative to the amount of fine-grained detail necessary to evaluate the hypothesis, when what we value is "well-being of all sentient beings across the cosmos".
In normal situations, an agent can rationally come to a single probability distribution, but Greaves argues that, in a situation with complex cluelessness, an individual should instead have a set of probability functions that they are “rationally required to remain neutral between.” I’m not entirely sure what this means.
You might be interested in this post I wrote explaining imprecision — hopefully answers "what this means".
Given that the intervals are both derived from a representor P, the interval of EV diffs is {EV_p(A) - EV_p(B) | p in P}. See also here.
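A minimal sketch of this with a hypothetical two-state representor (the payoffs and probability set are made up for illustration): the key point is that the same p is used for both acts, so the set of EV differences can be narrower than naively subtracting the two EV intervals endpoint-wise.

```python
# Sketch (hypothetical numbers): a representor P as a finite set of
# probability functions over two states, and the set of EV differences
# {EV_p(A) - EV_p(B) | p in P}, computed with the *same* p for both acts.

# Payoffs of acts A and B in each of two states
payoff_A = [10.0, -5.0]
payoff_B = [2.0, 1.0]

# Representor: each p is the probability of state 0 (state 1 gets 1 - p)
representor = [0.2, 0.5, 0.8]

def ev(payoffs, p):
    return p * payoffs[0] + (1 - p) * payoffs[1]

diffs = [ev(payoff_A, p) - ev(payoff_B, p) for p in representor]
print(diffs)  # [-3.2, 1.0, 5.2]: A vs. B is sign-indeterminate across P

# Note: this set is narrower than naive interval subtraction
# [min EV(A) - max EV(B), max EV(A) - min EV(B)] = [-3.8, 5.8],
# precisely because each difference uses a single p for both acts.
```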
Thanks — I've read both but neither seems to answer my objection.