Researcher at the Center on Long-Term Risk. All opinions my own.
I think if you're savvy you will probably find a way to make the astronomical thing go better, such as by doing strategy/prioritization/deconfusion work, working on robustly good intermediate desiderata, or building skills/money in case there's more clarity in the future.
What do you think about the arguments for cluelessness from imprecision, e.g., here? (I explain in more detail why I think we're clueless even about the things you list, here.)
Thanks for this! For what it's worth, some issues I've found with the "CRIBS" and "EA Epistemic Auditor" reviews for drafts of philosophical blog posts:
But they're somewhat useful for surfacing what kinds of misunderstandings readers might have.
(Sorry, due to lack of time I don't expect I'll reply further. But thank you for the discussion! A quick note:)
from the subjective feeling (in your mind) that their EVs feel very hard to compare
EV is subjective. I'd recommend this post for more on this.
I see arbitrary choices as a reason for further research to decrease their uncertainty
First, it's already very big-if-true if all EA intervention candidates other than "do more research" are incomparable with inaction.
Second, "do more research" is itself an action whose sign seems intractably sensitive to things we're unaware of. I discuss this here.
However, by actual value, you mean a set of possible values
No, I mean just one value.
why would weighted sums of actual masses representing expected masses not be comparable?
Sorry, by "expected" I meant imprecise expectation, since you gave intervals in your initial comment. Imprecise expectations are incomparable for the reasons given in the post — I worry we're talking past each other.
What do you mean by actual mass?
The mass that the object in fact has. :) Sorry, not sure I understand the confusion.
I think expected masses are comparable because possible masses are comparable.
I don't think this follows. I'm interested in your responses to the arguments I give for the framework in this post.
Would your framework suggest the mass of the objects is incomparable
Yes, for the expected mass.
I believe my best guess should be that the mass of one is smaller, equal, or larger than that of the other
Why? (The actual mass must be either smaller than, equal to, or larger than the other's, but I don't see why that should imply that the expected mass is.)
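To make the point above concrete, here's a minimal sketch (my own toy representation, not anything from the post): suppose we model an imprecise expectation as an interval of expected values. Actual masses are single numbers and are always comparable, but two overlapping intervals need not be, since neither robustly dominates the other. The function names and interval encoding here are illustrative assumptions.

```python
# Toy model: an imprecise expectation as an interval (lo, hi) of expected values.
# This is an illustrative encoding, not the framework from the post itself.

def definitely_less(a, b):
    """a robustly precedes b iff every value in a is below every value in b."""
    return a[1] < b[0]

def comparable(a, b):
    """Two interval-valued expectations are comparable only if one dominates
    the other (or they coincide)."""
    return definitely_less(a, b) or definitely_less(b, a) or a == b

# Actual masses are single numbers: always comparable.
m1, m2 = 3.2, 4.1
print(m1 < m2)  # True

# Imprecise expected masses as overlapping intervals: incomparable.
e1 = (2.0, 5.0)
e2 = (3.0, 4.0)
print(comparable(e1, e2))  # False: neither interval dominates the other
```

The asymmetry is the whole point: "the actual mass is one of smaller/equal/larger" is a claim about single numbers, and it doesn't carry over to interval-valued expectations.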
I find this "observation" confusing / misleading, given that Holden defines cluster thinking as aggregating decisions from multiple perspectives. This is very different from aggregating the predictions of multiple models. The evidence of "success" he cites only applies to the latter (where "success" is with respect to Brier scores and such), not the former.
And this is practically relevant: If you aggregate multiple models but then maximize EV under the aggregated model, you don't get the "sandboxing" property Holden claims cluster thinking satisfies. The fanatical/Pascalian model will still dominate the EV calculation.
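A toy calculation (numbers entirely made up for illustration) shows why the sandboxing fails: give a mundane model 99% weight and a Pascalian model 1% weight, and the Pascalian model's astronomical EV estimate still swamps the weighted average.

```python
# Toy illustration with assumed numbers: aggregate models' EV estimates,
# then maximize EV under the aggregate. The extreme model is not sandboxed;
# it dominates the weighted sum despite its small weight.

weights = {"mundane_model": 0.99, "pascalian_model": 0.01}
ev = {
    "intervention": {"mundane_model": 1.0, "pascalian_model": 1e10},
    "inaction":     {"mundane_model": 2.0, "pascalian_model": 0.0},
}

def aggregated_ev(action):
    return sum(weights[m] * ev[action][m] for m in weights)

print(aggregated_ev("intervention"))  # ~1e8: driven almost entirely by the Pascalian model
print(aggregated_ev("inaction"))      # 1.98
```

Under the aggregated model, "intervention" wins by a huge margin even though the mundane model (at 99% weight) prefers inaction. Aggregating decisions from each perspective, as I read Holden's cluster thinking, would behave very differently here.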
(ETA: As an aside on sequence thinking / cluster thinking generally, I wish these discussions made it very clear whether we're taking ST/CT as (1) our normative standard for good epistemology / decision-making per se, vs. as (2) different procedures for satisfying a given epistemological / decision-theoretic standard. Cf. "criterion of rightness vs. decision procedure" in ethics. This would be helpful for clarifying what's meant by claims like "cluster thinking is how 'successful' prediction systems operate". I've been assuming (2), here, FWIW.)