Researcher at the Center on Long-Term Risk. I (occasionally) write about altruism-relevant topics on my Substack. All opinions my own.
If I understand correctly, you’re arguing that we either need to:
I think this is a false dichotomy,[1] even for those who are very confident in impartial consequentialism and risk-neutrality (as I am!). If (as suggested by titotal’s comment) you worry that precise estimates of net welfare conditional on different actions are themselves vibes-based, you have option 3: Suspend judgment on the consequences of what we do for net welfare across the cosmos, and instead make decisions for reasons other than “my [explicit or implicit] estimate of the effects of my action on net welfare says to do X.” (Coherence theorems don’t rule this out.)
What might those other reasons be? A big one is moral uncertainty: If you truly think impartial consequentialism doesn’t give you compelling reasons either way, because our estimates of net welfare are hopelessly arbitrary, it seems better to follow the verdicts of other moral views you put some weight on. Another alternative is to reflect further on what exactly your reasons for action are, if not "maximize EV w.r.t. vibes-based estimates." You can ask yourself: what does it mean to make the world a better place impartially, under deep uncertainty? If you’ve only looked at altruistic prioritization from the perspective of options 1 or 2, and didn’t realize option 3 was on the table, I find it pretty plausible that (as a kind of bedrock meta-normative principle) you ought to clarify the implications of option 3. Maybe you can find non-vibes-based decision procedures for impartial consequentialists. ETA: Ch. 5 of Bradley (2012) is an example of this kind of research, though that’s not to say I necessarily endorse his conclusions.
(Just to be clear, I totally agree with your claim that we shouldn’t dismiss shrimp welfare — I don’t think we’re clueless about that, though the tradeoffs with other animal causes might well be difficult.)
As nicely discussed in this comment, the key ideas of UDT and LDT seem to have been anticipated by, respectively, "resolute choice" and Spohn's variant of CDT. (It's not entirely clear to me how UDT or LDT are formally specified, though, and in my experience people seem to equivocate between different senses of "UDT".)
It seems to me that you need to weight the probability functions in your set according to some intuitive measure of their plausibility, based on your own priors.
The concern motivating the use of imprecise probabilities is that you don't always have a unique prior you're justified in using to compare the plausibility of these distributions. In some cases, any choice of a unique prior, or of a unique higher-order distribution for aggregating priors, will be arbitrary (e.g., it will assign arbitrary weights to conflicting intuitions about plausibility).
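To make this concrete, here's a minimal Python sketch (all numbers and names are hypothetical, purely for illustration): with a set of candidate priors, getting a precise verdict requires choosing higher-order weights, and different weights flip the ranking. An imprecise approach instead reports EV intervals and only prefers one action when it wins under every prior in the set (roughly a "maximality"-style rule), suspending judgment otherwise.

```python
# Hypothetical sketch: precise higher-order weights vs. imprecise credences.

# Two candidate priors over three states of the world; suppose conflicting
# intuitions make any unique weighting of them feel arbitrary.
priors = {
    "optimistic":  [0.6, 0.3, 0.1],
    "pessimistic": [0.1, 0.3, 0.6],
}

# Stylized payoffs (net welfare) of two actions in each state.
payoffs = {
    "A": [10.0, 0.0, -8.0],  # high upside, real downside risk
    "B": [1.0, 1.0, 1.0],    # safe, modest benefit
}

def ev(action, prior):
    """Expected value of an action under one prior."""
    return sum(p * u for p, u in zip(prior, payoffs[action]))

# Precise approach: pick higher-order weights and aggregate. The 0.5/0.5
# choice here is exactly the arbitrary step at issue; with 0.9/0.1 instead,
# the ranking of A vs. B flips.
weights = {"optimistic": 0.5, "pessimistic": 0.5}
precise = {a: sum(weights[n] * ev(a, pr) for n, pr in priors.items())
           for a in payoffs}
print("Precise EVs:", precise)  # {'A': 0.7, 'B': 1.0}

# Imprecise approach: report the EV interval across the whole set, and only
# prefer A to B if A beats B under *every* prior.
for a in payoffs:
    evs = [ev(a, pr) for pr in priors.values()]
    print(f"EV interval for {a}: [{min(evs):.2f}, {max(evs):.2f}]")

robust = all(ev("A", pr) > ev("B", pr) for pr in priors.values())
print("A robustly better than B?", robust)  # False -> suspend judgment
```

The point of the sketch is just that the precise verdict is an artifact of the higher-order weights, while the imprecise representation makes the indeterminacy explicit.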
It's becoming increasingly apparent to me how strong an objection to longtermist interventions this comment is. I'd be very keen to see more engagement with this model.
My own current take: I hold out some hope that our ability to forecast long-term effects, at least under some contingencies within our lifetimes, will be not-terrible enough. And I'm more sympathetic to straightforward EV maximization than you are. But the probability of systematically having a positive long-term impact by choosing any given A over B seems much smaller than longtermists act as if it is; in particular, it does seem to be in Pascal's mugging territory.
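To spell out what I mean by "Pascal's mugging territory," here's a toy calculation (all numbers invented for illustration): once the probability of systematically doing long-term good is tiny, the EV comparison is driven almost entirely by the astronomical stakes rather than by the evidence.

```python
# Hypothetical numbers only: the structure, not the magnitudes, is the point.
p_systematic_impact = 1e-10  # assumed tiny probability of reliably doing good
long_term_stakes = 1e30      # stylized welfare at stake across the far future
near_term_value = 1e3        # a well-evidenced near-term benefit

ev_longtermist = p_systematic_impact * long_term_stakes  # 1e20
print(ev_longtermist > near_term_value)  # True: stakes swamp the evidence
```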
My understanding is that:
I'm not sure I understand this reasoning. If our interpretation of the empirical evidence depends on whether we accept different philosophical hypotheses, it seems like the results should reflect our uncertainty over those hypotheses. What would it mean for claims about weights on potential conscious experiences to be driven purely by empirical evidence, if questions about consciousness are inherently philosophical?
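As a toy illustration of what I mean (hypothesis names and all numbers invented for the example): any headline welfare weight is a mixture over philosophical hypotheses, so it can't be read off the empirical evidence alone.

```python
# Hypothetical sketch: a welfare-weight estimate that reflects uncertainty
# over philosophical hypotheses about consciousness. Numbers are illustrative.
hypotheses = {
    # name: (P(hypothesis), weight the empirical evidence implies IF it holds)
    "hypothesis 1": (0.3, 0.05),
    "hypothesis 2": (0.4, 0.30),
    "hypothesis 3": (0.3, 0.60),
}

# The reported weight is a probability-weighted mixture across hypotheses,
# so shifting credence among them changes the "empirical" result.
mixed_weight = sum(p * w for p, w in hypotheses.values())
print(f"{mixed_weight:.3f}")  # 0.315 under these made-up credences
```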