Edit: To clarify, when I say "accept Pascal's Wager" I mean accepting the idea that the way to do the most (expected) good is to prevent as many people as possible from going to hell, and cause as many as possible to go to heaven, regardless of how likely it is that heaven/hell exists (as long as the probability is non-zero).
I am a utilitarian and I struggle to see why I shouldn't accept Pascal's Wager. I'm honestly surprised there isn't much discussion about it in this community considering it theoretically presents the most effective way to be altruistic.
I have heard the argument that there could be a god that reverses the positions of heaven and hell, so the two possibilities cancel out, but this doesn't convince me. It seems quite clear that a god matching the god of existing religions is far more likely than a god that is its opposite, so the terms don't cancel: the expected utilities aren't equal.
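To make the "they don't cancel" point concrete, here is a minimal sketch with made-up probabilities and a large finite payoff standing in for infinite utility (all numbers are illustrative assumptions, not claims about the actual probabilities):

```python
# Toy expected-utility comparison. A large finite payoff stands in for
# "infinite" utility; the probabilities are purely illustrative.
HEAVEN = 10**9   # stand-in for infinite positive utility
HELL = -10**9    # stand-in for infinite negative utility

p_standard = 0.01    # assumed probability of a god matching existing religions
p_reversed = 0.0001  # assumed (much smaller) probability of a "reversed" god

# Accepting the wager: rewarded if the standard god exists,
# punished if the reversed god exists. Rejecting is the mirror image.
ev_accept = p_standard * HEAVEN + p_reversed * HELL
ev_reject = p_standard * HELL + p_reversed * HEAVEN

print(ev_accept, ev_reject)
print(ev_accept > ev_reject)  # True: unequal probabilities don't cancel
```

As long as the two probabilities differ, the expected values differ, which is all the argument in the paragraph above needs.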
I've also heard the argument that we should reject all infinite utilities – for now it seems to me that Pascal's Wager is the only example where the probabilities don't cancel out, so I don't have any paradoxes or inconsistencies, but this is probably quite a fragile position that could be changed. I also don't know how to go about rejecting infinite utilities if it turns out I have to.
I would obviously love to hear any other arguments.
Thanks!
Then it becomes a choice of accepting the VNM axioms or proposition 3 above.
Like I said, I agree that we should reject 3, but the reason for rejecting 3 is not that it is based on intuition (or on a non-fundamental intuition). The reason is that it's a less plausible intuition relative to the others. For example, one of the VNM axioms is transitivity: if A is preferable to B, and B is preferable to C, then A is preferable to C.
That's just much more plausible than Yitz's suggestion that we shouldn't be "vulnerable to adversarial attacks" or whatever.
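For what it's worth, the transitivity axiom is simple enough to state as a mechanical check over a preference relation. A minimal sketch (the utility table and helper names are hypothetical, just to illustrate the axiom):

```python
from itertools import permutations

def is_transitive(prefers, options):
    """VNM transitivity: if A > B and B > C, then A > C,
    checked over every ordered triple of options."""
    return all(
        not (prefers(a, b) and prefers(b, c)) or prefers(a, c)
        for a, b, c in permutations(options, 3)
    )

# A preference derived from any utility function is transitive by construction.
utility = {"A": 3, "B": 2, "C": 1}
prefers = lambda x, y: utility[x] > utility[y]
print(is_transitive(prefers, ["A", "B", "C"]))  # True

# A cyclic preference (A > B > C > A) violates the axiom.
cycle = {("A", "B"), ("B", "C"), ("C", "A")}
cyclic_prefers = lambda x, y: (x, y) in cycle
print(is_transitive(cyclic_prefers, ["A", "B", "C"]))  # False
```

The point being: any preference that comes from maximizing a utility function automatically satisfies this, which is part of why the axiom feels so hard to give up.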
It's also worth noting that your justification for accepting expected value theory is not based on the VNM axioms, since you know nothing about them! Your justification is based on a) your own intuition that it seems correct and b) the testimony of the smart people you've encountered who say it's a good decision theory.