Edit: To clarify, when I say "accept Pascal's Wager" I mean accepting the idea that the way to do the most (expected) good is to prevent as many people as possible from going to hell, and cause as many as possible to go to heaven, regardless of how likely it is that heaven/hell exists (as long as it's non-zero).
I am a utilitarian and I struggle to see why I shouldn't accept Pascal's Wager. I'm honestly surprised there isn't much discussion about it in this community considering it theoretically presents the most effective way to be altruistic.
I have heard the argument that there could be a god that reverses the positions of heaven and hell, and that therefore the probabilities cancel out, but this doesn't convince me. It seems quite clear that a god matching the god of existing religions is far more likely than a god that is its opposite, so the expected utilities aren't equal and don't cancel.
I've also heard the argument that we should reject all infinite utilities – for now it seems to me that Pascal's Wager is the only example where the probabilities don't cancel out, so I don't have any paradoxes or inconsistencies, but this is probably quite a fragile position that could be changed. I also don't know how to go about rejecting infinite utilities if it turns out I have to.
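To make the infinite-utility worry concrete, here is a toy sketch (the credence numbers are made up purely for illustration) of why infinities behave badly in expected-value arithmetic: any nonzero probability times an infinite payoff dominates, but two opposed infinite payoffs don't "cancel" — they produce an undefined result.

```python
import math

p_standard = 0.01   # hypothetical tiny credence in a standard heaven/hell god
p_reverse = 0.001   # hypothetical smaller credence in a "reversed" god
inf = math.inf

# With only the standard god on the table, any nonzero credence
# yields an infinite expected utility for wagering:
ev_standard_only = p_standard * inf   # math.inf

# With both gods on the table, the infinities do not cancel:
# inf + (-inf) is undefined (nan), not zero.
ev_both = p_standard * inf + p_reverse * (-inf)   # nan

print(ev_standard_only, ev_both)
```

This is one way of cashing out why some people reject infinite utilities outright: once infinities enter the calculation, comparing options by expected value stops being well-defined, regardless of how unequal the probabilities are.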
I would obviously love to hear any other arguments.
Thanks!
I've had a little more chance to flesh out this idea of "universal common sense." I'm now thinking of it as "the wisdom of the best parts of the past, present, and future."
Let's say we could identify exemplary societies across the past, present, and future. Furthermore, assume that, on some questions, these societies had a consensus common sense view. Finally, assume that, in some cases, we can predict what that intertemporal consensus common sense view would be.
Given all three of these assumptions, then I think we should consider adopting that point of view.
In the AI doom scenario, I think we should reject the common sense of the denizens of that future on matters pertaining to AI doom, as they weren't wise enough to avoid doom.
In the Mormon scenario, I think that if the future is Mormon, then that suggests Mormonism would probably be a good thing. I generally trust people to steer toward good outcomes over time. Hence, if I believed this, then that would make me take Mormonism much more seriously.
I have a wide confidence interval for this notion of "universal common sense" being useful. Since you seem to be confidently against it, do you have further objections to it? I appreciate the chance to explore it with a critical lens.