This is a fascinating read!
In the paper you discuss how your approach to infinite utilities violates the continuity axiom of expected utility theory. But in my understanding, the continuity axiom (together with the other VNM axioms) provides the justification for why we should be trying to calculate expectation values in the first place. If we don't believe in those axioms, then we don't care about the VNM theorem, so why should we worry about expected utility at all (hyperreal or not)?
Is it possible to write down an alternative set of plausible axioms under which expected hyperreal utility maximization can be shown to be the unique rational way to make decisions? Is there a hyperreal analogue of the VNM theorem?
This is an interesting analysis!
The two parts that seem most unrealistic to me:
Far-future effects are the most important determinant of what we ought to do
The argument for strong longtermism as I understand it seems structurally identical to Pascal's mugging. (There is some small chance that the value of the future is enormous, therefore reducing extinction risk today has enormous value).
It frustrates me that I can't explain exactly what is wrong with this argument, but I am sceptical of it for the same reason that I wouldn't hand over my wallet to a Pascal mugger.
This was an interesting read, thanks!
And Our World In Data’s chart is understanding the case
Understating..?
This is interesting! But I don't think I fully understand why 'private law' rights are supposed to be more credible than 'wellbeing' rights. Wouldn't humans also have an incentive to disregard these 'private law' rights when it suited them?
For example, if the laws were still created by humans, then if AI systems accumulated massive amounts of wealth, wouldn't there be a huge incentive to simply pass a law to confiscate it?
Thanks Vasco! I have written a reply to both you and Austin in the thread under Austin's comment.
One thing that applies to your reply specifically: I don't see how a market for farmed animal welfare could decrease animal-years, as you suggest it might in the near term (though I may be misunderstanding how the system is supposed to work).
I get that animal welfare improvements often carry a cost, and that imposing an animal welfare improvement on a farmer with no compensation would therefore shift the supply curve, raise prices, and ultimately decrease the number of animals being farmed.
But my understanding of the animal welfare market idea is that these welfare improvements are not imposed, but bought. The person paying for the improvement would need to pay enough to make it worth the farmer's while to implement the improvement voluntarily, which presumably means covering all of the cost of the improvement and then some. Since this is now an additional source of income for the farmer, I think you'd expect it to shift the supply curve of animal products in the opposite direction, causing a drop in prices and an increase in the amount of animal products consumed?
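For what it's worth, the contrast I'm drawing can be sketched with a toy linear supply-and-demand model (all numbers and parameter names here are purely illustrative, not anything from the actual proposal): an uncompensated welfare requirement acts like a per-unit cost and shifts supply left, while a payment exceeding the cost acts like a per-unit subsidy and shifts supply right.

```python
def equilibrium(a, b, c, d, per_unit_shift=0.0):
    """Equilibrium of linear demand Qd = a - b*P and
    supply Qs = c + d*(P + per_unit_shift).

    per_unit_shift < 0: an uncompensated cost on farmers
                        (supply shifts left, quantity falls).
    per_unit_shift > 0: a payment that more than covers the cost
                        (supply shifts right, quantity rises).
    """
    # Solve a - b*P = c + d*(P + per_unit_shift) for P.
    price = (a - c - d * per_unit_shift) / (b + d)
    quantity = a - b * price
    return price, quantity

# Baseline market: price 50, quantity 50.
print(equilibrium(100, 1, 0, 1))                      # (50.0, 50.0)

# Welfare rule imposed at a cost of 10 per unit: fewer animals farmed.
print(equilibrium(100, 1, 0, 1, per_unit_shift=-10))  # (55.0, 45.0)

# Welfare improvement bought, with payment exceeding cost by 5 per unit:
# more animals farmed, which is the concern above.
print(equilibrium(100, 1, 0, 1, per_unit_shift=5))    # (47.5, 52.5)
```

The sign of the quantity change only depends on whether the farmer's net per-unit payoff goes up or down; how large the change is depends on the slopes b and d.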
Thank you both for these answers, this is helpful!
It sounds like it is useful to distinguish two possible ways of implementing a welfare market:
I can see how on pure consequentialist grounds the first case would be good, and would avoid the problem I was asking about. Although I expect a lot of vegans who have a principled objection to animals being treated as property will object to it, if increasing the quantity of farmed animals is explicitly viewed as a positive outcome of the policy. I would certainly have reservations about it.
On the other hand, the second case does seem to carry the risk I was asking about. We should expect the quantity of animals farmed to increase in a way that might outweigh the gain in welfare per animal (whether or not it does will, I think, depend on messy economic details like the slopes of the supply and demand curves?).
It's still a psychology question! The people who die in year 1 and make up those increased donations haven't had time to accrue interest, so they're donating money you claim they'd never have donated if they were only 10%-per-year pledgers. That is a claim about donor psychology, and an unrealistic one at that!
Maybe the two readings you describe can both be correct at the same time, and even complement each other?
Perhaps the point being made is: we find the initially described utopia hard to believe because we are in a situation similar to Omelas, where our pleasures depend on someone else's misery. So when someone tries to have us believe that true utopia is possible, we reject it, because facing up to its possibility would force us to confront our guilt about our current situation.