Philosophy, global priorities and animal welfare research. My current specific interests include: philosophy of mind, moral weights, person-affecting views, preference-based views and subjectivism, moral uncertainty, decision theory, deep uncertainty/cluelessness and backfire risks, s-risks, and indirect effects on wild animals.
I've also done economic modelling for some animal welfare issues.
Want to leave me anonymous feedback, whether positive, constructive or negative? https://www.admonymous.co/michael-st-jules
FWIW, unless you have reason otherwise (you may very well think some Fs are more likely than others), there's some symmetry here between any function F and the function 1-F. If you apply it, then P(F > 1/2) = P(1-F < 1/2) = P(F < 1/2), and since P(F < 1/2) + P(F = 1/2) + P(F > 1/2) = 1, we get P(F < 1/2) = (1 - P(F = 1/2))/2 ≤ 1/2, and strictly less iff P(F = 1/2) > 0.
If you can rule out P(F = 1/2) > 0 (say by an additional assumption), the probability would be exactly 1/2. If instead the bet were on F ≤ 1/2 rather than F < 1/2, the same symmetry gives P(F ≤ 1/2) = 1/2 + P(F = 1/2)/2 ≥ 1/2, again with equality iff P(F = 1/2) = 0.
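Here's a quick sanity check of that arithmetic (my own sketch, using a made-up discrete distribution that's symmetric under f ↦ 1 − f and has an atom at 1/2):

```python
from fractions import Fraction as Fr

# Hypothetical discrete distribution for F, symmetric under f -> 1 - f,
# with an atom at 1/2 to illustrate the strict-inequality case.
dist = {Fr(1, 4): Fr(3, 10), Fr(1, 2): Fr(2, 5), Fr(3, 4): Fr(3, 10)}

assert sum(dist.values()) == 1
assert all(dist.get(1 - f) == p for f, p in dist.items())  # F and 1 - F identically distributed

p_below = sum(p for f, p in dist.items() if f < Fr(1, 2))  # P(F < 1/2)
p_half = dist.get(Fr(1, 2), Fr(0))                         # P(F = 1/2)
p_above = sum(p for f, p in dist.items() if f > Fr(1, 2))  # P(F > 1/2)

assert p_below == p_above                         # the symmetry step
assert p_below == (1 - p_half) / 2                # so P(F < 1/2) = (1 - P(F = 1/2))/2 <= 1/2
assert p_below < Fr(1, 2)                         # strict, because P(F = 1/2) > 0 here
assert p_below + p_half == Fr(1, 2) + p_half / 2  # and P(F <= 1/2) = 1/2 + P(F = 1/2)/2
```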
I recommend funding GWWC and the Centre for Exploratory Altruism Research’s (CEARCH’s) High Impact Philanthropy Fund (HIPF) due to their effects on soil animals, which I think are practically proportional to the increase in agricultural-land-years per $. I estimate HIPF increases agricultural land 9.42 times as cost-effectively as GiveWell’s top charities, which is similar to my estimates for the giving multiplier of GWWC in 2023 and 2024.
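Mechanically, the comparison just comes down to a ratio of agricultural-land-years added per dollar; here's a sketch with placeholder figures (not my actual model inputs), chosen only so the ratio matches the headline estimate:

```python
# Sketch of the comparison, with hypothetical placeholder figures (not my actual inputs):
# the proxy for effects on soil animals is agricultural-land-years added per dollar.

def land_years_per_dollar(land_years_added: float, dollars: float) -> float:
    """Agricultural-land-years added per dollar donated (hypothetical units)."""
    return land_years_added / dollars

# Placeholder values chosen only so the ratio matches the headline estimate.
hipf = land_years_per_dollar(land_years_added=9.42, dollars=1.0)
givewell_top = land_years_per_dollar(land_years_added=1.0, dollars=1.0)

print(f"HIPF vs GiveWell top charities: {hipf / givewell_top:.2f}x")  # 9.42x
```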
A few quick comments:
(Edited to elaborate.)
I think bracketing agents could sometimes be moved to bracket out and ignore value of information, and more often than EV-maxers would, but it's worth breaking things down further to see when. Imagine we're considering an intervention with:
1. direct effects on some group that are deeply uncertain, with potentially very bad worst cases, and
2. value of information whose expected benefits accrue to some group.
Then:
a. If the group in 2 is disjoint from the group in 1, then we can bracket out those affected in 1 and decide just on the basis of the expected value of information in 2 (and opportunity costs).
b. If the group in 2 is a subset of the group in 1, then the minimum expected value of information needs to be high enough to outweigh the expected worst-case downsides from the direct effects on the group in 1 for the intervention to beat doing nothing. Otherwise, the VOI gets bracketed away and ignored along with the direct effects in 1.
And there are intermediate cases, with probably intermediate recommendations.
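To make the contrast between (a) and (b) concrete, here's a toy sketch (my own illustration, with made-up interval-valued expected effects, where "bracketing" just means ignoring any group whose expected effect is indeterminate in sign):

```python
from dataclasses import dataclass

@dataclass
class Effect:
    """Imprecise expected effect of the intervention on one group, as an
    interval [low, high] over a set of 'reasonable' probability functions."""
    group: str
    low: float
    high: float

    def sign_indeterminate(self) -> bool:
        return self.low < 0 < self.high

def bracketed_verdict(effects: list[Effect]) -> str:
    """Toy bracketing rule: ignore (bracket out) any group whose expected effect
    is indeterminate in sign, then judge the intervention on what's left."""
    kept = [e for e in effects if not e.sign_indeterminate()]
    if not kept:
        return "everything bracketed out: no verdict over doing nothing"
    low, high = sum(e.low for e in kept), sum(e.high for e in kept)
    if low > 0:
        return "beats doing nothing (on the unbracketed groups)"
    if high < 0:
        return "worse than doing nothing (on the unbracketed groups)"
    return "still indeterminate on the unbracketed groups"

# Case (a): deeply uncertain direct effects on group 1, VOI accruing to a disjoint
# group 2 -> group 1 gets bracketed out and the VOI carries the decision.
case_a = [Effect("group 1: direct effects", low=-100, high=100),
          Effect("group 2: VOI", low=1, high=10)]

# Case (b): group 2 is a subset of group 1, so the VOI and the uncertain direct
# effects land on the same individuals; unless the minimum VOI outweighs the
# worst-case downside, the whole package is sign-indeterminate and gets bracketed.
case_b = [Effect("group 1 (incl. group 2): direct effects + VOI", low=-100 + 1, high=100 + 10)]

print("case (a):", bracketed_verdict(case_a))  # beats doing nothing
print("case (b):", bracketed_verdict(case_b))  # everything bracketed out
```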
Without continuity (but perhaps with some weaker assumptions in its place), I think you get a representation theorem giving lexicographically ordered ordinal sequences of real utilities, i.e. a sequence of expected values, which you compare lexicographically. With an infinitary extension of independence or the sure-thing principle, you get lexicographically ordered ordinal sequences of bounded real utilities, ruling out St Petersburg-like prospects, and so also ruling out risk-neutral expectational utilitarianism.
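To illustrate what comparing such sequences amounts to (my own sketch, restricted to finite sequences rather than the ordinal-indexed ones the theorem gives): the first priority level at which the expected values differ settles the comparison.

```python
from itertools import zip_longest

def lex_compare(a: list[float], b: list[float]) -> int:
    """Compare two sequences of expected utilities lexicographically: the first
    priority level at which they differ decides; missing levels count as 0.
    Returns -1 if a < b, 0 if equal, 1 if a > b."""
    for x, y in zip_longest(a, b, fillvalue=0.0):
        if x < y:
            return -1
        if x > y:
            return 1
    return 0

# Any gain at a higher priority level trumps arbitrarily large losses further down.
assert lex_compare([1.0, -1_000_000.0], [0.0, 1_000_000.0]) == 1
# Ties at the top level get broken by the next level.
assert lex_compare([1.0, 2.0], [1.0, 3.0]) == -1
```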
FWIW, since 2022 (so after SWP and FWI), I count:
One way you could think about the St Petersburg lottery money pump is that the future version of yourself after evaluating the lottery just has different preferences or is a different agent. Now, you might say your preferences should be consistent over time and after evaluations, but why? I think the main reason is to avoid picking dominated outcome distributions, but there could be other ways to do that in practice, e.g. pre-commitments, resolute choice, burning bridges, trades, etc. You would want to do the same thing for Parfit's hitchhiker. And you would similarly want to constrain the choices of or make trades with other agents with different preferences, if you were handing off the decision-making to them.
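For reference, here's a quick sketch of the standard St Petersburg lottery (payoff 2^k with probability 2^−k): every realized payoff is finite, while the expected value grows without bound, which is what drives the money pump.

```python
import random

def st_petersburg_sample(rng: random.Random) -> float:
    """One draw: flip a fair coin until the first heads; payoff is 2^k after k flips."""
    payoff = 2.0
    while rng.random() < 0.5:  # tails: keep doubling
        payoff *= 2.0
    return payoff

def truncated_ev(max_flips: int) -> float:
    """Expected value counting only the first max_flips outcomes: sum of 2^-k * 2^k = max_flips."""
    return sum((0.5 ** k) * (2.0 ** k) for k in range(1, max_flips + 1))

rng = random.Random(0)
realized = st_petersburg_sample(rng)                    # always some finite payoff
print(f"realized payoff: {realized}")
print(f"EV truncated at 50 flips: {truncated_ev(50)}")  # 50.0, and it grows without bound

# Whatever finite payoff you realize, a fresh ticket has greater expected value,
# so an unrestricted EV-maxer will keep paying to trade realized outcomes away.
```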
> I grant that this is pretty weird. But I think it’s weird because of the mathematical property that an infinite function can have, where its average value (or its expected value) can be greater than any possible value it might have. In light of such a situation, it’s not particularly surprising that each time you discover the outcome of the situation, you’ll be disappointed and want to trade it away. If a view has weird implications because of weird math, that is the fault of the math, not of the view.
I'm not sure I would only blame the math, or that you should really separate the math from the view.
Basically all of the arguments for the finitary independence axiom and finitary sure-thing principle are also arguments for their infinitary versions, and then they imply "bounded" utility functions.[1] You could make exceptions for unbounded prospects and infinities because infinities are weird, but you should probably also accept that you're at least somewhat undermining some of your arguments for fanaticism in the first place, because they won't hold in full generality.
Indeed, I would say fanaticism is less instrumentally rational than bounded utility functions, i.e. more prone to making dominated choices. But there can be genuine tradeoffs between instrumental rationality and other desiderata, and I don't see why sometimes making dominated choices in theory must be worse than sacrificing other desiderata. Either way, you're losing something.
In my case, I'm willing to sacrifice some instrumental rationality to avoid fanaticism, so I'm sympathetic to some difference-making views.
See Jeffrey Sanford Russell and Yoaav Isaacs, "Infinite Prospects," Philosophy and Phenomenological Research, vol. 103, no. 1, Wiley, July 2020, pp. 178–98, https://doi.org/10.1111/phpr.12704, https://philarchive.org/rec/RUSINP-2
That assumes independence of irrelevant alternatives, transitivity and completeness, but I'd think you can drop completeness and get a similar result, with "multi-utility functions".
Great post, thanks for writing!
I buy that individuals should try to pick "policies" and psychologically commit themselves to them, rather than only evaluate actions one at a time. I think this totally makes sense for seatbelts and helmets. However, I'm not sure it requires evaluating actions collectively at a fundamental normative level rather than just practically, especially across individuals. I think we can defend wearing seatbelts and helmets with Nicolausian discounting without thereby supporting longtermism or x-risk work for most individuals, even if the marginal x-risk opportunity were similar to the average or best already-funded ones.
In particular, I know that if I don't wear my seatbelt this time on the basis of reasoning that isn't very circumstance-specific, I could use similar reasoning in the future to keep talking myself out of wearing one, and those small risks would accumulate into a larger risk that could end up above the discount threshold. So I should stop myself now to minimize that risk. I should consider the effects of my reasoning and decision now on my own future decisions.
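As a toy illustration with made-up numbers (nothing here is a real crash statistic or a threshold I endorse): each individual trip can fall below the discount threshold while the lifetime policy does not.

```python
# Toy numbers, purely illustrative (not real crash statistics or a threshold I endorse):
per_trip_risk = 1e-7   # hypothetical probability of a fatal unbelted crash per trip
threshold = 1e-6       # hypothetical Nicolausian discounting threshold
trips = 20_000         # trips over a driving lifetime

# Each single decision looks ignorable on its own...
assert per_trip_risk < threshold

# ...but the policy of never buckling up accumulates far past the threshold.
lifetime_risk = 1 - (1 - per_trip_risk) ** trips
print(f"lifetime risk without a seatbelt: {lifetime_risk:.1e}")  # ~2.0e-03
```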
However, I don't have nearly as much potential influence over humanity's x-risk strategy (causally or acausally) and the probability of an existential catastrophe. The typical individual has hardly any potential influence.
Also, separately, how would you decide who or what is included in the collective? Should we include the very agents creating the problems for us?