Point (c) regarding interest rates is quite tricky and subtle. At the very least, you should probably use something like the real interest rate instead of the nominal rate, because ceteris paribus it's much more plausible that only real (i.e. inflation-adjusted) growth of your donation pool matters.
I would go even further and claim that, in expectation, you need to outperform the real risk-free rate in order to generate a net benefit by donating later.
This means that unless you're happy to take some investment risk and aim to outperform in real terms, point (c) doesn't matter much.
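To make the nominal-versus-real point concrete, here is a minimal numeric sketch (the rates and horizon are made up purely for illustration; the Fisher-equation step is the only substantive content):

```python
# Minimal sketch (made-up numbers, not financial advice): why the *real*
# rather than the nominal return of the donation pool is what matters.

def real_rate(nominal: float, inflation: float) -> float:
    """Fisher equation: (1 + nominal) = (1 + real) * (1 + inflation)."""
    return (1 + nominal) / (1 + inflation) - 1

nominal, inflation, years = 0.04, 0.03, 10   # hypothetical values
r = real_rate(nominal, inflation)            # ~0.97% per year in real terms

donate_now = 1_000.0
# Value of investing and donating later, in today's purchasing power:
donate_later_real = donate_now * (1 + r) ** years

print(f"real rate: {r:.4%}")
print(f"donate now: {donate_now:,.2f} (today's money)")
print(f"donate in {years} years, deflated: {donate_later_real:,.2f}")
# Waiting only helps if this real growth beats the (real) rate at which
# good donation opportunities themselves grow or disappear.
```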
However, imo if you're at an early point in your career, investing in your future career flexibility by having savings can have extremely high returns, which can outweigh all the other points.
Yes, I think understanding the microfoundations would be desirable. This need not be in the form of a proof of optimality, but could come in a different flavour, as you said.
Some concepts that would be interesting to explore further, having thought about this a little bit more (mostly notes to future self):
* Unwillingness to let "noise" be the tie-breaker between exactly equally good options (where expected utility maximisation is indifferent) --> how does this translate to merely "almost equally good" options? This is related to placing some value on "ambiguity aversion": I can prefer to diversify as much as possible between equally good options without violating expected utility maximisation, but as soon as there are slight differences I would need to trade off optimality against ambiguity aversion?
* More general considerations around the non-commutativity between splitting funds between reasonable agents first and letting each of them optimise, versus letting reasonable agents vote first and then optimising based on the outcome of the voting process. I seem to prefer the former, which seems non-utilitarian but more robust.
* Cleaner separation between moral uncertainty and epistemic & empirical uncertainty.
* Understand if and how this ties in with bargaining theory [1], as you said; in particular, is there a case for extending bargaining-theoretic or (more likely) "parliamentary" [2] style approaches beyond moral uncertainty to epistemic uncertainty?
* How does this interact with "robustness to adverse selection" as opposed to mere "noise" – e.g. is there some kind of optimality condition assuming my E[u|data] is, in the worst case, biased in an adversarial way by whoever gives me the data? How does this tie in with robust optimisation? Does this lead to a maximin solution? (A toy numeric sketch of this follows after the references.)
[1] https://philpapers.org/rec/GREABA-8
[2] https://www.fhi.ox.ac.uk/wp-content/uploads/2021/06/Parliamentary-Approach-to-Moral-Uncertainty.pdf
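Regarding the last bullet, here is a toy numeric sketch (all numbers invented) of how optimising against the worst plausible E[u|data] leads to a maximin-style hedged allocation, while optimising the averaged model picks a corner:

```python
# Rough sketch (hypothetical numbers): compare the allocation that maximises
# the *average* expected utility with the maximin allocation that is robust
# to an adversarially biased E[u | data].
import numpy as np

# Each row: plausible per-unit expected utilities of options A and B,
# e.g. produced by different ways the data source could be biased.
plausible_utilities = np.array([
    [1.00, 0.90],
    [0.80, 1.10],
    [1.05, 0.95],
])

xs = np.linspace(0, 1, 1001)   # fraction of the budget given to option A
payoffs = (np.outer(xs, plausible_utilities[:, 0])
           + np.outer(1 - xs, plausible_utilities[:, 1]))  # (len(xs), n_scenarios)

x_avg = xs[np.argmax(payoffs.mean(axis=1))]      # optimise the average model
x_maximin = xs[np.argmax(payoffs.min(axis=1))]   # optimise the worst case

print(f"maximise-average allocation to A: {x_avg:.2f}")
print(f"maximin (worst-case) allocation to A: {x_maximin:.2f}")
# With linear utilities the average-model optimum is a corner (0 or 1),
# while the maximin optimum typically hedges between the options.
```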
Yes, agree with all your points.
The reason I get a different allocation is indeed that I ultimately don't maximise - the outermost step is just averaging.
This is hard to justify philosophically, but the intuition is sort of "if my maximiser is extremely sensitive to ~noise, I throw out the maximiser and just average over plausible optimal solutions", which I think is in fact what people often do in different domains. (Where "noise" does a lot of work - of course I am very vague about what part of the probability distribution I'm happy to integrate out before the optimisation and which part I keep.)
Just to add: This is similar to taking the average over what many rational utility-maximising agents with slightly different models/worldviews would do, so in some sense, if many people followed this rule, the aggregate outcome might be very similar to everyone optimising.
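Here is a toy sketch of that idea (purely illustrative numbers), contrasting "optimise each plausible model, then average the allocations" with "average the models, then optimise":

```python
# Toy sketch (made-up numbers) of "average over plausible optimal solutions"
# versus "optimise the averaged model".
import numpy as np

options = ["A", "B", "C"]
# Rows = plausible worldviews, columns = expected utility per unit of funding.
worldviews = np.array([
    [1.0, 0.9, 0.5],
    [0.7, 1.1, 0.6],
    [0.8, 0.9, 1.0],
])
weights = np.array([0.4, 0.35, 0.25])   # credence in each worldview

# (1) Optimise first, then average: each worldview's maximiser goes all-in on
# its favourite option; the final allocation is the credence-weighted average
# of these corner solutions.
corner = np.eye(len(options))[worldviews.argmax(axis=1)]   # one-hot argmax
allocation_avg_of_opt = weights @ corner

# (2) Average first, then optimise: a single expected-utility maximiser on the
# mixed model puts everything on one option.
mixed = weights @ worldviews
allocation_opt_of_avg = np.eye(len(options))[mixed.argmax()]

print("average of optimisers:   ", dict(zip(options, allocation_avg_of_opt.round(2))))
print("optimiser of the average:", dict(zip(options, allocation_opt_of_avg)))
```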
Yes, it appears that for long time horizons (>> 1000 years) there is no hope without theoretical arguments? So an important question (one that longtermism sort of has to address?) is what f(t) you should plug in in such cases, when you have neither empirical evidence nor any developed theory.
But, as you write, for shorter horizons empirical approaches could be invaluable!
Thanks for the nice analysis.
I somehow have this (vague) intuition that in the very long time limit f(t) has to blow up exponentially, and that this is a major problem for longtermism. This is sort of motivated by thinking about a branching tree of possible states of the world. Is this something people are thinking about or have written about?
Thanks for sharing, this is cool!
Is it possible to see the code and/or maths somewhere? It would be pretty neat to make standardised implementations of different allocation methods broadly accessible! Additionally, many results are quite sensitive to subtle choices like default parameters, scales, and the non-linearities used to express diminishing marginal "returns", right?
(At first, I was a bit confused that the "Maximise Expected Choiceworthiness" solution did not end up with 100% on one option, but then I saw that "Diminishing Marginal Returns" was switched on in "Settings". Is there any philosophical support for this non-linearity in the case of intertheoretic comparisons?)
edit: found it here: https://github.com/rethinkpriorities/moral-parliament/tree/master/server/allocate
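For intuition (this is not the repo's actual method, just a toy illustration): with linear value, maximising expected choiceworthiness always lands on a corner, i.e. 100% on one option; with a concave transform such as w_i * sqrt(x_i), equalising marginal value across options gives an interior split with x_i proportional to w_i^2:

```python
# Toy illustration (not the repo's actual code): how a concave "diminishing
# marginal returns" transform turns the MEC corner solution into an interior
# split. Assume value_i(x_i) = w_i * sqrt(x_i) with credence-weighted
# choiceworthiness weights w_i and a unit budget.
import numpy as np

w = np.array([1.0, 0.8, 0.5])   # hypothetical MEC weights per option

# Linear value: maximising sum_i w_i * x_i puts 100% on the top option.
linear_allocation = np.eye(len(w))[w.argmax()]

# Concave value: equalising marginal value w_i / (2*sqrt(x_i)) across options
# (the Lagrange condition) gives x_i proportional to w_i**2.
concave_allocation = w**2 / np.sum(w**2)

print("linear value:", linear_allocation)             # [1., 0., 0.]
print("sqrt value:  ", concave_allocation.round(3))   # interior split
```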