
simon


Yes, I think understanding the microfoundations would be desirable. This need not necessarily be in the form of a proof of optimality, but could come in a different flavour, as you said.

Some concepts that would be interesting to explore further, having thought about this a little bit more (mostly notes to future self):

* Unwillingness to let "noise" be the tie-breaker between exactly equally good options (where expected utility maximisation is indifferent) → how does this translate to merely "almost equally good" options? This is related to assigning some value to "ambiguity aversion": I can prefer to diversify as much as possible between equally good options without violating utility maximisation, but as soon as there are slight differences I would need to trade off optimality against ambiguity aversion?

* More general considerations around the non-commutativity of two procedures: splitting funds between reasonable agents first and letting each optimise, versus letting reasonable agents vote first and then optimising based on the outcome of the vote. I seem to prefer the former, which seems non-utilitarian but more robust.

* Cleaner separation between moral uncertainty and epistemic & empirical uncertainty.

* Understand if and how this ties in with bargaining theory [1], as you said. In particular, is there a case for extending bargaining-theoretic or (more likely) "parliamentary" [2] style approaches beyond moral uncertainty to epistemic uncertainty?

* How does this interact with "robustness to adverse selection" as opposed to mere "noise" – e.g. is there some kind of optimality condition assuming my E[u|data] is, in the worst case, biased in an adversarial way by whoever gives me data? How does this tie in with robust optimisation? Does this lead to a maximin solution?

[1] https://philpapers.org/rec/GREABA-8
[2] https://www.fhi.ox.ac.uk/wp-content/uploads/2021/06/Parliamentary-Approach-to-Moral-Uncertainty.pdf 
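The maximin question in the last bullet can be made concrete with a toy sketch. All numbers below are hypothetical: two interventions, two adversarial scenarios for their true utilities. Expected-utility maximisation under a uniform prior goes all-in on one option, while maximising the worst-case scenario utility diversifies:

```python
# Toy maximin sketch (hypothetical numbers): utilities[scenario][intervention].
utilities = [
    [10.0, 2.0],   # scenario A: intervention 0 looks great
    [1.0, 8.0],    # scenario B: intervention 1 looks great
]

def worst_case(alloc):
    """Minimum over scenarios of the allocation's utility."""
    return min(sum(a * u for a, u in zip(alloc, row)) for row in utilities)

def expected(alloc):
    """Uniform-prior expected utility across scenarios."""
    return sum(sum(a * u for a, u in zip(alloc, row))
               for row in utilities) / len(utilities)

# Grid search over allocations (a, 1 - a) to intervention 0.
grid = [i / 100 for i in range(101)]
best_ev = max(grid, key=lambda a: expected((a, 1 - a)))
best_mm = max(grid, key=lambda a: worst_case((a, 1 - a)))

print(best_ev)  # -> 1.0: all-in on the higher-average intervention
print(best_mm)  # -> 0.4: a diversified allocation equalising the two scenarios
```

The maximin allocation sits where the two scenario payoffs cross (2 + 8a = 8 − 7a, i.e. a = 0.4), which is exactly the diversification the bullet asks about.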

Yes, agree with all your points.
The reason I get a different allocation is indeed that I ultimately don't maximise – the outermost step is just averaging.
This is hard to justify philosophically, but the intuition is roughly "if my maximiser is extremely sensitive to ~noise, I throw out the maximiser and just average over plausible optimal solutions", which I think is in fact what people often do in other domains. (Where "noise" does a lot of work – of course I am vague about which part of the probability distribution I'm happy to integrate out before the optimisation and which part I keep.)

Just to add: This is similar to taking the average over what many rational utility maximising agents with slightly different models/world views would do, so in some sense if many people followed this rule the aggregate outcome might be very similar to everyone optimising.
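The "many slightly different maximisers" picture can be simulated directly. A minimal sketch, with hypothetical numbers and an assumed Gaussian noise model: each agent perturbs the shared utility estimates, goes all-in on its own argmax, and the population average of those corner solutions comes out diversified and noise-robust:

```python
import random

random.seed(0)
base = [1.00, 0.99]   # two options, nearly equally good (hypothetical)
n_agents = 1000

def argmax_alloc(estimates):
    """A maximiser's allocation: everything on the best-looking option."""
    best = max(range(len(estimates)), key=lambda i: estimates[i])
    return [1.0 if i == best else 0.0 for i in range(len(estimates))]

# Each agent sees the estimates plus small independent noise, then maximises.
allocs = [argmax_alloc([u + random.gauss(0, 0.05) for u in base])
          for _ in range(n_agents)]

avg_alloc = [sum(a[i] for a in allocs) / n_agents for i in range(2)]
print(avg_alloc)  # roughly [0.55, 0.45]: a near-even split instead of a corner
```

Each individual allocation is a corner solution that flips with the noise, but the aggregate is stable – which is the sense in which "everyone averaging" and "everyone maximising with slightly different world views" can coincide.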
 

Yes, it appears that for long time horizons (>> 1000 years) there is no hope without theoretical arguments? So an important question (that longtermism sort of has to address?) is what f(t) you should plug in for such cases when you have neither empirical evidence nor any developed theory.
But, as you write, for shorter horizons empirical approaches could be invaluable!

For an interesting take on the (important) argument around statistical power:
Gelman's The “What does not kill my statistical significance makes it stronger” fallacy:
https://statmodeling.stat.columbia.edu/2017/02/06/not-kill-statistical-significance-makes-stronger-fallacy/

Thanks for the nice analysis.
I somehow have this (vague) intuition that in the very-long-time limit f(t) has to blow up exponentially and that this is a major problem for longtermism. This is sort of motivated by thinking about a branching tree of possible states of the world. Is this something people are thinking about or have written about?

Have there ever been any efforts to try to set up EA-oriented funding organisations that focus on investing donations in such a way as to fund high-utility projects in very suitable states of the world? They could be pure investment vehicles that have high expected utility, but that lose all their money by some point in time in the modal case.

The idea would be something like this:

Given a certain amount of dollars, to maximise utility one has, to first order, to decide how much to spend on which causes and how to distribute the spending over time.

However, with some effort, one could find investments that pay off conditionally on states of the world where specific interventions might have very high utility. Some super naive examples would be a long-dated option structure that pays off if the price for wheat explodes, or a CDS that pays off if JP Morgan collapses. This would then allow organisations to intervene through targeted measures, for example, food donations.

This is similar to the concept of a “tail hedge” - an investment that pays off massively when other investments do poorly, that is when the marginal utility of owning an additional dollar is very high.

Usually, one would expect such investments to carry negatively – that is, to be costly over time, possibly even with negative unconditional expected returns. However, if an EA utility function is sufficiently different from that of a typical market participant, this need not be the case, even in dollar terms (?).
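The negative-carry point reduces to a one-line expected-value calculation. A toy version, with every number hypothetical: an instrument with negative dollar expectation can still have positive *utility* expectation if the marginal utility of a dollar is much higher in the crisis state it pays out in:

```python
# All numbers are hypothetical illustrations.
p_crisis = 0.02      # probability of the bad state
cost = 1.0           # premium paid up front
payoff = 30.0        # payout in the crisis state
mu_normal = 1.0      # marginal utility of a dollar in normal times
mu_crisis = 10.0     # marginal utility of a dollar in a crisis (e.g. famine relief)

# Dollar expectation: negative carry, as for a typical tail hedge.
ev_dollars = p_crisis * payoff - cost
print(round(ev_dollars, 2))  # -> -0.4

# Utility expectation: the crisis payout is valued at mu_crisis per dollar.
ev_utility = p_crisis * payoff * mu_crisis - cost * mu_normal
print(round(ev_utility, 2))  # -> 5.0
```

On these (made-up) numbers the hedge loses money unconditionally but is still worth holding in utility terms – the whole question is whether realistic probabilities, payoffs and marginal-utility ratios land on the right side of this inequality.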

Clearly, the arguments here would have to be made a lot more rigorous and quantitative to see whether this might be attractive at all. I’d be interested in any references etc.


Somehow, a mental model of him that appears reasonably compatible with a lot of his actions and this interview is roughly: "he always does or says whatever he thinks is locally and situationally optimal in terms of presenting himself, but never considers any larger or longer-term picture, or even consistency or consequences".

He almost appears surprised by how far this has gotten him?
Based on this, I would not believe anything he says here (or in general)?