Kuutti Lappalainen

Thanks for this post, Aaron! I especially value the part about asymptotic structure instead of sharp thresholds.

I think it’s useful for those interested in the topic to highlight how your post connects to existing EA discussion and to the academic literature on ethics and decision theory. Here are some of the connections I’m aware of:

  • As @Robi Rahman🔸 points out, the post argues for an established view: lexical (negative) utilitarianism. The Center on Reducing Suffering’s critique of Toby Ord’s blog post “Why I’m not a negative utilitarian” is a relevant discussion of negative utilitarianism that also touches on lexicality, and it can serve as a “common misconceptions to avoid” guide for people new to the idea.
  • Like you, Academian takes a critical look at the vNM axioms, arguing in a LessWrong post that continuity shouldn’t be required. They highlight the original lexicality paper, Melvin Hausner’s 1954 “Multidimensional utilities”, which weakens the continuity axiom of vNM and thereby produces lexicality (see the sketch after this list).
  • Teo Ajantaival devotes an entire chapter of his book/sequence “Minimalist Axiologies” to the question “Doesn’t this endorse destroying the world?”.
  • On a more general note, vNM hasn’t been the favored framework in normative decision theory for a while. The frameworks of Savage and then Jeffrey-Bolker have provided increasingly realistic and reasonable setups while retaining utility representations and most of the structural properties of the axioms. I think Richard Bradley’s 2017 book “Decision Theory with a Human Face” provides the canonical background for current normative decision theory.
  • In contrast to what you argue in footnote 21, completeness and transitivity have both been challenged (successfully, I think). In particular, Suzumura consistency and representor models (more on incompleteness in beliefs in Anthony DiGiovanni’s comment) are attractive alternatives: they are more appropriate for real agents like us while remaining invulnerable to value pumps (a rough statement of Suzumura consistency is sketched after this list).
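To make the continuity point concrete, here is a standard textbook-style illustration (my own gloss, not drawn from Aaron’s post or from Hausner’s paper): the vNM continuity axiom says that whenever $A \succ B \succ C$, there exist $p, q \in (0,1)$ such that

$$p\,A + (1-p)\,C \;\succ\; B \;\succ\; q\,A + (1-q)\,C.$$

A lexical suffering-focused view can violate this. Let $C$ contain extreme suffering for certain, $B$ be a neutral outcome, and $A$ a mildly good outcome. If avoiding extreme suffering has lexical priority, then every mixture $p\,A + (1-p)\,C$ with $p < 1$ carries positive probability of extreme suffering and is therefore ranked below $B$, so no mixture of $A$ and $C$ is strictly preferred to $B$ and continuity fails. Hausner’s result is, roughly and as I understand it, that once continuity is dropped the remaining axioms still yield a utility representation, only now as a vector of utilities compared lexicographically rather than a single real number.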
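And for readers who haven’t met Suzumura consistency: roughly, and paraphrasing Bossert and Suzumura’s formulation from memory, a weak preference relation $R$ is Suzumura consistent when no chain of weak preferences closes into a cycle that contains a strict preference:

$$x_1 \, R \, x_2,\; x_2 \, R \, x_3,\; \ldots,\; x_{n-1} \, R \, x_n \;\;\Rightarrow\;\; \neg\,(x_n \, P \, x_1),$$

where $P$ is the strict part of $R$. This is weaker than transitivity and compatible with incompleteness, yet it is exactly the condition that blocks the strict-preference cycles a value pump exploits.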

 

Also, I think that in a precise Bayesian framework, the strongest argument against lexicality is that it is irrelevant in practice: the expected values of two options will almost never be exactly equal, so the lexically inferior value never gets to matter. This changes if we incorporate imprecision, because then it is quite possible that the primary utility doesn’t yield a preference between two options and instead delivers comparative indeterminacy, which one could treat like indifference for the purpose of lexicality. That is to say, I think lexicality is more action-guiding and conceptually attractive in an (arguably better) imprecise framework. A small sketch of what I mean follows below.
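To illustrate, here is a minimal sketch (all numbers, state names, and the fallback rule are my own illustrative assumptions, not anything from the post): with a single precise prior, the expected primary utilities of two options will almost never coincide, so the secondary utility never gets consulted; with a credal set of priors, the primary comparison can come out differently under different admissible priors, and a natural lexical rule then lets the secondary utility settle the choice.

```python
# Minimal sketch of lexical choice under imprecise credences.
# All numbers and the tie-breaking rule are illustrative assumptions.

# Credal set: two admissible priors over the states s1 and s2.
priors = [
    {"s1": 0.3, "s2": 0.7},
    {"s1": 0.7, "s2": 0.3},
]

# For each option and state: (primary utility, secondary utility).
# The primary component could be read as suffering-focused (dis)value.
options = {
    "A": {"s1": (-10.0, 2.0), "s2": (0.0, 5.0)},
    "B": {"s1": (0.0, 1.0), "s2": (-10.0, 1.0)},
}

def expected(option, prior, component):
    """Expected value of one utility component under one prior."""
    return sum(p * options[option][state][component] for state, p in prior.items())

def lexical_choice(a, b):
    """Prefer on the primary component if the comparison is robust across the
    credal set; otherwise fall back to the secondary component."""
    primary_diffs = [expected(a, pr, 0) - expected(b, pr, 0) for pr in priors]
    if all(d > 0 for d in primary_diffs):
        return a  # a is robustly better on the primary value
    if all(d < 0 for d in primary_diffs):
        return b  # b is robustly better on the primary value
    # Primary comparison is indeterminate: the sign of the difference depends
    # on which admissible prior we use. Treat this like indifference and let
    # the secondary component decide (again requiring robustness, for simplicity).
    secondary_diffs = [expected(a, pr, 1) - expected(b, pr, 1) for pr in priors]
    if all(d >= 0 for d in secondary_diffs):
        return a
    if all(d <= 0 for d in secondary_diffs):
        return b
    return None  # indeterminate on both components

# Under one admissible prior the primary comparison favours A, under the other
# it favours B, so the primary value alone does not settle the choice; the
# secondary value does.
print(lexical_choice("A", "B"))  # -> "A"
```

If you delete one of the two priors so that the credal set is a singleton (the precise Bayesian case), the primary expected values differ and one of the first two branches fires, so the secondary utilities are never consulted; that is the sense in which lexicality is close to action-irrelevant in the precise setting.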