Written in August 2024. I’ve only made some minor style edits before posting here. I thought this was worth sharing despite being quite shallow.
Maximal Cluelessness is a 2021 academic paper written by Andreas Mogensen (affiliated with the Global Priorities Institute).
This post won’t summarize his paper, since that has already been nicely done by Nicholas Kruus and Andreas Mogensen (2024), which I invite you to check out.
I will, however, include my usual short critical section here (which is normally just something at the end of my summaries, but this time it is the main thing).
Important limitations of this work (in my opinion)
- Like many others who’ve made the case for cluelessness, the author actually downplays “the depth of our uncertainty” by omitting essential elements we should arguably be particularly severely clueless about, such as alien counterfactual considerations[1] and unknown crucial considerations[2]. This makes it easier for skeptics to argue that our credences shouldn’t be as indeterminate/imprecise as he suggests (see some comments under Kruus and Mogensen’s summary of the paper and some others under this linkpost).
- He introduces the Maximality rule without directly arguing why it may be better than “just trusting one’s (non-formalized) intuitions vis-a-vis what is good or most effective in the face of cluelessness”, which is the alternative people seem to default to (whether they are conscious of it or not).
- He doesn’t justify why he himself is “inclined to believe [...] orthodox effective altruist conclusions about cause prioritization are all true”, despite his paper compellingly suggesting that such beliefs are arbitrary, biased, and unwarranted, since they downplay “the depth of our uncertainty” and how clueless we actually should be.
[1] See Guttman (2022); Tomasik (2015a, section “What if human colonization is more humane than ET colonization?”); MichaelA (2020); Buhler (2023, section “The values of aliens may not be worse than that of our successors, which reduces the importance of (certain ways of) reducing X-risks”); Brauner and Grosse-Holz (2018, section “Whether (post-)humans colonizing space is good or bad, space colonization by other agents seems worse”); DiGiovanni (2021, section “Cosmic rescues”).
[2]