I'm a quantitative biologist with a background in evolutionary theory, microbiome data science, and metagenomics methods development. I co-lead the [Nucleic Acid Observatory project](https://naobservatory.org), which seeks to develop a metagenomics-based early warning system for future pandemics.
In his recent interview on the 80,000 Hours Podcast, Toby Ord discussed how nonstandard analysis and its notion of hyperreals may help resolve some apparent issues arising from infinite ethics (link to transcript). For those interested in learning more about nonstandard analysis, there are various books and online resources. Many involve fairly high-level math, since they aim to put what was originally an intuitive but imprecise idea onto rigorous footing. Instead, you might want to check out H. Jerome Keisler's *Elementary Calculus: An Infinitesimal Approach*, which is freely available online. It is an introductory calculus textbook for college students that uses hyperreals, rather than limits and epsilon-delta proofs, to teach the essential ideas of calculus such as derivatives and integrals. I haven't actually read it, but I believe it is the best-known book of this sort. Here's another similar-seeming book by Dan Sloughter.
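To give a flavor of the infinitesimal approach (my own illustrative example, not taken from Keisler's book): the derivative of f(x) = x² can be computed with a nonzero infinitesimal ε and the standard-part function st(·), with no limits involved:

```latex
\frac{f(x+\varepsilon) - f(x)}{\varepsilon}
  = \frac{(x+\varepsilon)^2 - x^2}{\varepsilon}
  = \frac{2x\varepsilon + \varepsilon^2}{\varepsilon}
  = 2x + \varepsilon,
\qquad
f'(x) = \operatorname{st}(2x + \varepsilon) = 2x.
```

Taking the standard part rounds the hyperreal 2x + ε to the nearest real number, which plays the role that taking a limit does in the usual development.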
It seems that Sandberg is discussing something like this typology in https://www.youtube.com/watch?v=Wn2vgQGNI_c
Edit: Sandberg starts talking about three categories of hazards at ~12:00
Hi Ajeya, thanks for doing this and for your recent 80K interview! I'm trying to understand what assumptions are needed for the argument you raise in the podcast discussion of fairness agreements: that a longtermist worldview should have been willing to trade up all its influence for an ever-larger potential universe. Below are two points; I was wondering if you could comment on whether and how they align with your argument.
My intuition says the argument requires a prior probability distribution over universe size with an infinite expectation, rather than merely a prior that puts non-zero probability on all possible universe sizes but has a finite expectation (like a power-law distribution with exponent k > 2).
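To make the finite-vs-infinite-expectation distinction concrete, here is a minimal sketch (my own, assuming a density p(x) ∝ x^(−k) on [1, ∞) for concreteness): the mean truncated at a cutoff converges as the cutoff grows when k > 2, but diverges when k ≤ 2.

```python
import math

def truncated_mean(k, x_max):
    """Mean of a power-law density p(x) proportional to x^(-k) on [1, x_max],
    computed from closed-form integrals (no sampling needed)."""
    # Normalizing constant: integral of x^(-k) from 1 to x_max
    z = math.log(x_max) if k == 1 else (x_max ** (1 - k) - 1) / (1 - k)
    # First (unnormalized) moment: integral of x * x^(-k) from 1 to x_max
    m = math.log(x_max) if k == 2 else (x_max ** (2 - k) - 1) / (2 - k)
    return m / z

# k > 2: the mean settles down as the cutoff grows, toward (k-1)/(k-2)
# k <= 2: the mean grows without bound, so the full expectation is infinite
for k in (2.5, 1.5):
    print(k, [round(truncated_mean(k, x), 2) for x in (1e3, 1e6, 1e9)])
```

For k = 2.5 the truncated mean approaches (k−1)/(k−2) = 3 as the cutoff grows, while for k = 1.5 it keeps increasing roughly like the square root of the cutoff.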
But then I figured that even in a universe that was literally infinite but had a non-zero density of value-maximizing civilizations, the influence over that infinite value held by any one civilization or organization might still be finite. So I'm wondering whether what's needed to be willing to trade up for influence over ever-larger universes is actually something like an infinite expectation E[V/n], where V = total potential value in the universe and n = number of value-maximizing civilizations.
I have very little skin in the game here, as I don't personally have a strong desire for an acronym... but my 2 cents are that "Reasoning carefully" can be shortened to "Reasoning" (or "Reason") for this purpose with no loss; the "careful" part is implied. And I think I identify more with the idea of using careful reasoning than with rationality. "Reason(ing)" also matches an existing short definition of EA as "Using reason and evidence to do the most good" (currently the page title for effectivealtruism.org).
Excellent post! I did not read it carefully enough to evaluate many of the details, but these are all things we are concerned with at the Nucleic Acid Observatory, and I think your three "Reasons" are a great breakdown of the core issues.