Below is a list of topics which, in my view, could affect the wellbeing of all people, but which are not part of any EA research known to me. Since I find these topics important but underexplored, I naturally tried to look into them as deeply as I could, so many of the ideas suggested below link to my own work.
- Use the Moon as a data storage site for information about humanity. This data could be used by the next civilization on Earth and could help it escape global catastrophes, or even help it resurrect humans.
- Explore the dangers of passive SETI. We could download a dangerous alien AI. See also a recent post by Matthew Barnett.
- Study UAP and their relation to our future prospects and global risks.
- Plastination as an alternative to cryonics. Some forms of chemical preservation are much cheaper than cryonics and do not require maintenance.
- Prove that death is bad (from the preference-utilitarian point of view), and thus that we need to fight aging, strive for immortality, and research ways to resurrect the dead (unpublished working draft).
- Research the topic of so-called “quantum immortality”. Will it cause eternal suffering to anyone, or could it be used to increase one's chances of immortality?
- Explore ways to resurrect the dead.
- New approaches to digital immortality and life-logging, which is the cheapest way to immortality, available to everyone. Explore active self-description as an alternative to life-logging.
- Explore how to “cure” past suffering. Past suffering is bad. If we had a time machine, it could be used to save past minds from suffering. But we could also save them by creating indexical uncertainty about their location, which would work similarly to a time machine.
- Global chemical contamination as an x-risk. Seems to be underexplored.
- Anthropic effects on the expected probability of runaway global warming: our world is more fragile than we think, and thus a climate catastrophe is more probable (unpublished draft).
- Plan B in AI safety. Let’s speak seriously about AI boxing and the best ways to do it.
- Dig deeper into acausal deals with, and messaging to, any future AI. The utility of killing humans is small for an advanced superintelligent AI, so adding even a small value to our existence can help.
- How will a future nuclear war differ from 20th-century nuclear war scenarios?
- Explore and create refuges for surviving a global catastrophe, on an island or in a submarine. Create a general overview of survival options: surviving in caves, surviving a moist greenhouse (unpublished draft).
- How to survive the end of the universe. We may have to make important choices before we start space colonization.
- Simulation: experimental and theoretical research. Explore simulation termination risks. Explore the types of evidence that we are in a simulation, and analyze the topic of so-called “glitches in the matrix” – are they evidence that we are in a simulation?
- Psychology of human values: do they actually exist as a stable set of preferences and what does psychology tell us about that?
- Doomsday argument: what if it is true after all? What can be done to escape its prediction?
- Explore the risks of wireheading as a possible cause of civilizational decline.
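On the Doomsday argument above: its core is a small piece of arithmetic, sketched here in Gott's "delta-t" version, under the assumption that our moment of observation is a uniformly random sample from humanity's total lifetime:

```latex
% Gott's delta-t argument: if the elapsed time t_{past} is a uniform
% random sample of the total duration, then with 95% confidence we are
% not within the first or the last 2.5% of that duration, which gives
\frac{t_{\text{past}}}{39} \;<\; t_{\text{future}} \;<\; 39\, t_{\text{past}}
```

Taking t_past ≈ 200,000 years for Homo sapiens, this yields roughly 5,100 to 7.8 million further years; "escaping" the prediction amounts to finding a reason why our observation is not a random sample.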
On UAP and glitches in the matrix: I sometimes joke that, if we ever build something like a time machine, we should go back in time and produce those phenomena as pranks on our ancestors, or to "ensure timeline integrity." I was even considering writing an April Fools' post on how creating a stable worldwide commitment to this "past pranks" policy (or, similarly, committing to go back in time to investigate those phenomena and "play pranks" only if no other explanation is found) would, by EDT, imply lower probabilities for the scarier competing explanations of unexplained phenomena, like aliens, supernatural beings, or glitches in the matrix. (Another possible intervention is to write a letter to our superintelligent descendants asking them, if possible, to go back in time to enforce that policy... I mean, you know how it goes.)
(crap I just noticed I'm plagiarizing Interstellar!)
So it turns out that, though I find this whole subject weird and amusing, and don't feel particularly willing to dedicate more than half an hour to it... the reasoning seems sound, and I can't spot any relevant flaws. If I ever find myself having one of those experiences, I would prefer to think, "I'm either hallucinating, or my grandkids are playing with the time machine again."
Actually, someday I am going to write a short post, "Time machine as existential risk".
Technically, any time travel is possible only if the timeline is branching, but that is fine in a quantum multiverse. However, some changes in the past will be invariants: they will not change the future in a way that causes a grandfather paradox. Such invariants will be loopholes and will have very high measure. UFOs could be such invariants, and this would explain their strangeness: only strange things do not change the future in a way that prevents their own existence.