Below is a list of things which, in my view, could affect the wellbeing of all people, but which are not part of any EA research known to me. As I found these topics important but underexplored, I naturally tried my best to look into them as deeply as I could, so many of the ideas suggested below have links to my own works.

  1. Use the Moon as data storage about humanity. This data could be used by the next civilization on Earth and would help it to escape global catastrophes, or could even help it to resurrect humans.
  2. Explore the dangers of passive SETI. We could download dangerous alien AI. See also a recent post by Matthew Barnett.
  3. Study of UAP and their relation to our future prospects and global risks.
  4. Plastination as an alternative to cryonics. Some forms of chemical preservation are much cheaper than cryonics and do not require maintenance.
  5. Prove that death is bad (from the preferential-utilitarianism point of view), and thus that we need to fight aging, strive for immortality, and research ways to resurrect the dead (unpublished working draft).
  6. Research the topic of so-called “quantum immortality”. Will it cause eternal suffering to anyone, or could it be used to increase one's chances of immortality?
  7. Explore ways to resurrect the dead.
  8. New approaches to digital immortality and life-logging, which is the cheapest path to immortality and is available to everyone. Explore active self-description as an alternative to life-logging.
  9. Explore how to “cure” past suffering. Past suffering is bad. If we had a time machine, it could be used to save past minds from suffering. But we can also save them by creating indexical uncertainty about their location, which would work similarly to a time machine.
  10. Global chemical contamination as an x-risk. Seems to be underexplored.
  11. Anthropic effects on the expected probability of runaway global warming: our world is more fragile than we think, and thus climate catastrophe is more probable. Unpublished draft.
  12. Plan B in AI safety. Let’s speak seriously about AI boxing and the best ways to do it.
  13. Dig deeper into acausal deals with, and messaging to, any future AI. The utility of killing humans is small for an advanced superintelligent AI, so adding even a small value to our existence could help.
  14. How will a future nuclear war differ from 20th-century nuclear war scenarios?
  15. Explore and create refuges for surviving a global catastrophe, e.g. on an island or in a submarine. Create a general overview of survival options: surviving in caves, surviving a moist greenhouse (unpublished draft).
  16. How to survive the end of the universe. We may have to make important choices before we start space colonization.
  17. Simulation: experimental and theoretical research. Explore simulation termination risks. Explore types of evidence that we are in a simulation, and analyze the topic of so-called “glitches in the matrix” – are they evidence that we are in a simulation?
  18. Psychology of human values: do they actually exist as a stable set of preferences and what does psychology tell us about that?
  19. Doomsday argument: what if it is true after all? What can be done to escape its prediction?
  20. Explore the risks of wireheading as a possible cause of the civilizational decline.
Comments

Could you elaborate on why we have to make choices before space colonisation if we want to survive beyond the end of the last stars? Up to now, my view has been that we can "start solving heat death" a billion years in the future, while we have to solve AI alignment in the next 50 - 1000 years.

Another thought of mine is that it is probably impossible to resurrect the dead by computing what the state of each of a deceased person's neurons was at the time of their death. I think you would need to measure the state of each particle in the present with very high precision, and/or the computational requirements for a backward simulation are much too high. Unfortunately, I cannot provide a detailed mathematical argument. This would be an interesting research project, even if the only outcome is that a small group of people should change their cause area.

If we start space colonisation, we may not be able to change the goal-systems of the spaceships that we send to the stars, as they will be moving away at near-light speed. So we need to specify what we will do with the universe before starting space colonisation: either we spend all resources to build as many simulations with happy minds as possible, or we reorganise matter in ways that will help us survive the end of the universe, e.g. building Tipler's Omega Point or building a wormhole into another universe.

---

Very high precision of brain details is not needed for resurrection, as we forget our mind state every second. Only a core of long-term memory is sufficient to preserve what I call "informational identity", which is the necessary condition for a person to regard himself as the same person, say, the next day. But the whole problem of identity is not solved yet, and it would be a strong EA cause to solve it: we want to help people in ways which will not destroy their personal identity, if that identity really matters.

Thank you for your answers. With better brain preservation and a more detailed understanding of the mind, it may be possible to resurrect recently deceased persons. I am more skeptical about the possibility of resurrecting a peasant from the Middle Ages by simulating the universe backwards, but of course these are different issues.

If we simulate all possible universes, we can do it. It is an enormous computational task, but it can be done via acausal cooperation between different branches of the multiverse, where each branch simulates only one history.

I see two problems with your proposal:

  1. It is not clear if a simulation of you in a patch of spacetime that is not causally connected to our part of the universe is the same as you. If you care only about the total amount of happy experiences, this would not matter, but if you care about personal identity, it becomes a non-trivial problem. 
  2. You probably assume that the multiverse is infinite. If this is the case, you can simply assume that for every copy of you that lives for N years another copy of you that lives for N+1 years appears somewhere by chance. In that case there would be no need to perform any action.

I am not against your ideas, but I am afraid that there are many conceptual and physical problems that have to be solved first. What is even worse is that there is no universally accepted method for resolving these issues. So a lot of further research is necessary.

1. The identity problem is known to be difficult, but here I assume that continuity of consciousness is not needed for it; informational identity alone is enough.

2. The difference from quantum (or big-world) immortality is that we can select which minds to create and exclude the N+1 moments which are damaged or suffering.

Let us assume that a typical large but finite volume contains some number of happy simulations of you and some number of suffering copies of you, maybe Boltzmann brains or simulations made by a malevolent agent. If the universe is infinite, you have infinitely many happy and infinitely many suffering copies of you, and it is hard to interpret this result.

I think that there is a way to calculate relative probabilities even in the infinite case, and they will converge to a definite ratio. For example, there is an article, "The watchers of multiverse", which suggests a plausible way to do so.
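To make the "relative probabilities" idea concrete, here is a minimal sketch of a volume-based regularization in the spirit of such proposals (the notation is mine, not taken from the cited paper): count the copies of each kind inside a large but finite volume and take the limit as the volume grows.

```latex
% Sketch of a volume-regularized relative probability.
% N_happy(V) and N_suffer(V) are assumed counts of happy and suffering
% copies inside a finite volume V; the symbols are illustrative, not
% taken from "The watchers of multiverse".
\[
  P(\text{happy}) \;=\; \lim_{V \to \infty}
    \frac{N_{\text{happy}}(V)}{N_{\text{happy}}(V) + N_{\text{suffer}}(V)}
\]
% If both counts grow in proportion to V with well-defined densities
% n_happy and n_suffer, the limit equals n_happy / (n_happy + n_suffer):
% a finite ratio even though each count diverges.
```

Whether such a limit is well defined, and how to choose the regularization, is exactly the kind of measure problem that papers like the one cited try to address.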
 

Thank you for the link to the paper. I find Alexander Vilenkin's theoretical work very interesting.

On UAP and glitches in the matrix: I sometimes joke that, if we ever build something like a time machine, we should go back in time and produce those phenomena as pranks on our ancestors, or to "ensure timeline integrity." I was even considering writing an April Fool's post on how creating a stable worldwide commitment around this "past pranks" policy (or, similarly, committing to go back in time to investigate those phenomena and "play pranks" only if no other explanation is found) would, by EDT, imply lower probabilities of scary competing explanations for unexplained phenomena - like aliens, supernatural beings or glitches in the matrix. (another possible intervention is to write a letter to superintelligent descendants asking them to, if possible, go back in time to enforce that policy... I mean, you know how it goes)

(crap I just noticed I'm plagiarizing Interstellar!)

So it turns out that, though I find this whole subject weird and amusing, and don't feel particularly willing to dedicate more than half an hour to it... the reasoning seems to be sound, and I can't spot any relevant flaws. If I ever find myself having one of those experiences, I do prefer to think "I'm either hallucinating, or my grandkids are playing with the time machine again"

Actually, I am going to write a short post someday on "the time machine as an existential risk".

Technically, time travel is possible only if the timeline is branching, but that is fine in a quantum multiverse. However, some changes in the past will be invariants: they will not change the future in a way that causes a grandfather paradox. Such invariants will be loops and will have very high measure. UFOs could be such invariants, and this explains their strangeness: only strange things do not change the future in a way that prevents their own existence.

Thank you for this list. 

#2:  I left a comment on Matthew’s post that I feel is relevant: https://forum.effectivealtruism.org/posts/CRvFvCgujumygKeDB/my-current-thoughts-on-the-risks-from-seti?commentId=KRqhzrR3o3bSmhM7c

#16: I gave a talk for Mathematical Consciousness Science in 2020 that covers some relevant items: I’d especially point to 7,8,9,10 in my list here: https://opentheory.net/2022/04/it-from-bit-revisited/

#18+#20: I feel these are ultimately questions for neuroscience, not psychology. We may need a new sort of neuroscience to address them. (What would that look like?)

SensorLog is an app that lets you continuously record iPhone sensor data as a stream to a file or web server. You might use it as a convenient form of life-logging. Presumably, resurrection is easier if the intelligence doing it has lots of information about your location, movements, environment, etc.
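For anyone who wants the "stream to a web server" option, here is a minimal sketch of a receiving endpoint in Python (the /log idea, port, and line-per-record payload are assumptions for illustration, not SensorLog's actual protocol; adapt it to whatever format your logger actually sends):

```python
# Minimal sketch of a life-logging endpoint: accepts streamed sensor
# records over HTTP POST and appends them to a dated log file.
# NOTE: the payload format is an assumption, not SensorLog's real protocol.
import datetime
from http.server import BaseHTTPRequestHandler, HTTPServer


class LogHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the posted body (e.g. one or more CSV/JSON lines of sensor data).
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)

        # Append to a per-day file so the archive stays easy to browse.
        fname = f"lifelog-{datetime.date.today().isoformat()}.txt"
        with open(fname, "ab") as f:
            f.write(body)
            f.write(b"\n")

        self.send_response(200)
        self.end_headers()


if __name__ == "__main__":
    # Listen on all interfaces, port 8080 (both arbitrary choices).
    HTTPServer(("0.0.0.0", 8080), LogHandler).serve_forever()
```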

Thanks, I do a lot of lifelogging, but didn't know about this app.

Just curious: Could you make the case for resurrecting people instead of just creating new ones? (Agree that having more persons with positive welfare is desirable but don't see why resurrection would be the most cost-effective.)

Humans have a strong preference not to die, and many of them would like to be resurrected if it were possible and done with high quality. I am a supporter of preferential utilitarianism, so I care not only about the number of happy observer-moments, but also about what people really want.

Anyway, resurrecting is a limited task: only 100 billion people have ever lived, and resurrecting them all will not preclude us from creating trillions of trillions of new happy people.

Also, a mortal being can't be really happy, so new people need to be immortal, or they will suffer from existential dread.

Interesting, thanks! Though I don't see why you'd only resurrect humans, since animals seem to have the preference to survive as well. Anyway, I think preferences are often misleading and are not a good proxy for what would really be fulfilling. To me it also seems odd to say that a preference remains even when the person no longer exists. Do you believe in souls, or how do you make that work? (Sorry for the naivety, happy about any recs on the topic)

I support animal resurrection too, but only after all humans have been resurrected, again starting from the most complex and closest-to-human animals, like pets and primates. Also, it seems that some animals will be resurrected before humans, like mammoths, nematodes and some pets.

When I speak about human preferences, I mean current preferences: people do not want to die now, and many would prefer to be resuscitated if it could be done without damage.

Not OP, but it seems reasonable that if you perform an action to help someone, and that person then agrees in retrospect that they preferred this to happen, that can be seen as "fulfilling a preference". 

For a mundane example, imagine I'm ambivalent about mini-golfing. But you know me, and you suspect I'll love it, so you take me mini-golfing. Afterwards, I enthusiastically agree that you were right, and I loved mini-golfing. I see this as pretty similar to me saying beforehand "I love mini-golfing, I wish someone would go with me", and you fulfilling my preference by taking me. In both cases, the end result is the same, even though I didn't actually have a preference for mini-golfing before. 

Similarly, even though it is impossible for a dead person to have a preference, I think that if you bring someone back to life and they then agree that this was a fantastic idea and they're thrilled to be alive, that would be morally equivalent to fulfilling an active preference to live.

Thanks for the explanation!

I agree that it is great to do something for people that they will be thankful for later. But newly created people seem just as good for this, and if you care a lot about preferences you could create them in such a way that they will be very thankful and the mere act of creation is fulfilling for them. I still don't see the value of resurrection vs. new people. I think my main problem with preference utilitarianism is that you can't say whether it's good or bad to create preferences, since both options have unintuitive consequences.

For a mundane example, imagine I'm ambivalent about mini-golfing. But you know me, and you suspect I'll love it, so you take me mini-golfing. Afterwards, I enthusiastically agree that you were right, and I loved mini-golfing.

It seems you can accommodate this just as well, if not better, within a hedonistic view—you didn't prefer to go mini-golfing, but mini-golfing made you happier once you tried it, so that's why you endorse people encouraging you to try new things. (Although I'm inclined to say, it really depends on what you would've otherwise done with your time instead of mini-golfing, and if someone is fine not wanting something, it's reasonable to err on the side of not making them want it.)
