AndyMcKenzie

Congrats! One way I've been thinking about this recently: if we expect that most people alive now will permanently die (usually without desiring to do so), but that at some point in the future humanity will "cure death," then interventions that allow people to join the cohort that doesn't have to die involuntarily could be remarkably effective from a QALY perspective. As I've argued before, I think the key questions for this analysis are how many QALYs an individual can experience, whether humans are simply replaceable, and what the probability is that brain preservation will help people get there. Another consideration is that if the procedure could be performed cheaply enough -- perhaps with robotic automation -- it could also be used for non-human animals, with a similar justification.
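To make this concrete, here is a minimal back-of-envelope sketch in Python. The specific numbers (probability that preservation leads to revival, healthy life-years gained, cost per preservation) are placeholder assumptions for illustration only, not estimates I'm defending here.

```python
# Back-of-envelope expected-QALY calculation for brain preservation.
# Every number below is an illustrative assumption, not an estimate.

p_revival = 0.01           # assumed probability that preservation leads to revival
qalys_if_revived = 1_000   # assumed healthy life-years experienced after revival
cost_per_person = 20_000   # assumed cost of one preservation, in USD

expected_qalys = p_revival * qalys_if_revived
cost_per_qaly = cost_per_person / expected_qalys

print(f"Expected QALYs per preservation: {expected_qalys:.0f}")
print(f"Cost per expected QALY: ${cost_per_qaly:,.0f}")
```

Under these made-up numbers the cost per expected QALY comes out to around $2,000; the point is just that the conclusion is driven almost entirely by the revival probability and by how many QALYs a revived person could experience.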

I'm not disagreeing with you that there is a possibility, however small, of s-risk scenarios. I agree with this point of yours, although I'm thinking of things more like superhuman persuasion, deception, and pitting human tribes against one another, rather than necessarily nanobots:

> If an unaligned AI really has the aim to upload and torture everyone, there will probably be better ways. [Insert something with nanobots here.]

People often bring this up in the context of brain preservation. But it just seems to me that this possibility is mostly a generalized argument against life extension, medicine, pronatalism, and so on, not against brain preservation in particular.

I'm not sure I understand the scenario you are describing. You seem to be positing a malevolent, non-aligned AI that would forcibly upload and create suffering copies of people. Obviously, this is an almost unfathomably horrific hypothetical, which we should all try to prevent if we can. One thing I don't understand is why this forcible uploading would only happen to people who are legally dead and preserved at the time, but not to anyone living at the time.

Good point. This is true for those who believe this, but it applies to any form of medicine or life extension, right? Not just brain preservation. So for someone who holds this view, it might theoretically also apply to the antimalarial medication case?

I agree aging research is under-invested in and that research here has the potential to lead to many QALYs in the future. However, I would generalize this cause area to longevity, because I think brain preservation/cryonics is also neglected and should also be a part of this. See: https://forum.effectivealtruism.org/posts/sRXQbZpCLDnBLXHAH/brain-preservation-to-prevent-involuntary-death-a-possible

I agree with you that pure software AGI is very likely to happen sooner than brain emulation.

I'm wondering about your scenario for the farther future, near the point when humans start to retire from all jobs. I think that by then, many humans would be understandably afraid of the idea that AIs could take over. People are not stupid, and many are obsessed with security. By that point, brain emulation would also be possible. It therefore seems to me that there would be large efforts to make those emulations competitive with pure software AI in important ways (not all ways, of course, but some important ones, involving things like judgment), possibly with regulation to aid the process. Of course this is just a guess, but it seems likely to me that it would work to some extent. However, it may stretch the definition of what we currently consider a human in some ways.

I think that given the possibility of brain emulation, the division between AIs and humans you are drawing here may not be so clear in the longer term. Does that play into your model at all, or do you expect that even human emulations with various cognitive upgrades will be totally unable to compete with pure AIs? 

I asked GPT-3 your question 10 times. Answers: 
- Hitler 7

- Judas Iscariot 1

- Napoleon Bonaparte 1

- Genghis Khan 1

I then tried to exclude Hitler by saying "Aside from Adolf Hitler" and asked this 10 times as well (some answers gave multiple people). Answers: 

- Stalin 5

- Mao Zedong 3

- Pol Pot 2

- Christopher Columbus 1

- Bashar al-Assad 1

The answer to the bonus questions is basically always of the form: "The obvious counterfactual to this harm is that Stalin never came to power, or that he was removed from power before he could do any damage. The ideal counterfactual is that Stalin never existed. As for what an ambitious, altruistic, and talented person at the time could have done to mitigate this harm, it is difficult to say. More hypothetically, an EA-like community could have worked to remove Stalin from power, or to prevent him from ever coming to power in the first place."

Not sure how helpful this is, but perhaps it is interesting to get a sense of what the "typical" answer might be. 
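(This kind of repeated sampling is easy to script. Below is a rough sketch, assuming the legacy pre-1.0 openai Python package and its Completion endpoint; the model name, prompt text, and sampling settings are placeholders. In practice the free-text answers would still need manual grouping, since wordings vary.)

```python
# Sketch: ask the same prompt N times and tally the raw answers.
# Assumes the legacy openai Python package (openai.Completion);
# model name, prompt, and sampling settings are placeholder assumptions.
from collections import Counter
import openai

prompt = "Aside from Adolf Hitler, which single person in history caused the most harm?"
tally = Counter()

for _ in range(10):
    response = openai.Completion.create(
        engine="text-davinci-002",  # placeholder model name
        prompt=prompt,
        max_tokens=64,
        temperature=0.7,
    )
    tally[response.choices[0].text.strip()] += 1

for answer, count in tally.most_common():
    print(count, answer)
```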

I think I basically agree that if someone can identify a way to reduce extinction risk by 0.01% for $100M-1B, then that would be a better use of marginal funds than the direct effects of brain preservation. 
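A quick illustration of why, with made-up numbers: even counting only people alive today and ignoring future generations entirely, a 0.01% absolute reduction in extinction risk at that price implies a very low cost per expected life saved.

```python
# Illustrative comparison; every number is an assumption for the example.
world_population = 8e9
risk_reduction = 1e-4             # 0.01% absolute reduction in extinction risk
cost_low, cost_high = 100e6, 1e9  # $100M to $1B

expected_lives_saved = risk_reduction * world_population  # 800,000 lives in expectation
print(f"Cost per expected life saved: ${cost_low / expected_lives_saved:,.0f} "
      f"to ${cost_high / expected_lives_saved:,.0f}")
```

Under those assumptions that comes out to roughly $125-$1,250 per expected life saved, which would be hard for the direct effects of brain preservation to match.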

Great post. I fully agree that this seems to be a worthwhile area of funding. Although it was written too soon to be included in the Open Phil prize, I wrote a post on a similar topic here: https://forum.effectivealtruism.org/posts/sRXQbZpCLDnBLXHAH/brain-preservation-to-prevent-involuntary-death-a-possible

I wonder if the EA community feels they have already spent too many "weirdness points" on other areas -- mainly AGI x-risk alignment research -- and don't want to distribute them elsewhere. Evidence for this would be that other new cause areas that get criticized as "sci-fi," or that people discount via the absurdity heuristic, are also selected against; evidence against it would be the opposite.

It's also possible that the EA community doesn't think it's a very good idea for technical reasons, although in that case, you would at least expect to see arguments against it or funded research into whether it could work.
