This is an excerpt from a short story contained in the paper "Vitrifying the Connectomic Self: A case for developing Aldehyde Stabilized Cryopreservation into a medical procedure" by Kenneth Hayworth of the Brain Preservation Foundation.

 

The year is 2030 and you go in for a neurological exam after your spouse notices that you are displaying mild memory loss. MRI and blood tests verify that you are experiencing the early stages of Alzheimer's dementia. This is devastating news, especially since you know what is in store. Years before, you had been the primary caregiver for your mother during the last five years of her life and watched as the same disease robbed her of her memories to the point where she was unable to recall even her closest loved ones, robbed her of her cognitive abilities to the point where the once proud teacher could no longer tie her own shoes, and altered her personality so remarkably that it was unrecognizable. Every year you would take her in for an MRI scan and watch as her doctors showed you the progression of the disease. Looping through the yearly scans, you could literally see the disease shrinking her brain. The doctors would verify this quantitatively: "Her loss of brain volume this year was 3.1%." At the start of this grueling five-year experience you had been comforted by the thought that your mother's immaterial soul would rise to heaven when the time eventually came. But in the end there was no such comfort, since you had literally witnessed her soul eaten away a piece at a time in perfect synchrony with the loss of her brain tissue. Now you face that same fate and there is still no cure in sight.

Even a few years ago you would have had only two options: an early exit via euthanasia, or letting the disease take its course. But your doctors now offer you a third option: euthanasia by vascular perfusion with glutaraldehyde followed by long-term cryostorage---a procedure known as Aldehyde Stabilized Cryopreservation (ASC). Glutaraldehyde is a deadly chemical fixative that is used by neuroscientists to preserve the brains of animals prior to processing for electron and immunofluorescence microscopy. Perfusion of glutaraldehyde through the brain's vasculature almost instantly stops metabolic processes by covalently crosslinking cellular proteins into a sturdy mesh. Since life is a set of ongoing biochemical reactions, this crosslinking results in immediate death, but it does so in a way that almost perfectly preserves the nano-scale structure of the brain. Fixation by glutaraldehyde is known to preserve the patterns of synaptic connections among neurons, and to preserve the primary structure and relative locations of most proteins. As a result of this crosslinking, a glutaraldehyde-fixed brain is immune to biological decay processes and will remain 'stable' for months, but eventually diffusion would result in the slow dislocation of biomolecules (e.g. membrane lipids) that were not crosslinked. For extremely long-term storage the glutaraldehyde-fixed brain is further perfused with a very high concentration of a cryoprotectant agent and brought to a temperature low enough to provide essentially indefinite storage.

You are not surprised that your doctor offers you this ASC option. The controversial new procedure has been all over the news for the last few years and, after a heated legal battle, ASC had recently been declared an acceptable method of euthanasia in the state you live in. On the face of it, it is an outlandish idea: fix your brain with a deadly chemical and store it in a static state for decades in the hope that future technology might be able to scan in your brain's information and revive you as a computer-emulated brain controlling a robotic body. Since childhood you had been fascinated by cryonics, intrigued by the idea of waking up in the far future to experience its wonders firsthand, and you vividly remember how disappointed you were when you learned how difficult real cryonics was---how much damage it caused to the brain's ultrastructure. But this new ASC technique was designed to overcome these limitations by chemically fixing the brain prior to the cryonics procedure, allowing the perfusion of cryoprotectants to be performed at room temperature over an extended period of time, thereby ensuring complete and uniform cryoprotectant concentration in every cell.

And the idea that you might wake up in the future as an emulated brain controlling a robotic body? When you initially heard of this idea, while watching the debates over ASC's legal adoption, it seemed patently absurd. "If such an emulated brain was even possible wouldn't it be 'just a copy' of me?", "I would still be dead wouldn't I?"  But the idea caught fire among the early-adopter 'Silicon Valley' crowd---the crowd you happen to work with. At work you are immersed in the world of artificial deep neural networks, networks that learn to drive cars, translate languages, recognize faces and objects, and that learn to play Chess and Go at superhuman levels. When your job is to build applications based on artificial brains it becomes easier to imagine yourself upgrading to an artificial substrate.

You decide to discuss your options with your coworkers. Unsurprisingly, for them the idea of waking up as a fully computer-emulated brain controlling a robotic body is literally the most attractive part of the ASC idea, and they proclaim, in all seriousness, that if the technology for mind uploading were available they would immediately sign up for the procedure. You ask them how they wrestle with the philosophical implications. Again, unsurprisingly, they embrace the idea that self-copies would be possible. They even discuss how being an emulated brain will allow one to 'fork' one's mind into two copies in the morning, live separate lives with separate conscious points of view during the day, and later in the evening 'merge the deltas' back into a single conscious self. After hours of discussions you admit that their enthusiasm has infected you as well. You decide that you will opt for the procedure, and, in consultation with your doctor, you set a tentative date for your ASC euthanasia. You set it for two years from now, before the most devastating decline begins.

 

Comments

I object to the idea that early-stage Alzheimer's is incurable. See the book The End of Alzheimer's.

This story did not make me a more effective or altruistic person, as far as I can tell.

I posted the story to let folks know of a possible altruistic target: letting people live as long as they want by vitrifying their nervous systems for eventual resuscitation.

There are many, many possible altruistic targets. I think to be suitable for the EA forum, a presentation of an altruistic goal should include some analysis of how it compares with existing goals, or what heuristics lead you to believe it's worthy of particular attention.

I second this. Research in the area of cryonics could be an effective intervention, but proposing it in this way achieves nothing, since it doesn't do the actual work of assessing its impact per dollar. It doesn't even try.

I estimate it'll cost at least $1,000/yr to preserve a brain. That's about the cost of maintaining a family at global poverty levels.
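A back-of-the-envelope sketch of that comparison, using the $1,000/yr figure above and an assumed extreme-poverty line of roughly $2 per person per day (the exact line varies by year and methodology):

```python
# Rough comparison: annual brain-preservation cost vs. poverty-line consumption.
# Both inputs are assumptions: $1,000/yr is the estimate quoted in this thread,
# and $2/day is a round stand-in for the international extreme-poverty line.
PRESERVATION_COST_PER_YEAR = 1_000  # USD per brain per year
POVERTY_LINE_PER_DAY = 2.0          # USD per person per day

cost_per_person_year = POVERTY_LINE_PER_DAY * 365
people_supported = PRESERVATION_COST_PER_YEAR / cost_per_person_year

print(f"Poverty-line consumption: ${cost_per_person_year:.0f}/person/yr")
print(f"$1,000/yr preserves one brain, or supports ~{people_supported:.1f} "
      f"people at that line")
```

On these assumed numbers, a year of brain storage is in the same ballpark as a year of poverty-line consumption for one or two people, which is the comparison being made here.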

I should have posted such calculations first before posting the excerpts. Thanks for your comments.

Interesting! How did you arrive at the $1,000/yr figure?

That's about the total annual cost of preserving a brain and spinal cord under an Alcor cryonics contract. I assume that the price paid while the patient is alive is roughly the same as the cost of preservation after death.

To become part of EA, cryonics must become cheap, and to become cheap it should, in my opinion, be pure chemical fixation without cooling: something like aldehyde fixation without cryopreservation, which could cost only a few dollars per brain.

Pure chemical fixation without cooling would be ideal. The extra cryopreservation step is necessary since glutaraldehyde only fixes tissue for months rather than centuries.

I think an actually good step in the EA direction would be to find a relatively cheap combination of chemicals that provides fixation for the longer term, or perhaps to preserve brain slices (as Lenin's brain was preserved).

I am interested in writing something about cryonics as a form of EA, but the main problem here is price. The starting price of a funeral in the UK is 4,000 pounds, and funerals are not much cheaper in poor countries. Cryonics would need to be cheaper than that to be successful and affordable.
