

Oh, sorry, by profiteers I was referring to people like forum lurkers and hostile open source researchers, not you at all. 

My thinking was that this plan works fine with or without funding so long as someone (e.g. you) coordinates it, but it can't be open-source on EAforum or Lesswrong because the bad guys (not journalists, the other bad guys) would get too much information out of it.

My current thinking about this is that EAforum and Lesswrong have confused, mentally ill, or profiteering people trying to do open source research and find ways to maximize damage to EA. 

As a result, aggregating criticism in an open and decentralized way will boost the adversary's epistemics in parallel, and is thus better done in a closed, in-person-networked, and centralized way (I made the same mistake a couple of years ago).

Answer by trevor1, Jan 14, 2024

Raemon, a moderator on Lesswrong, recommends Scott Alexander's Superintelligence FAQ.

I'm not a scholar, but is it alright if I ask what the best source is for explaining wild animal welfare to laypeople? I'm looking for something similar to the Superintelligence FAQ, but selected for its success at explaining wild animal welfare instead of AGI. I know a couple of scholars but haven't introduced them to the topic yet, and I want to make sure I do it right. It's plausibly a valuable thing to standardize, too.

The only sources I'm aware of are the home page of wildanimalsuffering.org, the 80,000 Hours page on the topic, and Dylan Matthews's Vox article, and I have no idea which has the highest success rate at explaining the concept in a way laypeople can take seriously. For example, the 80,000 Hours page debunks the naturalistic fallacy quickly and efficiently, which suggests the authors were serious about writing it well, but otherwise it's fairly sparse (maybe they put a lot of effort into keeping it short so it's easier to read and recommend?) and it even tries to redirect people toward farmed animal welfare instead.

If cryopreservation becomes mainstream, then that's literally it. Nobody dies, and all of humanity logrolls itself into raising the next generations to be friendly and create aligned AGI.

Even the total sociopaths participate to some degree (e.g. verbally supporting it, and often refraining from obstruction even if they are very powerful). If they don't have preserved loved ones to protect, they still need a friendly long-term future for themselves to be unfrozen into. They'll spend many more years alive in the future than in the present anyway, because unfreezing a person is orders of magnitude harder than reversing aging or generating a new body for someone already unfrozen.

Many other people have probably thought of this already. What am I missing?

Oops! I'm off my groove today, sorry. I'm going to go read some of the conflict theory vs. mistake theory literature on my backlog in order to figure out what went wrong and how to prevent it (e.g. how human variation and inferential distance cause very strange mistakes through miscommunication).

What is "reference class skepticism"? This is the first time I've heard that phrase; I googled it and didn't find anything.

As Robin Hanson says, "Planning to simply defy human nature doesn't usually go very well."
