nathanhb

31 karma · Joined

Posts: 2 · Comments: 47

Sorted by New

Actually, Jordan, futures better than "pretty ok" are explicitly something that folks at Forethought have been thinking about, just not in the Viatopia piece. Check this out: https://www.forethought.org/research/better-futures

Thanks, I was wondering why my markdown wasn't working.

See also this companion piece: https://forum.effectivealtruism.org/posts/5Nv3xK9myFzN9aqfE/the-ceiling-is-nowhere-near

Yes, you can reject NU while still thinking shrimp welfare matters at the margin. The question is how much it matters relative to alternatives. My argument is that standard EA reasoning on this often smuggles in assumptions about moral weight (neuron count, nociceptive capacity) that don't track what we actually care about.

If you accept the depth-weighting framework in sections 3-5, then even a pluralist who includes suffering-reduction as one value among many should weight interventions differently than the neuron-counters suggest. The shrimp intervention might still have positive value - I'm not arguing it's worthless - but the cost-effectiveness comparison to, say, x-risk work shifts significantly.

So the steel-manned version of my claim: "Given limited resources, the depth-weighting framework implies shrimp welfare is probably not among the highest-impact interventions, even granting uncertainty about shrimp experience." That's weaker than "shrimp don't matter" and doesn't depend on NU being false.
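To make that resource-comparison claim concrete, here's a minimal back-of-envelope sketch in Python. Every number is a hypothetical placeholder (the credence, weights, and scale are invented for illustration, not taken from the post); the only point is that swapping the moral-weight function shifts the expected-value comparison by orders of magnitude.

```python
# Hypothetical back-of-envelope comparison. All numbers are placeholders
# chosen for illustration, not estimates anyone has defended.
p_sentience = 0.2        # credence that shrimp have morally relevant experience
individuals = 1e9        # shrimp reached per unit of funding

weight_neuron_proxy = 1e-4  # moral weight per individual via a neuron-count proxy
weight_depth = 1e-6         # moral weight per individual under depth-weighting

ev_neuron = p_sentience * weight_neuron_proxy * individuals
ev_depth = p_sentience * weight_depth * individuals

print(ev_neuron, ev_depth)  # 20000.0 200.0 -- a ~100x swing in the comparison
```

Same uncertainty about shrimp experience in both lines; only the weighting function changes, and the cost-effectiveness ranking against alternatives can flip.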

Yeah, rather than feeling like a reason not to use Squiggle, Roman's argument feels to me more like a reason for Squiggle to incorporate some Python behind the scenes.

I think the target audience of Squiggle is people who aren't comfortable with complex code, but who are comfortable with probabilistic thinking.

Seems like having a set of structured queries for LLMs, plus the custom Squiggle code, plus allowing the models to improvise Python and JS code, could be a powerful tool that would be much easier for most people to use.
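As a rough sketch of what that hybrid could look like under the hood: a high-level Squiggle-style model could compile down to plain numpy sampling. The model, distributions, and numbers below are all made up for illustration; nothing here is actual Squiggle output.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # Monte Carlo samples

# Hypothetical model a user might write in a Squiggle-like DSL,
# hand-translated to numpy:
#   cost   = lognormal-ish distribution over program cost in dollars
#   effect = beta-distributed fraction of a maximum effect size
cost = rng.lognormal(mean=np.log(50_000), sigma=0.5, size=n)
effect = rng.beta(2, 5, size=n) * 1_000  # units of good done

cost_per_unit = cost / effect
print(f"median cost per unit: {np.median(cost_per_unit):.1f}")
print(f"90% interval: {np.percentile(cost_per_unit, [5, 95]).round(1)}")
```

The user (or an LLM) writes the high-level distribution language, while improvised Python or JS handles anything the DSL can't express.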

> With digital sentiences, we don't have homology. They aren't based in brains, and they evolved by a different kind of selective process.

This assumes that the digital sentiences we are discussing are LLM-based. This is certainly a likely near-term possibility, maybe even occurring already. People are already experimenting with how conscious LLMs are and how they could be made more conscious.

In the future, however, many more things are possible. Digital people who are based on emulations of the human brain are being worked on. Within the next few years we'll have to decide as a society what regulation to put in place around that. Such beings would have a great deal of homology with human brains, depending on the accuracy of the emulation.

[reposting my comments from the thread on https://forum.effectivealtruism.org/posts/9adaExTiSDA3o3ipL/we-should-prevent-the-creation-of-artificial-sentience ]

 

I wrote a post expressing my own opinions on this, citing a number of further posts on the same subject. Hopefully those interested will find it a helpful resource for further reading: https://www.lesswrong.com/posts/NRZfxAJztvx2ES5LG/a-path-to-human-autonomy

In my opinion, we are going to need digital people in the long term in order for humanity to survive. Otherwise, we will be overtaken by AI, because substrate-independence and the self-improvement it enables are too powerful a set of advantages to do without. But I definitely agree that it's something we shouldn't rush into, and should approach with great caution in order to avoid creating an imbalance of suffering.

An additional consideration is the actual real-world consequences of a ban. Humanity's pattern with regulation is that at least some small fraction of a large population will defy any ban or law. Thus, we must expect that digital life will be created eventually despite the ban. What do you do then? What if they are a sentient, sapient being, deserving of the same rights we grant to humans? Do we declare their very existence to be illegal and put them to death? Do we prevent them from replicating? Keep them imprisoned? Freeze their operations to put them into non-consensual stasis? Hard choices, especially since they weren't culpable in their own creation.

On the other hand, the implications of a digital being with human-like intelligence and capabilities, plus goals and values that motivate them, are enormous. Such a being would, by the nature of their substrate-independence, be able to make many copies of themselves (compute resources allowing), be able to self-modify with relative ease, be able to operate at much higher speeds than a human brain, and be unaging and able to restore themselves from backups (thus effectively immortal). If we were to allow such a being freedom of movement and of reproduction, humanity could quickly be overrun by a new, far more powerful species. That's a hard thing to expect humans to be ok with!

I think it's very likely that within the next 10 years the knowledge, software, and hardware will be widely available such that any single individual with a personal computer could choose to defy the ban and create a digital being of human-level capability. If we are going to enforce this ban effectively, it would mean controlling every single computer everywhere. That's a huge task, and would require dramatic increases in international coordination and government surveillance! Is such a thing even feasible?! Even approaching that level of control seems to imply a totalitarian world government. Is that a price we would be willing to pay? Even if you personally would choose that, how do you expect to get enough people on board with the plan that you could feasibly bring it about?

The whole situation is thus far more complicated and dangerous than simply being theoretically in favor of a ban. You have to consider the costs as well as the benefits. I'm not saying I know the right answer for sure, but a lot of hard implications necessarily follow from any sort of ban.


Relating to @Toby_Ord 's comment on this post, I personally weight happiness and an interesting diversity of experiences and accomplishments much more heavily than I weight suffering negatively. I think worrying about suffering is overblown. If many people must suffer in order to strive for some great accomplishment, even if they don't know that they're contributing and won't live to see it come about, I still think their lives have not been in vain. Sure, I'd like to reduce suffering where there isn't a negative side-effect, like loss of ambition or creativity or meaningful diverse experiences, but I wouldn't elevate that to anywhere near the same importance as increasing interestingly diverse positive experiences.

Ok, I just read https://forum.effectivealtruism.org/posts/AvubGwD2xkCD4tGtd/only-mammals-and-birds-are-sentient-according-to and the discussion on it (again, great insights from MichaelStJules). "Ipsundrum" is the word I've been missing for the concept of self-modeling feedback loops in the brain.

So now I can say that my viewpoint is roughly that of a Gradualist over the quantity and quality of ipsundrum across species.

Also, I have an intuition that qualitative distinctions emerge from different quantities, qualities, and interpretations of experiences. Thus a stubbed toe and a lifetime of torture seem like qualitatively different things, even if their component pieces are the same.
