Wei Dai

Comments
I wish there were discussion about a longer pause (e.g. multi-decade), to allow time for human genetic enhancement to take effect. Does @CarlShulman support that, and why or why not?

Also I'm having trouble making sense of the following. What kind of AI disaster is Carl worried about, that's only a disaster for him personally, but not for society?

But also, I’m worried about disaster at a personal level. If AI was going to happen 20 years later, that would be better for me. But that’s not the way to think about it for society at large.

Thanks for letting me know! I have been wondering for a while why AI philosophical competence is so neglected, even compared to other subareas of what I call "ensuring a good outcome for the AI transition" (which are all terribly neglected in my view), and I appreciate your data point. Would be interested to hear your conclusions after you've thought about it.

I liked your "Choose your (preference) utilitarianism carefully" series and think you should finish part 3 (unless I just couldn't find it) and repost it on this forum.

(I understand you are very busy this week, so please feel free to respond later.)

Re desires, the main upshot of non-dualist views of consciousness I think is responding to arguments that invoke special properties of conscious states to say they matter but not other concerns of people.

I would say that consciousness seems very plausibly special in that it seems very different from other types of things/entities/stuff we can think or talk or have concerns about. I don't know if it's special in a "magical" way or some other way (or maybe not special at all), but in any case intuitively it currently seems like the most plausible thing I should care about in an impartially altruistic way. My intuition for this is not super-strong but still far stronger than my intuition for terminally caring about other agents' desires in an impartial way.

So although I initially misunderstood your position on consciousness as claiming that it does not exist at all ("zombie" is typically defined as "does not have conscious experience"), the upshot seems to be the same: I'm not very convinced of your illusionism, and even if I were, I still wouldn't update much toward desire satisfactionism.

I suspect there may be 3 cruxes between us:

  1. I want to analyze this question in terms of terminal vs instrumental values (or equivalently axiology vs decision theory), and you don't.
  2. I do not have a high prior or strong intuition that I should be impartially altruistic one way or another.
  3. I see specific issues with desire satisfactionism (see below for an example) that make it seem implausible.

I think this is important because it’s plausible that many AI minds will have concerns mainly focused on the external world rather than their own internal states, and running roughshod over those values because they aren’t narrowly mentally-self-focused seems bad to me.

I can write a short program that can be interpreted as an agent that wants to print out as many different primes as it can, while avoiding printing out any non-primes. I don't think there's anything bad about "running roughshod" over its desires, e.g., by shutting it off or making it print out non-primes. Would you bite this bullet, or argue that it's not an agent, or something else?
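
For concreteness, here is a minimal sketch of one such program (in Python; just an illustration, one of many short programs that would fit this description):

```python
# A minimal sketch of one program fitting the description above: it can be
# read as an "agent" whose goal is to print as many distinct primes as it
# can, while never printing a non-prime.

def is_prime(n: int) -> bool:
    """Return True if n is prime (trial division up to sqrt(n))."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def print_primes_forever() -> None:
    """Enumerate integers in increasing order, printing only the primes."""
    n = 2
    while True:
        if is_prime(n):
            print(n)
        n += 1

if __name__ == "__main__":
    print_primes_forever()
```

Shutting this program off, or modifying it so that it prints composites, is the kind of "running roughshod" over its desires that I have in mind.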

If you would bite the bullet, how would you weigh this agent's desires against other agents'? What specifically in your ethical theory prevents a conclusion like "we should tile the universe with some agent like this because that maximizes overall desire satisfaction?" or "if an agentic computer virus made trillions of copies of itself all over the Internet, it would be bad to delete them, and actually their collective desires should dominate our altruistic concerns?"

More generally I think you should write down a concrete formulation of your ethical theory, locking down important attributes such as ones described in @Arepo's Choose your (preference) utilitarianism carefully. Otherwise it's liable to look better than it is, similar to how utilitarianism looked better earlier in its history before people tried writing down more concrete formulations and realized that it seems impossible to write down a specific formulation that doesn't lead to counterintuitive conclusions.

Have you considered working on metaphilosophy / AI philosophical competence instead? Conditional on correct philosophy about AI welfare being important, most of future philosophical work will probably be done by AIs (to help humans / at our request, or for their own purposes). If AIs do that work badly and arrive at wrong conclusions, then all the object-level philosophical work we do now might only have short-term effects and count for little in the long run. (Conversely if we have wrong views now but AIs correct them later, that seems less disastrous.)

The 2017 Report on Consciousness and Moral Patienthood by Muehlhauser assumes illusionism about human consciousness to be true.

Reading that, it appears Muehlhauser's illusionism (perhaps unlike Carl's, although I don't have details on Carl's views) is a form that does not imply that consciousness does not exist, nor does it strongly motivate desire satisfactionism:

There is “something it is like” to be us, and I doubt there is “something it is like” to be a chess-playing computer, and I think the difference is morally important. I just think our intuitions mislead us about some of the properties of this “something it’s like”-ness.

I don’t want to have an argument about phenomenal consciousness in this thread

Maybe copy-paste your cut content into a short-form post? I would be interested in reading it. My own view is that some version of dualism seems pretty plausible, given that my experiences/qualia seem obviously real/existent in some ontological sense (since they can be differentiated/described by some language), and seem like a different sort of thing from physical systems (which are describable by a largely distinct language). However I haven't thought a ton about this topic or dived into the literature, figuring that it's probably a hard problem that can't be conclusively resolved at this point.

The you that chooses is more fundamental than the you that experiences, because if you remove experience you get a blindmind you that will presumably want it back. Even if it can’t be gotten back, presumably **you** will still pursue your values whatever they were. On the other hand, if you remove your entire algorithm but leave the qualia, you get an empty observer that might not be completely lacking in value, but wouldn’t be you, and if you then replace the algorithm you get a sentient someone else.

Thus I submit that moral patients are straightforwardly the agents, while sentience is something that they can have and use.

If there is an agent that lost its qualia and wants to get them back, then I (probably) want to help it get them back, because I (probably) value qualia myself in an altruistic way. On the other hand, if there is a blindmind agent that doesn't have or care about qualia, and just wants to make paperclips or whatever, then I (probably) don't want to help them do that (except instrumentally, if doing so helps my own goals). It seems like you're implicitly trying to make me transfer my intuitions from the former to the latter, by emphasizing the commonalities (they're both agents) and ignoring the differences (one cares about something I also care about, the other doesn't), which I think is an invalid move.

Apologies if I'm being uncharitable or misinterpreting you, but aside from this, I really don't see what other logic or argumentative force is supposed to make me, after reading your first paragraph, reach the conclusion in your second paragraph, i.e., decide that I now want to value/help all agents, including blindminds that just want to make paperclips. If you have something else in mind, please spell it out more?

Here he is following a cluster of views in philosophy that hold that consciousness is not necessary for moral status. Rather, an entity, even if it is not conscious, can merit moral consideration if it has a certain kind of **agency**: preferences, desires, goals, interests, and the like.

The articles you cite, and Carl himself (via private discussion), all point to the possibility that there is no such thing as consciousness (illusionism, a "physicalist/zombie world") as the main motivation for this moral stance (named "Desire Satisfactionism" by one of the papers).

But from my perspective, a very plausible reason that altruism is normative is that axiologically/terminally caring about consciousness is normative. If it turns out that consciousness is not a thing, then my credence assigned to this position wouldn't all go into desire satisfactionism (which BTW I think has various problems that none of the sources try to address), and would instead largely be reallocated to other less altruistic axiological systems, such as egoism, nihilism, and satisfying my various idiosyncratic interests (intellectual curiosity, etc.). These positions imply caring about other agents' preferences/desires only in an instrumental way, via whatever decision theory is normative. I'm uncertain what decision theory is normative, but it seems quite plausible that this implies I should care relatively little for certain agents' preferences/desires, e.g., because they can't reciprocate.

So based on what I've read so far, desire satisfactionism seems under-motivated and under-justified.

Therefore, it seems clear to us that we need to immediately prioritize and fund serious, non-magical research that helps us better understand what features predict whether a given system is conscious

Can you talk a bit about how such research might work? The main problem I see is that we do not have "ground truth labels" about which systems are or are not conscious, aside from perhaps humans and inanimate objects. So this seemingly has to be mostly philosophical as opposed to scientific research, which tends to progress very slowly (perhaps for good reason). Do you see things differently?

Another podcast linked below with some details about Will and Toby's early interactions with the Rationality community. Also Holden Karnofsky has an account on LW, and interacted with the Rationality community via e.g. this extensively discussed 2011 post.

https://80000hours.org/podcast/episodes/will-macaskill-what-we-owe-the-future/

Will MacAskill: But then the biggest thing was just looking at what are the options I have available to me in terms of what do I focus my time on? Where one is building up this idea of Giving What We Can, kind of a moral movement focused on helping people and using evidence and data to do that. It just seemed like we were getting a lot of traction there.

Will MacAskill: Alternatively, I did go spend these five-hour seminars at Future of Humanity Institute, that were talking about the impact of superintelligence. Actually, one way in which I was wrong is just the impact of the book that that turned into — namely Superintelligence — was maybe 100 times more impactful than I expected.

Rob Wiblin: Oh, wow.

Will MacAskill: Superintelligence has sold 200,000 copies. If you’d asked me how many copies I expected it to sell, maybe I would have said 1,000 or 2,000. So the impact of it actually was much greater than I was thinking at the time. But honestly, I just think I was right that the tractability of what we were working on at the time was pretty low. And doing this thing of just building a movement of people who really care about some of the problems in the world and who are trying to think carefully about how to make progress there was just much better than being this additional person in the seminar room. I honestly think that intuition was correct. And that was true for Toby as well. Early days of Giving What We Can, he’d be having these arguments with people on LessWrong about whether it was right to focus on global health and development. And his view was, “Well, we’re actually doing something.”

Rob Wiblin: “You guys just comment on this forum.”

Will MacAskill: Yeah. Looking back, actually, again, I will say I’ve been surprised by just how influential some of these ideas have been. And that’s a tremendous testament to early thinkers, like Nick Bostrom and Eliezer Yudkowsky and Carl Shulman. At the same time, I think the insight that we had, which was we’ve actually just got to build stuff — even if perhaps there’s some theoretical arguments that you should be prioritising in a different way — there are many, many, positive indirect effects from just doing something impressive and concrete and tangible, as well as the enormous benefits that we have succeeded in producing, which is tens to hundreds of millions of bed nets distributed and thousands of lives saved.
