
Wei Dai


Comments

I'm generally a fan of John Cochrane. I would agree that government regulation of AI isn't likely to work out well, which is why I favor an international pause on AI development instead (less need for government competence on detailed technical matters).

His stance on unemployment is harder to understand. I guess he either hasn't considered the possibility that AGI could drive wages below human subsistence levels, or thinks that's fine (humans just work for the same low wages as AIs and governments make up the difference with a "broad safety net that cushions all misfortunes")?

Oh, of course he also doesn't take x-risk concerns seriously enough, but that's more understandable for an economist who probably just started thinking about AI recently.

> Vitalik Buterin: Right. Well, one thing is one domain being offence-dominant by itself isn’t a failure condition, right? Because defence-dominant domains can compensate for offence-dominant domains. And that has totally happened in the past, many times. If you even just compare now to 1,000 years ago: cannons are very offence-dominant and castles stopped them working. But if you compare physical warfare now to before, is it more offence-dominant on the whole? It’s not clear, right?

  1. How do defense-dominant domains compensate for offense-dominant domains? For example, defense-dominance in cyber-warfare doesn't seem to compensate for offense-dominance in bio-warfare, and vice versa. So what does he mean?
  2. Physical warfare is hugely offense-dominant today, if we count nuclear weapons. Why did he say "it's not clear"?

Overall it seems very unclear what Vitalik's logic is in this area, and I wish Robert had pushed him to think or speak more clearly.

I wish there was discussion about a longer pause (e.g. multi-decade), to allow time for human genetic enhancement to take effect. Does @CarlShulman support that, and why or why not?

Also I'm having trouble making sense of the following. What kind of AI disaster is Carl worried about, that's only a disaster for him personally, but not for society?

> But also, I’m worried about disaster at a personal level. If AI was going to happen 20 years later, that would be better for me. But that’s not the way to think about it for society at large.

Thanks for letting me know! I have been wondering for a while why AI philosophical competence is so neglected, even compared to other subareas of what I call "ensuring a good outcome for the AI transition" (which are all terribly neglected in my view), and I appreciate your data point. Would be interested to hear your conclusions after you've thought about it.

I liked your "Choose your (preference) utilitarianism carefully" series and think you should finish part 3 (unless I just couldn't find it) and repost it on this forum.

(I understand you are very busy this week, so please feel free to respond later.)

> Re desires, the main upshot of non-dualist views of consciousness I think is responding to arguments that invoke special properties of conscious states to say they matter but not other concerns of people.

I would say that consciousness seems very plausibly special in that it seems very different from other types of things/entities/stuff we can think or talk or have concerns about. I don't know if it's special in a "magical" way or some other way (or maybe not special at all), but in any case intuitively it currently seems like the most plausible thing I should care about in an impartially altruistic way. My intuition for this is not super-strong but still far stronger than my intuition for terminally caring about other agents' desires in an impartial way.

So although I initially misunderstood your position on consciousness as claiming that it does not exist altogether ("zombie" is typically defined as "does not have conscious experience"), the upshot seems to be the same: I'm not very convinced of your illusionism, and if I were I still wouldn't update much toward desire satisfactionism.

I suspect there may be 3 cruxes between us:

  1. I want to analyze this question in terms of terminal vs instrumental values (or equivalently axiology vs decision theory), and you don't.
  2. I do not have a high prior or strong intuition that I should be impartially altruistic one way or another.
  3. I see specific issues with desire satisfactionism (see below for an example) that make it seem implausible.

> I think this is important because it’s plausible that many AI minds will have concerns mainly focused on the external world rather than their own internal states, and running roughshod over those values because they aren’t narrowly mentally-self-focused seems bad to me.

I can write a short program that can be interpreted as an agent that wants to print out as many different primes as it can, while avoiding printing out any non-primes. I don't think there's anything bad about "running roughshod" over its desires, e.g., by shutting it off or making it print out non-primes. Would you bite this bullet, or argue that it's not an agent, or something else?
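For concreteness, here is a minimal sketch of the kind of program I have in mind (the specific structure and names are just illustrative; any program with this behavior would do):

```python
def is_prime(n: int) -> bool:
    """Trial-division primality check."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def prime_agent(limit: int) -> list[int]:
    """An 'agent' whose behavior can be read as wanting to print primes
    while avoiding printing any non-primes."""
    emitted = []
    for n in range(2, limit):
        if is_prime(n):       # the agent's "desire" is satisfied here...
            emitted.append(n)
        # ...and it "avoids" non-primes simply by skipping them
    return emitted

print(prime_agent(20))  # [2, 3, 5, 7, 11, 13, 17, 19]
```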

If you would bite the bullet, how would you weigh this agent's desires against other agents'? What specifically in your ethical theory prevents a conclusion like "we should tile the universe with some agent like this because that maximizes overall desire satisfaction?" or "if an agentic computer virus made trillions of copies of itself all over the Internet, it would be bad to delete them, and actually their collective desires should dominate our altruistic concerns?"

More generally I think you should write down a concrete formulation of your ethical theory, locking down important attributes such as ones described in @Arepo's Choose your (preference) utilitarianism carefully. Otherwise it's liable to look better than it is, similar to how utilitarianism looked better earlier in its history before people tried writing down more concrete formulations and realized that it seems impossible to write down a specific formulation that doesn't lead to counterintuitive conclusions.

Have you considered working on metaphilosophy / AI philosophical competence instead? Conditional on correct philosophy about AI welfare being important, most of future philosophical work will probably be done by AIs (to help humans / at our request, or for their own purposes). If AIs do that work badly and arrive at wrong conclusions, then all the object-level philosophical work we do now might only have short-term effects and count for little in the long run. (Conversely if we have wrong views now but AIs correct them later, that seems less disastrous.)

> The 2017 Report on Consciousness and Moral Patienthood by Muehlhauser assumes illusionism about human consciousness to be true.

Reading that, it appears Muehlhauser's illusionism (perhaps unlike Carl's, although I don't have details on Carl's views) is a form that neither implies that consciousness does not exist nor strongly motivates desire satisfactionism:

> There is “something it is like” to be us, and I doubt there is “something it is like” to be a chess-playing computer, and I think the difference is morally important. I just think our intuitions mislead us about some of the properties of this “something it’s like”-ness.

> I don’t want to have an argument about phenomenal consciousness in this thread

Maybe copy-paste your cut content into a short-form post? I would be interested in reading it. My own view is that some version of dualism seems pretty plausible, given that my experiences/qualia seem obviously real/existent in some ontological sense (since they can be differentiated/described by some language), and seem like a different sort of thing from physical systems (which are describable by a largely distinct language). However I haven't thought a ton about this topic or dived into the literature, figuring that it's probably a hard problem that can't be conclusively resolved at this point.

> The you that chooses is more fundamental than the you that experiences, because if you remove experience you get a blindmind you that will presumably want it back. Even if it can’t be gotten back, presumably **you** will still pursue your values whatever they were. On the other hand, if you remove your entire algorithm but leave the qualia, you get an empty observer that might not be completely lacking in value, but wouldn’t be you, and if you then replace the algorithm you get a sentient someone else.
>
> Thus I submit that moral patients are straightforwardly the agents, while sentience is something that they can have and use.

If there is an agent that lost its qualia and wants to get them back, then I (probably) want to help it get them back, because I (probably) value qualia myself in an altruistic way. On the other hand, if there is a blindmind agent that doesn't have or care about qualia, and just wants to make paperclips or whatever, then I (probably) don't want to help them do that (except instrumentally, if doing so helps my own goals). It seems like you're implicitly trying to make me transfer my intuitions from the former to the latter, by emphasizing the commonalities (they're both agents) and ignoring the differences (one cares about something I also care about, the other doesn't), which I think is an invalid move.

Apologies if I'm being uncharitable or misinterpreting you, but aside from this, I really don't see what other logic or argumentative force is supposed to make me, after reading your first paragraph, reach the conclusion in your second paragraph, i.e., decide that I now want to value/help all agents, including blindminds that just want to make paperclips. If you have something else in mind, please spell it out more?

> Here he is following a cluster of views in philosophy that hold that consciousness is not necessary for moral status. Rather, an entity, even if it is not conscious, can merit moral consideration if it has a certain kind of **agency**: preferences, desires, goals, interests, and the like.

The articles you cite, and Carl himself (via private discussion), all cite the possibility that there is no such thing as consciousness (illusionism, "physicalist/zombie world") as the main motivation for this moral stance (named "Desire Satisfactionism" by one of the papers).

But from my perspective, a very plausible reason that altruism is normative is that axiologically/terminally caring about consciousness is normative. If it turns out that consciousness is not a thing, then my credence assigned to this position wouldn't all go into desire satisfactionism (which BTW I think has various problems that none of the sources try to address), and would instead largely be reallocated to other less altruistic axiological systems, such as egoism, nihilism, and satisfying my various idiosyncratic interests (intellectual curiosity, etc.). These positions imply caring about other agents' preferences/desires only in an instrumental way, via whatever decision theory is normative. I'm uncertain what decision theory is normative, but it seems quite plausible that this implies I should care relatively little for certain agents' preferences/desires, e.g., because they can't reciprocate.

So based on what I've read so far, desire satisfactionism seems under-motivated and under-justified.
