
Derek Shiller

Researcher @ Rethink Priorities
2457 karma · Derekshiller.com

Comments (155)

I would think the trend would also need to be evenly distributed. If some groups have higher-than-replacement birth rates, they will simply come to dominate over time.

I think of moral naturalism as a position where moral language is supposed to represent things, and it represents certain natural things. The view I favor is a lot closer to inferentialism: the meaning of moral language is constituted by the way it is used, not what it is about. (But I also don't think inferentialism is quite right, since I'm not into realism about meaning either.)

I guess I don't quite see what your puzzlement with morality is. There are moral norms that govern what people should do. Now, you might deny that there in fact are any such things, but I don't see what's so mysterious.

Another angle on the mystery: it is possible that there are epistemic norms, moral norms, prudential norms, and that's it. But if you're a realist, it seems like it should also be possible that there are hundreds of other kinds of norms that we're completely unaware of, such that we act in all sorts of wrong ways all the time. Maybe there are special norms governing how you should brush your teeth (that have nothing to do with hygiene or our interests), or how to daydream. Maybe these norms hold more weight than moral norms, in something like the way moral norms may hold more weight than prudential norms. If you're a non-naturalist, then apart from trust in a loving God, I'm not sure how you address this possibility. But it also seems absurd that I should have to worry about such things.

I consider myself a pretty strong anti-realist, but I find myself accepting a lot of the things you take to be problems for anti-realism. For instance:

But lots of moral statements just really don’t seem like any of these. The wrongness of slavery, the holocaust, baby torture, stabbing people in the eye—it seems like all these things really are wrong and this fact doesn’t depend on what people think about it.

I think that these things really are wrong and that this doesn't depend on what people think about it. But I also think that that statement is part of a language game dictated by complex norms and expectations: the significance of thought experiments, the need to avoid inconsistency, the acceptance of standardly endorsed implications, the reliance on gut evaluations, and so on. I live my life according to those norms and expectations, and they lead me to condemn slavery, think quite poorly of slavers, and say things like 'slavery was a terrible stain on our nation'. I don't feel inclined to let people off the hook by virtue of having different desires. I'm quite happy with a lot of thought and talk that looks pretty objective.

I'm an anti-realist because I have no idea what sort of thing morality could be about that would justify the norms and expectations that govern our thoughts about it. Maybe this is a version of the queerness argument. There aren't any sorts of entities or relations that seem like appropriate truth-makers for moral claims. I have a hard time understanding what they might be such that I would have any inclination to shift what I care about were I to learn that the normative truths themselves were different (holding fixed all of the things that currently guide my deployment of moral concepts). If my intuitions about cases were the same, if all of the theoretical virtues were the same, if the facts in the world were the same, but an oracle were to tell me that moral reality were different in some way (turns out, baby torture is good!), I wouldn't be inclined to change my moral views at all. And if I'm not inclined to change my views except when guided by things like gut feelings and consistency judgments, then I don't see how anything about the world could be authoritative in the way that realism requires.

I don’t think it’s even necessary to debate whether quantum phenomena manifest somehow at the macro level of the brain

You might think it is important that the facts about consciousness contribute in some way to our beliefs about them. Our beliefs about consciousness are surely a phenomenon of the macro level. So if our beliefs are somehow sensitive to the facts, and the facts consist of quantum effects, we should expect those quantum effects to generate some macroscopic changes.

This is the sticking point for me with quantum theories: there doesn't seem to be any obvious mechanism for general quantum level truths to exert the kinds of very targeted influences that would be necessary for them to explain our beliefs about consciousness. And if they don't, then it seems like we're left with beliefs insensitive to the truth, and that is deeply unintuitive. What do you think?

Also, it is worrying if the optimists easily find financial opportunities that depend on them not changing their minds. Even if they are honest and have the best of intentions, the disparity in returns to optimism is epistemically toxic.

Yeah, that's right. Some kinds of mitigation will increase risks later (e.g. a pause), and the model doesn't accommodate such nuance.

Could you link the most relevant piece you are aware of? What do you mean by "independently"? Under hedonism, I think the probability of consciousness only matters to the extent it informs the probability of valenced experiences.

The idea is more aspirational. I'm not really sure what to recommend in the field, but this is a pretty good overview: https://arxiv.org/pdf/2404.16696

Interesting! How?

Perhaps valence requires something like the assignment of weights to alternative possibilities, most likely along with a bunch of other constraints. If you can look inside the AI and confirm that it makes decisions in some other way, you can conclude that it doesn't have valenced experiences: the absence of even one necessary condition is enough to disconfirm. Of course, this sort of requirement is likely to be controversial, but it is less open to radically different views than consciousness itself.
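To make the shape of that inference concrete, here is a minimal sketch in Python. Everything in it is hypothetical: `decision_trace` and `option_scores` are stand-ins for whatever an interpretability tool would actually surface, not a real API. The point is just the asymmetry: a failed necessary condition rules valence out, while a passed one settles nothing.

```python
def assigns_weights(decision_trace: list[dict]) -> bool:
    """Hypothetical check: does every traced decision step score multiple
    alternative options on a common scale before selecting one?"""
    return all(
        isinstance(step.get("option_scores"), dict) and len(step["option_scores"]) > 1
        for step in decision_trace
    )

def valence_disconfirmed(decision_trace: list[dict]) -> bool:
    # Weight assignment is treated as a necessary condition for valence,
    # so its absence disconfirms valence. Its presence settles nothing,
    # since the other constraints would still need to be met.
    return not assigns_weights(decision_trace)

# A trace that picks options without scoring alternatives:
trace = [{"chosen": "A"}, {"chosen": "B"}]
print(valence_disconfirmed(trace))  # True: no weighting found, so no valence
```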

Not at the moment. Consciousness is tricky enough as it is. The field is interested in looking more closely at valence independently of consciousness, given that valence seems more tractable and you could at least confirm that AIs don't have valenced experience, but that lies a bit outside our focus for now.

Independently, we're also very interested in how to capture the difference between positive and negative experiences in alien sorts of minds. It is often taken for granted based on human experience, but it isn't trivial to say what it is.

This more or less matches why I think trajectory changes might be tractable, but I think the idea can be spelled out in a slightly more general way: as technology develops (and especially AI), we can expect to get better at designing institutions that perpetuate themselves. Past challenges to effecting trajectory changes have come from the erosion of goals due to random and uncontrollable human variation and the chaotic intrusion of external events. Technology may help us build stable institutions that can continue to promote their goals for long periods of time.

Lots of people think about how to improve the future in very traditional ways. Assuming the world keeps operating the way it has for the past 50 years, how do we steer it in a better direction?

I suppose I was thinking of this in terms of taking radical changes from technology development seriously, but not in the sense of long timelines or weird sources of value. Far fewer people are thinking about how to navigate a time when AGI becomes commonplace than are thinking about how to get to that place, even though there might not be a huge window of time between them.
