We implemented a Nash bargaining solution in our moral parliament, and I came away with the impression that the results of Nash bargaining are very sensitive to your choice of default, and that for plausible defaults true bargains can be pretty rare. Anyone who is happy with the default gets disproportionate bargaining power. One default might be 'no future at all', but that's going to make it hard to find any bargain with the anti-natalists. Another default might be 'just more of the same', but again, someone might like that and oppose any bargain that deviates much from it. Have you given much thought to picking the right default against which to measure people's preferences? (Or is the thought that you would just exclude obstinate minorities?)
Keeping the world around probably does that, so you should donate to longtermist charities (especially because they potentially increase the number of people ever born, thus giving more people a chance of getting into heaven).
I often get the sense that people into fanaticism think that it doesn't much change what they actually should support. That seems implausible to me. Maybe you should support longtermist causes. (You probably have to contort yourself to justify giving any money to shrimp welfare.) But I would think the longtermist causes you should support will also be fairly different from 'mainstream' causes, and look rather weird close up. You don't really care if the species colonizes the stars and the future is full of happy people living great lives. If some sort of stable totalitarian hellscape offers a marginally better (but still vanishingly small) chance of producing infinite value, that is where you should put your money.
Maybe the best expected value would be to tile the universe with computers trying to figure out the best way to produce infinite value under every conceivable metaphysical scheme consistent with what we know, and to run them all until the heat death of the universe before trying to act. Given that most people are almost certainly not going to do that, you might think that we shouldn't be looking to build an aligned AI; we should want to build a fanatical AI.
Has your fanaticism changed your mind much about what is worth supporting?
But even a 10% chance that fish feel pain—and that we annually painfully slaughter a population roughly ten times the number of humans who have ever lived—is enough to make it a serious issue. Given the mind-bending scale of the harm we inflict on fish, even a modest chance that they feel pain is enough.
Completely in agreement here.
And while it’s possible that evolution produced some kind of non-conscious signal that produces identical behavior to pain, such a thing is unlikely. If a creature didn’t feel pain, it’s unlikely it would respond to analgesics, seek out analgesic drugs, and get distracted by bodily damage.
This is where I would disagree. I expect moderately complicated creatures would develop traits like these under evolutionary pressures (except seeking out analgesic drugs). The question then is how likely it is that the best / only / easiest-to-evolve way to produce this slate of behaviors involves having a conscious experience with the relevant pain profile.
We know that human brains have undergone massive changes since our most recent common ancestor with fish, that terrestrial environments place very different demands on our bodies, that human beings have an unparalleled behavioral flexibility to address injuries, etc., so it is plausible that we do have fairly different nociceptive faculties. It seems to me like a pretty open question precisely how neurologically or algorithmically similar our faculties are and how similar they would need to be for fish to qualify as having pain. The fact that we can't even tell how important the cortex is for pain in humans seems like strong evidence that we shouldn't be too confident about attributing pain to fish. We just know so little. Of course, we shouldn't be confident about denying it to them either, but much confidence either way seems unjustifiable.
I think of moral naturalism as a position where moral language is supposed to represent things, and it represents certain natural things. The view I favor is a lot closer to inferentialism: the meaning of moral language is constituted by the way it is used, not what it is about. (But I also don't think inferentialism is quite right, since I'm not into realism about meaning either.)
I guess I don't quite see what your puzzlement is with morality. There are moral norms which govern what people should do. Now, you might deny there in fact are such things, but I don't see what's so mysterious.
Another angle on the mystery: it is possible that there are epistemic norms, moral norms, prudential norms, and that's it. But if you're a realist, it seems like it should also be possible that there are hundreds of other kinds of norms that we're completely unaware of, such that we act in all sorts of wrong ways all the time. Maybe there are special norms governing how you should brush your teeth (that have nothing to do with hygiene or our interests), or how to daydream. Maybe these norms hold more weight than moral norms, in something like the way moral norms may hold more weight than prudential norms. If you're a non-naturalist, then apart from trust in a loving God, I'm not sure how you address this possibility. But it also seems absurd that I should have to worry about such things.
I consider myself a pretty strong anti-realist, but I find myself accepting a lot of the things you take to be problems for anti-realism. For instance:
But lots of moral statements just really don’t seem like any of these. The wrongness of slavery, the holocaust, baby torture, stabbing people in the eye—it seems like all these things really are wrong and this fact doesn’t depend on what people think about it.
I think that these things really are wrong and that this doesn't depend on what people think about it. But I also think that that statement is part of a language game dictated by complex norms and expectations: the significance of thought experiments, the need to avoid inconsistency, the acceptance of implications, the reliance on gut evaluations, the endorsement of standardly accepted implications, and so on. I live my life according to those norms and expectations, and they lead me to condemn slavery, think quite poorly of slavers, and say things like 'slavery was a terrible stain on our nation'. I don't feel inclined to let people off the hook by virtue of their having different desires. I'm quite happy with a lot of thought and talk that looks pretty objective.
I'm an anti-realist because I have no idea what sort of thing morality could be about that would justify the norms and expectations that govern our thoughts about morality. Maybe this is a version of the queerness argument. There aren't any sorts of entities or relations that seem like appropriate truth-makers for moral claims. I have a hard time understanding what they might be such that I would have any inclination to shift what I care about were I to learn that the normative truths themselves were different (holding fixed all of the things that currently guide my deployment of moral concepts). If my intuitions about cases were the same, if all of the theoretical virtues were the same, if the facts in the world were the same, but an oracle were to tell me that moral reality were different in some way -- turns out, baby torture is good! -- I wouldn't be inclined to change my moral views at all. If I'm not inclined to change my views except when guided by things like gut feelings, consistency judgments, etc., then I don't see how anything about the world can be authoritative in the way that realism should require.
I don’t think it’s even necessary to debate whether quantum phenomena manifest somehow at the macro level of the brain
You might think it is important that the facts about consciousness contribute to our beliefs about them in some way. Our beliefs about consciousness are surely a phenomenon of the macro level. So if our beliefs are somehow sensitive to the facts, and the facts consist of quantum effects, we should expect those quantum effects to generate some macroscopic changes.
This is the sticking point for me with quantum theories: there doesn't seem to be any obvious mechanism for general quantum level truths to exert the kinds of very targeted influences that would be necessary for them to explain our beliefs about consciousness. And if they don't, then it seems like we're left with beliefs insensitive to the truth, and that is deeply unintuitive. What do you think?
Thanks for the suggestion. I'm interested in the issue of dealing with threats in bargaining.
I don't think we ever published anything specifically on the defaults issue.
We were focused on allocating a budget in a way that respects the priorities of different worldviews. The central problem we ran into was that we started by taking the default to be the allocation you get by giving every worldview its own slice of the total budget to spend as it wants. Since there are often options well-suited to each worldview individually, there is no way to get good compromises: everyone is happier with the default than with any adjustment to it. (More here.) On the other hand, if you switch the default to some sort of neutral zero value (assuming that can be defined), then you will get compromises, but many bargainers would rather just be given their own slice of the total budget to allocate.
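To make that concrete, here is a minimal sketch (the option names, the per-unit utilities, and the grid search are all made up for illustration, not taken from our actual implementation). Two worldviews split a budget of 2; each has a pet option worth 10 per unit to it and nothing to the other, plus a compromise option worth 4 per unit to both. No allocation makes both worldviews strictly better off than the slice default, so Nash bargaining has no surplus to work with and just returns the default:

```python
# Hypothetical per-unit utilities for (worldview 1, worldview 2).
UTILS = {
    "pet_1":      (10, 0),   # worldview 1's favourite option
    "pet_2":      (0, 10),   # worldview 2's favourite option
    "compromise": (4, 4),    # moderately good for both
}
BUDGET = 2.0

def utilities(alloc):
    """Total value to each worldview of an allocation {option: amount}."""
    u1 = sum(amt * UTILS[opt][0] for opt, amt in alloc.items())
    u2 = sum(amt * UTILS[opt][1] for opt, amt in alloc.items())
    return u1, u2

# Slice default: each worldview spends its own unit on its pet option -> (10, 10).
slice_default = utilities({"pet_1": 1.0, "pet_2": 1.0})

# Search a grid of allocations for one that strictly improves on the slice default.
step = 0.25
grid = [i * step for i in range(int(BUDGET / step) + 1)]
improvements = []
for x in grid:
    for y in grid:
        z = BUDGET - x - y  # whatever is left goes to the compromise option
        if z < 0:
            continue
        u1, u2 = utilities({"pet_1": x, "pet_2": y, "compromise": z})
        if u1 > slice_default[0] and u2 > slice_default[1]:
            improvements.append((x, y, z))

print("Allocations strictly better for both than the slice default:", improvements)
# -> [] : every move toward the compromise leaves someone at or below their default.
```

Relative to a zero default, by contrast, plenty of allocations give both worldviews positive surplus, so there is a genuine bargain to be struck; the cost is that each worldview would often still rather just keep its own slice.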
I think the importance of defaults comes through just by playing around with some numbers. Consider the difference between setting the default to the status-quo trajectory we're currently on and setting it to the worst possible outcome. Suppose we have two worldviews, one of which cares linearly about suffering everywhere, and the other of which is very locally focused and doesn't care about immense suffering elsewhere. Relative to the status quo, option A might give the two worldviews (2, 10) in value and option B might give (4, 6). Against this default, option B has the higher Nash product (24 vs. 20) and is preferred by Nash bargaining. However, relative to the worst-possible-outcome default, option A might give (10,002, 12) and option B (10,004, 8), in which case option A is preferred to option B (~120k vs. ~80k).
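Here is that calculation as a quick sketch (anchoring the status quo at (10,000, 2) and the worst possible outcome at (0, 0) is just my reconstruction from the gains above, and `nash_product` is simply the product of each worldview's gain over the default):

```python
from math import prod

def nash_product(option_values, default):
    """Product of each worldview's gain over the default (the Nash bargaining objective)."""
    gains = [v - d for v, d in zip(option_values, default)]
    if any(g <= 0 for g in gains):
        return float("-inf")  # someone prefers the default, so no bargain
    return prod(gains)

# Absolute values for (worldview 1, worldview 2), chosen so the gains match the
# numbers in the example: worst possible outcome = (0, 0), status quo = (10000, 2).
option_A = (10_002, 12)
option_B = (10_004, 8)

for name, default in [("status quo", (10_000, 2)), ("worst case", (0, 0))]:
    pa, pb = nash_product(option_A, default), nash_product(option_B, default)
    print(f"default = {name}: product(A) = {pa}, product(B) = {pb} -> pick {'A' if pa > pb else 'B'}")

# default = status quo: product(A) = 20, product(B) = 24 -> pick B
# default = worst case: product(A) = 120024, product(B) = 80032 -> pick A
```

Same two options, same worldviews; the only thing that changes is the default, and the bargaining solution flips.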