
Dustin Crummett

361 karma

Comments (19)

So I understand: are you denying that the life with a tiny bit of positive welfare and no negative welfare, or the life with a tiny bit of positive welfare and a tiny tiny tiny tiny tiny bit of negative welfare, is determinately net positive? If so, I think that is an important crux. I don't see why that would be.

I guess it had better not be a question of whether, as a matter of actual fact, I have the brainpower to do the exercise (with my eyes closed!). Babies, I assume, have no concept of their own non-existence, and so can't compare any state they're in to non-existence, yet they can have positive or negative welfare. Or someone who lives long enough will not be able to remember, much less bring to mind, everything that's happened in their life, yet they can have positive or negative welfare. So what matters is, if anything, some kind of idealized comparison I may or may not be able to do in actual fact. (And in any event, I guess the argument here would not be that nematodes have indeterminate welfare because their range is small, but rather that they do because they are stupid.)

What I'm suggesting is that there could be a situation where, say, the correct weighting of X vs. Z is not a precise ratio but a range--anything between 7.9:1 and 8:1, let's say for the sake of argument--such that the actual ratio falls within this indeterminate range, and a small change in either direction won't move it out of the range. I can see how that could perhaps be the case. But that kind of indeterminacy is orthogonal to the size of the welfare range: it would still hold if the values were .087455668741 and .011024441253 or 87455668741 and 11024441253, and wouldn't hold if the values were .087455668741 and .010024441253.
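
One toy way to make the arithmetic explicit (the model here is just my own gloss on the example above, not anything from the welfare-range literature): write the net welfare of the life as

\[ x - w\,z, \qquad w \in [7.9,\ 8.0], \]

where \(x\) is the amount of the good X, \(z\) the amount of the bad Z, and \(w\) the exchange rate, determinate only up to the interval. The sign is determinately positive when \(x/z > 8.0\), determinately negative when \(x/z < 7.9\), and indeterminate when \(x/z\) falls inside the interval. With \(x = .087455668741\) and \(z = .011024441253\), \(x/z \approx 7.93\), which is inside the interval; multiplying both numbers by \(10^{12}\) leaves the ratio, and hence the indeterminacy, untouched, while \(z = .010024441253\) gives \(x/z \approx 8.72\) and a determinate sign.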

So my view is that if I have (X,Y,Z) at (0,0,0), which is equal to nonexistence, then (.01,0,0) is positive and (-.01,0,0) is negative. Why wouldn't it be? Why wouldn't a life with a slight positive and no negatives be positive? And presumably, say, (.01,0,-.00000001) will also be positive.

I think people frequently conflate there being no reason for something and there being very little reason. E.g., they'll say "there is no evidence for a flat earth" when there is obviously some evidence for it (that some people believe in it is some evidence). If people say (.01,0,0) is not better than non-existence, I'd suspect that's what they're doing.

As far as I can see, there just isn't such a thing as a neutral range. An individual could have an arbitrarily small welfare range and still have determinately positive or negative net welfare, or (I am open to the possibility of) an arbitrarily large welfare range while being such that it's indeterminate whether their net welfare is positive or negative. And so noting that nematodes have small welfare ranges doesn't tell us anything about this in and of itself.

I guess I'm not getting how this responds to my point. Suppose my welfare range (understood as representing the range of positive and negative experiences I can have) goes from -.01 to .01. I say I might have determinately positive welfare because, as a matter of fact, all, or the vast majority of, my experiences are slightly positive. On the other hand, suppose my range goes from -1000 to 1000. I say (I am open to the possibility that) it might be indeterminate whether I have positive welfare because I have a bunch of importantly different types of positive and negative experiences that are kind of closely matched without a uniquely correct weighting. So the indeterminacy is not related to the size of the welfare range but rather to having importantly different types of positive and negative experiences that are closely matched without a uniquely correct weighting, or something like that. It could still be that it's indeterminate whether nematodes have positive or negative welfare, but that won't be just because their welfare range is small.

What's your answer to that?

I don't quite see the connection here between having a small welfare range and having an indeterminate welfare sign. Suppose a being is only capable of having very slightly positive experiences. Then it has a very small welfare range, but it seems to me that its welfare is determinately positive: it has positive experiences and no negative ones.

There is some plausibility to the idea that there may not be uniquely correct ways of weighing different experiences against each other. E.g., perhaps there is no uniquely correct answer to how many seconds of a pleasant breeze outweigh 60 minutes of a boring lecture, or how many minutes of the intellectual enjoyment of playing chess outweigh the sharp pain of a bad papercut, even if there are incorrect answers (maybe one second of the breeze is definitely not enough to outweigh the lecture). This may be plausible in light of the Ruth Chang-type intransitivity arguments: if I am indifferent between X seconds of the breeze and 60 minutes of the lecture, I might also be indifferent between X + 1 seconds of the breeze and 60 minutes of the lecture even though I obviously prefer X+1 seconds of the breeze to just X seconds, and it's not clear that this is merely an epistemic issue. If, as came up in your discussion with Vasco, someone wants to understand one's experience outweighing another's as being a matter of what you would prefer (rather than a realist understanding on which the outweighing comes first and rational preferences will follow), this perhaps seems especially plausible, as I doubt our preferences about these things are, as a matter of descriptive psychology, always perfectly fine-grained.
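
Schematically, with notation that is mine rather than Chang's: let \(B_t\) be \(t\) seconds of the breeze, \(L\) the sixty-minute lecture, \(\sim\) indifference, and \(\succ\) strict preference. The pattern is

\[ B_X \sim L, \qquad B_{X+1} \sim L, \qquad B_{X+1} \succ B_X. \]

If the values were precise real numbers and indifference tracked equality of value, indifference would be transitive and the first two claims would force \(B_{X+1} \sim B_X\), contradicting the third; so either the pattern is merely epistemic (we just can't detect the tiny difference) or there is no uniquely correct precise weighting.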

In that case, I could see it sometimes being indeterminate whether a being has positive or negative welfare because it has lots of very different types of experiences which sort of come out closely matched with no uniquely correct weighting. But that is orthogonal to the size of the welfare range: it could turn out to be true even if the individual experiences are really (dis)valuable.

In many cases, it seems very doubtful that further research into whether animals are conscious will be action-guiding in any meaningful way. Further research into whether chickens are conscious, say, will not produce definitive certainty that they are or aren't. (Finding out whether your favorite theory of consciousness predicts that they are conscious is only so useful, since we should have significant uncertainty about the right theory of consciousness.) And moderate changes in your credence probably shouldn't affect what you should do. E.g., if your credence in chicken consciousness drops 20%, there is still the moral-risk argument for acting as if chickens are conscious; and if you have some reason for rejecting that argument, it was probably also a reason when your credence was 20% higher. At the same time, there are potentially very great opportunity costs to waiting to act--costs that aren't worth bearing if decision-relevant information isn't actually likely to come in.
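
As a toy illustration of why moderate credence shifts rarely matter (the threshold model and the numbers are mine, purely for concreteness): on a simple expected-value picture, you should act as if chickens are conscious whenever

\[ p \cdot H > C, \quad \text{i.e.} \quad p > C/H, \]

where \(p\) is your credence that they are conscious, \(H\) the moral stakes if they are, and \(C\) the cost of acting. If \(C/H\) is small (say .05), the verdict is the same whether \(p\) is .8 or .6; a 20-point shift in credence changes what you should do only if it happens to cross the threshold, and for most plausible values it won't.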

The EarlyModernTexts versions are what I read when I had to do my comprehensive history exam in grad school. I recommend them.

I accept the bullet-biting response. I think someone who doesn't should say the utility of the observers may outweigh Jones' utility but that you should save Jones for some deontic reason (which is what Scanlon says), or maybe that many small bits of utility spread across people don't sum in a straightforward way, and so can't add up to outweigh Jones' suffering (I think this is incorrect, but that something like it is probably what's actually driving the intuition). I think the infinite-disutility response is wrong, but that someone who accepts it should probably adopt some view in infinite ethics according to which two people suffering infinite disutility is worse than one--adopting some such view may be needed to avoid other problems anyway.

The solution you propose is interesting, but I don't think I find it plausible:

1. If Jones' disutility is finite, presumably there is some sufficiently large number of spectators, X, such that their aggregate utility would outweigh his disutility. Why think that, in fact, the physically possible number of observers is lower than X?

2. Suppose Jones isn't suffering the worst torment possible, but merely "extremely painful" shocks, as in Scanlon's example. So the number of observers needed to outweigh his suffering is not X, but the lower number Y. I suppose the intuitive answer is still that you should save him. But why think the physically possible number of observers is below Y?

3. Even if the physically possible number of observers in our world is in fact lower than X, presumably the fundamental moral rules should work across possible worlds; and that seems to be baked into the thought experiment anyway, since there is in fact no Galactic Cup. In some other possible world, the number of observers could be higher than X.

4. Even if the possible number of observers is in fact finite, presumably there are possible worlds with an infinite number of possible observers (the laws of physics are very different, or time is infinite into the future, or there are disembodied ghosts watching, etc.). If we think the solution should work across possible worlds, the fact that there can only be a finite number of observers in our world is then irrelevant.

5. You assume our lightcone is finite "with certainty." I take it this is to head off the expected-utility worry that arises if there is some chance it isn't finite. But I don't think you should have epistemic certainty that there can only be a finite number of observers.

6. The solution seems to get the intuitive answer for a counterintuitive reason. People find letting Jones get shocked in the transmitter case counterintuitive because they think there is something off about weighing one really bad harm against all these really small benefits, not because of anything having to do with whether there can only be a finite number of observers, and especially not because of anything that could depend on the specific number of possible observers. Once we grant that the reason behind the intuition is off, I'm not sure why we should trust the intuition itself.

*I think your answer to 1-3 may be that there is no set-in-stone number of observers needed to outweigh Jones' suffering: we just pick some arbitrarily large amount and assign it to Jones, such that it's higher than the total utility possessed by however many observers there might happen to be. I am a realist about utility in a way that rules this out. But anyway, here is a potential argument against this:

Forget about what number we arbitrarily assign to represent Jones' suffering. Two people each suffering very slightly less than Jones is worse than Jones' suffering. Four people each suffering very slightly less than them is worse than their suffering. Etc. If we keep going, we will reach some number of people undergoing some trivial amount of suffering which, intuitively, can be outweighed by enough people watching the Galactic Cup--call that number of observers Z. The suffering of those trivially suffering people is worse than the suffering of Jones, by transitivity. So the enjoyment of Z observers outweighs the suffering of Jones, by transitivity. And there is no reason to think the actual number of possible observers is smaller than Z.
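
Put schematically (the labels here are mine, just to make the chain explicit): let \(A_0\) be Jones suffering at his actual intensity, and let \(A_{k+1}\) be twice as many people as in \(A_k\), each suffering at an intensity only very slightly lower. The stepwise intuition is

\[ A_1 \text{ is worse than } A_0, \quad A_2 \text{ is worse than } A_1, \quad \ldots, \quad A_n \text{ is worse than } A_{n-1}, \]

so by transitivity \(A_n\) is worse than \(A_0\). For large enough \(n\), the per-person suffering in \(A_n\) is trivial and is outweighed by the enjoyment of \(Z\) observers; so that enjoyment also outweighs \(A_0\), Jones' suffering.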

Different ways of calculating impact make sense in different contexts. What I want to say is that the way Singer, MacAskill, and GiveWell are doing it (i) is the way you should do it when deciding whether and where to donate (at least assuming you aren't in some special collective-action problem, etc.), and (ii) is totally fine by ordinary standards of speech--it isn't deceptive, misleading, excessively imprecise, etc. Maybe we agree.

The criticism of the concept of "effective altruism," and the second main criticism to the extent that it's related to it, also feels odd to me. Altruism in the sense of only producing good is not realistically possible. By writing your Wired article, say, you almost certainly set off a chain of events that will cause a different sperm to fertilize a different egg, which will set off all sorts of other chains of events, meaning that in hundreds of years there will be different people, different weather patterns, different everything. So writing the article will cause untold deaths and untold suffering that wouldn't have occurred otherwise. So too for your friend Aaron in the Wired article helping the people on the island, and so too for anything else you or anyone might do.

So either altruism is something other than doing only good, or altruism is impossible and the most we can hope for is some kind of approximation. It wouldn't follow that maximizing EV is the best way to be (or approximate being) altruistic, but the mere fact that the actions EAs take are like all other actions in that they have some negative consequences is not in itself much of a criticism.
