Academic philosopher, co-editor of utilitarianism.net, writes goodthoughts.blog
10% Pledge #54 with GivingWhatWeCan.org
If it was so straightforwardly irrational (dare I say it - insensible), Le Guin would presumably never have written the story in the first place!
This is bad reasoning. People vary radically in their ability to recognize irrationality (of various sorts). In the same way that we shouldn't be surprised if a popular story involves mathematical assumptions that are obviously incoherent to a mathematician, we shouldn't be surprised if a popular story involves normative assumptions that others can recognize as obviously wrong. (Consider how Gone with the Wind glorifies Confederate slavery, etc.)
It's a basic and undeniable fact of life that people are swayed by bad reasoning all the time (e.g. when it is emotionally compelling, when some interests are initially more salient to us than others, etc.).
You have your intuitions and I have mine - we can each say they're obvious to us and it gets us no further, surely?
Correct; you are not my target audience. I'm responding here because you seemed to think that there was something wrong with my post because it took for granted something that you happen not to accept. I'm trying to explain why that's an absurd standard. Plenty of others could find what I wrote both accurate and illuminating. It doesn't have to convince you (or any other particular individual) in order to be epistemically valuable to the broader community.
If you find that a post starts from philosophical assumptions that you reject, I think the reasonable options available to you are:
(1) Engage in a first-order dispute, explaining why you think different assumptions are more likely to be true; or
(2) Ignore it and move on.
I do not think it is reasonable to engage in ~~silencing~~ procedural criticism, claiming that nobody should post things (including claims about what they take to be obvious) that you happen to disagree with.
[Update: struck-through a word that was somewhat too strong. But "not the sort of thing I usually expect to find on the forum" implicates more than just "I happen to disagree with this"; it implicates something closer to "you should not have written this."]
To be clear: the view I argued against was not "pets have net negative lives," but rather, "pets ought not to exist even if they have net positive lives, because we violate their rights by owning/controlling them." (Beneficentrism makes no empirical claims about whether pets have positive or negative lives on net, so it would make no sense to interpret me as suggesting that it supports any such empirical claim.)
It's not "circular reasoning" to note that plausible implications are a count in favor of a theory. That's normal philosophical reasoning - reflective equilibrium. (Though we can distinguish "sensible-sounding" from actually sensible. Not everything that sounds sensible at first glance will prove to be so on further reflection. But you'd need to provide some argument to undermine the claim; it isn't inherently objectionable to pass judgment on what is or isn't sensible, so objecting to that argumentative structure is really odd.)
I think it's very strange to say that a premise that doesn't feel obvious to you "is not the sort of thing [you] usually expect to find on the forum." (Especially when the premise in question would seem obvious common sense to, like, 99% of people.)
If an analogy helps, imagine a post where someone points out that common sense requires us to reject SBF-style "double or nothing" existence gambles, and that this is a good reason to like some particular anti-fanatical decision theory. One may of course disagree with the reasoning, but I think it would be very strange for a bullet-biting Benthamite to object that this invocation of common sense was "not the sort of thing I usually expect to find on the forum." (If true, that would suggest that their views were not being challenged enough!)
(I also don't think it would be a norm violation to, say, argue that naive instrumentalism is a kind of "philosophical pathology" that people should try to build up some memetic resistance against. Or if it is, I'd want to question that norm. It's important to be able to honestly discuss when we think philosophical views are deeply harmful, and while one generally wants to encourage "generous" engagement with alternative views, an indiscriminate demand for universal generosity would make it impossible to frankly discuss the exceptions. We should be respectful to individual interlocutors, but it's just not true that every view warrants respect. An important part of the open exchange of ideas is openness to the question of which views are, and which are not, respectable.)
Sure, in principle. (Though I'd use a different term, like 'humane farms', to contrast with the awful conditions on what we call 'factory farms'.) The only question is whether second-order effects from accepting such a norm might generally make it harder for people to take animal interests sufficiently seriously -- see John & Sebo (2020).
The same logic would, of course, suggest there's no intrinsic objection to humanely farming extra humans for their organs, etc. (But I think it's clearly good for us to be appalled by that prospect: such revulsion seems part of a good moral psychology for protecting against gross mistreatment of people in other contexts. If I'm right about that, then utilitarianism will endorse our opposition to humane human farming on second-order grounds. Maybe something similar is true for non-humans, too -- though I regard that as more of an open question.)
Yeah, insofar as we accept biased norms of that sort, it's really important to recognize that they are merely heuristics. Reifying (or, as Scott Alexander calls it, "crystallizing") such heuristics into foundational moral principles risks a lot of harm.
(This is one of the themes I'm hoping to hammer home to philosophers in my next book. Besides deontic constraints, risk aversion offers another nice example.)
This is great!
One minor clarification (that I guess you are taking as "given" for this audience, but it doesn't hurt to make explicit) is that the kind of "Within-Cause Prioritization" found within EA is very different from that found elsewhere, insofar as it is still done in service of the ultimate goal of "cross-cause prioritization". This jumped out at me when reading the following sentence:
A quick reading of EA history suggests that when the movement was born, it focused primarily on identifying the most cost-effective interventions within pre-existing cause-specific areas (e.g. the early work of GiveWell and Giving What We Can)
I think an important part of the story here is that early GiveWell (et al.) found that a lot of "standard" charitable cause areas (e.g. education) didn't look very promising given the available evidence. So they actually started with a kind of "cause prioritization", and just settled very quickly on global poverty as the most promising area. This was maybe too quick, as later expansions into animal welfare and x-risk suggest. But it's still very different from the standard (non-EA) attitude of "different cause areas are incommensurable; just try to find the best charity within whatever area you happen to be personally passionate about, and don't worry about how it compares to competing cause areas."
That said, I agree with your general lesson that both broad cause prioritization and specific cross-cause prioritization plausibly still warrant more attention than they're currently getting!
Fun stuff!
The key question to assess is just: what credence should we give to Religious Catastrophe?
I think the right answer, as in Pascal's Mugging, is: vanishingly small. Do the arguments of the paper show that I'm wrong? I don't think so. There is no philosophical argument that favors believing in Hell. There are philosophical arguments for the existence of God. But from there, the argument relies purely on sociological evidence: many of the apes on our planet happen to accept a religious creed according to which there is Hell.
Here's a question to consider: is it conceivable that a bunch of apes might believe something that a rational being ought to give vanishingly low credence to?
I think it's very obvious that the answer to this question is yes. Ape beliefs aren't evidence of anything much beyond ape psychology.
So to really show that it's unreasonable to give a vanishingly low credence to Religious Catastrophe, it isn't enough to just point to some apes. One has to say more about the actual proposition in question to make it credible.
In what other context do philosophers think that philosophical arguments provide justified certainty (or near-certainty) that a widely believed philosophical thesis is false?
It probably depends who you ask, but fwiw, I think that many philosophical theses warrant extremely low credence. (And again, the mere fact of being "widely held" is not evidence of philosophical truth.)
No worries at all (and best wishes to you too!).
One last clarification I'd want to add is just the distinction between uncertainty and cluelessness. There's immense uncertainty about the future: many different possibilities, varying in valence from very good to very bad. But appreciating that uncertainty is compatible with having (very) confident views about whether the continuation of humanity is good or bad in expectation, and thus not being utterly "clueless" about how the various prospects balance out.
He expresses similar views in his recent interview with Peter Singer: