Silvan

18 karma · Joined

Comments (4)

Maybe I'm reading it wrong, but isn't the current table in "Counting the number of neurons" wrong?

Animal advocates typically ground their moral claims in sentience. This similarity with humans can easily justify some form of similar (moral) treatment of animals and humans. And under the assumption that pain and pleasure are the only things with moral worth, the extremely high number of sentient animals and their suffering has an overwhelming effect on moral considerations.

As a result, if one considers only headcounts and suffering, the obvious conclusion is: total animal welfare >>> total human welfare.

But one needs to keep in mind that the immediate conclusion of the “unrestricted sentience approach” is not only that animal welfare >>> human welfare. It is rather:

total invertebrate welfare >>>>>>>>>> total welfare of all other animals >>> total human welfare.

This follows for precisely the same reason: total invertebrate welfare is also overwhelming, due to invertebrates' astronomical numbers.

I don’t think most people realise this or really adjust their actions/positions accordingly. If your starting point is exclusively sentience and hedonism, your vote in the debate should not only be “100% agree with 100 million dollars spent on animal welfare”; it should also be “99.99% agree with 100 million dollars spent on invertebrate welfare”.

One might reasonably conclude that this moral-weight framework must be wrong: “Unless you have confidence in the ruler's reliability, when you use a ruler to measure a table, you may just as well use the table to measure the ruler.”

There might be other goods that cannot be adequately reduced to pain and pleasure: friendship, knowledge, play, reason, and so on. These may carry even more moral weight than mere pleasure and pain. Humans might have a much higher capacity to actualize these goods and therefore a higher status in virtue of their nature. Some animals can also actualize these goods within their own limited capacities, which can justify some form of moral hierarchy between mammals, birds, and invertebrates. This would in turn justify more (or at least some) attention to non-invertebrate welfare, and likewise more (or at least some) attention to human welfare, despite the overwhelming numbers of animals.


Thanks for your honesty, Engin. This section truly reflects my doubts about animal welfare, which I guess have little to do with cost-effectiveness or monitorability, but more with the shadow of the repugnant conclusion: the fear that we could end up prioritizing moths over humans simply because we keep insisting that the only thing that carries value in the world is the arithmetic of pain and pleasure.

I tried to express some of these fears in https://forum.effectivealtruism.org/posts/QFh6kiwv36mR8QSiE/are-we-as-rigorous-in-addressing-utilitarianism-s 

Thank you very much, Sean, for your response. I especially found "Minimalist extended very repugnant conclusions are the least repugnant" interesting, though I feel it still somewhat misses the broader point (or bites the wrong bullet) about how you can't really do "arithmetic with phenomenology" to begin with, in the way I think Parfit makes apparent in Overpopulation and the Quality of Life.

Good luck with your plans for quantifying our conscious experiences so that they can be used in a quantitative decision-making process, though I'm afraid anything close to that is going to be very hard until we somehow solve the "easy problem" of consciousness (and I'm not sure even then...).

Thanks @Richard Y Chappell🔸, I truly enjoyed that one (you’re right that all this leans more towards ethical theory or normative ethics than metaethics; my apologies for the slip). I particularly resonated with:

That said, I do think the view contains some under-appreciated insights that are worth taking on board, at least under the remit of “moral uncertainty”. For those concerned about the Repugnant Conclusion, I think perfectionism at least offers a better alternative than bleak “negative” views that deny any positive value to our existence.

Moreover, I find the implicit critique of hedonism extremely compelling, and find that reflecting on Nietzschean perfectionism moves me more strongly towards some form of objective list theory of well-being. I think welfare objectivism is a view that EAs ought to take very seriously, and it especially ought to lead us to want to (i) rule out wireheading and other “cheap” hedonistic futures as involving unacceptable axiological risk, given how poorly such futures score on plausible non-hedonistic views…

I completely agree that moving towards an objective list theory may be not only plausible but crucial, given the risk of overlooking the possibility that it is closer to the truth.

In any case, this is precisely the type of topic and nuance that I find lacking in most EA discussions; I find it surprising that considering such important questions is often not even seen as a possibility.

Are posts like this, then, a rarity within the EA context? Are there any sub-communities, study groups, or institutions that focus seriously on these types of issues? (I assume there aren’t, as you would likely have mentioned them, but I remain surprised.)

Additionally, I would greatly appreciate any other references to essays or articles that explore different types of perfectionism as a potential solution to some of the challenges posed by the repugnant conclusion.

Thanks again!