PeterSinger

As it happens, more or less simultaneously with this AMA, there is a Pea Soup discussion going on in response to a text about my views by Johann Frick.  My response to Johann is relevant to this question, even though it doesn't use the satisficing terminology.  But do take a look:

https://peasoupblog.com/2024/07/johann-frick-singer-without-utilitarianism-on-ecumenicalism-and-esotericism-in-practical-ethics/#comment-28935

I'm going to stop answering your questions now, as I've got other things I need to do as well as the Pea Soup discussion, including preparing for the next interview for the Lives Well Lived podcast I am doing with Kasia de Lazari-Radek.  If you are not familiar with it, check it out on Apple Podcasts, Spotify, etc.  We have interviews up with Jane Goodall, Yuval Harari, Ingrid Newkirk, Daniel Kahneman (sadly, recorded shortly before his death) and others.

But here is some good news - you can try asking your questions to Peter Singer AI!  Seriously - become a paid subscriber to my Substack, and it's available now (and, EAs, all funds raised will be donated to The Life You Can Save's recommended charities).  Eventually we will open it up to everyone, but we want to test it first and would value your comments.

https://boldreasoningwithpetersinger.substack.com/

Thanks for all the questions, and sorry that I can't answer them all.

Peter

In practice, no.  For example, I am willing to bite the bullet on saying that torture is not always wrong - consider the case of the terrorist who has planted a nuclear bomb in a big city that will detonate in a few hours unless we torture his small child in front of him.  How much weight should I give to the possibility that, for example, torture is always wrong, even if it is the only way to prevent a much greater amount of suffering?  I have no idea.  I'm not clear how - in the absence of a divine being who has commanded us not to do it - it could be wrong in such circumstances.  And I don't give any serious credence to the existence of such a being.

I give more credence to the idea that some insects, and a wider range of crustaceans than just lobsters and crabs, are sentient and therefore must be inside my moral circle.  But see my reply to "justsaying" above - I still have no idea what their suffering would be like, and therefore how much weight to give it.  (Of course, the numbers count too.)  

The things that most people can see are good, and which would therefore bring more people into the movement.  For example, finding the best ways to help people in extreme poverty, and ending factory farming (see my answer above about what I would do if I were in my twenties).

One common objection to what The Life You Can Save and GiveWell are doing - recommending the most effective charities to help people in extreme poverty - is that this is a band-aid, and doesn't get at the underlying problems, for which structural change is needed. I'd like to see more EAs engaging with that objection, and assessing paths to structural changes that are feasible and likely to make a difference.

It's really hard to know what relative weights to give chickens, and harder still with shrimp or insects.  The Rethink Priorities weights could be wrong by orders of magnitude, but they might also be roughly correct.  

Re the Meat Eater Problem (see Michael Plant's article in the Journal of Controversial Ideas) I don't think we will get to a better, kinder world by letting people die from preventable, poverty-related conditions.  A world without poverty is more likely to come around to caring about animals than one in which some are wealthy and others are in extreme poverty.

I don't claim that this is an adequate answer to the dilemma you sketch for someone with my views.  It's a good topic for further thought.

Good question, but I don't have a good answer.  My answer is more pragmatic than principled (see, for example, my previous response to Devon Fritz's question about what EA is getting most wrong).

Placing too much emphasis on longtermism.  I'm not against longtermism at all - it's true that we neglect future sentient beings, as we neglect people who are distant from us, and as we neglect nonhuman animals.  But it's not good for people to get the impression that EA is mostly about longtermism.  That impression hinders the prospects of EA becoming a broad and popular movement that attracts a wide range of people, and we have an important message to get across to those people: some ways of doing good are hundreds of times more effective than others.

My impression, by the way, is that this lesson has been learned, and longtermism is less prominent in discussions of EA today than it was a couple of years ago.  But I could be wrong about that.

If you want a more concrete example of what Parfit took to be an irreducibly normative truth, it might be this: the fact that, if I do X, someone will be in agony is a reason against doing X (not necessarily a conclusive reason, of course).

When Parfit said that if there are no such truths, nothing would matter, he meant that nothing would matter in an objective sense.  It might matter to me, of course.  But it wouldn't really matter.  I agree with that, although I can also see that the fact that something matters to me, or to those I love and care about, does give me a reason not to do it.  For more discussion, see the collection of essays I edited, Does Anything Really Matter? (Oxford, 2017).  The intention, when I conceived this volume, was for Parfit to reply to his critics in the same volume, but his reply grew so long that it had to be published separately, and it forms the bulk of On What Matters, Volume Three.

Getting too far ahead of where most people are - for example, by talking about insect suffering. It's hard enough, at present, to get people to care about chickens or fish. We need to focus on areas in which many people are already on our side, and others can be persuaded to come over.  Otherwise, we aren't likely to make progress, and without occasional concrete gains for animals, we won't be able to grow the movement.
