In this article I argue that moral offsetting is not inherently immoral and that, as long as it's implemented well, it can have a positive impact. I also explain that certain kinds of offsets wouldn't work but that others might, and I speculate that offsetting meat consumption would probably be feasible and have a positive impact. What do you think?

PS: If you hit a paywall, you can read the article for free here, but if you like it please consider tipping me.


Interesting article - thanks for sharing. My main problem with it has to do with the moral psychology piece. You write that: 

It's "disgusting and counterintuitive" for most people to imagine offsetting murder.

and 

"Most of us still live in extremely carnist cultures and are bombarded with burger ads and sights of people enjoying meat next to us all the time like it is perfectly harmless."

In my opinion, these two arguments together make meat offsets a bad idea. People are opposed to murder offsets (no matter how theoretically effective they may be) because murder feels like a deeply immoral thing to do. However, most people feel that eating meat is not deeply immoral - most people do it every day. I'd imagine folks react the same way to meat offsets as they do to carbon offsets. They think, "well, I know I probably shouldn't eat so much meat / consume so much carbon, but I'm not gonna stop, so this offset makes some sense". But this is the wrong way to think about eating meat (and perhaps consuming carbon, too, but that's beside the point). We want people to feel that eating meat is immoral; we want them to feel that it's a form of killing a sentient being. And the availability of an offset trivializes the consumption.

I'm on board with your consequentialist reasoning here, but I'm worried the availability of meat offsets may cause people's moral opinion on animal ethics to regress.

Forewarning: I have not read your post (yet).

I argue that moral offsetting is not inherently immoral

(I'm probably just responding to a literal interpretation of what you wrote rather than the intended meaning, but just in case and to provide clarity:) I'm not aware of anyone who argues that offsetting itself is immoral (though EAs have pointed out Ethical offsetting is antithetical to EA).

Rather, the claim that I've seen some people make is that (some subset of) the actions that would normally be impermissible (like buying factory farmed animal products or hiring an assassin) can be permissible if the person doing the action engages in the right kind of offsetting behavior, such as donating money to prevent factory farmed animal suffering or preventing an assassination.

I bring up the assassination example because we'd pretty much all agree that hiring an assassin is impermissible regardless of what offsetting behavior one does to try to right this wrong. For people who agree that hiring an assassin is wrong regardless of any offsetting behavior, but who think there are some other kinds of generally impermissible actions (e.g. buying animal products) that become permissible when one engages in a certain offsetting behavior, I'd be interested in hearing what you think the difference is that makes offsetting apply to the one behavior but not to the hiring of the assassin. (If this is what the OP blog post does, let me know and I'll give it a read.)

I'm also curious if there are less controversial examples than buying animal products where most people agree that offsetting behavior is sufficient to make a generally impermissible action permissible.

As you imagined, the blog post does respond to your argument. If you don't think the response is satisfactory, I'd be curious to hear your thoughts :)

[anonymous]

ACE has 4 top charities:

Wild Animal Initiative -> not involved with livestock, irrelevant to this discussion.

Faunalytics -> involved in research and making research more accessible; the impact is valuable but very hard to measure in the way that makes "offsetting" work as a concept.

Good Food Institute -> a plant-based-alternative industry lobbying group, probably useless, considering there is no association between rising plant-based-meat sales and lowered meat sales.

The Humane League -> mainly focused on "improving" livestock welfare (not decreasing the number of animals farmed), and has been a major force in the "cage-free" push even though industrial cage-free egg farms tend to have higher mortality rates (meaning more farmed animals per kcal) than conventional farms. Almost certainly net negative and should be tossed to the side.

You complain about hypotheticals far removed from reality, and then offer one up. There is no EA-recognized organization that you could possibly use to offset the number of animals raised for your animal-product consumption. Donating to an ACE top charity means donating to one of these four: one bad, one useless, one irrelevant, and one too difficult to quantify the impact of. People who claim to be offsetting their meat consumption (is anyone actually doing this?) are not.

I agree that donating to an ACE top charity doesn't mean offsetting. I didn't mean to suggest that; I'm sorry if it sounded like that. What I mean is that it should, in principle, be possible to offset meat consumption. I didn't get into the practicalities of how this would actually work for the sake of brevity, but I can do it here:

Imagine a food delivery app that works like this:

  • When people buy vegan/vegetarian food, in the checkout process they have an option to donate to a meat offset fund. This option can be checked by default with a suggested donation amount.
  • When people are ordering food with meat, in the checkout process they have the option to offset their meal, which means basically donating an amount equivalent to their order to the meat offset fund.
  • Sometimes, randomly, when somebody clicks the "proceed with order" button and they have meat in their order, they are prompted with a pop-up telling them: "You were randomly selected for a free vegan meal! If you accept the offer, your $X order will be cancelled and you will get a voucher for $X that expires in an hour and can be used to order vegan food."

I think this app would come quite close to actually implementing a legitimate meat offsetting feature. Every time a meat eater takes the offer, they give up a meat meal and eat vegan instead.
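The mechanism above can be sketched in a few lines of code. This is a hypothetical illustration, not part of the original post: the class name, the `swap_probability` parameter, and the rule that the fund must fully cover a voucher before offering it are all my assumptions about how such an app might work.

```python
import random


class MeatOffsetFund:
    """Hypothetical sketch of the offsetting mechanism described above.

    Checkout donations accumulate in a fund; each meat order has some
    random chance of being swapped for a vegan voucher of equal value,
    paid for out of the fund.
    """

    def __init__(self, swap_probability=0.05, rng=None):
        self.balance = 0.0                      # donated money not yet spent
        self.swap_probability = swap_probability  # chance a meat order gets the offer
        self.rng = rng or random.Random()

    def donate(self, amount):
        """Optional checkout donation to the meat offset fund."""
        self.balance += amount

    def maybe_offer_voucher(self, order_total):
        """Called when a meat order is placed.

        Randomly offers a vegan voucher worth the order total, but only
        if the fund can cover it. Returns the voucher value, or None.
        """
        if self.balance >= order_total and self.rng.random() < self.swap_probability:
            self.balance -= order_total  # fund pays for the replacement meal
            return order_total
        return None


# Example: with swap_probability=1.0 the offer always fires while funded.
fund = MeatOffsetFund(swap_probability=1.0)
fund.donate(20.0)
print(fund.maybe_offer_voucher(15.0))  # voucher offered: 15.0
print(fund.maybe_offer_voucher(15.0))  # None: only 5.0 left in the fund
```

One design point worth noting: gating the offer on the fund balance is what makes this "offsetting" in a strict sense, since every vegan voucher is fully paid for by prior donations rather than subsidized by the platform.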
