
[This is a cross-post from my blog, which you can find here]

The EA space is certainly a unique intersection of people from many walks of life, each with their own priorities and goals. However, an interesting contradiction arose in a recent conversation I had over dinner with friends. As I state in the conclusion, this may be either a criticism of longtermism or of vegetarian/veganism, depending on your perspective.

If you are someone who subscribes to longtermism (the idea that future people hold moral weight equal to that of present people, and that we should bias our actions accordingly toward creating future growth), then it seems to me that it would actually be non-optimal of you not to eat the most convenient/delicious/nutritious meal you can find, whenever possible, and without much regard for animal welfare.

ANIMAL WELFARE VS HUMAN PREFERENCES

The argument goes like this: whatever people may do to make future people better off, they will probably do more of it, or do it better, if they are more satisfied and happier. There are some studies on this (link, link, link) suggesting the difference might be somewhere between 10 and 20%. Anecdotally, just look at the sometimes ludicrous lengths that tech companies go to in order to please their employees. This is not altruism; it's just good business.

So okay, great. We agree that happy people are more productive. Now let’s consider this within the domain of diet choice. 

Is veganism/vegetarianism a choice that makes people happy? Maybe for some people, but usually not in a vacuum. If you truly do enjoy eating vegetarian/vegan more than a meat-based diet on the basis of taste and convenience alone, more power to you. However, it seems that around the world there is a strong revealed preference for people to eat more meat as it becomes more available. We can tell this by looking at the rate of meat consumption vs. GDP per capita.

 

Many vegetarians/vegans choose their diet for religious or moral reasons, but messages that our inherent preference to eat meat is morally bad can be harmful: they don't actually change how a meatless diet tastes, they simply add costs like guilt and disgust in an attempt to tip the scale in people's choices, without making any material change to either diet.

Okay, so we've now established that there is a large cost to giving up meat for the many people whose inherent preference is to eat it. But the key idea here, and where the longtermist perspective becomes important, is that this cost is compounding.

Consider the following.

A researcher orders and eats a chicken sandwich for lunch every day. He loves the chicken sandwich; it's one of the best parts of his day, and he extracts many utils from it that allow him to exercise more willpower and work 5% longer or harder each day. His research is completed and implemented 5% more quickly, and improves people's lives around the world by .0001 utils on average. Across a world population of roughly 8 billion, that's an 800,000 util increase in well-being. Some of these utils will go to other researchers, who will continue the cycle, laundering utils over and over again into untold riches for future generations.

Now consider that the 100 or so chickens it takes to make the researcher's 365 sandwiches were spared instead. Even if we weigh chicken utils as equal to human utils, and even if the total number of utils they experience is the same 800,000, what do they do with them? A happy chicken doesn't benefit humanity any more than a sad one, and a sad chicken doesn't necessarily do psychic damage to humanity (although some people may elect to do psychic damage to themselves on the chickens' behalf). Benefits to animals are a one-time event. Therefore, the researcher's compounding benefit will always win in the long run.
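To make the compounding claim concrete, here is a minimal toy model: the same 800,000 utils either go to chickens and stop there, or go to humans and compound because a fraction is reinvested into further productivity each year. The 2% growth rate and 100-year horizon are arbitrary illustrative assumptions; only the 800,000 figure comes from the example above.

```python
# Toy model of the compounding claim (growth rate and horizon are illustrative
# assumptions): the same 800,000 utils either (a) go to chickens and stop there,
# or (b) go to humans and compound, because a fraction is reinvested into
# further productivity each year.

GROWTH_RATE = 0.02   # assumed annual rate at which human utils generate more utils
YEARS = 100          # assumed time horizon

def compounded_human_utils(seed: float = 800_000) -> float:
    """Utils that keep producing more utils, like capital earning interest."""
    return seed * (1 + GROWTH_RATE) ** YEARS

def one_time_chicken_utils(seed: float = 800_000) -> float:
    """Utils experienced once, with no downstream effect on future welfare."""
    return seed

if __name__ == "__main__":
    print(f"Human utils after {YEARS} years: {compounded_human_utils():,.0f}")  # ~5.8 million
    print(f"Chicken utils (unchanged):       {one_time_chicken_utils():,.0f}")  # 800,000
```

The exact numbers don't matter much: any positive reinvestment rate means the compounding stream eventually dwarfs any fixed one-time amount, which is the point of the example.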

From this example, we can tell that every util that falls into human hands is worth many, many more utils experienced by birds in the bush. Animals may very well deserve to be included in the "moral circle", but not to the point of excluding future humans! Framing the problem in this way reveals a discrepancy in people's sensibilities. If we ought to be so concerned about future welfare, why obsess over present suffering, especially suffering that can be written off as easily as that of animals?

Okay, so the large cost to going vegetarian/vegan is compounding while the benefits to animals are not. In my mind, that nullifies pretty much the entire animal welfare argument, but there are still other costs to eating meat that we haven't covered, and which do compound, since they are costs to humans. 

COSTS TO HUMANS

Maybe it's not about animal suffering, but rather that veganism/vegetarianism is better for the world. Even the environmental impact of going vegetarian/vegan is not huge when compared to the opportunity cost in lost productivity and added inconvenience. In Will MacAskill's book, What We Owe the Future, he cites the statistic that going vegetarian averts 0.8 tonnes of CO2 per year. At Terrapass, a carbon offset of 1,000 pounds (about 0.45 tonnes) is available for as low as $7.49 a month. That's about $13.22 a month to have the same impact on the environment as going vegetarian. I think most people would agree that the cost of a Netflix subscription is worth it to continue to eat meat. If Terrapass focused on efforts that accelerated green energy technology, I'm sure they could get the price even lower (discussed in a previous post here). Therefore, if we assume the low end of the productivity effect range I gave above (10%) and you think your work creates more than $140 a month in positive externalities, you can and should go ahead and fulfill your preference.
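To spell out the arithmetic, here is a small sketch using the figures above ($7.49 per 1,000 lb offset, 0.8 tonnes averted per year, a 10% productivity effect). The break-even step at the end is one way to arrive at the roughly $140-a-month threshold; treat it as a rough sketch rather than a precise calculation.

```python
# Back-of-the-envelope check of the offset-vs-vegetarianism numbers above.
# The $7.49 offset price, 0.8 t/yr averted, and 10% productivity figure come
# from the post; the final break-even step is a reconstruction, not something
# stated explicitly above.

LB_PER_TONNE = 2204.6

offset_price_usd = 7.49                      # Terrapass price for a 1,000 lb offset
offset_size_tonnes = 1000 / LB_PER_TONNE     # ~0.45 tonnes per offset
veg_averted_tonnes = 0.8                     # tonnes of CO2 averted by going vegetarian

offsets_needed = veg_averted_tonnes / offset_size_tonnes        # ~1.76
matching_offset_cost = offsets_needed * offset_price_usd        # ~$13.21

productivity_effect = 0.10                   # low end of the 10-20% range above
break_even_output = matching_offset_cost / productivity_effect  # ~$132

print(f"Offsets needed:       {offsets_needed:.2f}")
print(f"Matching offset cost: ${matching_offset_cost:.2f}")
print(f"Break-even output:    ${break_even_output:.2f} per month")
```

The break-even lands at about $132 a month of work value, which is roughly where the $140 figure above comes from after rounding up.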

 

Graph illustrating the power of funding innovation from the Founders Pledge Climate and Lifestyle Report

This leads me to health and nutrition claims. I think that nutrition is still a rather woo-woo area of science today, and that most diet studies do not show significant health results when compared to a control diet. Therefore, I rate the health claims on either side of the meat-eating divide as a wash. So let's assume you are equally healthy eating meat vs. plants. However, that does not mean you are equally productive. Eating meat may be just as healthy as eating veggies, but it is fulfilling your preferences that makes you happy. If the researcher from the example above didn't like the taste of chicken, it's likely that he would not have been able to leverage it into working harder for the day.

So, to sum up, the longtermist viewpoint on whether or not you should eat meat has essentially nothing to do with animal welfare, since any benefit to animals is a one-time event. Viewed on long time horizons, it is a balance between the negative externalities to the climate and the positive externalities to work productivity, which may benefit the future. Even if you elevate current animal experience to the same level as current human experience, humans are much better at carrying utility forward to future generations, while animals are simply inefficient by comparison. I don't believe that trading future humans for present animals is justifiable, and thus, if it is your preference to eat meat, I find it likely that you should continue to do so.

As always, if you think differently, please feel free to dunk on me in the comments below. Maybe this is more of a criticism of longtermism than of vegetarianism/veganism, depending on your perspective!
 

FOOTNOTE:

Many vegetarians/vegans might argue that continuing to consume meat perpetuates and normalizes the way animals suffer in factory farming, reframing the one-time benefit to animal welfare as a slippery slope, but I don't think this is a reality that is likely to persist much longer. I think the advent of lab-grown meat and the breeding of dumber animals will allow us to continue our consumption at a lower unit cost of animal welfare. There is some pretty compelling data that lab-grown alternatives will be available in supermarkets soon, with some reports estimating that 35% of all meat will be cultured by 2040, and that demand for plant-based meats may have already peaked. However, when people do adopt cultured meats, I think it will largely be because they are more delicious, more consistent, and cheaper, and not because of an ethical appeal. I want to eat an A5 Wagyu steak every day, and I think it'll be sooner rather than later that the easiest way to produce such a luxury will be via cultured meat.

Q: Can't we just change our preferences? 

I don’t really think so, and it seems even less likely that we could change our preferences without imposing costs like guilt and disgust to tip the scale. If it is your taste preference to eat meat, then it seems unlikely that you can consciously decide to prefer veggies or meat alternatives, and this is probably why veg alternatives so often try to emulate the taste of meat. I’m certainly not that in control of my preferences, but if you can will your will, more power to you.

Another example:

If your grandfather had worked one extra year in his life, how much richer might you be today? Conversely, if he had spent one of his years finding the nearest vegan restaurant, deliberating over the ethics of the Impossible Burger vs. the un-burger, spending more money on these hard-to-find ethical alternatives, watching factory-farm documentaries and feeling guilty about his animal nature, and reaping fewer utils from worse-tasting food, how much poorer do you think you might be?
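For a concrete (and entirely made-up) version of this: suppose the extra year of work was worth $30,000, and that value compounded at 5% per year over the 60 years between your grandfather and you. Both numbers are arbitrary assumptions; the sketch only illustrates the compounding mechanic.

```python
# Illustrative only: how one extra year of work compounds across generations.
# The $30,000 value of a year's labor, the 5% annual growth rate, and the
# 60-year gap are all assumptions made for this sketch, not figures from the post.

extra_year_value = 30_000    # assumed value of the grandfather's extra year of work
growth_rate = 0.05           # assumed annual rate of return / reinvestment
years = 60                   # assumed gap between grandfather and grandchild

future_value = extra_year_value * (1 + growth_rate) ** years
print(f"Value to the grandchild after {years} years: ${future_value:,.0f}")  # ~$560,000
```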

 


 


Comments (13)



A couple of thoughts:

  • This argument doesn't seem specific to longtermism. You could make the same case for short-term animal welfare. If you'll be slightly more effective at passing sweeping changes to mitigate the harms of factory farming if you eat a chicken sandwich every day, the expectation of doing so is highly net positive even if you only care about chickens in the near future.

  • This argument doesn't seem specific to veganism. You could make the same case for being a jerk in all manner of ways. If keying strangers' cars helped you relax and get insight into the alignment problem, then the same reasoning might suggest you should do it.

This isn't to say the argument is wrong, but I find the implications very distasteful.

I agree! There seems to be a utility monster problem when weighing Longtermist stuff against moral good that has no compounding value. This is why I added the line about not being sure whether this should be weighed as a criticism against Longtermism or against veganism.

It sounds like you didn't understand Derek's comment. With his points in mind your response sounds like:

I agree! There seems to be a utility monster problem when weighing short-term chicken welfare against moral good that has no compounding value. This is why I added the line about not being sure whether this should be weighed as a criticism of caring about short-term chicken welfare or against stopping keying strangers' cars even if it helps you relax.

It seems to me like your argument proves too much, and Derek's comment helps reveal that, but it doesn't seem like this comment of yours acknowledges that despite the initial "I agree".

I'm with both of you in my discomfort, though I'm not sure I would consider this a reason to be suspicious of "longtermism" so much as "bullet-biting deontic strong longtermism", but I think there are lots of very counterintuitive implications of the latter anyway.

These feel like classic arguments. Such arguments hold some weight. But (in agreement with Derek Shiller) I think they are more an argument against a very strong form of longtermism where you are actively sacrificing everything else we care about in order to make the future better. Such lines of reasoning seem bad-in-general because errors in reasoning magnify tremendously. Instead, we should mix lots of different worldviews into our moral strategy, preferring actions which are robustly good across many worldviews.

I also have noticed (in the process of eating less animal products myself) that there's a second-order effect in how my brain thinks about ethics and altruism when it's not doing cognitive dissonance three times a day. I feel like a more pure-hearted person, able to think more clearly about what to do in the world, when not eating animal products. So far, that effect has been quite positive and has swamped the first-order annoyance of having to change my diet. (This works for recycling as well -- I decided that even though recycling is largely useless for the environment, it helps my sense of being a moral person to recycle and thus I keep doing it. Contrapositively, buying offsets doesn't work for me very well.)

The second paragraph really hits on the nose how I feel, without having ever been able to put it into words - regarding both eating less animal products and recycling.

And offsets too FWIW. Something about avoiding doing something bad makes me feel like a good person, in a way that doing something bad and then making up for it by doing something good just doesn't. 

I'm not sure if the two events are just too far apart in time, or if my EA/rational side just kicks in and I can't feel good about donating to offset a particular thing instead of to the most effective thing. Or maybe I just can't emotionally get over my sense of "can't undo the bad thing".

Personally, I trust longtermists who don’t make any diet change for animals less with the future, although veganism seems farther than necessary. I think people should be sensitive to ongoing moral catastrophes like factory farming, and an astronomical number of future moral patients (including artificial sentience) could be vulnerable and have limited agency like today's farmed animals. In expectation, I think changing your diet increases the concern you have for future moral patients with limited agency by increasing their salience and reducing cognitive dissonance, allowing you to weigh their interests more fairly. So, basically virtue consequentialist reasons.

Veganism in the longtermist community also increases the salience of future moral patients with limited agency.

Agency adds overhead and can become a barrier for optimizing a mind for value or disvalue since they can decide to do otherwise and even undermine a powerful agent's efforts generally, so it's pretty plausible the most efficient instantiations of value and disvalue will have (and be designed with) limited agency and that they will dominate future value/disvalue.

I just want to give some more comments / counterpoints:

Firstly, I think this post may be somewhat exaggerating the actual magnitude by which these diets differ in taste pleasure (on average). My intuition would be that it's actually quite small (at least after an initial adjustment period) and relatively insignificant compared to other changes people could potentially implement to make themselves 'happier' on an everyday level.
(Note that this is ignoring considerations of convenience since they aren't mentioned much in your argument but I'd be happy to comment on that as well.)

Also, I don't find myself convinced that one's preferences can't change in this case. This is related to the adjustment period I mentioned above. From personal and anecdotal experience I think many things tend to 'grow on you' over time, including foods, and this effect seems much more important than 'consciously deciding' to change your preferences. Indeed, it does seem pretty implausible that the latter would work in isolation, but I think other factors (like adjustment) are relevant here.

Yeah, I don't disagree that preferences can change, but only with the important caveat that they cannot change without costs, which I think you acknowledge in your sentence about an initial adjustment period. As for the magnitude of the difference, I don't really think it matters: it could be 5%, it could be .1%, it could be .01%. What's important is that it will be forever laundered into more utility by other humans, while this is not true of utils that go to animal welfare. So if you accept that meat is a preference (even a small one), and that fulfilling preferences makes people work even a little bit harder/better, then eventually it will always outgrow the animal welfare utils in the long term. Like I said, I don't know if this conflict should downgrade longtermism or veganism, but I just thought it needed pointing out, as it confused me.

Ah yes, I see now that your argument rests on fewer premises than I thought.

Firstly I would echo what Devin said above about this being a flaw of "bullet-biting strong deontic longtermism". One could seemingly justify basically any action that marginally increases productivity on those grounds (even for a very temporary period of time). That being said, I think there are probably significant positive flow-on effects from veganism too. For one thing, it may increase societal moral progress in expectation. Similarly, there is evidence to suggest that at an individual level it reduces cognitive biases related to speciesism and increases one’s moral consideration for non-humans (as Michael noted above). Compounding benefits from effects like these may well outweigh those of productivity increases.

Also, it’s very unclear to me that being vegan would actually reliably decrease the average person’s productivity, even if it is initially a revealed preference. Obviously this is ultimately an empirical question. However one could make a priori arguments in the other direction too. E.g. perhaps by reducing cognitive dissonance, people tend to feel more happy, and therefore are more productive. Or perhaps caring about a cause like animal welfare increases motivation and feelings of purpose marginally throughout one’s life. This is not to say I agree with any of those speculations, but just to point out that they could be made.

Finally, I think there are probably sound deontological reasons to be vegan, which are important under moral uncertainty, but I won’t get into that too much in this comment. Naturally the same would apply for a lot of the other counterintuitive implications that this form of longtermism would have.

Interesting post. Just want to chime in with a comment that I think you're overconfident in cell-cultured meat (though I don't blame you; there's been a lot of boosterism). It's possible it won't reach price parity and be a real contender in the marketplace. We have to try, and time will tell.

Demand for plant-based meat having peaked is evidence against meat consumption declining, not in favour of it. And I don't think any serious unbiased analysts have suggested that lab-grown meat would exceed 10% of global supply by 2040, if it ever becomes viable. See https://forum.effectivealtruism.org/posts/2b9HCjTiFnWM8jkRM/forecasts-estimate-limited-cultured-meat-production-through for a much more typical and pessimistic take.
