I feel pretty confused about whether I, as an effective altruist, should be vegetarian/vegan (henceforth abbreviated veg*n). I don’t think I’ve seen anyone explicitly talk about the arguments which feel most compelling to me, so I thought I’d do that here, in a low-effort way.
I think that factory farming is one of the worst ongoing moral atrocities. But most of the arguments I’ve heard for veg*nism, which I found compelling a few years ago, hinge on the effects that my personal consumption would have on decreasing factory farming (and sometimes on climate change). I now don’t find this line of thinking persuasive - my personal consumption decisions just have such a tiny effect compared to my career/donation decisions that it feels like I shouldn’t pay much attention to their direct consequences (beyond possibly donating to offset them).
But there are three other arguments which seem more compelling. First is a deontological argument: if you think something is a moral atrocity, you shouldn’t participate in it, even if you offset the effects of your contribution. In general, my utilitarian intuitions are much stronger than my deontological ones, but I do think that following deontological principles is often a very good heuristic for behaving morally. The underlying reason is that humans by default think more naturally in terms of black-and-white categories than shades of grey. As Yudkowsky writes:
Any rule that's not labeled "absolute, no exceptions" lacks weight in people's minds. So you have to perform that the "Don't kill" commandment is absolute and exceptionless (even though it totally isn't), because that's what it takes to get people to even hesitate. To stay their hands at least until the weight of duty is crushing them down. A rule that isn't even absolute? People just disregard that whenever.
Without strong rules in place it’s easy to reason our way into all sorts of behaviour. In particular, it’s easy to underestimate the actual level of harm that certain actions cause - e.g. thinking of the direct effects of eating meat but ignoring the effects of normalising eating meat, or normalising “not making personal sacrifices on the basis of moral arguments”, or things like that. And so implementing rules like “never participate in moral atrocities” sends a much more compelling signal than “only participate in moral atrocities when you think that’s net-positive”. That signal helps set an example for people around you - which seems particularly important if you spend time with people who are or will become influential. But it also strengthens your own self-identity as someone who prioritises the world going well.
Then there’s a community-level argument about what we want EA to look like. Norms about veg*nism within the community help build a high-trust environment (since veg*nism is a costly signal), and increase internal cohesion, especially between different cause areas. At the very least, these arguments justify not serving animal products at EA conferences.
Lastly, there’s an argument about how I (and the EA community) are seen by wider society. Will MacAskill sometimes uses the phrase “moral entrepreneurs”, which I think gestures in the right direction: we want to be ahead of the curve, identifying and building on important trends in advance. I expect that veg*nism will become much more mainstream than it currently is; insofar as EA is a disproportionately veg*n community, this will likely bolster our moral authority.
I think there are a few arguments cutting the other way, though. I think one key concern is that these arguments are kinda post-hoc. It’s not necessarily that they’re wrong, it’s more like: I originally privileged the hypothesis that veg*nism is a good idea based on arguments about personal impact which I now don’t buy. And so, now that I’m thinking more about it, I’ve found a bunch of arguments which support it - but I suspect I could construct similarly compelling arguments for the beneficial consequences of a dozen other personal life choices (related to climate change, social justice, capitalism, having children, prison reform, migration reform, drug reform, etc). In other words: maybe the world is large enough that we have to set a high threshold for deontological arguments, in order to avoid being swamped by moral commitments.
Secondly, on a community level, EA is the one group that is most focused on doing really large amounts of good. And so actually doing cost-benefit analyses to figure out that most personal consumption decisions aren’t worth worrying about seems like the type of thing we want to reinforce in our community. Perhaps what’s most important to protect is this laser-focus on doing the most good without trying to optimise too hard for the approval of the rest of society - because that's how we can keep our edge, and avoid dissolving into mainstream thinking.
Thirdly, the question of whether going veg*n strengthens your altruistic motivations is an empirical one which I feel pretty uncertain about. There may well be a moral licensing effect where veg*ns feel (disproportionately) like they’ve done their fair share of altruistic action; or maybe parts of you will become resentful about these constraints. This probably varies a lot for different people.
Fourthly, I am kinda worried about health effects, especially on short-to-medium-term energy levels. I think it’s the type of thing which could probably be sorted out after a bit of experimentation - but again, from my current perspective, the choice to dedicate that experimentation to maintaining my health instead of, say, becoming more productive feels like a decision I’d only make if I were privileging the intervention of veg*nism over other things I could spend my time and effort on.
I don’t really have any particular conclusion to this post; I wrote it mainly to cover a range of arguments that people might not have seen before, and also to try and give a demonstration of the type of reasoning I want to encourage in EA. (A quick search also turns up a post by Jess Whittlestone covering similar considerations.) If I had to give a recommendation, I think probably the dominant factor is how your motivational structure works, in particular whether you’ll interpret the additional moral constraint more as a positive reinforcement of your identity as an altruist, or more as something which drains or stresses you. (Note though that, since people systematically overestimate how altruistic they are, I expect that most people will underrate the value of the former. On the other hand, effective altruists are one of the populations most strongly selected for underrating the importance of avoiding the latter.)
Such motivational considerations seem relevant because willpower and attention budgets are limited, and our altruism-directed activities (and habits, etc.) draw from those budgets.
I concede that this argument goes through probabilistically, but I feel like people overestimate its effect.
Almost none of the non-vegetarian EAs would want to lock in animal suffering for the long-term future, so the argument that personal veg*nism makes a difference to s-risks is a bit conjunctive. It seems to rely on the hidden premise that humans will attain control over the future, but that EA values will die out or have only a negligible effect. That's possible, but it doesn't rank among the scenarios I'd consider likely.
I think the trajectory of civilization will gravitate toward one of two attractors: (1) people's "values" will become less and less relevant as Moloch dynamics accelerate, or (2) people's "values" will be more in control than ever before.
If (1) happens, it doesn't matter in the long run what people value today.
If (2) happens, any positive concern for the welfare of nonhumans will likely go far. For instance, in a world where it's technologically easy to give every person what they want without side effects, even just 10% of the population being concerned about nonhuman welfare could, via compromise, ensure that society stops causing harm to animals (or digital minds).
You may say "but why assume compromise instead of war or value-assimilation where minority values die out?"
Okay, those are possibilities. But like I said, it makes the claim more conjunctive.
Also, there are some reasons to expect altruistic values to outcompete self-oriented ones. (Note that this blog post was written before Open Phil, before FTX, etc.) (Relatedly, we can see that, outside of EA, most people don't seem to care about or recognize how difficult it is for humans to attain control over the long-term future.)
Maybe we live in an unlucky world where some kind of AI-aided stable totalitarianism is easy to bring about (in the sense that it doesn't require unusual degrees of organizational competence or individual rationality, but people can "stumble" into a series of technological inventions that opens the door to it). Still, in that world, there are again some non-obvious steps from "slightly increasing the degree to which the average Westerner cares about nonhuman animals" to "preventing AI-aided dictatorship with bad values." Spreading concern for nonhuman suffering likely has a positive effect here, but it looks unlikely to be very important compared to other interventions. Conditioning on that totalitarian lock-in scenario, it seems more directly useful to promote norms around personal integrity (to prevent people with dictatorial tendencies from attaining positions of influence/power) or to work on AI governance.
I think there are s-risks we can tractably address, but I see the biggest risks around failure modes of transformative AI (technical problems rather than problems with people's values).
Among interventions around moral circle expansion, I'm most optimistic about addressing risks of polarization – preventing concern for the whole cause area from becoming "something associated with the out-group," something that people look down on for various reasons. (For instance, I didn't like this presentation.) In my ideal scenario, all the non-veg*n EAs would often put in a good word for the intentions behind vegetarianism or veganism and emphasize agreement with the view that sentient minds deserve our care. (I largely see this happening already.)
(Arguably, personal veganism or vegetarianism is a great way to prevent concern for nonhumans from becoming "something associated with the out-group." [Esp. if the people who go veg don't promote their diets as an ideology in an off-putting fashion – otherwise it can backfire.])