[Status: latest entry in a long-running series]

My last post on truthseeking in EA vegan advocacy got a lot of comments, but there’s one in particular I want to highlight as highly epistemically cooperative. I have two motivations for this:

  • having just spotlighted some of the most epistemically uncooperative parts of a movement, it feels just to highlight good ones
  • I think some people will find it surprising that I call this comment highly truthseeking and epistemically cooperative, which makes it useful for clarifying how I use those words. 

In a tangential comment thread, I asked Tristan Williams why he thought veganism was more emotionally sustainable than reducetarianism. He said:

Yeah sure. I would need a full post to explain myself, but basically I think that what seems to be really important when going vegan is standing in a certain sort of loving relationship to animals, one that isn’t grounded in utility but instead a strong (but basic) appreciation and valuing of the other. But let me step back for a minute

I guess the first time I thought about this was with my university EA group. We had a couple of hardcore utilitarians, and one of them brought up an interesting idea one night. He was a vegan, but he’d been offered some mac and cheese, and in similar thinking to above (that dairy generally involves less suffering than eggs or chicken, for example) he wondered if it might actually be better to take the mac and donate the money he would have spent to an animal welfare org. And when he roughed out the math, sure enough, taking the mac and donating was significantly the better option.

But he didn’t do it, nor do I think he changed how he acted in the future. Why? I think it’s really hard to draw a line in the sand that isn’t veganism that stays stable over time. For those who’ve reverted, I’ve seen time and again a slow path back, one where it starts with the less bad items, cheese is quite frequent, and then naturally over time one thing after another is added to the point that most wind up in some sort of reducetarian state where they’re maybe 80% back to normal (I also want to note here, I’m so glad for any change, and I cast no stones at anyone trying their best to change). And I guess maybe at some point it stops being a moral thing, or becomes some really watered down moral thing like how much people consider the environment when booking a plane ticket.

I don’t know if this helps make it clear, but it’s like how most people feel about harm to younger kids. When it comes to just about any serious harm to younger kids, people are generally against it, like super against it, a feeling of deep caring that to me seems to be one of the strongest sentiments shared by humans universally. People will give you some reasons for this, e.g. “they are helpless and we are in a position of responsibility to help them,” but really it seems to ground pretty quickly in a sentiment of “it’s just bad”.

To have this sort of love, this commitment to preventing suffering, with animals to me means pretty much just drawing the line at sentient beings and trying to cultivate a basic sense that they matter and that “it’s just bad” to eat them. Sure, I’m not sure what to do about insects, and wild animal welfare is tricky, so it’s not nearly as easy as I’m making it seem. And it’s not that I don’t want to have any idea of some of the numbers and research behind it all, I know I need to stay up to date on debates on sentience, and I know that I reference relative measures of harm often when I’m trying to guide non-veg people away from the worst harms. But what I’d love to see one day is a posturing towards eating animals like our posturing towards child abuse, a very basic, loving expression that in some sense refuses the debate on what’s better or worse and just casts it all out as beyond the pale. 

And to try to return to earlier, I guess I see taking this sort of position as likely to extend people’s time spent doing veg-related diets, and I think it’s just a lot trickier to have this sort of relationship when you are doing some sort of utilitarian calculus of what is and isn’t above the bar for you (again, much love to these people, something is always so much better than nothing). This is largely just a theory, I don’t have much to back it up, and it would seem to explain some cases of reversion I’ve seen but certainly not all, and I also feel like this is a bit sloppy because I’d really need a post to get at this hard to describe feeling I have. But hopefully this helps explain the viewpoint a bit better, happy to answer any questions 🙂


It’s true that this comment doesn’t use citations or really many objective facts. But what it does have is: 

  • A clear description of what the author believes 
  • Clear identifiers of the author’s cruxes for those beliefs
  • Hooks for follow-up: it doesn’t spell out every possible argument, but if I’m confused it’s easy to ask clarifying questions
  • Disclaimers against common potential misinterpretations
  • Forthright description of its own limits
  • Proper hedging and sourcing on the factual claims it does make

This is one form of peak epistemic cooperation. Obviously it’s not the only form, sometimes I want facts with citations and such, but usually only after philosophical issues like this one have been resolved. Sometimes peak truthseeking looks like sincerely sharing your beliefs in ways that invite other people to understand them, which is different than justifying them. And I’d like to see more of that, everywhere.

PS. I know I said the next post would be about epistemics in the broader effective altruism community. Even as I wrote that sentence I thought, “Are you sure? That’s been your next post for three or four posts now; writing this feels risky.” And I thought, “Well, I really want the next post out before EAG Boston, and that doesn’t leave time for any more diversions. We’re already halfway done, and caveating ‘next’ would be such a distraction…” Unsurprisingly, I then realized the post was less than halfway done and I couldn’t get the best version done in time for EAG Boston, at which point I might as well write it at a leisurely pace.

PPS. Tristan saw a draft of this post before publishing and had some power to veto or edit it. Normally I’d worry that doing so would introduce some bias, but given the circumstances it felt like the best option. I don’t think anyone can accuse me of being unwilling to criticize EA vegan advocacy epistemics, and I was worried that hearing “hey, I want to quote your pro-veganism comment in full in a post; don’t worry, it will be complimentary; no, I can’t show you the post, you might bias it” would be stressful.






