I wrote a reply to the Bentham Bulldog argument that has been going mildly viral. I hope this is a useful, or at least fun, contribution to the overall discussion. Intro/summary below, full post on Substack.
----------------------------------------
“One pump of honey?” the barista asked.
“Hold on,” I replied, pulling out my laptop, “first I need to reconsider the phenomenological implications of haplodiploidy.”
Recently, an article arguing against honey has been making the rounds. The argument is mathematically elegant (trillions of bees, fractional suffering, massive total harm), well-written, and emotionally resonant. Naturally, I think it's completely wrong.
Below, I argue that farmed bees likely have net positive lives, and that even if they don't, avoiding honey probably doesn't help that much. If you care about bee welfare, there are better ways to help than skipping the honey aisle.
Bentham Bulldog’s Case Against Honey
Bentham Bulldog, a young and intelligent blogger/tract-writer in the classical utilitarianism tradition, lays out a case for avoiding honey. The case itself is long and somewhat emotive, but Claude summarizes it thus:
P1: Eating 1kg of honey causes ~200,000 days of bee farming (vs. 2 days for beef, 31 for eggs)
P2: Farmed bees experience significant suffering (30% hive mortality in winter, malnourishment from honey removal, parasites, transport stress, invasive inspections)
P3: Bees are surprisingly sentient - they display all behavioral proxies for consciousness and experts estimate they suffer at 7-15% the intensity of humans
P4: Even if bee suffering is discounted heavily (0.1% of chicken suffering), the sheer numbers make honey consumption cause more total suffering than other animal products
C: Therefore, honey is the worst commonly consumed animal product and should be avoided
The key move is combining scale (P1) with evidence of suffering (P2) and consciousness (P3) to reach a mathematical conclusion (C), with P4 serving as a robustness check: even under heavy discounting of bee suffering, the sheer scale still dominates.