I wrote a reply to the Bentham Bulldog argument that has been going mildly viral. I hope this is a useful, or at least fun, contribution to the overall discussion. Intro/summary below, full post on Substack.
----------------------------------------
“One pump of honey?” the barista asked.
“Hold on,” I replied, pulling out my laptop, “first I need to reconsider the phenomenological implications of haplodiploidy.”
Recently, an article arguing against honey has been making the rounds. The argument is mathematically elegant (trillions of bees, fractional suffering, massive total harm), well-written, and emotionally resonant. Naturally, I think it's completely wrong.
Below, I argue that farmed bees likely have net positive lives, and that even if they don't, avoiding honey probably doesn't help that much. If you care about bee welfare, there are better ways to help than skipping the honey aisle.
Bentham Bulldog’s Case Against Honey
Bentham Bulldog, a young and intelligent blogger/tract-writer in the classical utilitarianism tradition, lays out a case for avoiding honey. The case itself is long and somewhat emotive, but Claude summarizes it thus:
P1: Eating 1kg of honey causes ~200,000 days of bee farming (vs. 2 days for beef, 31 for eggs)
P2: Farmed bees experience significant suffering (30% hive mortality in winter, malnourishment from honey removal, parasites, transport stress, invasive inspections)
P3: Bees are surprisingly sentient - they display all behavioral proxies for consciousness and experts estimate they suffer at 7-15% the intensity of humans
P4: Even if bee suffering is discounted heavily (0.1% of chicken suffering), the sheer numbers make honey consumption cause more total suffering than other animal products
C: Therefore, honey is the worst commonly consumed animal product and should be avoided
The key move is combining scale (P1) with evidence of suffering (P2) and consciousness (P3) to reach a mathematical conclusion (C).
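To make that move concrete, here's a rough back-of-the-envelope sketch using only the figures quoted in the summary above; the "chicken-equivalent days" framing is my own simplification for illustration, not Bentham Bulldog's exact model:

```python
# Rough illustration of the scale-times-intensity move, using only the
# figures quoted in the summary above (P1 and P4). The "chicken-equivalent
# days" framing is a simplification for illustration.

farmed_days_per_kg = {"honey": 200_000, "eggs": 31, "beef": 2}
bee_discount = 0.001  # P4: weight bee suffering at 0.1% of a chicken's

honey_equiv_days = farmed_days_per_kg["honey"] * bee_discount  # 200 chicken-equivalent days
eggs_days = farmed_days_per_kg["eggs"]                         # 31 chicken days

print(honey_equiv_days / eggs_days)  # ~6.5: honey still comes out worst despite the steep discount
```

Even with bee suffering discounted a thousandfold, the sheer number of bee-days per kilogram keeps honey on top, which is why my reply targets the premises (whether farmed bees actually suffer on net, and whether skipping honey changes anything) rather than the arithmetic.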
(Post 3/N with some rough notes on AI governance field-building strategy. Posting here for ease of future reference, and in case anyone else thinking about similar stuff finds this helpful.)
Some hot takes on AI governance field-building strategy
(Post 4/N with some rough notes on AI governance field-building strategy. Posting here for ease of future reference, and in case anyone else thinking about similar stuff finds this helpful.)
Some exercises for developing good judgement
I’ve spent a bit of time over the last year trying to form better judgement. Dumping some notes here on things I tried or considered trying, for future reference.
I think this framing of the exercise might have been mentioned to me by Michael Aird.
This is a good tip! Hadn't thought of this.
An effective mental health intervention, for me, is listening to a podcast which ideally (1) discusses the thing I'm struggling with and (2) has EA, Rationality or both in the background. I gain both in-the-moment relief, and new hypotheses to test or tools to try.
Especially since it would be scalable, this makes me think that creating an EA mental health podcast would be an intervention worth testing. I wonder if anyone is considering this?
In the meantime, I'm on the lookout for good mental health podcasts in general.
This does sound like an interesting idea. And my impression is that many people found the recent mental health related 80k episode very useful (or at least found that it "spoke to them").
Maybe many episodes of Clearer Thinking could also help fill this role?
Maybe one could promote specific podcast episodes of this type, see if people found them useful in that way, and if so then encourage those podcasts to have more such eps or a new such podcast to start?
Though starting a podcast is pretty low-cost, so it'd be quite reasonable to just try it without doing that sort of research first.
Incidentally, that 80k episode and some from Clearer Thinking are the exact examples I had in mind!
As a step towards this, and in case anyone else finds it independently useful, here are the episodes of Clearer Thinking that I recall finding helpful for my mental health (along with the issues they helped with).
I've been thinking about starting such an EA mental health podcast for a while now (each episode would feature a guest describing their history with EA and mental health struggles, similar to the 80k episode with Howie).
However, every EA whom I've asked to interview (only ~5 people so far, to be fair) was concerned that such an episode would be net negative for their career (e.g. by making them less attractive to future employers or collaborators). I think such concerns are not unreasonable, though it seems easy to overestimate them.
Generally, there seems to be a tradeoff between how personal the episode is and how likely the episode is to backfire on the interviewee.
One could mitigate such concerns by making episodes anonymous (and perhaps anonymizing the voice as well). Unfortunately, my sense is that this would make such episodes considerably less valuable.
I'm not sure how to navigate this; perhaps there are solutions I don't see. I also wonder how Howie feels about having done the 80k episode. My guess is that he's happy that he did it; but if he regrets it that would make me even more hesitant to start such a podcast.
I thought about this a bunch before releasing the episode (including considering various levels of anonymity). Not sure that I have much to say that's novel but I'd be happy to chat with you about it if it would help you decide whether to do this.[1]
The short answer is:
[1] Email me if you want to make that happen since the Forum isn't really integrated into my workflow.
Thanks, Howie! Sent you an email.
(Post 1/N with some rough notes on AI governance field-building strategy. Posting here for ease of future reference, and in case anyone else thinking about similar stuff finds this helpful.)
Some key uncertainties in AI governance field-building
In my view, these are some of the key uncertainties in AI governance field-building: questions which, if we had better answers to them, might significantly influence decisions about how field-building should be done.
How best to find/upskill more people to do policy development work?
What are the most important profiles that aren’t currently being hired for, but nonetheless might matter?
Reasons why this seems important to get clarity on:
To what extent should talent pipeline efforts treat AI governance as a (pre-)paradigmatic field?
Re: “positions with a deadline”: it seems plausible to me that there will be windows of opportunity when important positions come up, and if you haven’t built the traits you need by then, it’s too late. E.g. more people highly skilled at public comms would probably have been pretty useful in Q1–Q2 2023.
Counterpoint: the strongest version of this consideration assumes a kind of “efficient market hypothesis” for people building up their own skills. If people aren’t building up their own skills efficiently, then there could still be significant gains from helping them to do so, even for positions that are currently being hired for. Still, I think this consideration carries some weight.
Ways of framing EA that (extremely anecdotally*) make it seem less ick to newcomers. These are all obvious/boring; I'm mostly recording them here for my own consolidation.
These frames can also apply to any specific cause area.
*like, I remember talking to a few people who became more sympathetic when I used these frames.
I like the thinking in some ways, but I think there are also some risks. For instance, emphasising that EA is diverse in its ways of doing good could make people expect it to be more diverse than it actually is, which could lead to disappointment. In some ways, it could be good to be upfront about some of the less intuitive aspects of EA.
Agreed, thanks for the pushback!
(Post 6/N with some rough notes on AI governance field-building strategy. Posting here for ease of future reference, and in case anyone else thinking about similar stuff finds this helpful.)
Some heuristics for prioritising between talent pipeline interventions
Explicit backchaining is one way to do prioritisation. I sometimes forget that there are other useful heuristics, like:
(Post 2/N with some rough notes on AI governance field-building strategy. Posting here for ease of future reference, and in case anyone else thinking about similar stuff finds this helpful.)
Misc things it seems useful to do/find out
(Post 5/N with some rough notes on AI governance field-building strategy. Posting here for ease of future reference, and in case anyone else thinking about similar stuff finds this helpful.)
Laundry list of talent pipeline interventions
Note to self: a more detailed but less structured version of these notes is here.