AGB 🔸

For the record, I have a few places I think EA is burning >$30m per year, not that AW is actually one of them. Most EAs I speak to seem to have similarly-sized bugbears? Though unsurprisingly they don't agree about where the money is getting burned...

So from where I stand I don't recognise your guess of how people respond to that situation. A few things I believe that might help explain the difference:

  1. Most of the money is directed by people who don't read the forum, or who otherwise have a fairly low opinion of it.
  2. Posting on the forum is 'not for the faint of heart'.
  3. On the occasions when I have dug into past forum prioritisation posts that were well-received, I have generally found them seriously flawed or otherwise uncompelling, so I have no particular reason to be sad about (1).
  4. People are often aware that there's an 'other side' that strongly disagrees with their view and will push back hard, so they correctly choose not to waste our collective resources in a mud-slinging match.

I don't expect to have capacity to engage further here, but if further discussion suggests that one of the above is a particularly surprising claim, I may consider writing it up in more detail in future. 

If you weigh desires/preferences by attention or their effects on attention (e.g. motivational salience), then the fact that intense suffering is so disruptive and would take priority over attention to other things in your life means it would matter a lot.

Recently I failed to complete a dental procedure because I kept flinching whenever the dentist hit a particularly sensitive spot. They needed me to stay still. I promise you I would have preferred to stay still, not least because what ended up happening was I had to have it redone and endured more pain overall. My forebrain understood this; my hindbrain is dumb.

(FWIW the dentist was very understanding, and apologetic that the anesthetic didn't do its job. I did not get the impression that my failure was unusual given that.)

When I talk about suffering disrupting enjoyment of non-hedonic goods I mean something like that flinch; a forced 'eliminate the pain!' response that likely made good sense back in the ancestral environment, but not a choice or preference in the usual sense of that term. This is particularly easy to see in cases like my flinch where the hindbrain's 'preference' is self-defeating, but I would make similar observations in some other cases, e.g. addiction.

If you don't weigh desires by attention or their effects on attention, I don't see how you can ground interpersonal utility comparisons at all.

I don't quite see what you're driving at with this line of argument.

I can see how being able to firmly 'ground' things is a nice/helpful property for a theory of 'what is good?' to have. I like being able to quantify things too. But to imply that measuring good must be this way seems like a case of succumbing to the Streetlight Effect, or perhaps even the McNamara fallacy if you then downgrade other conceptions of good in the style of the quote below.

Put another way, it seems like you prefer to weight by attention because it makes answers easier to find, but what if such answers are just difficult to find?

The fact that 'what is good?' has been debated for literally millennia with no resolution in sight suggests to me that it just is difficult to find, in the same way that after some amount of time you should acknowledge your keys just aren't under the streetlight.

But when the McNamara discipline is applied too literally, the first step is to measure whatever can be easily measured. The second step is to disregard that which can't easily be measured or given a quantitative value. The third step is to presume that what can't be measured easily really isn't important. The fourth step is to say that what can't be easily measured really doesn't exist.

To avoid the above pitfall, which I think all STEM types should keep in mind, when I suspect my numbers are failing to capture the (morally) important things my default response is to revert in the direction of common-sense morality. I think STEM people who fail to check themselves this way often end up causing serious harm[1]. In this case that would make me less inclined to trade human lives for animal welfare, not more.

I'll probably leave this post at this point unless I see a pressing need for further clarification of my views. I do appreciate you taking the time to engage politely. 

  1. ^

    SBF is the obvious example here, but really I've seen this so often in EA. Big fan of Warren Buffett's quote here:

    It’s good to learn from your mistakes. It’s better to learn from other people’s mistakes.

Hi Michael,

Sorry for putting off responding to this. I wrote this post quickly on a Sunday night, so naturally work got in the way of spending the time to put this together. Also, I just expect people to get very upset with me here regardless of what I say, which I understand - from their point of view I'm potentially causing a lot of harm - but which naturally causes procrastination.

I still don't have a comprehensive response, but I think there are now a few things I can flag for where I'm diverging here. I found titotal's post helpful for establishing the starting point under hedonism:

For the intervention of cage free campaigns, using RP's moral weights, the intervention saves 1996 DALYs per thousand dollars, about 100 times as effective as AMF.

However, even before we get into moral uncertainty I think this still overstates the case:

  1. Animal welfare (AW) interventions are much less robust than the Global Health and Development (GHD) interventions animal welfare advocates tend to compare them to. Most of them are fundamentally advocacy interventions, which I think advocates tend to overrate heavily. 

    How to deal with such uncertainty has been the topic of much debate, which I can't do justice to here. But one thing I try to do is compare apples-to-apples for robustness where possible; if I relax my standards for robustness and look at advocacy, how much more estimated cost-effectiveness do I get in the GHD space? Conveniently, I currently donate to Giving What We Can as 'Effective Giving Advocacy' and have looked into their forward-looking marginal multiplier a fair bit; I think it's about 10x. Joel Tan looked and concluded 13x. I've checked with others who have looked at GWWC in detail; they're also around there. I've also seen 5x-20x claims for things like lead elimination advocacy, but I haven't looked into those claims in nearly as much detail.

    Overall I think that if you're comfortable donating to animal welfare interventions, comparing to AMF/GiveWell 'Top Charities' is just a mistake; you should be comparing to the actual best GHD interventions under your tolerance for shaky evidence, which will have estimated cost-effectiveness 10x higher or possibly even more.

    Also, I subjectively feel like AW is quite a bit less robust than even GHD advocacy; there's a robustness issue from advocacy in both cases, but AW also really struggles with a lack of feedback loops - we can't ask the animals how they feel - and so I think it is much more likely to end up causing harm on its own terms. I don't know how to quantify this issue, and it doesn't seem like a huge issue for cage-free specifically, so I will set this aside. Back when AW interventions were more about trying to end factory farming rather than improving conditions on factory farms it did worry me quite a bit.

  2. As I noted in my comment under that post, Open Phil thinks the marginal FAW (farmed animal welfare) opportunity going forward is around 20% of Saulius, not 60% of Saulius; I haven't seen anything that would cause me to argue with them on this, and it cuts the gap by 3x.
  3. Another issue is around 'pay it forward' or 'ripple' effects, where helping someone enables them to help others, which seem to only apply to humans, not animals. I'm not looking at the long-term future here, just the next generation or so; after that I tend to think the ripples fade out. But even over that short time, the amount of follow-on good a life saved can do seems significant, and probably moves my sense of things by a small amount. Still, it's hard to quantify and I'll set this aside as well.

After the two issues I am willing to quantify, we're down to around 3.3x, and we're still assuming hedonism.
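Since several multiplicative adjustments are in play, here is a minimal sketch of the arithmetic as described above, using only the figures quoted in this comment; the variable names are my own shorthand, not anyone's published model.

```python
# Minimal sketch of the adjustments above; all figures are quoted in this
# comment, and the variable names are my own shorthand.

headline_multiple = 100  # titotal: cage-free ~100x AMF under RP's moral weights, assuming hedonism

# Adjustment 1: benchmark against the best GHD advocacy rather than
# AMF/GiveWell Top Charities (GWWC's forward-looking marginal multiplier, ~10x).
ghd_advocacy_multiplier = 10

# Adjustment 2: Open Phil's forward-looking estimate is ~20% of Saulius
# rather than 60%, cutting the gap by a further 3x.
saulius_adjustment = 3

adjusted_multiple = headline_multiple / (ghd_advocacy_multiplier * saulius_adjustment)
print(f"~{adjusted_multiple:.1f}x")  # ~3.3x, still assuming hedonism
```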

On the other hand, I have the impression that RP made an admirable effort to tend towards conservatism in some empirical assumptions, if not moral ones. I think Open Phil also tends this way sometimes. So I'm not as sure as I usually would be about what happens if somebody looks more deeply; overwhelmingly I would say EA has found that interventions get worse the more you look at them, which is a lot of why I penalise non-robustness in the first place, but perhaps Open Phil + RP have been conservative enough that this isn't the case?

***

Still, my overall guess is that if you assume hedonism, AW comes out ahead. I am not a moral realist; if people want to go all-in on hedonism and donate to AW on those grounds, I don't see that I have any standing to argue with them. But as my OP alluded to, I tend to think there is more at stake / humans are 'worth more' in the non-hedonic worlds. So when I work through this I end up underwhelmed by the overall case.

***

This brings us to the much thornier territory of moral uncertainty. While continuing to observe that I'm out of my depth philosophically, and am correspondingly uncertain how best to approach this, some notes on how I think about this and where I seem to be differing:

I find experience machine thought experiments, and people's lack of enthusiasm for them, much more compelling than 'Tortured Tim' thought experiments for trying to get a handle on how much of what matters is pleasure/suffering. The issue I see with modelling extreme suffering is that it tends to heavily disrupt non-hedonic goods, and so it's hard to figure out how much of the badness is the suffering versus the disruption. We can get a sense of how much people care about this disruption from their refusal to enter the experience machine; a lot of the rejections I see and personally feel boil down to "I'm maxing out pleasure but losing everything that 'actually matters'".

RP did mention this but I found their handling unconvincing; they seem to have very different intuitions from mine about how much torture compromises the human ability to experience what 'actually matters'. Empirical evidence from people with chronic nerve damage is similarly tainted by the fact that e.g. friends often abandon you when you're chronically in pain, you may have to drop hobbies that meant a lot to you, and so on.

I've been lucky enough never to experience anything that severe, but if I look at the worst periods of my life it certainly seemed like a lot more impact came from these 'secondary' effects - interference with non-hedonic goods - than from the primary suffering. My heart goes out to people who are dealing with worse conditions and very likely taking larger 'secondary' hits. 

I also just felt like the Tortured Tim thought experiment didn't 'land' even on its own terms for me, similar to the sentiments expressed in this comment and this comment.

Ah, gotcha, I guess that works. No, I don't have anything I would consider strong evidence, I just know it's come up more than anything else in my few dozen conversations over the years. I suppose I assumed it was coming up for others as well. 

they should definitely post these and potentially redirect a great deal of altruistic funding towards global health

FWIW this seems wrong, not least because, as was correctly pointed out many times, there just isn't a lot of money in the AW space. I'm pretty sure GHD has far better places to fundraise from.

To the extent I have spoken to people (not Jeff, and not that much) about why they don't engage more on this, I thought the two comments I linked to in my last comment had a lot of overlap with the responses. 

I think there has been very little argument for animals not counting at the post level because, quite simply, the argument that (in expectation) they do count is just far stronger.

I'm confused how this works, could you elaborate? 

My usual causal chain linking these would be 'argument is weak' -> '~nobody believes it' -> 'nobody posts it'.

The middle step fails here. Do you have something else in mind? 

FWIW, I thought these two comments were reasonable guesses at what may be going on here.

First, I want to flag that what I said was at the post level, and that I then defined 'stronger' as:

the points I'm most likely to hear and give most weight to when discussing this with longtime EAs in person

You said:

I haven’t heard any arguments I’d consider strong for prioritizing global health which weren’t mentioned during Debate Week

So I can give examples of what I was referring to, but to be clear we're talking somewhat at cross purposes here:

  • I would not expect you to consider them strong.
    • You are not alone here of course, and I suspect this fact also helps to answer Nathan's confusion about why nobody wrote them up.
      • Even my post, which has been decently received all things considered, wasn't in my view an actually good use of time; I did it more in order to sleep better at night.
  • They often were mentioned at the comment level.

With that in mind, I would say that the most common argument I hear from longtime EAs is variants of 'animals don't count at all'. Sometimes it's framed as 'almost certainly aren't sentient' or 'count as ~nothing compared to a child's life'. You can see this from Jeff Kaufman and Eliezer Yudkowsky, and it's one I hear a decent amount from EAs closer to me as well.

If you've discussed this a ton I assume you have heard this too, and just aren't thinking of the things people say here as strong arguments? Which is fine and all; I'm not trying to argue from authority, at least not at this time. My intended observation was 'lots of EAs think a thing that is highly relevant to the debate week, yet none of them wrote it up for the debate week'.

I think that observation holds, though if you still disagree I'm curious why. 

Thanks for this post; I was also struggling with how scattered the numbers seemed to be despite many shared assumptions. One thing I would add:

Another thing I want to emphasise: this is an estimate of past performance of the entire animal rights movement. It is not an estimate of the future cost effectiveness of campaigns done by EA in particular. They are not accounting for tractability, neglectedness, etc. of future donations....

In the RP report, they accounted for this probable drop in effectiveness by dropping the effectiveness by a range of 20%-60%. This number is not backed up by any source: it appears to be a guesstimate based on the pros and cons listed by Saulius. Hence there is significant room for disagreement here.

If we take Saulius's estimate of 42 chickens affected per dollar, and discount it by 40%, we get a median of 42*0.6 = 25.2 chickens affected per dollar.

Last year Open Phil stated that their forward-looking estimate was 5x lower than Saulius's backward-looking estimate. This is the type of 'narrow' question I default to deferring to Open Phil on, and so I would drop your final figures by a factor of 3, going from 60% of Saulius to 20% of Saulius.
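To make the factor of 3 explicit, here is a minimal sketch of the discount arithmetic, using only the figures quoted above; the variable names are my own, not RP's or Open Phil's.

```python
# Minimal sketch of the discount arithmetic; figures as quoted above,
# variable names are my own.

saulius_estimate = 42      # chickens affected per dollar (backward-looking)

rp_median_fraction = 0.6   # RP's median discount keeps 60% of Saulius
open_phil_fraction = 0.2   # Open Phil: ~5x lower going forward, i.e. 20% of Saulius

rp_figure = saulius_estimate * rp_median_fraction  # 25.2 chickens per dollar
op_figure = saulius_estimate * open_phil_fraction  # 8.4 chickens per dollar

print(round(rp_figure / op_figure, 1))  # 3.0 -> drop the final figures by ~3x
```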

I agree-voted this. This post was much more 'This argument in favour of X doesn't work[1]' than 'X is wrong', and I wouldn't want anyone to think otherwise.

  1. ^

    Or more precisely, doesn't work without more background assumptions.

Yeah I think there's something to this, and I did redraft this particular point a few times as I was writing it for reasons in this vicinity. I was reluctant to remove it entirely, but it was close and I won't be surprised if I feel like it was the wrong call in hindsight. It's the type of thing I expect I would have found a kinder framing for given more time.

Having failed to find a kinder framing, one reason I went ahead anyway is that I mostly expect the other post-level pro-GH people to feel similarly. 

I’ll leave this thread here, except to clarify that what you say I ‘seem to think’ is a far stronger claim than I intended to make or in fact believe.
