Hi Jc,

1- Yes, our criteria are different from GiveWell’s. As John alluded to in his original post, our work is quite different from GiveWell’s in a number of ways. For one thing, there is generally much less evidence available about the cost-effectiveness of animal advocacy interventions than about the cost-effectiveness of direct health interventions. As a result, our models of average cost-effectiveness are much less certain than GiveWell’s, which is one reason why we rely more heavily on other indicators of marginal cost-effectiveness. It’s possible that GiveWell could also benefit from considering some of the other criteria we consider, but I’m not enough of an expert on their work to be comfortable drawing that conclusion.

2- We look for charities that emphasize effectively reducing suffering in their mission statement so that we can be confident that their future activities will still align with that goal. Suppose a charity does outstanding work influencing diet change/meat reduction, but they do it with the goal of improving human health. We would be concerned that such a charity could dramatically shift their activities if something caused their mission to be less aligned with ours (for instance, if new research suggested that meat is good for human health). This concern wouldn’t necessarily prevent us from recommending the charity, but it would factor into our decision.

3- As above, this is a concern that would factor into our decision but it wouldn’t necessarily prevent us from recommending a charity.

Best, Toni

Hi Avi,

Thanks for your comment!

I think you’re right that some corporations do name organizations in their press releases, and it seems more likely that groups will be named if they use a more collaborative approach. For what it's worth, I now think that in the paragraph you quoted I anchored too heavily on my impression that groups such as THL, Mercy For Animals, and Animal Equality are quite rarely (if ever) named in the news coverage or press releases associated with welfare policy statements, or in the statements themselves. Since most of the organizations we evaluate use a less collaborative approach, I think the paragraph you quoted will usually hold for them.

Still, even in those cases, you’re right that there should often be some indirect evidence available from the timeline: evidence of an organization campaigning at t1 and then, usually a short time later, evidence at t2 of a corporation making a commitment to the related welfare standards. For particularly important commitments we do look at this evidence, but for the majority of commitments we don’t.

I think that your comment helps provide some important nuance to this discussion and I have left a link to this comment in the piece itself. Thank you again for the comment!

Best, Toni

Hi John!

1- Sure, happy to discuss this further. In the example we gave in footnote 3, we only used the proportional expenditure (PE) to calculate the weighting of each program’s “animal years averted” (AYA) estimate (i.e., weighting for AYA_1 = PE_1/Sum(PE_modelled)). So this gives a weighting that we apply to each AYA estimate, and it is independent of the AYA estimate itself. Stopping here is not ideal; however, it is not as straightforward to use a similar method for the AYA estimates themselves, because of their distributions.
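To make the weighting concrete, here is a minimal sketch of the calculation described above. All figures and variable names are made up for illustration; they are not from our actual models.

```python
# Weight each program's AYA point estimate by its share of modelled expenditure.
# pe[i] is program i's proportional expenditure; aya[i] is its AYA estimate.
pe = [0.5, 0.3, 0.2]       # hypothetical proportional expenditures
aya = [120.0, 80.0, 40.0]  # hypothetical "animal years averted" estimates

total_pe = sum(pe)
# weighting_i = PE_i / Sum(PE_modelled)
weights = [p / total_pe for p in pe]

# Expenditure-weighted combination of the AYA point estimates
weighted_aya = sum(w * a for w, a in zip(weights, aya))
```

Note that the weights depend only on expenditure, which is what makes them independent of the AYA estimates themselves.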

Including the mean values of the AYA estimates without the rest of the distributions introduces some inconsistencies that make this approach of questionable use. If you consider example 1 in this model, we have two calculations for total AYA. They would be identical if it weren’t for the distribution of the third AYA. The third AYAs would have the same result under your method of calculation; however, they clearly impact the model differently (with 3a having a much larger impact on the overall result). In example 2, we have the issue of the mean being very small for one AYA: while the two distributions are of similar size and have the same expenditure weighting, an estimate using the means would attribute 99% of the impact to program 2.
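The example-2 problem can be sketched in a few lines. The numbers below are hypothetical, chosen only to reproduce the 99% attribution described above.

```python
# Mean-only attribution: two programs with equal expenditure weights,
# but one AYA mean is very small (hypothetical figures).
aya_means = {"program_1": 1.0, "program_2": 99.0}

total = sum(aya_means.values())
share = {name: mean / total for name, mean in aya_means.items()}

# share["program_2"] is 0.99: attribution by means assigns 99% of the
# impact to program 2, even if both distributions are similarly wide
# and the uncertainty around program 1 matters just as much to the model.
```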

A different way to assess the impact of each part of the model is not to look at the proportional magnitude of each program but to run a sensitivity analysis (Guesstimate has one built in). This tests which parts of the model would have the biggest impact on the final result if they were adjusted. Running this for both models indicates that the THL model is most sensitive to corporate outreach, while the Animal Equality model fluctuates between corporate and grassroots outreach, depending on how Guesstimate populates the model.
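For readers unfamiliar with how such an analysis can work, here is one common approach on a Monte Carlo model: correlate each input’s samples with the output’s samples, so that inputs whose variation drives the result score higher. This is a generic sketch with made-up distributions, not Guesstimate’s exact algorithm or our actual model.

```python
import random

random.seed(0)
N = 10_000

# Hypothetical AYA inputs for two programs (lognormal, as impact
# estimates are often right-skewed). Parameters are illustrative only.
corporate = [random.lognormvariate(4.0, 1.0) for _ in range(N)]
grassroots = [random.lognormvariate(3.0, 0.5) for _ in range(N)]

# Simple additive model: total impact is the sum of the programs.
total = [c + g for c, g in zip(corporate, grassroots)]

def pearson(xs, ys):
    """Pearson correlation between two equal-length sample lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# The input more strongly correlated with the output is the one the
# model is most sensitive to.
sens = {
    "corporate": pearson(corporate, total),
    "grassroots": pearson(grassroots, total),
}
```

Here the corporate input has the wider distribution, so its variation dominates the total and it gets the higher sensitivity score.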

2- That’s fair! I agree that we did not sufficiently explain all of the evidence we used in our CEEs, and I agree that our old intervention reports were not of our current standard. You did not state explicitly that the evidence for supporting THL and Animal Equality comes only from their CEEs. However, you seemed to conclude that our reviews provide only weak evidence for supporting each charity simply because our CEEs are weak evidence. My point is just that we provide a lot of other evidence, as well.

3- Agreed—we should have mentioned this! We are trying to do better this year, and we appreciate your insights as our Criterion 3 consultant : )

Best, Toni

Hi Adom,

Thanks for your post, and no worries about asking questions we’ve answered elsewhere; we have a lot of research on our website, so we don’t expect anyone to know about all of it!

When I said that we consider each criterion to be an indication of a charity's marginal cost-effectiveness “independently” of the charity's average cost-effectiveness, I meant that—regardless of whether the charity has a high average cost-effectiveness or not—we still consider our six other criteria to be indications of marginal cost-effectiveness. There’s no one or two (or three, or four…) criteria that we think are perfect indications of marginal cost-effectiveness, though we think that all seven of them together are a very good indication. We discuss this a bit in our page on cost-effectiveness estimates, here: https://animalcharityevaluators.org/research/methodology/our-use-of-cost-effectiveness-estimates/

I won’t write more about this right now because we actually have a forthcoming blog post about how we weigh our criteria against each other to make our recommendation decisions. It’s being edited now, and we’ll likely seek external feedback before publishing, so I’d expect it in a month or so.

“It seems to me that a compellingly positive CEE, primary evidence or no, is nonetheless a necessary component in the belief that an organization will improve welfare, particularly if one has a pessimistic prior.”

We think it’s totally possible to make well-reasoned, evidence-based decisions about how to help animals, even in the absence of quantitative CEEs. After all, we don’t even publish quantitative CEEs for some charities that we review (especially if they are working towards long-term or difficult-to-measure outcomes). Take The Good Food Institute, for example. They are one of our Top Charities, but we have not published a quantitative CEE for them. It would be very difficult for us to quantitatively estimate the good they have done so far, since they are working to change the food system in a way that could take years or even decades. Still, we think they have excellent leadership, strong strategy, and a healthy culture, and we think their programs are likely to have a high long-term impact. We explain why in their review, and we think we’ve provided a compelling case for donating to them based on their marginal cost-effectiveness.

Regarding your question about “material explicating and justifying [ACE’s] understanding of this systemic/hard-to-quantify value,” we explain some of our thinking about long-term outcomes on the page about our cost-effectiveness estimates, linked above. If you’re asking for explanations of our assessment of the long-term value of particular charities or interventions, that would be in each charity review (mostly discussed in the “high-impact” section with the theories of change) and in our specific intervention reports. For instance, our protest report discusses the importance of movement building.

Hope that helps to answer some of your questions, and watch our blog for the post on our weighing of each criterion!

Best, Toni

Hi Jamie,

Thanks for those thoughts. I agree that there’s room for more depth in the literature review portion of our intervention reports. We’ve prioritized breadth over depth in our intervention research so far. That’s because there’s usually no existing survey of the literature on a given intervention, and beginning with a survey helps us identify the areas that we’d like to explore more in depth. (We usually identify “questions for further research” at the end of our reports.) I agree that a review of the literature on social movement impact theory would likely be very useful for the movement. I’m not sure whether ACE is the best-positioned group to do that kind of research, but we can certainly consider it!

Regarding the sources of the figures in our CEEs, I agree that this is an area where we can improve. I do think Guesstimate can be a little hard to read, and that might be part of it, but there are also some places where our 2017 CEEs did not include enough information. We are being more careful about this in 2018, and are publishing a separate “CEE metric library” that will explain the figures that crop up in every CEE.

Yes, we’ve definitely noticed that people naturally gravitate towards our CEEs : ) That corporate outreach report will be archived, and we are focusing on improving our research every year.

Best, Toni

Ah, good to know. Thanks!

Hi, I’m ACE’s new research director. Help give me karma to post on the forum!