
Resource: Better for Animals: Evidence-Based Insights for Effective Animal Advocacy

Animal advocates use a wide variety of approaches to help animals—from running corporate campaigns to get chickens out of cages, to researching wild animal welfare science, to influencing lawmakers to support plant-based policies. But which of these approaches are the most promising, and how can they be made more effective? Evaluating and comparing them is a monumental challenge—especially as our field has less empirical research available to guide decisions than other cause areas, such as global health and development.1

However, the animal advocacy evidence base is growing: On average, we add more than 100 articles to our Research Library each month. This is great news; however, it brings its own challenges. While we have always consulted existing research to inform our grantmaking and charity recommendation decisions, the increasing volume and complexity of research called for ACE to adopt a more systematic and dynamic approach to synthesizing results from empirical studies and updating our thinking about intervention effectiveness.

The challenge isn’t unique to us: Advocates, funders, and researchers navigating this expanding and often contradictory “evidence maze” can easily become overwhelmed. Research from Faunalytics has highlighted this very issue, finding that advocates often need more accessible syntheses to make informed decisions.2

In February 2024, ACE launched a project aiming to address this problem. We started out with the primary goal of sharpening our own grantmaking and charity recommendation decisions, while also addressing what we saw as a bottleneck for the wider movement. We wanted to create a thorough, dynamic overview of the evidence for the almost 30 intervention types in our Menu of Interventions—whether they have been shown to work, what their risks are, and under what conditions we expect them to be more or less effective.

We developed this resource internally and are now excited to share Better for Animals: Evidence-Based Insights for Effective Animal Advocacy. This resource is a living document. We will update it several times a year with new evidence, and we hope it will evolve with feedback from you, our community. At ACE, we now regularly consult these evidence reviews when evaluating charities or grant applications. Understanding the state of the evidence for the interventions a charity uses helps us assess the strength of their theory of change, gauge whether they follow best practice in how they implement the intervention, and ask them the most meaningful questions about their work.

To help make this detailed information more accessible to a wide range of audiences, we will launch a series of social media and blog posts spotlighting one intervention each month, starting later in September.

We hope that readers will use our new resource in several ways:

  • We hope that researchers will critique our conclusions, send us evidence we may have missed, and consider researching some of the biggest gaps in the evidence base.
  • We hope that advocates will offer their on-the-ground perspective on how these interventions work in practice, and use our findings to inform their strategy and tactics.
  • We hope that funders will find this a helpful resource on the state of the evidence for different advocacy approaches, to inform their prioritization.

This project was a huge effort and would not have been possible without the critical feedback and strategic input of countless volunteers, advocates, researchers, and funders. A huge thank you to everyone who contributed!

Below, we walk you through how this resource came to be, our research process, and the main limitations.

The Project

We knew we couldn’t develop this resource in a vacuum. We started by consulting other organizations doing similar work, including Mercy For Animals, Faunalytics, and Rethink Priorities, in order to collaborate and avoid duplication. These conversations confirmed that the project would fill a unique and necessary gap and complement other efforts in the movement.

We developed a detailed research protocol, adapting one developed at Faunalytics for our purposes. The protocol detailed our search strategy, guidelines for evaluating and synthesizing evidence, and the key research questions we wanted to answer for each intervention. After trialing the protocol on an initial set of topics, we shared early drafts with a range of external reviewers—funders, advocates, and researchers—and used their feedback and our experience of trialing the protocol to refine our process.

Using the refined protocol, our researchers, research fellow, and a group of amazing volunteers wrote evidence reviews on the remaining topics. These were typically reviewed by ACE’s Programs team. We also submitted a subset for external peer review, selecting the interventions most commonly used by the charities we evaluate for recommendation or grants. These peer reviewers included researchers and advocates with specialist expertise on those topics.

The Research Process

For each topic, our researchers began by scouring key sources, from academic databases like Google Scholar to the Faunalytics Research Library and research reports from groups within the movement. This created a longlist of potential articles for inclusion.

We then shortlisted the most relevant and rigorous studies. Our initial plan was to cap this at around 10 articles per intervention due to team capacity, but the number ended up varying greatly by intervention type. For some interventions, we reviewed nearly 50 articles to build a coherent picture. For others, a lack of direct research meant we had to rely on very few articles, theoretical arguments, and/or evidence from adjacent fields.

From there, we synthesized the evidence by evaluating, comparing, and combining the findings from all shortlisted articles to form a coherent overall picture. We focused this analysis on a set of key questions, starting with “Is it effective?”, where we define effectiveness in terms of reduced or avoided animal suffering. Next, we dug deeper to understand relevant context and risks. We believe it’s unhelpful to label most approaches as simply “good” or “bad”; nuance is critical. An intervention’s success almost always depends on the context: where and how it is implemented, who the target audience is, and what the specific ask is. We explored the evidence for conditions that might make an intervention more or less likely to succeed, and how it could potentially backfire and inadvertently harm animals or the movement.

Finally, we brought everything together into an overall assessment of how promising we think the intervention is. We also determined our level of confidence based on the quality, quantity, and agreement of sources available, and identified the high-priority research questions that, if answered, could change our minds or increase confidence in our verdict.

We now update the evidence reviews every few months, drawing mainly on our monthly Research Digest, which collates new research relevant to farmed animal advocates.

Limitations

Our conclusions about interventions’ effectiveness should be interpreted with caution for several reasons:

  • This is not a systematic review. Due to capacity constraints, we were unable to conduct a full and comprehensive literature review and instead used our best judgment to select studies for inclusion. Despite multiple rounds of internal and external feedback, it’s possible we missed crucial research that could change our overall assessment.
  • Publication bias. Academic journals are more likely to publish studies with positive or statistically significant results. This can skew the available evidence, potentially making interventions appear more effective than they are. Although we searched outside of classic academic publications, we didn’t have the capacity to search for unpublished data.
  • Focus on short-term effects. It is generally much harder to measure the long-term impact of interventions, so our conclusions may overrepresent short-term effects. We have, however, attempted to assess the evidence for both short- and long-term effects wherever possible.
  • Generalizability. Findings from one study in a specific country or with a particular demographic may not apply elsewhere, and interventions used in Europe and North America are overrepresented in the existing literature. We have tried to note these limitations where they are apparent and suggest replication in other geographical contexts.
  • Limited evidence base. For some interventions, we had to rely on lower-quality evidence (like case studies) or less relevant evidence from adjacent fields. Our confidence ratings reflect this uncertainty.
  • Hidden potential. Even for interventions we found to be less promising, there may be specific contexts in which they are highly effective that have not yet been researched. We therefore want our verdicts to be dynamic, to stay open to being wrong, and to change with new evidence.
  • Not fully comprehensive. Our Menu of Interventions captures the interventions most commonly used by the charities we evaluate for recommendation or assess as potential grantees, but it doesn’t capture every approach that exists in the movement.

We’d love to continue receiving feedback. Because we don’t have the capacity to moderate comments directly on the document, please email alina.salmen@animalcharityevaluators.org or max.taylor@animalcharityevaluators.org with feedback on the project as a whole or on a particular intervention, or to request comment access to the document.

Acknowledgments

We would like to extend our gratitude to:

Our volunteers

Jackie Bialo, Elena Braeu, Jan Gaida, and Sada Rice.

Our research fellow

Sam Mazzarella

For their feedback and advice

Alene Anello, Christopher Berry, Aaron Boddy, George Bridgwater, Chris Bryant, Vicky Cox, Alice Di Concetto, Rune-Christoffer Dragsdahl, Neil Dullaghan, Sueda Evirgen, Carolina Galvani, Martin Gould, Vasco Grilo, Thomas Hecquet, Emre Kaplan, Cailen Labarge, Chrys Liptrot, Jesse Marks, William McAuliffe, Caroline Mills, Gülbike Mirzaoğlu, PJ Nyman, Björn Ólafsson, Pete Paxton, Jacob Peacock, Kathrin Plaschnick, Andrea Polanco, Sean Rice, Aditya SK, Zoë Sigle, Saulius Šimčikas, Michael St Jules, Ben Stevenson, Andie Thompkins, and Prashanth Vishwanath.

  1. E.g., Hilton & Bansal (2023)
  2. Jones & Anderson (2024)


