
This post summarizes the main findings of a new meta-analysis from the Humane and Sustainable Food Lab. We analyze the most rigorous randomized controlled trials (RCTs) that aim to reduce consumption of meat and animal products (MAP). We conclude that no theoretical approach, delivery mechanism, or persuasive message should be considered a well-validated means of reducing MAP consumption. By contrast, reducing consumption of red and processed meat (RPM) appears to be an easier target. However, if RPM reductions lead to more consumption of other MAP, like chicken and fish, this is likely bad for animal welfare and does not reduce zoonotic outbreak risk or land and water pollution. We also find that many promising approaches await rigorous evaluation.

This post updates a post from a year ago. We first summarize the current paper, and then describe how the project and its findings have evolved.

What is a rigorous RCT?

There is no consensus, either within our field or across fields, about what counts as a valid, informative design, but we operationalize “rigorous RCT” as any study that:

  • Randomly assigns participants to a treatment and control group
  • Measures consumption directly (rather than, or in addition to, attitudes, intentions, or hypothetical choices) at least one day after treatment begins
  • Has at least 25 subjects in each of the treatment and control groups or, for cluster-assigned studies (e.g. university classes that all attend a lecture together or not), at least 10 clusters in total.

Additionally, studies had to aim at reducing MAP consumption, rather than (e.g.) encouraging people to switch from beef to chicken, and had to be publicly available by December 2023.

We found 35 papers, comprising 41 studies and 112 interventions, that met these criteria. 18 of 35 papers have been published since 2020.

The main theoretical approaches:

Broadly speaking, studies used Persuasion, Choice Architecture, Psychology, and a combination of Persuasion and Psychology to try to change eating behavior.

Persuasion studies typically provide animal welfare, health, or environmental arguments for reducing MAP consumption. For instance, Jalil et al. (2023) switched out a typical introductory economics lecture for one on the health and environmental reasons to cut back on MAP consumption, and then tracked what students ate at their college’s dining halls. Animal welfare appeals often used materials from advocacy organizations and were typically delivered through videos and pamphlets. Most studies in our dataset are persuasion studies.

Choice architecture studies change aspects of the contexts in which food is selected and consumed to make non-MAP options more appealing or prominent. For example, Andersson and Nelander (2021) randomly altered whether the vegetarian option appeared at the top of a university cafeteria’s billboard menu. Choice architecture approaches are very common in the broader food literature, but only two papers met our inclusion criteria; hypothetical outcomes and/or immediate measurement were common reasons for exclusion.

Psychology studies manipulate the interpersonal, cognitive, or affective factors associated with eating MAP. The most common psychological intervention centers on social norms, seeking to alter the perceived popularity of non-MAP dishes, e.g. two studies by Gregg Sparkman and colleagues. In another study, a university cafeteria put up signs stating that “[i]n a taste test we did at the [name of cafe], 95% of people said that the veggie burger tasted good or very good!” One study told participants that people who eat meat are more likely to endorse social hierarchy and embrace human dominance over nature. Other psychological interventions include response inhibition training, in which subjects are trained to avoid responding impulsively to stimuli such as unhealthy food, and implementation intentions, in which participants list potential challenges and solutions to changing their own behavior.

Finally, some studies combined persuasive and psychological messages, e.g. putting up a sign about how popular veggie burgers are alongside a message about their environmental benefits, or pairing reasons to cut back on MAP consumption with an opportunity to pledge to do so.

Results: consistently small effects

We convert all reported results to standardized mean differences (SMDs) and meta-analyze them using the robumeta package in R. An SMD of 1 indicates an average change equal to one standard deviation of the outcome.
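As a rough illustration of the pooling step (a minimal sketch under simplifying assumptions, not our actual pipeline; the data frame below is hypothetical), each point estimate is converted to an SMD and then pooled with an intercept-only robust variance estimation model that clusters estimates within papers:

```r
# Minimal sketch, not the paper's actual code: hypothetical inputs throughout.
library(robumeta)

# SMD (Cohen's d) for a two-group comparison: difference in means over the pooled SD.
# Here we'd code outcomes so that a positive SMD means reduced MAP consumption.
smd_from_summary <- function(m_t, m_c, sd_t, sd_c, n_t, n_c) {
  sd_pooled <- sqrt(((n_t - 1) * sd_t^2 + (n_c - 1) * sd_c^2) / (n_t + n_c - 2))
  (m_t - m_c) / sd_pooled
}

# One row per intervention estimate; estimates from the same paper share a paper_id
# so the robust variance estimator can account for their dependence.
dat <- data.frame(
  paper_id = c(1, 1, 2, 3),
  smd      = c(0.05, 0.10, 0.02, 0.20),
  var_smd  = c(0.010, 0.012, 0.020, 0.030)
)

# Intercept-only model: the intercept is the overall pooled SMD.
res <- robu(smd ~ 1, data = dat, studynum = paper_id,
            var.eff.size = var_smd, rho = 0.8, small = TRUE)
print(res)
```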

Our overall pooled estimate is SMD = 0.07 (95% CI: [0.02, 0.12]). Table 1 displays effect sizes separated by theoretical approach and by type of persuasion.

Most of these effect sizes and upper confidence bounds are quite small. The largest effect size, which is associated with choice architecture, comes from too few studies to say anything meaningful about the approach in general.

Table 2 presents results associated with different study characteristics. Note that these meta-regression estimates are not causal estimates of the effect of a study characteristic because characteristics were not randomly assigned.

Probably the most striking result here is the comparatively large effect size associated with studies aimed at reducing RPM consumption (SMD = 0.25, 95% CI: [0.11, 0.38]). We speculate that reducing RPM consumption is generally perceived as easier and more socially normative than cutting back on all categories of MAP. (It’s not hard to find experts in newspapers saying things like: “Who needs steak when there’s bacon and fried chicken?”)

Likewise, when we integrate a supplementary dataset of 22 marginal studies, comprising 35 point estimates, that almost met our inclusion criteria, we find a considerably larger pooled effect: SMD = 0.2 (95% CI: [0.09, 0.31]). Unfortunately, this suggests that increased rigor is associated with smaller effect sizes in this literature, and that prior literature reviews which pooled a wider variety of designs and measurement strategies may have produced inflated estimates.

Where do we go from here?

When we talk to EAs, we find that they generally accept the idea that behavioral change, particularly around something as ingrained as meat, is a hard problem. But if you read the food literature in general, you might get a different impression: of consumers who are easily influenced by local cues and whose behaviors are highly malleable. For instance, studies that set the default meal choice to be vegetarian at university events sometimes find large effects. But what happens at the next meal, or the day after? Do people eat more meat to compensate? For the most part, we don’t know, although it is definitely possible to measure delayed effects.

Likewise, we encourage researchers to think clearly about the difference between reducing all MAP consumption and reducing just some particular category of it. RPM is of special concern for its environmental and health consequences, but if you care about animal welfare, a society-wide switch from beef to chicken is probably a disaster.

On a brighter note, we reviewed a lot of clever, innovative designs that did not meet our inclusion criteria, and we’d love to see these ideas implemented with more rigorous evaluation.

For more, see the paper, our supplement, and our code and data repository.

How has this project changed over time?

Our previous post, describing an earlier stage of this project, reported that environmental and health appeals were the most consistently effective at reducing MAP consumption. However, at that time, we were grouping RPM and MAP studies together. Treating them as separate estimands changed our estimates a lot (and pretty much caused the paper to fall into place conceptually).

Second, we’ve analyzed a lot more literature. In the data section of our code and data repository, you’ll see CSVs recording all the studies we included in our main analysis; our RPM analysis; a robustness check of studies that didn’t quite make it; the 150+ prior reviews we consulted; and the 900+ studies we excluded.

Third, Maya Mathur joined the project, and Seth joined Maya’s lab (more on that journey here). Our statistical analyses, and everything else, improved accordingly. 

Happy to answer any questions!

Acknowledgments. Thanks to Alex Berke, Alix Winter, Anson Berns, Dan Waldinger, Hari Dandapani, Adin Richards, Martin Gould, Matt Lerner, and Rye Geselowitz for comments on an early draft. Thanks to Jacob Peacock, Andrew Jalil, Gregg Sparkman, Joshua Tasoff, Lucius Caviola, Natalia Lawrence, and Emma Garnett for help with assembling the database and providing guidance on their studies. Thanks to Sofia Vera Verduzco for research assistance. We gratefully acknowledge funding from the NIH (grant R01LM013866), Open Philanthropy, and the Food Systems Research Fund (Grant FSR 2023-11-07).

Comments

Thanks so much for this very helpful post!

I'm a bit confused about your framing of the takeaway. You state that "reducing meat consumption is an unsolved problem" and that "we conclude that no theoretical approach, delivery mechanism, or persuasive message should be considered a well-validated means of reducing meat and animal product consumption." However, the overall pooled effects across the 41 studies show statistical significance w/ a p-value of <1%. Yes, the effect size is small (0.07 SMD) but shouldn't we conclude from the significance that these interventions do indeed work? 

Having a small effect or even a statistically insignificant one isn't something EAs necessarily care about (e.g. most longtermism interventions don't have much of an evidence base). It's whether we can have an expected positive effect that's sufficiently cheap to achieve. In Ariel's comment, you point to a study that concludes its interventions are highly cost-effective at ~$14/ton of CO2eq averted. That's incredible given many offsets cost ~$100/ton or more. So it doesn't matter if the effect is 'small', only that it's cost-effective.

Can you help EA donors take the necessary next step? It won't be straightforward and will require additional cost and impact assumptions, but it'll be super useful if you can estimate the expected cost-effectiveness of different diet-change interventions (in terms of suffering alleviated).

Finally, in addition to separating out red meat from all animal product interventions, I suspect it'll be just as useful to separate out vegetarian from vegan interventions. It should be much more difficult to achieve persistent effects when you're asking for a lot more sacrifice. Perhaps we can get additional insights by making this distinction?

Hi Wayne,

Great questions, I'll try to give them the thoughtful treatment they deserve.

  1. We don't place much (any?) credence in the statistical significance of the overall result, and I recognize that a lot of work is being done by the word "meaningfully" in "meaningfully reducing." For us, changes on the order of a few percentage points -- especially given relatively small samples and vast heterogeneity of designs and contexts (hence our point about "well-validated": almost nothing is directly replicated out of sample in our database) -- are not the kinds of transformational change that others in this literature have touted. Another way to slice this, if you were looking to evaluate results based on significance, is to look at how many results are, according to their own papers, statistical nulls: 95 out of 112, or about 85%. (On the other hand, many of these studies might be finding small but real effects without being sufficiently powered to identify them: if you plan for d > 0.4, an effect of d = 0.04 is going to look like a null even if real changes are happening; see the quick power sketch after this list.) So my basic conclusion is that marginal changes probably are possible, so in that sense, yes, many of these interventions probably "work," but I wouldn't call the changes transformative. I think the proliferation of GLP-1 drugs is much more likely to be transformative.
  2. It's true that cost-effectiveness estimates might still be very good even if the results are small. If there was a way to scale up the Jalil et al. intervention, I'd probably recommend it right away. But I don't know of any such opportunity. (It requires getting professors to substitute out a normal economics lecture for one focused on meat consumption, and we'd probably want at least a few other schools to do measurement to validate the effect, and my impression from talking to the authors is that measurement was a huge lift). I also think that choice architecture approaches are promising and awaiting a new era of evaluation. My lab is working on some of these; for someone interested in supporting the evaluation side of things, donating to the lab might be a good fit.
  3. This is in the supplement rather than the paper, but one of our depressing results is that rigorous evaluations published by nonprofits, such as The Humane League, Mercy For Animals, and Faunalytics, produce a small backlash on average (see table below). But it's also my impression that a lot of these groups have changed gears a lot, and are now focusing less on (e.g.) leafletting and direct persuasion efforts and more on corporate campaigns, undercover investigations, and policy work. I don't know if they have moved this direction specifically because a lot of their prior work was showing null/backlash results, but in general I think this shift is a good idea given the current research landscape.

    4. Pursuant to that, economists working on this sometimes talk about the consumer-citizen gap, where people will support policies that ban practices whose products they'll happily consume. (People are weird!) For my money, if I were a significant EA donor on this space, I might focus here: message testing ballot initiatives, preparing for lengthy legal battles, etc. But as always with these things, the details matter. If you ban factory farms in California and lead Californians to source more of their meat from (e.g.) Brazil, and therefore cause more of the rainforest to be clearcut -- well that's not obviously good either.

    5. Almost all interventions in our database targeted meat rather than other animal products (one looked at fish sauce and a couple also measured consumption of eggs and dairy). Also a lot of studies just say the choice was between a meat dish and a vegetarian dish, and whether that vegetarian dish contained eggs or milk is sometimes omitted. But in general, I'd think of these as "less meat" interventions.
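To make the power point in item 1 concrete, here is a quick illustrative calculation using base R's power.t.test (a rough sketch with conventional 80% power and alpha = 0.05, not a calculation from the paper):

```r
# Approximate per-group sample sizes for a simple two-arm comparison,
# assuming sd = 1 so that delta equals the standardized effect size d.
power.t.test(delta = 0.40, sd = 1, sig.level = 0.05, power = 0.80)  # roughly 100 per group
power.t.test(delta = 0.04, sd = 1, sig.level = 0.05, power = 0.80)  # roughly 9,800 per group
```

A study sized to detect d = 0.4 will pick up a true effect of d = 0.04 only a small fraction of the time, so effects of that size will usually show up as nulls.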

Sorry I can't offer anything more definitive here about what works and where people should donate.  An economist I like says his dad's first rule of social science research was: "Sometimes it’s this way, and sometimes it’s that way," and I suppose I hew to that 😃 

Veganuary has "calculated that roughly 25 million people worldwide chose to try vegan this January [by participating in Veganuary in 2024]". Do you have a guess for the reduction in the consumption of animal-based foods linked to those 25 M people caused by all of Veganuary's activities in 2024, including corporate engagement and effects in years after 2024, as a fraction of what their (counterfactual) consumption of animal-based foods in 2024 would have been without Veganuary? I guess 1.5 % (= (0.03 + 0)/2), corresponding to an effect size decreasing from 3 % to 0 over 1 year. I suppose an initial reduction of 3 % because Seth mentioned the studies you analysed showed “changes on the order of a few percentage points”, and I guess these concern a short time period.

I don’t know, sorry. There would be a lot of additional assumptions needed to extrapolate from the RCTs we analyze to this.

Thanks for this research! Do you know whether any BOTECs have been done where an intervention can be said to create X vegan-years per dollar? I've been considering writing an essay pointing meat eaters to cost-effective charitable offsets for meat consumption. So far, I haven't found any rigorous estimates online.

(I think farmed animal welfare interventions are likely even more cost-effective and have a higher probability of being net positive. But it seems really difficult to know how to trade off the moral value of chickens taken out of cages / shrimp stunned versus averting some number of years of meat consumption.)

👋 Our pleasure!

To the best of my recollection, the only paper in our dataset that provides a cost-benefit estimation is Jalil et al. (2023):

Calculations indicate a high return on investment even under conservative assumptions (~US$14 per metric ton CO2eq). Our findings show that informational interventions can be cost effective and generate long-lasting shifts towards more sustainable food options.

There's also a red/processed meat study --- Emmons et al. (2005) --- that does some cost-effectiveness analyses, but it's almost 20 years old and its reporting is really sparse: changes to the eating environment "were not reported in detail, precluding more detailed analyses of this intervention." So I'd stick with Jalil et al. to get a sense of ballpark estimates.

I work at a university in China, and with the help of some vegetarian students, I’ve been trying to encourage others to eat less meat. However, I’ve found it challenging to engage students who aren’t already interested in vegetarianism.

For instance, last semester, I organized a Meatless Monday Lunch every week. The same group of people I already knew would attend, but it didn’t attract new participants. I even offered free lunches to students to make it more appealing, but that didn’t seem to help.

I also hosted a documentary screening about the health effects of eating meat. Attendance was very low—fewer than 10 people showed up—and most of them seemed distracted, spending their time on their phones.

On the bright side, our canteen has improved its plant-based options with our help. I think this may encourage more people to try them. Unfortunately, I don’t have access to the canteen’s data, so I’m not sure if this idea actually worked. Personally it did make eating at the canteen a bit more pleasant.

Thank you for sharing your work and taking an active stance to promote vegetarianism. In my personal experience of becoming vegetarian, two things stand out that may be relevant:

  1. I’m quite sure my own decision has significantly changed how much meat my family and friends consume, both in the meals we cook and share as well as their comments suggesting that they’re trying to take my lead and eat a little less meat.
  2. Discovering a wide variety of delicious and easy-to-cook vegetarian food has made the switch easier for me and also encouraged my family to cook those dishes more frequently.

My point is that these small and seemingly self-directed actions (I never actively tried to convince others) can make a difference. Striving for wider impact is important, but don't undervalue your individual efforts!

Definitely! When I went vegan, I prompted someone I know to look up how dairy cows are treated (not well), and they changed their diet quite a bit in light of that. So I have seen downstream effects personally.  Caveat that I am annoying and prone to evangelize.

And if I were going to promote one definitely-not-scalable intervention to one very-hard-to-reach population, I would take a bunch of die-hard meat eaters to Han Dynasty on the Upper West Side of Manhattan and order 1) DanDan noodles without pork, 2) pea leaves with garlic, 3) cumin tofu, 4) kung pao tofu, and 5) eggplant in garlic sauce for the table, and then just be like "hello, is this not delicious??" every 30 seconds 😃

That sounds very interesting!

Making things more pleasant for vegetarians and vegans is a good thing to do, even if it does not change other people's behavior too much. 

In the long run, we want to make vegetarianism seem just as "nice, natural, and normal" (https://www.sciencedirect.com/science/article/abs/pii/S0195666315001518) as eating meat.

I think things like a Meatless Monday Lunch are very helpful for that. 

Thanks for sharing! Great work.

Likewise, we encourage researchers to think clearly about the difference between reducing all MAP consumption and reducing just some particular category of it. RPM is of special concern for its environmental and health consequences, but if you care about animal welfare, a society-wide switch from beef to chicken is probably a disaster.

Agreed:

I concluded the harm caused to humans by the annual GHG emissions of a random person is 0.0660 DALY, and that caused to farmed animals by their annual food consumption is 4.04 DALY, i.e. 61.2 times as much. In my mind, this implies one should overwhelmingly focus on minimising animal suffering in the context of food consumption.

Thank you so much for this research. Is there a more intuitive way to interpret SMD values? For example, how many standard deviations is an average vegetarian away from the average person in the general population?

Thank you for your kind words!

Putting SMDs into sensible terms is a continual struggle. I don't think it'll be easy to put vegetarians and meat eaters on a common scale, because if vegetarians are all clustered around zero meat consumption, then the distance between vegetarians and meat eaters is just telling you how much meat the meat-eating group eats, and that changes a lot between populations.
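To put that in rough notation (a sketch of the point, not a calculation from the paper): if vegetarians all sit near zero consumption, then

$$\mathrm{SMD} = \frac{\bar{x}_{\text{meat}} - \bar{x}_{\text{veg}}}{s_{\text{pooled}}} \approx \frac{\bar{x}_{\text{meat}}}{s_{\text{pooled}}},$$

so the "distance" mostly reflects the mean and spread of the meat-eating group in whatever population you happen to be studying.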

Also, different disciplines have different ideas about what a 'big' effect size is. Andrew Gelman writes something I like about this:

the first problem I noticed with that meta-analysis was an estimated average effect size of 0.45 standard deviations. That’s an absolutely huge effect, and, yes, there could be some nudges that have such a large effect, but there’s no way the average of hundreds would be that large. It’s easy, though, to get such a large estimate by just averaging hundreds of estimates that are subject to massive selection bias. So it’s no surprise that they got an estimate of 0.45, but we shouldn’t take this as an estimate of treatment effects.

But by convention, an SMD of 0.5 is typically just considered a 'medium' effect. I tend to agree with Gelman that changing people's behavior by half a standard deviation on average is huge. 

A different approach: here are a few studies, their main findings in normal terms, the SMD that translates to, and whether subjectively that's considered big or small:

So, for instance, the absolute change in the third study is a lot smaller than the absolute change in the first but has a bigger SMD because there's less variation in the dependent variable in that setting.

So anyway this is another hard problem. But in general, nothing here is equating to the kind of radical transformation that animal advocates might hope for.

I believe the thing people would be most willing to change their behaviour for is feeling part of an in-group, e.g. when people know that they are expected to do X and that people around them will know if they do not. But that is very hard to implement.

Agreed that it's hard to implement: much easier to say "vegetarian food is popular at this cafe!" than to convince people that they are expected to eat vegetarian.

See here for a review of the 'dynamic norms' part of this literature (studies that tell people that vegetarianism is growing in popularity over time): https://osf.io/preprints/psyarxiv/qfn6y

Do you have a sense of the acceptability rates (i.e. what proportions of the treatment population moderately decreased their meat consumption)? Additionally, how did you account for selection effects (i.e. if a study includes vegetarians, those participants presumably wouldn’t see behaviour change)?

My mental model right now is that some small proportion of Western populations are amenable to meat reductions, with a sharp fall-off after this. Using these techniques on less aware populations might work, but we could assume that most high-income Western populations have already been exposed to these techniques and made up their minds. Averaged over a study, seeing a handful of participants change their minds in moderate ways would show a small effect size, or none at all, depending on the recruited population.

But I know very little about this area, so I assume the above is wrong. I just wanted to know in what ways, and what’s borne out by the data you have.

👋 Great questions!

  1. Most studies in our dataset don't report these kinds of fine-grained results, but in general my impression from the texts is that the typical study gets a lot of people to change their behavior a little. (In part because if they got people to go vegan I expect they would say that.)
  2. Some studies deliberately exclude vegetarians as part of their recruitment process, but most just draw from whatever population is at hand. Somewhere between 2 and 5% of people identify as vegetarians (and many of them eat meat sometimes), so I don't personally worry too much about this curtailing results. A few studies specifically recruit people who are motivated to change their diets and/or help animals, e.g. Cooney (2016) recruited people who wanted to help Mercy For Animals evaluate its materials.
  3. I think this is a fair mental model, but one of the main open questions of our paper is how to get people to cut back on meat in general rather than just on a few categories, e.g. red and processed meat. So my mental model is that most people have heard that raising cows is bad for the environment, and those who are cutting back are substituting partly to plant-based substitutes (reps from Impossible Foods noted at a recent meeting that most of their customers also eat meat) and partly to chicken and fish; e.g. the Mayo Clinic's page on heart-healthy diets suggests "Lean meat, poultry and fish; low-fat or fat-free dairy products; and eggs are some of the best sources of protein...Fish is healthier than high-fat meats", although it also says that "Eating plant protein instead of animal protein lowers the amounts of fat and cholesterol you take in."

So I'd say we still have a lot of open questions...

Executive summary: A meta-analysis of randomized controlled trials finds no well-validated approaches for reducing overall meat and animal product consumption, though reducing specifically red and processed meat consumption shows more promise.

Key points:

  1. Analysis of 35 papers (41 studies, 112 interventions) shows very small overall effects (SMD = 0.07) for reducing meat consumption
  2. Main intervention approaches tested: Persuasion, Choice Architecture, Psychology, and combinations - none showed strong effectiveness
  3. Red/processed meat reduction specifically showed larger effects (SMD = 0.25) than general meat reduction, but may lead to increased chicken/fish consumption
  4. More rigorous studies tend to show smaller effects than less rigorous ones, suggesting previous literature may overestimate intervention effectiveness
  5. Many promising approaches (e.g., extended contact with farm animals, price manipulations, disgust activation) await rigorous evaluation

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

Thanks for sharing the details of this research - it is very valuable for arriving at an accurate assessment of various interventions.

One question with regard to the methodology of these RCTs is when and for how long did they record the consumption pattern of the participants following the intervention? Specifically, do we have any insights on short-term vs long-term impact of such interventions focused on behavioral change? 

 

Also, I understand that you report the results as SMD. However, it is quite likely that a small minority in the treatment group contributes most of the observed difference. Do we know anything about the percentage of individuals who are likely to make considerable changes to their dietary patterns based on these interventions?

Hi there,

  1. Delays run the gamut. Jalil et al. (2023) measure three years' worth of dining choices and Weingarten et al. a few weeks; other studies measure what's eaten at a dining hall during treatment and control but with no individual outcomes; and others use structured recall tasks, e.g. 3/7/30 days after treatment, that ask people to report what they ate in a 24-hour period or over a given week. We did a bit of exploratory work on the relationship between length of delay and outcome size and didn't find anything interesting.

  2. I'm afraid we don't know that overall. A few studies did moderator analyses and found that people who scored high on some scale or personality factor tended to reduce their MAP consumption more, but no moderator stood out to us as a solid predictor. Some studies found that women seem more amenable to messaging interventions, based on the results of Piester et al. (2020) and a few others, but some studies that exclusively targeted women found very little. I think gendered differences are interesting here, but we didn't find anything conclusive.
