I am a Research Scientist at the Humane and Sustainable Food Lab at Stanford and a nonresident fellow at the Kahneman-Treisman Center at Princeton. By trade, I am a meta-analyst.
Hi Wayne,
Great questions! I'll try to give them the thoughtful treatment they deserve.
This is in the supplement rather than the paper, but one of our depressing results is that rigorous evaluations published by nonprofits, such as The Humane League, Mercy For Animals, and Faunalytics, produce a small backlash on average (see table below). But it's also my impression that many of these groups have changed gears, and are now focusing less on (e.g.) leafleting and direct persuasion efforts and more on corporate campaigns, undercover investigations, and policy work. I don't know if they have moved in this direction specifically because a lot of their prior work was showing null/backlash results, but in general I think this shift is a good idea given the current research landscape.
4. Pursuant to that, economists working on this sometimes talk about the consumer-citizen gap, where people will support policies that ban practices whose products they'll happily consume. (People are weird!) For my money, if I were a significant EA donor in this space, I might focus here: message testing ballot initiatives, preparing for lengthy legal battles, etc. But as always with these things, the details matter. If you ban factory farms in California and lead Californians to source more of their meat from (e.g.) Brazil, and therefore cause more of the rainforest to be clearcut -- well that's not obviously good either.
5. Almost all interventions in our database targeted meat rather than other animal products (one looked at fish sauce and a couple also measured consumption of eggs and dairy). Also, a lot of studies just say the choice was between a meat dish and a vegetarian dish, and sometimes omit whether that vegetarian dish contained eggs or milk. But in general, I'd think of these as "less meat" interventions.
Sorry I can't offer anything more definitive here about what works and where people should donate. An economist I like says his dad's first rule of social science research was: "Sometimes it’s this way, and sometimes it’s that way," and I suppose I hew to that 😃
👋 Great questions!
So I'd say we still have a lot of open questions...
👋 Our pleasure!
To the best of my recollection, the only paper in our dataset that provides a cost-benefit estimation is Jalil et al. (2023):
Calculations indicate a high return on investment even under conservative assumptions (~US$14 per metric ton CO2eq). Our findings show that informational interventions can be cost effective and generate long-lasting shifts towards more sustainable food options.
There's also a red/processed meat study --- Emmons et al. (2005) --- that does some cost-effectiveness analyses, but it's almost 20 years old and its reporting is really sparse: changes to the eating environment "were not reported in detail, precluding more detailed analyses of this intervention." So I'd stick with Jalil et al. to get a sense of ballpark estimates.
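To make the cost-per-ton framing concrete, here is a back-of-envelope sketch of the arithmetic. Every number below is hypothetical and chosen only to illustrate how a figure like ~US$14/tCO2eq comes about; none of it is taken from Jalil et al. or any other study:

```python
# Back-of-envelope cost-effectiveness arithmetic (all inputs hypothetical).
campaign_cost_usd = 7_000        # hypothetical total cost of the intervention
meals_shifted = 200_000          # hypothetical meat meals replaced by vegetarian ones
co2e_saved_per_meal_kg = 2.5     # hypothetical emissions difference per meal, in kg

tons_co2e_averted = meals_shifted * co2e_saved_per_meal_kg / 1000
cost_per_ton = campaign_cost_usd / tons_co2e_averted

print(f"{tons_co2e_averted:.0f} tCO2eq averted, ${cost_per_ton:.2f} per tCO2eq")
# → 500 tCO2eq averted, $14.00 per tCO2eq
```

The real work in any such estimate is defending the two middle inputs (how many meals actually shifted, and for how long), which is what makes Jalil et al.'s three-year outcome window so valuable.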
Agreed that it's hard to implement: much easier to say "vegetarian food is popular at this cafe!" than to convince people that they are expected to eat vegetarian.
See here for a review of the 'dynamic norms' part of this literature (studies that tell people that vegetarianism is growing in popularity over time): https://osf.io/preprints/psyarxiv/qfn6y
Thank you for your kind words!
Putting SMDs into sensible terms is a continual struggle. I don't think it'll be easy to put vegetarians and meat eaters on a common scale: if vegetarians are all clustered around zero meat consumption, then the distance between the two groups is entirely telling you how much meat the meat eater group eats, and that changes a lot between populations.
Also, different disciplines have different ideas about what a 'big' effect size is. Andrew Gelman writes something I like about this:
the first problem I noticed with that meta-analysis was an estimated average effect size of 0.45 standard deviations. That’s an absolutely huge effect, and, yes, there could be some nudges that have such a large effect, but there’s no way the average of hundreds would be that large. It’s easy, though, to get such a large estimate by just averaging hundreds of estimates that are subject to massive selection bias. So it’s no surprise that they got an estimate of 0.45, but we shouldn’t take this as an estimate of treatment effects.
But by convention, an SMD of 0.5 is typically just considered a 'medium' effect. I tend to agree with Gelman that changing people's behavior by half a standard deviation on average is huge.
A different approach: here are a few studies, their main findings in plain terms, the SMD that translates to, and whether that's subjectively considered big or small:
So, for instance, the absolute change in the third study is a lot smaller than the absolute change in the first but has a bigger SMD because there's less variation in the dependent variable in that setting.
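That denominator point can be made concrete with a quick sketch: two hypothetical studies with the same absolute drop in weekly meat servings yield very different SMDs when the spread of the outcome differs. This computes Cohen's d with a pooled standard deviation; all the means, SDs, and sample sizes are invented for illustration:

```python
import math

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference using a pooled standard deviation."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# Both hypothetical studies observe a 0.5 servings/week reduction...
d_noisy = cohens_d(6.5, 7.0, sd_t=5.0, sd_c=5.0, n_t=100, n_c=100)  # noisy population
d_tight = cohens_d(6.5, 7.0, sd_t=1.0, sd_c=1.0, n_t=100, n_c=100)  # homogeneous one

print(round(d_noisy, 2), round(d_tight, 2))  # → -0.1 -0.5
```

Same behavioral change, a fivefold difference in SMD, which is why comparing standardized effects across settings with very different outcome variances can mislead.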
So anyway, this is another hard problem. But in general, nothing here amounts to the kind of radical transformation that animal advocates might hope for.
Hi Victoria, thanks for asking!
The stats classes I took in grad school typically had problem sets in R, so I learned that. I got better at it in summer 2016 when I used it for the paper I worked on. The first real job I got in tech was doing technical support for academic researchers who were using a computational reproducibility platform, so knowing a bit of R and being able to pick up enough of the other languages to get by -- mostly some shell scripting and package installation commands in Python/Julia/etc. -- was helpful. Mostly I just learned the bits and pieces I needed to know and didn't really approach the question systematically.
The data analyst job I got was in an R shop. If I had been more motivated by the problem and a better fit at the company, I might still be doing that.
Hi there,
Delays run the gamut. Jalil et al. (2023) measure three years' worth of dining choices, Weingarten et al. a few weeks; other studies measure what's eaten at a dining hall during treatment and control but with no individual outcomes; and others use structured recall tasks 3/7/30 days after treatment that ask people to report what they ate in a 24-hour period or over a given week. We did a bit of exploratory work on the relationship between length of delay and outcome size and didn't find anything interesting.
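For what it's worth, that kind of exploratory check is just a meta-regression of effect size on delay. Here is a minimal sketch: an inverse-variance-weighted least-squares fit of SMD on log(delay in days). The study triples below are invented toy data, not our actual estimates:

```python
import math

# Hypothetical (SMD, sampling variance, delay in days) for a handful of studies.
studies = [(-0.20, 0.010, 1), (-0.15, 0.020, 7), (-0.05, 0.015, 30),
           (-0.10, 0.030, 90), (-0.08, 0.025, 365)]

# Weight each study by the inverse of its sampling variance.
w = [1 / v for _, v, _ in studies]
x = [math.log(d) for _, _, d in studies]   # log of delay
y = [s for s, _, _ in studies]             # effect sizes

sw = sum(w)
xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
slope = (sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
         / sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x)))
intercept = ybar - slope * xbar

print(f"weighted slope on log(delay): {slope:.3f}")
```

With real data you'd want a proper random-effects meta-regression (e.g. what R's metafor package calls a mixed-effects model), but the logic is the same: a slope near zero is the "nothing interesting" result.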
I'm afraid we don't know that overall. A few studies did moderator analyses and found that people who scored high on some scale or personality factor tended to reduce their MAP consumption more, but no moderator stood out to us as a solid predictor. Piester et al. (2020) and a few other studies found that women seem more amenable to messaging interventions, but some studies that exclusively targeted women found very little. I think gendered differences are interesting here, but we didn't find anything conclusive.