Seth Ariel Green

Research Scientist @ Humane and Sustainable Food Lab
1001 karma · Working (6-15 years) · New York, NY, USA
setharielgreen.com

Bio

I am a Research Scientist at the Humane and Sustainable Food Lab at Stanford and a nonresident fellow at the Kahneman-Treisman Center at Princeton. By trade, I am a meta-analyst.

Comments

That sounds very interesting!

Making things more pleasant for vegetarians and vegans is a good thing to do, even if it does not change other people's behavior too much. 

In the long run, we want to make vegetarianism seem just as "nice, natural, and normal" (https://www.sciencedirect.com/science/article/abs/pii/S0195666315001518) as eating meat.

I think things like a Meatless Monday Lunch are very helpful for that. 

Hi there,

  1. Delays run the gamut. Jalil et al. (2023) measure three years' worth of dining choices, Weingarten et al. a few weeks; other studies measure what's eaten at a dining hall during treatment and control but report no individual-level outcomes; and still others are structured recall tasks, administered 3/7/30 days after treatment, that ask people to report what they ate in a 24-hour period or over a given week. We did a bit of exploratory work on the relationship between length of delay and outcome size and didn't find anything interesting.

  2. I'm afraid we don't know that overall. A few studies did moderator analyses and found that people who scored high on some scale or personality factor tended to reduce their MAP consumption more, but no moderator stood out to us as a solid predictor. Based on Piester et al. (2020) and a few others, women seem more amenable to messaging interventions, but some studies that exclusively targeted women found very little. I think gendered differences are interesting here, but we didn't find anything conclusive.

Hi Wayne,

Great questions, I'll try to give them the thoughtful treatment they deserve.

  1. We don't place much (any?) credence in the statistical significance of the overall result, and I recognize that a lot of work is being done by the word "meaningfully" in "meaningfully reducing." For us, changes on the order of a few percentage points -- especially given relatively small samples and vast heterogeneity of designs and contexts (hence our point about "well-validated" -- almost nothing is directly replicated out of sample in our database) -- are not the kinds of transformational change that others in this literature have touted. Another way to slice this, if you were looking to evaluate results based on significance, is to look at how many results are, according to their own papers, statistical nulls: 95 out of 112, or about 85%. (On the other hand, many of these studies might be finding small but real effects and just not be sufficiently powered to identify them: if you expect d > 0.4 because you read past optimistic reviews, an effect of d = 0.04 is going to look like a null, even if real changes are happening.) So my basic conclusion is that marginal changes probably are possible -- in that sense, yes, many of these interventions probably "work" -- but I wouldn't call the changes transformative. I think the proliferation of GLP-1 drugs is much more likely to be transformative.
  2. It's true that cost-effectiveness estimates might still be very good even if the results are small. If there were a way to scale up the Jalil et al. intervention, I'd probably recommend it right away. But I don't know of any such opportunity. (It requires getting professors to substitute out a normal economics lecture for one focused on meat consumption, and we'd probably want at least a few other schools to do measurement to validate the effect; my impression from talking to the authors is that measurement was a huge lift.) I also think that choice architecture approaches are promising and awaiting a new era of evaluation. My lab is working on some of these; for someone interested in supporting the evaluation side of things, donating to the lab might be a good fit.
  3. This is in the supplement rather than the paper, but one of our depressing results is that rigorous evaluations published by nonprofits, such as The Humane League, Mercy For Animals, and Faunalytics, produce a small backlash on average (see table below). But it's also my impression that many of these groups have changed gears and are now focusing less on (e.g.) leafleting and direct persuasion and more on corporate campaigns, undercover investigations, and policy work. I don't know if they moved in this direction specifically because a lot of their prior work was showing null/backlash results, but in general I think this shift is a good idea given the current research landscape.

  4. Pursuant to that, economists working on this sometimes talk about the consumer-citizen gap, where people will support policies that ban practices whose products they'll happily consume. (People are weird!) For my money, if I were a significant EA donor in this space, I might focus here: message testing ballot initiatives, preparing for lengthy legal battles, etc. But as always with these things, the details matter. If you ban factory farms in California and lead Californians to source more of their meat from (e.g.) Brazil, thereby causing more of the rainforest to be clearcut -- well, that's not obviously good either.

  5. Almost all interventions in our database targeted meat rather than other animal products (one looked at fish sauce, and a couple also measured consumption of eggs and dairy). Also, a lot of studies just say the choice was between a meat dish and a vegetarian dish; whether that vegetarian dish contained eggs or milk is sometimes omitted. But in general, I'd think of these as "less meat" interventions.
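On the power point in (1) above, a back-of-the-envelope calculation shows why a real but tiny effect reads as a null. This is an illustrative sketch with made-up sample sizes, not numbers from the paper:

```python
from math import sqrt
from statistics import NormalDist

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test for
    standardized effect size d with n_per_group per arm."""
    norm = NormalDist()
    z_crit = norm.inv_cdf(1 - alpha / 2)
    ncp = d * sqrt(n_per_group / 2)  # noncentrality under the alternative
    return norm.cdf(ncp - z_crit) + norm.cdf(-ncp - z_crit)

# With 100 participants per arm (hypothetical):
print(round(power_two_sample(0.40, 100), 2))  # ~0.81: well powered
print(round(power_two_sample(0.04, 100), 2))  # ~0.06: will almost surely look null
```

So a study sized to detect d = 0.4 has only about a 6% chance of flagging a true d = 0.04, which is why "statistical null" and "no real effect" aren't the same claim.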

Sorry I can't offer anything more definitive here about what works and where people should donate. An economist I like says his dad's first rule of social science research was: "Sometimes it's this way, and sometimes it's that way," and I suppose I hew to that 😃

👋 Great questions!

  1. Most studies in our dataset don't report these kinds of fine-grained results, but in general my impression from the texts is that the typical study gets a lot of people to change their behavior a little. (In part because if they got people to go vegan I expect they would say that.)
  2. Some studies deliberately exclude vegetarians as part of their recruitment process, but most just draw from the population at large. Somewhere between 2 and 5% of people identify as vegetarians (and many of them eat meat sometimes), so I don't personally worry too much about this curtailing results. A few studies specifically recruit people who are motivated to change their diets and/or help animals; e.g., Cooney (2016) recruited people who wanted to help Mercy for Animals evaluate its materials.
  3. I think this is a fair mental model, but one of the main open questions of our paper is how to get people to cut back on meat in general vs. just a few categories, e.g. red and processed meat. So I guess my mental model is that most people have heard that raising cows is bad for the environment, and those who are cutting back are substituting partly to plant-based alternatives (reps from Impossible Foods noted at a recent meeting that most of their customers also eat meat) and partly to chicken and fish. E.g., the Mayo Clinic's page on heart-healthy diets suggests "Lean meat, poultry and fish; low-fat or fat-free dairy products; and eggs are some of the best sources of protein...Fish is healthier than high-fat meats," although it also says that "Eating plant protein instead of animal protein lowers the amounts of fat and cholesterol you take in."

So I'd say we still have a lot of open questions...

👋 Our pleasure!

To the best of my recollection, the only paper in our dataset that provides a cost-benefit estimate is Jalil et al. (2023):

Calculations indicate a high return on investment even under conservative assumptions (~US$14 per metric ton CO2eq). Our findings show that informational interventions can be cost effective and generate long-lasting shifts towards more sustainable food options.

There's also a red/processed meat study --- Emmons et al. (2005) --- that does some cost-effectiveness analyses, but it's almost 20 years old and its reporting is really sparse: changes to the eating environment "were not reported in detail, precluding more detailed analyses of this intervention." So I'd stick with Jalil et al. to get a sense of ballpark estimates.

Agreed that it's hard to implement: much easier to say "vegetarian food is popular at this cafe!" than to convince people that they are expected to eat vegetarian.

See here for a review of the 'dynamic norms' part of this literature (studies that tell people that vegetarianism is growing in popularity over time): https://osf.io/preprints/psyarxiv/qfn6y

Thank you for your kind words!

Putting SMDs into sensible terms is a continual struggle. I don't think it'll be easy to put vegetarians and meat eaters on a common scale: if vegetarians are all clustered around zero meat consumption, then the distance between the two groups mostly tells you how much meat the meat-eating group eats, and that changes a lot between populations.

Also, different disciplines have different ideas about what a 'big' effect size is. Andrew Gelman writes something I like about this:

the first problem I noticed with that meta-analysis was an estimated average effect size of 0.45 standard deviations. That’s an absolutely huge effect, and, yes, there could be some nudges that have such a large effect, but there’s no way the average of hundreds would be that large. It’s easy, though, to get such a large estimate by just averaging hundreds of estimates that are subject to massive selection bias. So it’s no surprise that they got an estimate of 0.45, but we shouldn’t take this as an estimate of treatment effects.

By convention, an SMD of 0.5 is considered a 'medium' effect, but I tend to agree with Gelman that changing people's behavior by half a standard deviation on average is huge.

A different approach: here are a few studies, their main findings in normal terms, the SMD that translates to, and whether that's subjectively considered big or small.

So, for instance, the absolute change in the third study is a lot smaller than the absolute change in the first but has a bigger SMD because there's less variation in the dependent variable in that setting.
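To make that concrete with hypothetical numbers (not from any study in the database): the SMD is just the raw difference divided by the pooled standard deviation, so a smaller absolute change can carry a larger SMD when the outcome varies less in that setting.

```python
def smd(mean_treatment, mean_control, sd_pooled):
    """Standardized mean difference (Cohen's d): raw difference / pooled SD."""
    return (mean_treatment - mean_control) / sd_pooled

# Hypothetical weekly meat servings:
print(smd(8.0, 10.0, 5.0))  # -2.0 servings in a high-variance setting -> SMD = -0.4
print(smd(9.5, 10.0, 1.0))  # only -0.5 servings, but less variance -> SMD = -0.5
```

The second (smaller) absolute change yields the larger standardized effect, which is the pattern described above.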

So anyway, this is another hard problem. But in general, nothing here amounts to the kind of radical transformation that animal advocates might hope for.

I emailed this to my lab group — I got a lot out of last summer’s conference 👍

That’s interesting, I’m not sure what accounts for the differences (this is not my research area). If anything I would expect demand for the booster to be more price sensitive than for the initial dose.

Hi Victoria, thanks for asking!

The stats classes I took in grad school typically had problem sets in R, so I learned that. I got better at it in summer 2016 when I used it for the paper I worked on. The first real job I got in tech was doing technical support for academic researchers who were using a computational reproducibility platform, so knowing a bit of R and being able to pick up enough of the other languages to get by -- mostly some shell scripting and package installation commands in Python/Julia/etc. -- was helpful. Mostly I just learned the bits and pieces I needed to know and didn't really approach the question systematically.

The data analyst job I got was in an R shop. If I had been more motivated by the problem and a better fit at the company, I might still be doing that.
