I am a Research Scientist at the Humane and Sustainable Food Lab at Stanford.
The lab I work at is seeking collaborators! More here.
If you want to write a meta-analysis, I'm happy to consult! I think I know something about what kinds of questions are good candidates, what your default assumptions should be, and how to delineate categories for comparisons.
A final reflective note: David, I want to encourage you to think about the optics/politics of this exchange from the point of view of prospective Unjournal participants/authors. There are no incentives to participate. I did it because I thought it would be fun and I wondered whether anyone would have ideas or extensions that improved the paper. Instead, I got some rather harsh criticisms implying we should have written a totally different paper. Then I got this essay, which was unexpected/unannounced and which used, again, rather harsh language to which I objected. Do you think this exchange looks like an appealing experience to others? I'd say the answer is probably not.
A potential alternative: I took a grad school seminar where we replicated and extended other people's papers. Typically the assignment was to do the robustness checks in R or whatever, and then the author would come in and we'd discuss. It was a great setup. It worked because the grad students actually did the work, which gave authors an incentive to participate. The co-teachers also pre-selected papers that they thought were reasonably high-quality, and I bet that if they got a student response like Matthew's, they would have counseled them to be much more conciliatory, to remember that participation is voluntary, to think through the risks of making enemies (as I counseled in my original response), etc. I wonder if something like that would work here too. Like, the expectation is that reviewers will computationally reproduce the paper, conduct extensions and robustness checks, ask questions if they have them, work collaboratively with authors, and then publish a review summarizing the exchange. That would be enticing! Instead what I got here was like a second set of peer reviewers, and unusually harsh ones at that, and nobody likes peer review.
It might be that meta-analyses aren't good candidates for this kind of work, because the extensions/robustness checks would probably also have taken Matthew and the other responder weeks: a fine end-of-semester project for class credit, but not a very enticing hobby.
Just a thought.
For what it's worth, I thought David's characterization of the evaluations was totally fair, even a bit toned down. For example, this is the headline finding of one of them:

> major methodological issues undermine the study's validity. These include improper missing data handling, unnecessary exclusion of small studies, extensive guessing in effect size coding, lacking a serious risk-of-bias assessment, and excluding all-but-one outcome per study.
David characterizes these as "constructive and actionable insights and suggestions". I would say they are tantamount to asking for a new paper, especially the exclusion of small studies, which was core to our design; reversing it would require a whole new search, which would take months. To me, it was obvious that I was not going to do that (the paper had already been accepted for publication at that point). The remaining suggestions also implied dozens (hundreds?) of hours of work. Spending weeks satisfying two critics didn't pass a cost-benefit test.[1] It wasn't a close call.
I really need to follow my own advice now and go actually do other projects 😃
@geoffrey We'd love to run a megastudy! My lab put in a grant proposal with collaborators at a different Stanford lab to do just that, but we ultimately went a different direction. Today, however, I generally believe that we don't even know what the right question to ask is, though if I had to choose one, it would be: which ballot initiative does the most for animal welfare while also getting the highest level of public support? E.g., is there some other low-hanging fruit, equivalent to "cage free," like "no mutilation," that would be equally popular? But in general I think we're back to the drawing board in terms of figuring out what study we want to run and getting a version of it off the ground, before we start thinking about scaling up to tens of thousands of people.
@david_reinstein, I suppose any press is good press, so I should be happy that you are continuing to mull over the lessons of our paper 😃 but I am disappointed to see that the core point of my responses is not getting through. I'll frame it explicitly here: when we did one check and not another, or one search protocol and not another, the reason, every single time, was opportunity costs. When I say "we thought it made more sense to focus on the risks of bias that seemed most specific to this literature," I am using the word 'focus' deliberately, in the sense of "focus means saying no." In other words, because of opportunity costs, we are always triaging. At every juncture, navigating the explore/exploit dilemma requires judgment calls. You don't have to like that I said no to you, but it's not a false dichotomy, and I do not care for that characterization.
To the second question of whether anyone will do this kind of extension work: I personally see it as a great exercise for grad students. I did all kinds of replication and extension work in grad school. A deep dive into a subset of the contact hypothesis literature that I did in a political psychology class in 2014, which started with a replication attempt, eventually morphed into The Contact Hypothesis Re-evaluated. If a grad student wanted to do this kind of project, please be in touch; I'd love to hear from you.
That's interesting, but not what I'm suggesting. I'm suggesting something that would, e.g., explain why you tell people to "ignore the signs of my estimates for the total welfare" when you share posts with them. That is a particular style, and it says something about whether one should take your work in a literal spirit or not, which falls under the meta category of why you write the way you write; and to my earlier point, you're sharing this here with me in a comment rather than in the post itself 😃 Finally, the fact that there's a lot of uncertainty about whether wild animals have positive or negative lives is exactly the point I raised about why I have trouble engaging with your work. The meta post I am suggesting would, by contrast, motivate and justify this style of reasoning as a whole, rather than providing a particular example of it. The post you've shared is a link in a broader chain. I'm suggesting you zoom out and explain what you like about this chain and why you're building it.
(Vasco asked me to take a look at this post and I am responding here.)
Hi Vasco,
I've been taking a minute to reflect on what I want to say about this kind of project. A few different thoughts, at a few different levels of abstraction.
I am amenable to this argument and generally skeptical of longtermism on practical grounds. (I have a lot of trouble thinking of someone 300-500 years ago plausibly doing anything with my interests in mind that actually makes a difference. Possible exceptions include folks associated with the Glorious Revolution.)
I think the best counterargument is that it’s easier to set things on a good course than to course correct. Analogy: easier to found Google, capitalizing on advertisers’ complacency, than to fix advertising from within; easier to create Zoom than to get Microsoft to make Skype good.
I'm not saying this is right, but I think that is how I would try to motivate working on longtermism if I did (work on longtermism).
Love these questions, and love talking the nitty-gritty of meta-analysis 😃
> Too often, research syntheses focus solely on estimating effect sizes, regardless of whether the treatments are realistic, the outcomes are assessed unobtrusively, and the key features of the experiment are presented in a transparent manner. Here we focus on what we term landmark studies, which are studies that are exceptionally well-designed and executed (regardless of what they discover). These studies provide a glimpse of what a meta-analysis would reveal if we could weight studies by quality as well as quantity.
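For concreteness, here is a minimal sketch (in Python, with entirely made-up effect sizes, variances, and quality scores) of the naive version of "weighting by quality as well as quantity": scaling standard inverse-variance weights by a 0-1 quality score. This is not the quoted authors' landmark-studies approach, which highlights exemplary studies rather than reweighting; it just illustrates the counterfactual they gesture at.

```python
import numpy as np

# Hypothetical effect sizes (standardized mean differences), their sampling
# variances, and 0-1 quality scores; all values are made up for illustration.
d = np.array([0.30, 0.12, 0.45, 0.05])
v = np.array([0.02, 0.01, 0.05, 0.03])
quality = np.array([1.0, 0.9, 0.4, 0.7])

# Standard fixed-effect meta-analysis: inverse-variance weights,
# so more precise (usually larger) studies count for more.
w_iv = 1 / v
est_iv = np.sum(w_iv * d) / np.sum(w_iv)

# One naive way to fold in quality: multiply the inverse-variance
# weight by the quality score, downweighting weaker designs.
w_q = w_iv * quality
est_q = np.sum(w_q * d) / np.sum(w_q)

print(f"Inverse-variance pooled estimate: {est_iv:.3f}")
print(f"Quality-weighted pooled estimate: {est_q:.3f}")
```

In practice, of course, the hard part is the quality score itself, which is exactly why judgment calls about risk-of-bias assessment and study inclusion loom so large in the discussion above.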