1st year PhD student in Agricultural and Resource Economics at Berkeley. Likes animal welfare, development economics and impact evaluation. Past lives at World Bank, IMF, and doing software engineering.
Chatting about the intersection of animal welfare, economics, and development.
Happy to chat about
- teaching yourself to code and getting a software engineer role
- junior roles at either World Bank or IMF (I can't do referrals though!)
- picking a Master's program for transitioning into public policy
- career considerations from a less privileged background
- learning math
- self-esteem, anxiety, and mental health issues
Best way to reach me is geoffreyyip@fastmail.com
Would you recommend that I share any such posts with both the authors and the evaluators before making them?
Yes. But zooming back out, I don't know if these EA Forum posts are necessary.
A practice I saw at i4replication (or some other replication lab) is that the editors didn't provide any "value-added" commentary on any given paper. At least, I didn't see any in their tweets. They link to the evaluation reports plus a response from the author and leave it at that.
Once in a while, there will be a retrospective on how the replications are going as a whole. But I think they refrain from commenting on any paper.
If I had to rationalize why they do that, my guess is that replications are already an opt-in thing with lots of downside. And psychologically, editor commentary has a lot more potential for unpleasantness. Peer review tends to be anonymous, so it doesn't feel as personal; the critics are kept secret. But editor commentary isn't secret, so it does feel personal, and editors tend to have more clout.
Basically, I think the bar for an editor commentary post like this should be even higher than the usual process. And the usual evaluation process already allows for author review and response. So I think a "value-added" post like this should pass a higher bar of diplomacy and insight.
Chiming in here with my outsider impressions on how fair the process seems
@david_reinstein If I were to rank the evaluator reports, the evaluation summary, and the EA Forum post by which seemed the most fair, I would rank the Forum post last. It wasn't until I clicked through to the evaluation reports that I felt the process wasn't so cutting.
Let me focus on one very specific framing in the Forum post, since it feels representative. One heading includes the phrase "this meta-analysis is not rigorous enough". This has a few connotations that you probably didn't mean. One, this meta-analysis is much worse than others. Two, the claims are questionable. Three, there's a universally correct level of quality that meta-analyses should reach and anything that falls short of that is inadmissible as evidence.
In reality, this meta-analysis seems par for the course in terms of quality. It was probably also more difficult to produce, given the heterogeneity in the literature. And the central claim of the meta-analysis doesn't seem like something either evaluator disputed (though one evaluator was hesitant).
Again, I know that's not what you meant and there are many caveats throughout the post. But it's one of a few editorial choices that make the Forum post seem much more critical than the evaluation reports, which is a bit unusual since the Evaluators are the ones who are actually critiquing the paper.
Finally, one piece of context that felt odd not to mention was the fundamental difficulty of finding an expert in both food consumption and meta-analysis. That limits the ability of any reviewer to make a fair evaluation. This is acknowledged at the bottom of the Evaluation Summary. Elsewhere, I'm not sure where it's said. Without that mentioned, I think it's easy for a casual reader to leave thinking the two Evaluators are the "most correct".
Really enjoyed this. Not much public debate in this space as far as I can see. To two of your cruxes:
Is meta-analysis even useful in these contexts, with heterogeneous interventions, outcomes, and analytical approaches?
Will anyone actually do/fund/reward rigorous continued work?
I've sometimes wondered if it'd be worth funding a "mega study" like Milkman et al. (2021). They tested 54 different interventions to boost exercise among 61,000 gym members. Something similar for meat reduction could allow for some clean apples-to-apples comparisons.
I've seen the number $2.6 million floating around for how much that study cost. Granted, that's probably on top of convincing the mega-team of researchers to work on the project, which might only happen through the prestige of an academic lab. But it's also not an astronomical cost. And there'd still be some learning value from a smaller set of interventions and a smaller sample.
This might be a better use of resources than striving for the "ideal" meta-analysis, since that sounds expensive too.
Agree it's more about upbringing and messaging. And also relate a lot to this.
But also I think it's really hard to tell the "cause" of any given problem at an individual level. As recently as a few years ago, I would have put 80% weight on upbringing / messaging (which I agree aren't the identities themselves but something associated with them). Nowadays I'm more agnostic about it.
I think it's fine to seek out affinity groups and culturally-relevant advice to some degree. But also, there's a tradeoff between exploring identities versus applying generic mental health advice. Especially when you get to intersectionality-type stuff like trifectas, where the number of things to explore gets incredibly vast very quickly.
I can speak to two of those three identities (EA and Asian). I think one possibility that took me an unusually long time to consider was that maybe my identities didn't matter and I'd still feel the same problems if I was the "default person" in society. And I was working through a lot of identities.
It's a weird way of framing things since we can't have our identities counterfactually removed. Even if we did, we wouldn't be the same person. But I think it's a framework that usually doesn't get mentioned much in mental health circles, especially on the internet. Partly because it feels invalidating, partly because most people really want contextual advice, and partly because it feels "emotionally dumb and ignorant" to downplay sociological factors.
To do some fake math on this, if we could decompose mental health problems into the triple Venn diagram of Asian-women-EA (which is 7 different regions if you count up the intersectionalities: 3 single identities, 3 pairwise overlaps, and the triple overlap!) and include stuff outside that, it's possible for the Asian-women-EA sources of stress to be maybe only 10-25%.
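To make the fake math concrete, here's a toy sketch. Every weight below is made up purely for illustration (nobody can actually measure these); the point is just that the identity-linked regions can sum to a modest share even when there are seven of them.

```python
# Illustrative only: invented weights for the 7 regions of the
# Asian / woman / EA Venn diagram, plus everything outside it.
# These numbers are not estimates of anything real.
weights = {
    "asian_only": 0.05,
    "woman_only": 0.04,
    "ea_only": 0.03,
    "asian_and_woman": 0.02,
    "asian_and_ea": 0.01,
    "woman_and_ea": 0.01,
    "asian_woman_ea": 0.01,
    # Stressors shared with the "default person" in society.
    "outside_identity": 0.83,
}

# Share of the (hypothetical) total attributable to identity at all.
identity_share = sum(v for k, v in weights.items() if k != "outside_identity")
print(f"identity-linked share: {identity_share:.0%}")
```

With these made-up numbers the identity-linked share comes out around the low end of the 10-25% range mentioned above, spread thinly across seven regions, which is part of why exploring each one can be low-yield.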
Basically, part of the challenge of identity is not just figuring out whether it matters but also how much. And maybe that amount is ultimately small. Or maybe it's not as tractable as working on the identity-less portions.
Agree the value is high. But practically, there are two big questions that pop to mind since I work / study around this area:
This is really good.
What struck me was all the concrete detail. While it is personal, it's also in service of giving useful lessons to others. It helps establish how generalizable the career advice is, and it reframes some standard career advice in a way that centers the constraints as a first-order consideration.
I would not have taken the adversity quotient framing seriously otherwise.
The one addition that might help is mentioning whether there were aspects of your career path that felt unusually lucky or aspects of your life circumstances that felt strong relative to others in your situation. Structural barriers can be a subtle thing (like someone getting a decent math education because they went to a decent school in a bad neighborhood). Mostly this helps with generalizability to readers.
Do any of you have heuristics for when to “give up” or “pivot” in a job search? Examples could be aiming lower / differently if no response after 10 applications.
Thankfully this is not something I have to worry about for a long time. But I think it’s useful to have some balance to the usual advice of “just keep trying; job searching takes a long time”. Sometimes a job really is unrealistic for a person’s current profile (let’s operationalize that as 1000 job searching hours would still only result in a 1% chance of getting a certain set of jobs).
Seth, for what it's worth, I found your hourly estimates (provided in these forum comments but not something I saw in the evaluator response) of how long the extensions would take to be illuminating. Very rough numbers, like this meta-analysis taking 1,000 hours for you or a robustness check taking dozens or hundreds of hours more to do properly, help contextualize how reasonable the critiques are.
It's easy for me (even now while pursuing research, but especially before when I was merely consuming it) to think these changes would take a few days.
It also gives me insight into the research production process. How long does it take to do a meta-analysis? How much does rigor cost? How much insight does rigor buy? What insight is possible given current studies? Questions like that help me figure out whether a project is worth pursuing and whether it's compatible with career incentives or more of a non-promotable task.