david_reinstein

Founder and Co-Director @ The Unjournal
4305 karma · Working (15+ years) · Monson, MA, USA
davidreinstein.org

Bio

Participation
2

See davidreinstein.org

I'm the Founder and Co-Director of The Unjournal. We organize and fund public, journal-independent feedback, rating, and evaluation of hosted papers and dynamically presented research projects. We will focus on work that is highly relevant to global priorities (especially in economics, social science, and impact evaluation). We will encourage better research by making it easier for researchers to get feedback and credible ratings on their work.


Previously I was a Senior Economist at Rethink Priorities, and before that an Economics lecturer/professor for 15 years.

I'm working to impact EA fundraising and marketing; see https://bit.ly/eamtt

And projects bridging EA, academia, and open science; see bit.ly/eaprojects

My previous and ongoing research focuses on determinants and motivators of charitable giving (propensity, amounts, and 'to which cause?'), and drivers of/barriers to effective giving, as well as the impact of pro-social behavior and social preferences on market contexts.

Podcasts: "Found in the Struce" https://anchor.fm/david-reinstein

and the EA Forum podcast: https://anchor.fm/ea-forum-podcast (co-founder, regular reader)

Twitter: @givingtools

Posts
71


Sequences
1

Unjournal: Pivotal Questions/Claims project + ~EA-funded research evaluation

Comments
909

Topic contributions
9

Project Idea: 'Cost to save a life' interactive calculator promotion


What about making and promoting a 'how much does it cost to save a life?' quiz and calculator?

This could be adjustable/customizable (in my country, around the world, of an infant/child/adult, counting 'value-added life years', etc.) … and we could try to make it go viral (or at least bacterial), as with the 'how rich am I?' calculator.


The case 

  1. People might really be interested in this… it's super-compelling (a bit click-baity, maybe, but the payoff is not clickbait)!
  2. It may make some news headlines too (it's an "easy story" for media people, asks a question people can engage with, etc. … "How much does it cost to save a life? Find out after the break!")
  3. If people think it's much cheaper than it is, as some studies suggest, it would probably be good to change this conception… to help us build a reality-based, impact-based, evidence-based community and society of donors.
  4. Similarly, it could get people thinking about 'how to really measure impact' --> considering EA-aligned evaluations more seriously.

GiveWell has a page with a lot of technical details, but it's not compelling or interactive in the way I suggest above, and I doubt they market it heavily.

GWWC probably doesn't have the design/engineering time for this (not to mention refining it for accuracy and communication). But if someone else (UX design, research support, IT) could do the legwork, I think they might be very happy to host it.

It could also mesh well with academic-linked research, so I may have some 'Meta academic support ads' funds that could work with this.
 

Tags/backlinks (~testing out this new feature) 
@GiveWell  @Giving What We Can
Projects I'd like to see 

EA Projects I'd Like to See 
 Idea: Curated database of quick-win tangible, attributable projects 

I like the post and agree with most of it, but I don't understand this point. Can you clarify? To me it seems like the opposite of this.

If EA organizations are seen promoting frugality, their actions could be perceived as an example of the rich promoting their own interests over those of the poor. This would increase the view that EA is an elitist movement.

A quick ~testimonial: Abraham's advice was very helpful to us at Unjournal.org. As our fiscal sponsor was ending its operations, we needed to transition quickly. We were able to get 501(c)(3) status without a tremendous amount of effort, and much more quickly than anticipated.

In retrospect, it would have been a better decision to form a 501(c)(3) as soon as we had our first grant and had applied for a larger grant. It would have saved us a substantial amount in fees and allowed us to earn interest/investment income on the larger grant. It's also easier to access tech discounts as a 501(c)(3) than as a fiscally sponsored organization.
 

Enjoyed it, a good start.

I like the stylized illustrations, but I think a bit more realism (or at least detail) could be helpful. Some of the activities and the pain suffered by the chickens were hard to see.

The transition to the factory-farm/caged-chicken environment was dramatic and had the impact I think you were seeking.

One fact-based question I don't have the answer to: does this really depict the conditions for chickens whose eggs are labeled "pasture raised"? I hope so, but I vaguely recall hearing that this label is not rigorously enforced.

Here are some suggestions from 6 minutes of ChatGPT thinking. (Not all are relevant; e.g., I don't think "Probable Causation" is a good fit here.)

Do you see other podcasts filling the long-form, serious/in-depth, EA-adjacent/aligned niche in areas other than AI? E.g., GiveWell has a podcast, but I'm not sure it's the same sort of thing. There's also Hear This Idea, and Clearer Thinking and Dwarkesh Patel often cover relevant material.

(As an aside, I was thinking of potentially trying to do a podcast involving researchers and research evaluators linked to The Unjournal, if I thought it could fill a gap and we could do it well, which I'm not sure of.)

This seems a bit related to "Pivotal questions": an Unjournal trial initiative -- we've engaged with a small group of organizations and elicited some of these questions; see here.

To highlight some that seem potentially relevant to your ask:

What are the effects of increasing the availability of animal-free foods on animal product consumption? Are alternatives to animal products actually used to replace animal products, and especially those that involve the most suffering? Which plant-based offerings are being used as substitutes versus complements for animal products and why?

How should we convert between DALY and WELLBY wellbeing measures when assessing charities and interventions?

Is the WELLBY the most appropriate (useful, reliable, ...) measure [for interventions that may have impacts on mental health]?

What is cell-cultured meat likely to cost, by year, as a function of the level of investments made?

How often do countries honor their (international) agreements in the event of large catastrophes, and what determines this?

How probable is it that cell-cultured meat will gain widespread consumer acceptance, and to what timescale? To what extent will consumers replace conventional meat with cell-cultured meat?

How important is democracy for resilience against global catastrophic risk?

How generalizable is evidence on the effectiveness of corporate animal welfare outreach [in the North] to the Global South?

How much will the US government use subjective forecasting approaches (in the way the DoD does) in the next ~50 years?

Thanks for the thoughts. Note that I'm trying to engage/report here because we're working hard to make our evaluations visible and impactful, and this forum seems like one of the most promising interested audiences. But I'm also eager to hear about other opportunities to promote and get engagement with this evaluation work, particularly in non-EA academic and policy circles.

I generally aimed to just summarize and synthesize what the evaluators had written and the authors' response, bringing in some specific relevant examples and using quotes or paraphrases where possible. I generally didn't present these as my opinions but rather as the authors' and the evaluators', although I did specifically give 'my take' in a few parts. If I recall correctly, my motivation was to make this a little less dry in order to get a bit more engagement within this forum. But maybe that was a mistake.

And to this I added an opportunity to discuss the potential value of doing and supporting rigorous, ambitious, and 'living/updated' meta-analyses here and in EA-adjacent areas. I think your response was helpful there, as was the authors'. I'd like to see others' takes.

 

Some clarifications:

The i4replication group does put out replication papers/reports in each case, submits these to journals, and reports on the outcomes on social media. But IIRC they only 'weigh in' centrally when they find a strong case suggesting systematic issues/retractions.

Note that their replications are not 'opt-in': they aim to replicate every paper coming out in a set of 'top journals'. (And now they are moving towards research focused on a set of global issues like deforestation, but still not opt-in.)

I'm not sure what works for them would work for us, though. It's a different exercise. I don't see an easy route towards our evaluations getting attention through 'submitting them to journals' (which, naturally, would also be a bit counter to our core mission of moving research output and rewards away from 'journal publication as a static output').

Also: I wouldn't characterize this post as 'editor commentary', and I don't think I have a lot of clout here. Note also that typical peer review is both anonymous and never made public. We're making all our evaluations public, but the evaluators have the option to remain anonymous.

But your point about a higher bar is well taken. I'll keep this under consideration.

 

A final reflective note: David, I want to encourage you to think about the optics/politics of this exchange from the point of view of prospective Unjournal participants/authors.

I appreciate the feedback. I'm definitely aware that we want to make this attractive to authors and others, both to submit their work and to engage with our evaluations. Note that in addition to asking for author submissions, our team nominates and prioritizes high-profile and potentially high-impact work, and contacts authors to get their updates, suggestions, and (later) responses. (We generally only require author permission for these evaluations from early-career authors at a sensitive point in their careers.) We are grateful that you responded to these evaluations.

There are no incentives to participate.

I would disagree with this. We previously had author prizes (financial and reputational) focusing on authors who submitted work for our evaluation, although these prizes are not currently active. I'm keen to revive these prizes when the situation permits (funding and partners).

But there is a range of other incentives (not directly financial) for authors to submit their work, respond to evaluations, and engage in other ways. I provide a detailed author FAQ here. This includes getting constructive feedback, signaling your confidence in your paper and openness to criticism, the potential for highly positive evaluations to help your paper's reputation, visibility, unlocking impact and grants, and more. (Our goal is that these evaluations will ultimately become the object of value in and of themselves, replacing "publication in a journal" for research credibility and career rewards. But I admit that's a long path.)

I did it because I thought it would be fun and I was wondering if anyone would have ideas or extensions that improved the paper. Instead, I got some rather harsh criticisms implying we should have written a totally different paper.

I would not characterize the evaluators' reports in this way. Yes, there was some negative-leaning language, which, as you know, we encourage the evaluators to tone down. But there was a range of suggestions (especially from Jané) which I see as constructive, detailed, and useful, both for this paper and for your future work. And I don't see this as them suggesting "a totally different paper." To a large extent they agreed with the importance of this project, with the data collected, and with many of your approaches. They praised your transparency. They suggested some different methods for transforming and analyzing the data and interpreting the results.

Then I got this essay, which was unexpected/unannounced and used, again, rather harsh language to which I objected. Do you think this exchange looks like an appealing experience to others? I'd say the answer is probably not.

I think it's important to communicate the results of our evaluations to wider audiences, and not only on our own platform. As I mentioned, I tried to fairly characterize your paper, the nature of the evaluations, and your response. I've adjusted my post above in response to some of your points where there was a case to be made that I was using loaded language, etc.

Would you recommend that I share any such posts with both the authors and the evaluators before posting them? It's a genuine question (to you and to anyone else reading these comments) -- I'm not sure of the correct answer.

As to your suggestion at the bottom, I will read and consider it more carefully -- it sounds good. 

 

Aside: I'm still concerned by the connotation that replication, extension, and robustness checking are something that should be relegated to graduate students. This seems to diminish the value and prestige of work that I believe to be of the highest order of practical value for important decisions in the animal welfare space and beyond.

In the replication/robustness-checking domain, I think what i4replication.org is doing is excellent. They're working with everyone from graduate students to senior professors to do this work, and treating it as a high-value output meriting direct career rewards. I believe they encourage the replicators to be fair (neither excessively conciliatory nor harsh) and to focus on the methodology. We are in contact with i4replication.org and hoping to work with them more closely, with our evaluations and "evaluation games" offering grounded suggestions for robustness and replication checks.
 
