
david_reinstein

Founder and Co-Director @ The Unjournal
4131 karma · Joined · Working (15+ years) · Monson, MA, USA
davidreinstein.org

Bio


See davidreinstein.org

I'm the Founder and Co-director of The Unjournal. We organize and fund public, journal-independent feedback, rating, and evaluation of hosted papers and dynamically presented research projects. We focus on work that is highly relevant to global priorities (especially in economics, social science, and impact evaluation), and we encourage better research by making it easier for researchers to get feedback and credible ratings on their work.


Previously I was a Senior Economist at Rethink Priorities, and before that an Economics lecturer/professor for 15 years.

I'm working to impact EA fundraising and marketing; see https://bit.ly/eamtt

And projects bridging EA, academia, and open science; see bit.ly/eaprojects

My previous and ongoing research focuses on the determinants and motivators of charitable giving (propensity, amounts, and 'to which cause?'), drivers of and barriers to effective giving, and the impact of pro-social behavior and social preferences in market contexts.

Podcasts: "Found in the Struce" https://anchor.fm/david-reinstein

and the EA Forum podcast: https://anchor.fm/ea-forum-podcast (co-founder, regular reader)

Twitter: @givingtools

Posts
65


Sequences
1

Unjournal: Pivotal Questions/Claims project + ~EA-funded research evaluation

Comments
874

Topic contributions
9

Project Idea: 'Cost to save a life' interactive calculator promotion


What about making and promoting a 'how much does it cost to save a life' quiz and calculator?

This could be adjustable/customizable (in my country, around the world, for an infant/child/adult, counting 'value-added life years', etc.), and we could try to make it go viral (or at least bacterial), as with the 'how rich am I' calculator.


The case 

  1. People might really be interested in this; it's super-compelling (a bit click-baity, maybe, but the payoff is not click bait)!
  2. It may make some news headlines too (it's an "easy story" for media people, asks a question people can engage with, etc.: "How much does it cost to save a life? Find out after the break!")
  3. If people think saving a life is much cheaper than it is, as some studies suggest, it would probably be good to correct this misconception, to help us build a reality-based, evidence-driven community and society of donors.
  4. Similarly, it could get people thinking about how to really measure impact, and lead them to consider EA-aligned evaluations more seriously.

GiveWell has a page with a lot of technical details, but it's not compelling or interactive in the way I suggest above, and I doubt they market it heavily.

GWWC probably doesn't have the design/engineering time for this (not to mention refining it for accuracy and communication). But if someone else (UX design, research support, IT) could do the legwork, I think they might be very happy to host it.

It could also mesh well with academic-linked research, so I may have some 'Meta academic support ads' funds that could work with this.
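To illustrate, the core arithmetic behind such a calculator could be very simple. The sketch below uses entirely hypothetical placeholder figures and adjustment profiles (not GiveWell or GWWC estimates) just to show how the customizable version might work:

```python
# Minimal sketch of the calculator's core arithmetic. All numbers are
# hypothetical placeholders for illustration -- not real charity estimates.

ILLUSTRATIVE_COST_PER_LIFE_USD = 5_000  # placeholder baseline figure

# Hypothetical adjustment factors a user might toggle (purely illustrative):
# e.g. different beneficiary profiles or 'value-added life years' weightings.
ADJUSTMENTS = {
    "baseline": 1.0,
    "infant": 0.8,
    "adult": 1.5,
}

def lives_saved(donation_usd: float, profile: str = "baseline") -> float:
    """Estimated lives saved by a donation under the chosen (illustrative) profile."""
    cost_per_life = ILLUSTRATIVE_COST_PER_LIFE_USD * ADJUSTMENTS[profile]
    return donation_usd / cost_per_life
```

The real work, of course, would be in sourcing defensible cost-effectiveness figures and building a compelling interactive front end, not in this arithmetic.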
 

Tags/backlinks (~testing out this new feature) 
@GiveWell  @Giving What We Can
Projects I'd like to see 

EA Projects I'd Like to See 
 Idea: Curated database of quick-win tangible, attributable projects 

I made a post about this with some takeaways. TL;DR (opinionated): the paper is a strong start but has some substantial limitations and needs more caveating.
 
I realize the paper is already a few years old. I would like to see and evaluate more recent work in this area.

Naturally, this paper is several years old. My own take: we need more work in this area, perhaps follow-up work doing a similar survey while taking sample selection and question design more seriously.

 

I hope we can identify & evaluate such work in a timely fashion.

E.g., there is some overlap with

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5021463

which "focuses on measures to mitigate systemic risks associated with general-purpose AI models, rather than addressing the AGI scenario considered in this paper"

You mean it will tend to 'choose' higher valence things? That would seem to make sense for biological systems perhaps, as the feelings and valence would evolve as a reinforcement mechanism to motivate choices that increase fitness.

But I'm not sure why we'd expect it to evolve similarly in a construction like a deep learning AI. No one coded valence in, and no one would really know how to access it even if they wanted to code it in, since we don't really understand where consciousness comes from.

If something in these models is sentient in any way, and if its experiences have valence, I don't think we should expect "asking the chat tool what it likes" to be informative about this.

(My thoughts on this are largely the same as when I wrote this short form.) 

Btw, the "whom do you hope to convince" was a sincere question, not a rhetorical one. I expect that some people are convincible even if the change is not immediately seen.

Thinking of things like Unjournal evaluation packages (unjournal.pubpub.org), I generally linkpost rather than crosspost because

  • having different versions of the same content out there can be confusing,
  • one version might be updated and not the others,
  • the conversations at each location may overlap, and
  • I'm reluctant to make very long posts with a great deal of technical content, cluttering up the EA Forum.

It's generally not an issue of permission or access to the content.

But am I thinking about this wrong? Should I do more crossposting?

Well written, and I agree with most of the points. But I'm not sure whom you hope to convince with this, or how you expect to present it to people. Do you have any feedback on that? What have you tried?

Some of the points seem particularly likely to antagonise people who are Trump supporters but have some doubts, for example: "the idea that we shouldn't care at all about foreigners dying by the millions is deeply wicked, and one who seriously adopts it has lost part of their soul."

I suspect that when you pose things as taxes, they almost always come out unpopular in polls. If you wanted public support, you could instead lead with a plant subsidy or plant-based protein subsidies, and then note that these would be financed by a small tax on harms to animal welfare or something similar.

In fact, it would probably do better framed as taxing "animal-hours spent in cages", animal pain, or factory farming in general.

I added some quick polls to this post. I'd love to get some feedback to get a sense of whether we're on the right track and foster discussion. I think the polls also highlight some of the key issues and implications.

https://forum.effectivealtruism.org/posts/3Eh8MbqLwFBsD7GK2/how-much-do-plant-based-products-substitute-for-animal

State a probability for the underlying policy question:

What's the probability that investing in improving and promoting plant-based ~meat is the most effective farmed animal welfare intervention (at current margins)?

Rate your agreement on the plausibility of existing methods:

Surveys and hypothetical choice experiments are useful for helping us understand whether plant-based meat substitutes for animal products

Demand estimation from supermarket data is useful for helping us understand whether plant-based meat substitutes for animal products

Compare this to another prominent approach (rate your agreement):

Plant-based meat is more promising than cell-cultured meat for animal welfare

I added some polls (at top and integrated in relevant sections) to get a pulse-check and foster discussion.
