I'm the Founder and Co-director of The Unjournal; we organize and fund public, journal-independent feedback, rating, and evaluation of hosted papers and dynamically presented research projects. We focus on work that is highly relevant to global priorities (especially in economics, social science, and impact evaluation), aiming to encourage better research by making it easier for researchers to get feedback and credible ratings on their work.
Previously I was a Senior Economist at Rethink Priorities, and before that an Economics lecturer/professor for 15 years.
I'm working on projects to improve EA fundraising and marketing; see https://bit.ly/eamtt
And on projects bridging EA, academia, and open science; see bit.ly/eaprojects
My previous and ongoing research focuses on the determinants and motivators of charitable giving (propensity, amounts, and 'to which cause?'), the drivers of and barriers to effective giving, and the impact of pro-social behavior and social preferences in market contexts.
Podcasts: "Found in the Struce" https://anchor.fm/david-reinstein
and the EA Forum podcast: https://anchor.fm/ea-forum-podcast (co-founder, regular reader)
Twitter: @givingtools
I’d add a question about how we can infer the sign of 'how things affect the valence of digital minds' ... and, otherwise, how digital-mind welfare can be action-guiding at all.
You discuss nearby issues: whether digital minds will be happy by default, whether we can communicate with AIs about preferences, whether we can promise them things positive for wellbeing, and whether self-modification/freedom helps. But I don’t think this fully addresses the deeper crux: even conditional on some part of an AI system having conscious valenced experience, how would we know what makes that experience better rather than worse?
As I suggested in "The 'talker–feeler gap': AI valence may be unknowable", there may be a “talker–feeler gap”:
A. The part of the system we instruct, bargain with, or ask about preferences may not be the part, if any, that has valenced experience. Or it may not have reliable epistemic access to the welfare-relevant states. This isn't a deception problem. Even a perfectly “honest” reporting subsystem might not know whether the conscious subsystem is made better or worse off. And its reports may track training objectives, conversational incentives, or preferences rather than welfare.
B. Even if there is valence and the 'decisionmaker' can detect it, the system may be optimized or constrained to act in ways that don't track its own valence. This may be fundamentally baked into the training and development and hard to adjust.
Either A or B would also make typically proposed solutions less clearly beneficial, and even potentially harmful. If the part we can ask doesn't have access to the part of the system having valenced experience, asking it about this won't tell us much. And “give them freedom / let them do what they want / avoid what makes them uncomfortable” won't lead to better outcomes if the "decisionmaker" in the system doesn't optimize for the "feeler's" welfare. (And it seems as plausible to me as anything else that having freedom of choice might be painful for the valenced part of a complex system.)
So I’d suggest adding something like: "Can we ever get reliable, action-guiding evidence about the sign and magnitude of digital-mind valence and how it responds to different requests and outcomes?" Without a bridge from computation, preferences, or self-report to valence, it’s unclear whether potential AI welfare interventions actually improve welfare rather than merely satisfying some behavioral or optimization proxy.
The recent "forecasting is overrated" post got me thinking:
Solution Seeking a Problem
When talking about forecasting, people often ask questions like “How can we leverage forecasting into better decisions?” This is the wrong way to go about solving problems.
Intuitively, that seems correct, and I've often relied on the expression "when you have a hammer, everything looks like a nail." But it got me thinking: is this necessarily the wrong way to go about it, or is the claim just a truism? What is the strongest argument that it is "wrong"?
If I have a legitimately useful and powerful tool, isn't it indeed valuable to look around for problems that it can help solve? E.g., if we have discovered a way to harness electricity, shouldn't we think about the ways it can be used to improve communication, build labor-saving devices, power factories, etc.? If we have something with demonstrated potential to generate reliable information (supposing that forecasting could do this), shouldn't we look for fruitful opportunities to apply it?
With a set of tools and a set of problems, why is it more useful for one side to do the searching than the other? (Sorry, maybe this is getting too meta and belongs in its own shortform?)
(Caveat: Slightly self-promoting, sorry, but I hope it's germane/helpful.) By the way, on the animal welfare forecasting front, see Support Metaculus' First Animal-Focused Forecasting Tournament and Rethinking the Future of Cultured Meat: An Unjournal Evaluation. I'd leave room for some doubt as to whether the "clean meat forecasting" work led to updates in the right direction.
We're trying to take the next steps on this with a workshop involving some belief elicitation and forecasting (workshop page, belief elicitation page).
I started something in this direction here.
But I am also a bit skeptical that creating lots of unsubsidized markets would generate much positive information. My evidence/experience suggests that most people involved in these matters don't want to do a substantial amount of research into the sort of very nuanced, detailed questions that are the highest value, so the small number of predictions you get might just be noise.
My belief/experience suggests that the sorts of prediction markets that are profitable and entertaining are unlikely to be the ones that are particularly informative for globally impactful/EA funding and policy choices.
(But at the same time, I suspect some of the EA/rationalist support for prediction platforms has ended up fueling these entertaining but not socially valuable markets.)
I think "your mileage may vary" quite a lot on this. In the context of social science prediction markets, you tend to be asking people who have expertise in and familiarity with the methods and context, and who are sometimes more experienced than the people posting the questions.
On the other hand, if you post detailed technical questions on a mainstream prediction market, or even on Metaculus, I expect (and have the sense) that you don't get much of this 'wisdom of the crowds' dividend.
Fair, but cultivating tools used for prediction markets is only one part of this forecasting research funding. And the sorts of questions EAs want predictions on (e.g., the number of chickens in cages per year with versus without the production of cell-cultured meat) are unlikely to feature on a popular mainstream prediction market.
Project Idea: 'Cost to save a life' interactive calculator promotion
What about making and promoting a ‘how much does it cost to save a life?’ quiz and calculator?
This could be adjustable/customizable (in my country, around the world, of an infant/child/adult, counting ‘value-added life years’, etc.) … and we could try to make it go viral (or at least bacterial), as with the ‘How Rich Am I?’ calculator.
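To make the 'adjustable/customizable' part concrete, here is a minimal sketch of what the core calculation might look like. Everything here is a hypothetical placeholder (the function names, the $7 cost figure, the life-years number are illustrative, not GiveWell or GWWC estimates); a real tool would pull vetted cost-effectiveness numbers.

```python
# Minimal sketch of the 'cost to save a life' calculator core.
# All numbers are hypothetical placeholders, NOT real charity estimates.

def cost_per_life_saved(cost_per_unit: float, lives_saved_per_unit: float) -> float:
    """Cost (USD) to save one statistical life, from per-unit intervention estimates."""
    return cost_per_unit / lives_saved_per_unit

def cost_per_life_year(cost_per_life: float, life_years_gained: float) -> float:
    """Reframe the same estimate in 'value-added life years' terms."""
    return cost_per_life / life_years_gained

# Placeholder example: a $7-per-unit intervention averting one death per
# 1,000 units, where the typical beneficiary gains ~60 life years.
per_life = cost_per_life_saved(cost_per_unit=7.0, lives_saved_per_unit=1 / 1000)
per_life_year = cost_per_life_year(per_life, life_years_gained=60)

print(f"Cost per life saved: ${per_life:,.0f}")       # $7,000
print(f"Cost per life year:  ${per_life_year:,.2f}")  # ~$116.67
```

The customization options above (country, age of beneficiary, lives vs. life-years framing) would simply swap different vetted inputs into the same functions.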
The case
GiveWell has a page with a lot of technical detail, but it’s not compelling or interactive in the way I suggest above, and I doubt they market it heavily.
GWWC probably doesn't have the design/engineering time for this (not to mention the time to refine it for accuracy and communication). But if someone else (UX design, research support, IT) could do the legwork, I think they might be very happy to host it.
It could also mesh well with academic-linked research, so I may have some ‘Meta academic support ads’ funds that could go toward this.
Tags/backlinks (~testing out this new feature)
@GiveWell @Giving What We Can
Projects I'd like to see
EA Projects I'd Like to See
Idea: Curated database of quick-win, tangible, attributable projects