
Recently, I outlined a new framework and model that go beyond marginal, ITN-style thinking by accounting for the optimal size of funding. This framework was originally developed to help major donors and impact investors with their prioritization decisions. That said, the associated ideas may be useful to any donor. This post highlights one of these ideas.

The idea is that you should choose opportunities for further investigation based both on your current assessments of marginal effectiveness and on how much your assessments might change with further investigation. If an opportunity turns out to be great, you can put lots of funding into it. That is, there is 'value of information'.

I think this is intuitive. It also follows from other heuristics, like 'explore and exploit'. Thus, many people in the EA community may already be acting in alignment with this idea. 

However, I think there has been so much emphasis on marginal, ITN-style thinking in the EA community that it is worth highlighting this idea. 'Value of information' doesn't have to be a separate consideration or heuristic - it arises simply when you take the first step towards non-marginal thinking by also considering funding size.

To make this clear, suppose you are trying to choose which high-level opportunities to prioritize (e.g. cause areas, or even charities within an area). You have an initial assessment, μ, of the marginal impact per dollar, u, of an opportunity, perhaps from doing ITN-style assessments or direct cost-effectiveness analyses. You also expect that if you conduct further analyses, your assessment of u will update with zero average expected change and standard deviation σ.

If you only prioritize based on μ, then there is zero value in doing further analysis, because the expected value of u after doing that work is still μ.

However, all else equal, you will want to put more funding into opportunities with higher u. Conceptually, think of this as the optimal funding size depending linearly on u. Then the total impact value you expect to generate with an opportunity will depend on the product of marginal impact per dollar and funding size, which scales as u × u = u².

So, you should prioritize opportunities based on their expected value after further analysis, E[u²] = μ² + σ² (since E[u²] = (E[u])² + Var(u)). The value of information is σ².

This can result in some opportunities with low μ but high σ being top priorities.
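To make this concrete, here is a minimal Monte Carlo sketch of the model above. The numbers and opportunity names are purely illustrative assumptions, not from the post: it estimates E[u²] by sampling post-analysis assessments u from a normal distribution with mean μ and standard deviation σ, and compares a high-μ, low-σ opportunity against a lower-μ, high-σ one.

```python
import random

def expected_value_after_analysis(mu, sigma, n=200_000, seed=0):
    """Monte Carlo estimate of E[u^2], where u ~ Normal(mu, sigma) is the
    post-analysis assessment of marginal impact per dollar. Total impact
    scales with u^2 when optimal funding size is linear in u."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        u = rng.gauss(mu, sigma)
        total += u * u
    return total / n

# Opportunity A: high current assessment, little room to update.
# Opportunity B: lower current assessment, but analysis could move it a lot.
a = expected_value_after_analysis(mu=1.0, sigma=0.1)  # analytically mu^2 + sigma^2 = 1.01
b = expected_value_after_analysis(mu=0.8, sigma=0.8)  # analytically mu^2 + sigma^2 = 1.28
print(f"A: {a:.3f}  B: {b:.3f}")
```

Despite its lower μ, opportunity B comes out ahead, because its larger σ contributes directly to E[u²]; that gap, σ², is the value of information.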

One potential example of this is (or was) climate change. Over the past several years, I would say that climate has gone from being a relatively neglected cause area in EA (e.g. see this post) to being an important part of the EA funding landscape.

The big strike against climate in a pure ITN-style framing is that it isn't neglected: almost everyone is aware of the issue, and tons of funders are putting money towards it. However, it's such a big, varied topic that it seems reasonable to expect high σ from additional research. Given this, and with the benefit of hindsight, it seems correct that many EAs looked into climate (and continue to do so).

What other examples can you think of?
