
In a situation with high upside and high uncertainty, how should an altruist distribute their time and resources to different causes? Is it better to focus on a single cause, or spread attention across causes?

Considering these issues, Open Philanthropy has endorsed a hits-based giving approach:

We suspect that high-risk, high-reward philanthropy could be described as a “hits business,” where a small number of enormous successes account for a large share of the total impact — and compensate for a large number of failed projects.

Despite the high uncertainty that necessitates this approach, might we be able to say something precise about how to distribute a charitable portfolio? How does hits-based giving change how an altruist should act?

Here, I propose a simple model of hits-based giving. Specifically, I model an altruist who can fund a certain number of grants across different cause areas. Each grant awarded in a cause area is treated as an independent draw from an exponential distribution, and the funder is assumed to care solely about the most successful grant in each cause area. As we will see, this creates diminishing returns to further awards within a single cause, leading the funder to spread their grants across areas.

Notation

The funder is considering $k$ different cause areas indexed by $i$, each with its own impact distribution $X_i$:

$$X_1, X_2, \dots, X_k$$

The funder must choose how many grants $n_i$ to award in each cause area, subject to a fixed total of $N$ grants:

$$\sum_i n_i = N$$

$M_n$ denotes the maximum draw from a set of $n$ samples:

$$M_n = \max\left(X^{(1)}, X^{(2)}, \dots, X^{(n)}\right)$$

The Model

The funder only cares about the most successful project in each cause area

Their utility function within a cause is:

$$U_i(n_i) = \mathbb{E}\left[\max\left(X_i^{(1)}, \dots, X_i^{(n_i)}\right)\right] = \mathbb{E}\left[M_{n_i}\right]$$

Across all causes, the funder adds up the utility of the most successful projects:

$$U = \sum_{i=1}^{k} U_i(n_i)$$

This is the key difference from marginal impact thinking. We are ignoring average impact and instead focusing on finding black swans. This also allows us to simplify the math by focusing on the maximum draw from a distribution. This “hits-based” assumption works best for fat-tailed distributions where extreme tail events dominate the expected value calculation. This is not an exact characterization of what organizations like OpenPhil endorse, but it attempts to capture the spirit of hits-based altruism.
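This max-only utility can be written down directly. Here is a minimal, distribution-agnostic sketch in Python (the function name is mine):

```python
def hits_utility(outcomes_by_cause):
    """Funder's realized utility under the hits-based assumption:
    only the single best outcome in each cause area counts;
    every other grant's result is ignored."""
    return sum(max(outcomes) for outcomes in outcomes_by_cause if outcomes)

# Three causes with several grant outcomes each;
# only the best in each cause (9.0, 4.5, and 2.2) contributes.
hits_utility([[1.2, 9.0, 0.3], [4.5, 0.1], [2.2]])  # → 15.7
```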

Each grant is an independent draw from the impact distribution

We assume that within a cause area, the funder is supporting unrelated projects with the same after-the-fact impact distribution. Specifically, impacts are drawn i.i.d. from the distribution $X_i$. This assumption works best when the results of different projects are uncorrelated, such as when funding independent researchers. Each distribution has an expected value $v_i = \mathbb{E}[X_i]$.

The impact distribution is represented by an exponential distribution

Within a cause area, grants have an after-the-fact impact drawn from an exponential distribution. Specifically, each $X_i$ is distributed according to an exponential distribution with rate parameter $r_i = 1/v_i$:

$$f_{X_i}(x) = r_i e^{-r_i x}, \qquad x \ge 0$$

The exponential distribution has nice properties for this model: it has a simple formula for order statistics, impact is always non-negative, and the distribution has a long tail. Additionally, it’s at least plausible that real-life impact distributions might be described by an exponential distribution.
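As a sanity check on this choice, we can estimate the expected maximum of $n$ i.i.d. exponential draws by Monte Carlo and compare it against the closed form $v \cdot H_n$, where $H_n$ is the $n$-th harmonic number. A sketch using Python's standard library (function names and parameters are mine):

```python
import math
import random

def mc_expected_max(v, n, trials=200_000, seed=0):
    """Monte Carlo estimate of E[max of n i.i.d. Exp(mean v) draws].
    random.expovariate takes the rate parameter r = 1/v."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += max(rng.expovariate(1.0 / v) for _ in range(n))
    return total / trials

def harmonic(n):
    """Exact n-th harmonic number H_n."""
    return sum(1.0 / j for j in range(1, n + 1))

v, n = 2.0, 10
estimate = mc_expected_max(v, n)   # close to 5.86
closed_form = v * harmonic(n)      # = 2 * H_10 ≈ 5.86
```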

The General Case

Now we can determine the optimal allocation of grants across cause areas.

The distribution of the maximum of $n_i$ draws in this case is:

$$M_{n_i} \overset{d}{=} \frac{1}{r_i} \sum_{j=0}^{n_i - 1} \frac{Z_j}{n_i - j}$$

where the $Z_j$ are i.i.d. standard exponential random variables. Taking the expectation:

$$\mathbb{E}\left[M_{n_i}\right] = \frac{1}{r_i} \sum_{j=0}^{n_i - 1} \frac{1}{n_i - j}$$

The sum goes from $j = 0$ to $j = n_i - 1$, meaning that the denominators range from $n_i$ down to $1$, so we can simplify the sum:

$$\mathbb{E}\left[M_{n_i}\right] = \frac{1}{r_i} \sum_{j=1}^{n_i} \frac{1}{j} = \frac{H_{n_i}}{r_i} = v_i H_{n_i}$$

where $H_{n_i}$ is the $n_i$-th harmonic number. We can use the asymptotic approximation for $H_{n_i}$ (notice that we implicitly assume $n_i$ is reasonably large for this approximation to be accurate):

$$H_{n_i} \approx \ln n_i + \gamma$$

where $\gamma \approx 0.577$ is the Euler–Mascheroni constant.
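The error of this approximation shrinks roughly like $1/(2n)$, so it is already small for modest grant counts. A quick check in Python (the hard-coded constant is the Euler–Mascheroni constant $\gamma$):

```python
import math

GAMMA = 0.5772156649015329  # Euler–Mascheroni constant

def harmonic(n):
    """Exact n-th harmonic number H_n."""
    return sum(1.0 / j for j in range(1, n + 1))

def harmonic_approx(n):
    """Large-n approximation H_n ≈ ln(n) + γ."""
    return math.log(n) + GAMMA

# Absolute error at a few grant counts; each is close to 1/(2n).
errors = {n: harmonic(n) - harmonic_approx(n) for n in (5, 50, 500)}
```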

Since the funder only cares about the maximum draw, this is equivalent to the utility they receive from awarding $n_i$ grants in area $i$:

$$U_i(n_i) \approx v_i \left(\ln n_i + \gamma\right)$$

To optimally distribute their grants, the funder must ensure that the value of the marginal grant is equal across causes:

$$\frac{\partial U_i}{\partial n_i} = \frac{\partial U_j}{\partial n_j} \qquad \text{for all causes } i, j$$

Let's set the marginal utilities equal for two representative causes $i$ and $j$. Since $\partial U_i / \partial n_i \approx v_i / n_i$:

$$\frac{v_i}{n_i} = \frac{v_j}{n_j}$$

Solving for $n_i$ in terms of $n_j$:

$$n_i = \frac{v_i}{v_j} n_j$$

Now that we have the number of grants relative to each other, we can compute what fraction of the $N$ total grants should go to cause $i$:

$$\frac{n_i}{N} = \frac{n_i}{\sum_j n_j} = \frac{v_i}{\sum_j v_j}$$

The funding a cause receives is proportional to its expected value.
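This proportional rule is simple to implement; a minimal sketch (the function name and the integer rounding are my choices, not part of the model):

```python
def allocate_grants(total_grants, expected_values):
    """Split a fixed grant budget across causes in proportion to each
    cause's expected value per grant (the hits-based optimum)."""
    total_v = sum(expected_values)
    raw = [total_grants * v / total_v for v in expected_values]
    grants = [round(x) for x in raw]
    # Put any rounding drift on the last cause so the budget is exact.
    grants[-1] += total_grants - sum(grants)
    return grants

allocate_grants(100, [5, 3, 2])  # → [50, 30, 20]
```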

A Numerical Example

A funder is considering awarding $N = 100$ grants between three cause areas. Funding a grant in cause $A$ has an expected value of 5 QALYs (meaning that $v_A = 5$), a grant in cause $B$ has an expected value of 3 QALYs ($v_B = 3$), and a grant in cause $C$ has an expected value of 2 QALYs ($v_C = 2$). Using the budgeting formula from before:

$$\frac{n_A}{N} = \frac{5}{10} = 0.5, \qquad \frac{n_B}{N} = \frac{3}{10} = 0.3, \qquad \frac{n_C}{N} = \frac{2}{10} = 0.2$$

So the funder awards 50 grants to cause $A$, 30 to cause $B$, and 20 to cause $C$.

Contrast this with a funder focused only on the expected value of each donation. In this case, all 100 grants would go to cause $A$. At the other extreme, a maximally uncertain funder might give roughly 33 grants to each cause.
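Under the approximation $U_i \approx v_i(\ln n_i + \gamma)$, the three strategies can be compared directly. A sketch with illustrative expected values of 5, 3, and 2 QALYs per grant (causes that receive no grants contribute zero utility):

```python
import math

GAMMA = 0.5772156649015329  # Euler–Mascheroni constant

def total_utility(grants, values):
    """Approximate hits-based utility: sum of v_i * (ln n_i + gamma)
    over every cause that receives at least one grant."""
    return sum(v * (math.log(n) + GAMMA)
               for n, v in zip(grants, values) if n > 0)

values = [5.0, 3.0, 2.0]      # illustrative QALYs per grant
proportional = [50, 30, 20]   # hits-based optimum: n_i proportional to v_i
concentrated = [100, 0, 0]    # expected-value maximizer: all-in on the best cause
uniform = [34, 33, 33]        # maximally uncertain funder: near-equal split

u_prop = total_utility(proportional, values)   # highest of the three
u_conc = total_utility(concentrated, values)
u_unif = total_utility(uniform, values)
```

The proportional split beats both alternatives here: concentrating sacrifices the (log-capped) hits available in the other causes, while the uniform split over-funds the low-value cause.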

Possible Extensions

There are several ways to extend this approach. It may be useful to consider other impact distributions, grants with different costs, or altruists who care about other order statistics. Alternatively, relaxing the i.i.d. assumption could help model grants in areas where projects are complementary to each other.

Notice that it is possible to include saving for future donation or personal consumption as “cause areas” within the budget, enabling one to consider a richer decision problem. When some causes are expected to become more valuable in the future, altruists will save more to spend on them later.

Ideally, this approach would be extended to allow for impact distributions to change over time. For example, learning more about a cause induces a change in the impact distribution. Investing in pilot projects within a cause area can help funders get a better sense of the impact distribution, providing valuable information. This becomes especially important when searching for new causes to fund.

It’s also important to take into account the choices of other donors. A cause that other donors are pouring money into is unlikely to benefit much from further funding. This gets especially interesting when considering donors’ time preferences. What does patient philanthropy look like under hits-based giving?

The analysis here assumes that each cause can be compared on a common scale like QALYs. But funders may have multiple objectives they want to satisfy (e.g. QALYs and x-risk reduction), and may need to consider trading off between objectives.

Discussion

The hits-based approach is quite different from the case where we only consider the marginal impact of donations. In that case, altruists do best by donating to a single cause, in stark contrast to today’s giving.

This model seems particularly useful for microgrants programs, where uncertainty is high, typical returns for a project are low, and the goal is to find a highly valuable cause that traditional grantmaking organizations can scale up.

Though a grantmaking analogy was used to motivate the model, I think the same ideas could be productively applied to other areas such as innovation, charitable spending, and time investment.

The hits-based approach could be applied to s-process funding, allowing evaluators to specify the expected value of a grant in each area. Alternatively, charity prediction markets could be used to determine the expected value of different grants.

Charitable giving presents a rich decision problem with significant real-world implications. Studying the nuances of maximizing impact may improve funding decisions. I hope that further work in this area can improve how altruists spread their resources across causes.
