I work on Open Philanthropy’s AI Governance and Policy team, but I’m writing this in my personal capacity – several senior employees at Open Phil have argued with me about this!

This is a brief-ish post addressed to people who are interested in making high-impact donations and are already concerned about potential risks from advanced AI. Ideally such a post would include a case that reducing those risks is an especially important (and sufficiently tractable and neglected) cause area, but I’m skipping that part for time and will just point you to this 80,000 Hours problem profile for now.

  • Contrary to a semi-popular belief that donations toward global catastrophic risk reduction merely “funge” with major donors’ giving, there are several ways for individual donors, including those giving small amounts, to reduce global catastrophic risks from AI. These include donating to:
    • Work that would be less impactful if it were funded (or majority-funded) by the major funders, or that would generally benefit from greater funding diversity for reasons of organizational health and independence.
    • Work that major funders won’t be able to discover, evaluate, and/or fund quickly enough, e.g. time-sensitive events, individual projects, or career transitions.
    • Work that faces legal limits on donation size, such as political campaigns and political action committees/donor networks.
    • Work in sub-areas that major funders have decided not to fund.
  • You can donate to that kind of work either directly (by giving to the organizations or individuals) or indirectly (by giving through funds like the AI Risk Mitigation Fund, the LTFF, Longview’s Emerging Challenges Fund, or JueYan Zhang’s AI Safety Tactical Opportunities Fund).
    • Advantages to giving directly:
      • You can give to political campaigns/PACs/donor networks as well as 501(c)(4) lobbying/advocacy organizations, which the funds might not be able to do, though I’m not sure about all of them. (For political candidates, this probably means not giving in December 2024 and saving for future opportunities.)
      • Some funds might pose reputational issues for some especially reputation-sensitive recipients.
      • You can move especially quickly for things in the “time-sensitive event/project/transition” category.
      • You don’t have to defer to someone else’s judgment (and can help ease the grant evaluation capacity bottleneck!).
    • Advantages to giving indirectly:
      • Giving to the funds, assuming they have 501(c)(3) status or the non-US equivalent, might have more favorable tax implications than giving to individuals or lobbying/advocacy orgs (though I am not a lawyer or accountant and this is not legal/financial advice!).
      • It’s very quick, and you can defer to a professional grantmaker’s judgment rather than spending time/bandwidth on evaluating opportunities yourself.
      • You can give on a more predictable schedule (rather than e.g. saving up for especially good opportunities).
    • (I’ll take this opportunity to flag that the team I work on at Open Philanthropy is eager to work with more external philanthropists to find opportunities that align with their giving preferences, especially if you’re looking to give away $500k/yr or more.)
  • There are some reasons to think that people who work in AI risk reduction, in particular, should make (some or most of) their donations within their field.
    • Because of their professional networks, they are more likely to encounter giving opportunities that funders may not hear about, or hear about in time, or have the capacity to investigate.
    • Because of their expertise, they are better able than most individual donors to evaluate and compare both direct opportunities and the funds.
    • However, people who work in that field may be less inclined to donate within AI risk reduction, perhaps because they want to “hedge” due to moral uncertainty/worldview diversification, to signal their good-faith altruism to others (and/or themselves) by donating to more “classic” cause areas like global health or animal welfare, or to maintain their own morale. I won’t be able to do justice to the rich literature on these points here (and admit to not having really done my homework on it). Instead, I’ll just:
      • Point out that, depending on their budget, donors might be able to do that hedging/signaling with some but not all of their donations. This is basically a call for “goal factoring”: e.g., you could ask how big a donation it would take to satisfice those goals, and donate the rest to AI risk interventions.
      • Throw a couple of other points that I haven’t seen discussed in my limited reading of the literature into a footnote.[1]

Edited to add a couple more concrete ideas for where to donate:

  • For donors looking to make a fast, relatively robust, and tax-deductible donation, Epoch is a great option.
    • I think their research has significantly improved the evidence base and discourse around the trajectory of AI, which seem like really important inputs to how society handles the attendant risks.
    • According to a conveniently timed thread from Epoch's founder Jaime Sevilla today, marginal small-dollar funding would go towards additional data insights and short reports, which sounds good to me.
    • Jaime adds that they are "starved for feedback" and that a short email about why you're supporting them would be especially useful (though I think "Trevor's forum post said so" would be less helpful than what he has in mind -- bolstering my claim that AI professionals are comparatively advantaged to donate!).
  • I also have some personal-capacity opinions about policy advocacy and political campaigns and would be happy to chat about these privately if you reach out to my Forum account, but won't have the time to chat with everyone, so please only do so if you're planning to give away ~$25k or more in the next couple years.
  [1]

    First, a meta point: I think people sometimes accept the above considerations “on vibes.” But for people who agree that reducing AI risks is the most pressing cause (as in, the most important, neglected, and tractable) and who accept my earlier argument that there are good giving opportunities in AI risk reduction at current margins, especially for people who work in that field, these views imply that where to donate is a decision with nontrivial stakes. They might actually be giving up a lot of prima facie impact in exchange for more worldview diversification, signaling, and morale. I know this does not address the above considerations, and it could still be a good trade; I’m basically just saying that those considerations have to turn out to be valid and pretty significant in order to outweigh the consequentialist advantages of AI risk donations.

    Second, I think it’s coherent for individual people to be uncertain that AI risk is the best thing to focus on (on both empirical and normative levels) while still thinking it’s better to specialize, including in one’s donations. That’s because worldview diversification seems to me like it makes more sense at larger scales, like the EA movement or Open Philanthropy’s budget, and less at the scale of individuals and small donors. Consider the limits in either direction: it seems unlikely that individuals should work multiple part-time jobs in different cause areas instead of picking one in which to develop expertise and networks, and it seems like a terrible idea for all of society to dedicate its resources to a single problem. There’s some point in between where the costs of scaling an effort, and the diminishing returns of more resources thrown at the problem, start to outweigh the benefits of specialization. I think individuals are probably on the “focus on one thing” side of that point.


Comments

I think this post makes some great points, thanks for sharing! :) And I think it's especially helpful to hear from your perspective as someone who does grantmaking at OP.

I really appreciate the addition of concrete examples. In fact, I would love to hear more examples if you have time — since you do this kind of research as your job I'm sure you have valuable insights to share, and I expect that you can shift the donations of readers. I'd also be curious to hear where you personally donate, but no pressure, I totally understand if you'd prefer to keep that private.

“Work in sub-areas that major funders have decided not to fund”

I feel like this is an important point. Do you have any specific AI risk reduction sub-areas in mind?

Thanks, glad to hear it's helpful!

  • Re: more examples, I co-sign all of my teammates' AI examples here -- they're basically what I would've said. I'd probably add Tarbell as well.
  • Re: my personal donations, I'm saving for a bigger donation later; I encounter enough examples of very good stuff that Open Phil and other funders can't fund, or can't fund quickly enough, that I think there are good odds that I'll be able to make a really impactful five-figure donation over the next few years. If I were giving this year, I probably would've gone the route of political campaigns/PACs.
  • Re: sub-areas, there are some forms of policy advocacy and moral patienthood research for which small-to-medium-size donors could be very helpful. I don't have specific opportunities in mind that I feel like I can make a convincing public pitch for, but people can reach out if they're interested.

Adding to the list of funds: Effektiv-spenden.org recently launched their AI safety fund.

If you are so inclined, you as an individual donor can make a big difference to PauseAI US as well (more here: https://forum.effectivealtruism.org/posts/YWyntpDpZx6HoaXGT/please-vote-for-pauseai-us-in-the-donation-election)

We’re the highest voted AI risk contender in the donation election, so vote for us while there’s still time!

Seems like a good place to remind people of the Nonlinear Network, where donors can see a ton of AI safety projects with room for funding, see what experts think of different applications, sort by votes and intervention, etc. 
