
This is a linkpost for https://www.rhyslindmark.com/ftx-future-fund/. 

Warning: Lots of napkin math below. Lending y'all an Idea That Is Not Yet Fully Formed™. But wanted to share so you get a rough map of longtermist funding.


My org is writing a grant application for FTX Future Fund's first grant round. (You should too! Apply by March 21.)

As part of that, I wanted to research how important FTX Future Fund is for the longtermist ecosystem more generally.

In summary: It's quite important! Let's learn why.

I. EA Funding Right Now

First, let's look at EA funding over time.

Of all Effective Altruist (EA) funding, 20% comes from GiveWell and 60% comes from Open Philanthropy (Open Phil).

In 2019, here's how much each org processed:

(Source: https://forum.effectivealtruism.org/posts/nws5pai9AB6dCQqxq/how-are-resources-in-ea-allocated-across-issues)

What about GiveWell's giving over time? Their graph is below.

They processed only ~$2M per year in the 2000s, then grew from ~$10M to ~$100M per year over the course of the 2010s.

(Source: https://blog.givewell.org/2021/05/11/early-signs-show-that-you-gave-more-in-2020-than-2019-thank-you/; this figure doesn't include Open Phil.)

And here's Open Phil's estimate of how much they've given per year:

So, taking GiveWell and Open Phil together, here's how much EA money was given per year going into the 2020s:

$400M, not bad.

But this is actually going to ramp up a bunch in the coming few years. Open Phil only regranted $100M to GiveWell in 2020, but they plan to grant GiveWell $300M in 2021, $500M in 2022, and $500M again in 2023.

So how much will Open Phil be granting total?

Based on 2021 data, grants to GiveWell make up roughly 50% of Open Phil's budget:

So if they increase their 2022/2023 GiveWell giving to $500M, we'd roughly expect Open Phil's total giving to reach $1B per year by then:
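To make that extrapolation explicit, here's the napkin math as a quick Python sketch (the ~50% GiveWell share is my assumption from the 2021 data above):

```python
# Napkin math: project Open Phil's total budget from its grants to GiveWell,
# assuming GiveWell stays ~50% of the total (my read of the 2021 data above).
givewell_grants_m = {2020: 100, 2021: 300, 2022: 500, 2023: 500}  # $M
GIVEWELL_SHARE = 0.5

for year, grant in givewell_grants_m.items():
    total = grant / GIVEWELL_SHARE
    print(f"{year}: ${grant}M to GiveWell -> ~${total:,.0f}M Open Phil total")
# 2022: $500M to GiveWell -> ~$1,000M (~$1B) Open Phil total
```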

GiveWell itself wants to direct $1B per year by 2025. If we take all of these together:

  • $$ from Open Phil to GiveWell
  • $$ from Open Phil to not GiveWell
  • $$ to GiveWell from not Open Phil
  • Other Grantmaking

The growth of EA giving into 2025 looks like this:
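For concreteness, here's one way those four components might sum by 2025 (a sketch; every number below is a napkin estimate, and "other grantmaking" is a pure guess):

```python
# Rough 2025 projection of total EA giving, in $M (all napkin numbers).
components = {
    "Open Phil -> GiveWell": 500,    # Open Phil's stated 2023 plan, held flat
    "Open Phil, non-GiveWell": 500,  # if GiveWell stays ~50% of a ~$1B budget
    "GiveWell, non-Open Phil": 500,  # GiveWell's ~$1B/yr goal minus Open Phil's $500M
    "Other grantmaking": 100,        # pure guess (EA Funds etc.)
}
total = sum(components.values())
print(f"~${total}M (~${total / 1000:.1f}B) of EA giving per year by 2025")
```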

In other words, we're just at the start of EA funders giving a lot more money.

Still, most EA grantmaking flows through Open Phil and GiveWell. And much of it still goes to Global Health.

...Until now!

II. FTX Future Fund and Longtermism

Meanwhile, Sam Bankman-Fried has been making magic internet money.

He's starting to give it back, mostly towards longtermism. How much of an impact is it having?

We can start by looking at how much money is in longtermism now.

A good starting point is Ben Todd's excellent overview of 2019 EA granting categories, which I've slightly modified.

(Source: https://forum.effectivealtruism.org/posts/nws5pai9AB6dCQqxq/how-are-resources-in-ea-allocated-across-issues)

As you can see, longtermism (in red) is roughly 30% of all EA giving. In 2021, it was roughly 15% of Open Phil giving.

So, assuming roughly 20% of Open Phil's giving is longtermist, and assuming other longtermist donors add roughly another 20% on top of Open Phil's longtermist giving, here's what longtermist giving looks like up to now:
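In numbers (a sketch; the ~$600M Open Phil total is my back-of-envelope from the 2021 figures above, and both 20% ratios are the assumptions just stated):

```python
# Napkin estimate of longtermist giving up to now, in $M.
open_phil_total = 600                  # ~2021: $300M to GiveWell / 50% share
open_phil_lt = open_phil_total * 0.20  # ~20% of Open Phil giving is longtermist
other_lt = open_phil_lt * 0.20         # other LT donors ~20% of Open Phil's LT
print(f"~${open_phil_lt + other_lt:.0f}M/year of longtermist giving")  # ~$144M
```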

This is good! It's a reflection of the EA ecosystem accounting for the idea that ~future lives matter.

But FTX Future Fund is about to drastically increase it even more. They're trying to give $100M in 2022 alone. Here's what the graph will look like going forward:

That's a big yellow jump! It makes longtermist giving look like this for 2022:

But even this assumes that Open Phil is going to 2x their longtermist grantmaking, just as they're pumping more money into GiveWell.

If they keep their longtermist grantmaking at current levels, around $100M, the 2022 pie chart looks like this:
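To put FTX's slice of the pie in perspective, here are the two scenarios side by side (a sketch; the $25M "other donors" figure is an assumption, and FTX's $100M is the floor from their announcement):

```python
# FTX's share of 2022 longtermist giving under two Open Phil scenarios, in $M.
ftx = 100    # FTX Future Fund's stated 2022 target (a floor, per their announcement)
other = 25   # assumption: other longtermist donors
for label, open_phil in [("Open Phil 2x's its LT giving", 200),
                         ("Open Phil stays at ~$100M", 100)]:
    total = ftx + open_phil + other
    print(f"{label}: FTX is {ftx / total:.0%} of ~${total}M")
# Either way, FTX alone is roughly a third to nearly half of 2022 longtermist giving.
```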

So, yes, the FTX Future Fund is a big deal for the longtermist funding ecosystem.

The EA funding ecosystem has shifted. Dustin Moskovitz was Web2 Facebook money. SBF is Web3 FTX money.

This means we should add a new player, FTX (in green!), to our overall EA giving graph below.

Hope this helps give context to FTX's longtermist grantmaking.

Thanks for reading and don't forget to apply for that sweet sweet cash from FTX Future Fund by March 21.

Notes:

  • Not quite sure why some numbers don't add up. 1) Ben Todd averaged 2017-2019 to get $260M. I can't quite tell how much Open Phil themselves say they gave in 2019; they just say "over $200M". 2) The graph here shows that GiveWell raised $91M from Open Phil in 2020, but Open Phil says they granted $100M. I'm working with public data and doing napkin math, so ¯\_(ツ)_/¯
  • For more on why Open Phil is giving more to GiveWell, see this post. Although at the top they emphasize: "This post is unusually technical relative to our others, and we expect it may make sense for most of our usual blog readers to skip it." 😂
  • As a reminder, other big crypto EA funders include Vitalik and Ben Delo.
Comments (6)



This format is amazing. More please.

+1 -- love it Rhys; more memes pls

Yitz:

Do we know how much impact Sam Bankman-Fried‘s personal philosophy is going to have on FTX’s grant-making choices? This is a lot of financial power for a single organization to have, so I expect the makeup of the core team to have an outsized effect on the rest of the movement.

Valid question!

This appeals strongly to Millennials and Zoomers. Love it. Also, still a format that brings the message across. Thanks!

Awesome post. I'd just add that you've reported a lower bound. Per https://ftxfuturefund.org/announcing-the-future-fund:

We plan to distribute at least $100M this year, and potentially a lot more, depending on how many outstanding opportunities we find. In principle, we’d be able to deploy up to $1B this year.
