  1. FTX, a big source of EA funding, has imploded.
  2. There's mounting evidence that FTX was engaged in theft/fraud, which would be straightforwardly unethical.
  3. There's been a big drop in the funding that EA organisations expect to receive over the next few years.
  4. Because these organisations were acting under false information, they would've made (ex-post) wrong decisions, which they will now need to revise.

Which revisions are most pressing?



3 Answers

Rereading the 'Judicious Ambition' post from not so long ago is interesting:

"In 2013, it made sense for us to work in a poorly-lit basement, eating baguettes and hummus. Now it doesn’t. Frugality is now comparatively less valuable."

So, I guess, bring the hummus back?

Jokes aside, an explosion in funding changed EA from 'hedge fund for charity' into 'VC for charity'. This analogy goes a long way towards explaining the shifts in attitude, decisions, and exuberance. So perhaps going back to hedge-fundiness, and shifting the focus from 'company builders' building the next big thing to less scalable but cost-effective operations, is a good direction?

imo EA should have remained frugal.
 

For theoretical reasons, this makes sense. It's incompatible with Singerite altruism to spend money on frivolous luxuries while people are still starving. EAs were supposed to donate their surplus income to GiveWell. This doesn't change when your surplus income grows. At least, not as much as people's behaviour suggested.

It also makes sense for practical reasons. We could've hired double the researchers at half the salary. Okay, maybe 1.25x the researchers at 80% of the salary. I don't know the optimal point on the workforce-salary tradeoff, but EA definitely went too far in the salary direction.

The result was golden handcuffs, grifters, and value drift.

Let's bring back Ascetic EA. Hummus on toast.

As someone who (briefly) worked in VC and cofounded nonprofits, I'm not sure that's a good signal.

"VC for charity" makes more sense when you consider that VC focus on high upside, diversification, lower information and higher uncertainty, which reflects the current stage of the EA movement. EA is still discovering new effective interventions, launching new experimental projects, building capacity of new founders and discovering new ways of doing good on a systemic level. Even today, there's an acknowledgement that we might not know what the most cost-effec... (read more)

EA is constrained by the following formula:

Number of Donors x Average Donation = Number of Grants x Average Grant
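
To make the arithmetic concrete, here is a minimal sketch (made-up numbers, not actual EA funding figures) of how losing one large donor forces at least one of the four levers below to move:

```python
# Illustrative sketch only: hypothetical numbers, not actual EA funding figures.
# Budget identity: Number of Donors x Average Donation = Number of Grants x Average Grant.

small_donors = 10_000 * 10_000      # 10,000 donors averaging $10k -> $100M
big_donor = 100_000_000             # one donor giving $100M on their own

before = small_donors + big_donor   # $200M available for grants
after = small_donors                # the big donor disappears

print(f"Grants fundable at a $200k average: {before // 200_000} -> {after // 200_000}")
# 1000 -> 500: halve the number of grants...

print(f"Average grant if 1,000 grants are kept: ${after // 1_000:,}")
# $100,000: ...or halve the average grant...

print(f"Average donation needed from 10,000 donors to keep $200M: ${before // 10_000:,}")
# $20,000: ...or double what the remaining donors give (or double their number).
```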

If we lose a big donor, there are four things EA can do:

  1. Increase the number of donors:
    1. Outreach. Community growth. Might be difficult right now for reputation reasons, though fortunately, EA was very quick to denounce SBF.
    2. Maybe lobby the government for cash?
    3. Maybe lobby OpenAI, DeepMind, etc for cash?
  2. Increase average donation:
    1. Get another billionaire donor. Presumably, this is hard because otherwise EA would've done it already, but there might be factors that are hidden from me.
    2. 80K could begin pushing earning-to-give again. They shifted their recommendations a few years ago to promoting direct-impact careers. This made sense when EA was less funding-constrained.
    3. Get existing donors to ramp up their donations. In the good ol' days, EA used to be a club for people donating 60% of their income to anti-malaria bednets. Maybe EA will return to that frugal ascetic lifestyle.
  3. Reduce the number of grants:
    1. FTX was funding a number of projects. Some of these were higher priorities than others. Hopefully the high-priority projects retain their funding, whereas low-priority projects are paused.
    2. EA has been engaged in a "hit-or-miss" approach to grant-making. This makes sense when you have more cash than sure-thing ideas. But now that we have less cash, we should focus on sure-thing ideas.
    3. The problem with the "sure-thing" approach to grant-making is that it biases funding towards certain causes (e.g. global health & dev) and away from others (e.g. x-risk). I think that would be a mistake. Someone needs to think about how to calibrate for this bias.

      Here's a tentative idea: EA needs more prizes and other forms of retrodictive funding. This will shift risk from the grant-maker to the researcher, which might be good because the researcher is more informed about the likelihood of success than the grant-maker.
  4. Reduce average grant:
    1. Maybe EA needs to focus on cheaper projects.
    2. For example, in AI safety there has been a recent shift away from theoretical work (like MIRI's decision theory) towards experimental work. This experimental work is very expensive because it involves (say) training large language models. This shift should be at least somewhat reversed.
    3. Academics are very cheap! And they often already have funding. EA (especially AI safety) needs to do more outreach to established academics, such as top philosophers, mathematicians, economists, computer scientists, etc.

Open Philanthropy has responded in this post:
https://forum.effectivealtruism.org/posts/mCCutDxCavtnhxhBR/some-comments-on-recent-ftx-related-events

For other funders, I guess the response will be similar.

I guess a short common-sense answer for funders could be:
1st) put commitments on hold and wait until there is more clarity about the actual impact
2nd) identify gaps and assess them by urgency/importance
3rd) reprioritize and rebalance portfolios

For workers and organizations relying on donors:
1st) do not assume you will not be impacted just because you don't receive funds directly from FTX; the money you were relying on may be redirected in the future to other projects previously funded by FTX
2nd) put financial decisions (hiring staff, buying, etc.) on hold in the short term until you get a bit more clarity
3rd) reorganize your personal/organization budget

3 Comments


"Get another billionaire donor. Presumably, this is hard because otherwise EA would've done it already, but there might be factors that are hidden from me."

It's a process to recruit billionaires (or turn EAs into billionaires), but one estimate was another 3.5 EA billionaires by 2027 (written pre-FTX implosion). The analyses I've seen of last-dollar cost-effectiveness have tended to ignore the possibility of EA adding funds over time. Of course, we don't want to run out of money just when we need some big surge. But we could spend a lot of money in the next five years and then reevaluate if we have not recruited significant additional assets. This could make a lot of sense for people with short AI timelines (see here for an interesting model) or for people who are worried about the current nuclear risk. But more generally, by doing more things now, we can show concrete results, which I think would be helpful in recruiting additional funds. I may be biased as I head ALLFED, but I think the optimal course of action for the long-term future is to maintain the funding rate that was occurring in 2022, and likely even increase it.

On the grants side of your formula, there are huge differences in flexibility between projects. GiveDirectly's direct cash transfers can scale up and down very rapidly.

On the donors' side of your formula, it is not only about size but also about volatility and reliability. There are big donors with stable wealth and a track record of regular, predictable donations.

In my mind, a sensible overall allocation would have at least as much money going to very flexible projects (e.g. direct cash transfers) as money coming from very unpredictable sources (e.g. one big donor whose wealth, held in risky assets, varies a lot every week). This would capture the high rewards of volatile donors without putting so much uncertainty on the teams that need some stability over time.

Of course, this all rests on the assumption that all donors, big or small, predictable or volatile, meet a minimum ethical standard in their practices.
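
As a rough sketch of that matching rule (hypothetical figures, not anyone's actual portfolio):

```python
# Minimal sketch of the matching rule above, with hypothetical figures in $M/year.
# Rule of thumb: funding committed to very flexible projects should at least
# match funding coming from very volatile sources.

volatile_sources = {"crypto_donor": 150}                      # wealth held in risky assets
flexible_grants = {"direct_cash_transfers": 120, "flexible_regranting": 20}

volatile_total = sum(volatile_sources.values())
flexible_total = sum(flexible_grants.values())

if flexible_total >= volatile_total:
    print("OK: a sudden funding drop can be absorbed by scaling down flexible grants.")
else:
    shortfall = volatile_total - flexible_total
    print(f"${shortfall}M of volatility would hit projects that need stable funding.")
```

The point is simply that the buffer of flexible grants should scale with the volatility on the funding side.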
