Inspired by Matt Yglesias[1]

  • For $5,000, you can save one child from malaria.[2]
  • For $200, you can have a one-trillionth chance of preventing an existential catastrophe from pandemics.[3]
  • For $5,000–$10,000, you can fund events, books, food, t-shirts, etc. for a small EA group for a year.
  • For $10,000, you can fund the marginal AI safety researcher for a month.
  • For $1, you can invest it to double its expected influence over the world in less than 20 years, assuming nothing crazy happens before then.[4]

Let's say money is part of "EA funding" if the person/system directing it is roughly aiming at doing as much good as possible and considering options like these. Then marginal EA funding goes to interventions that the person/system directing it believes are at least as good as interventions like these. These interventions are really good. Therefore marginal EA funding is prima facie really good.
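
As a rough illustration of how commensurable these options are, here is a back-of-the-envelope comparison of the first two in Python. This is a minimal sketch under a loudly flagged assumption: it values an existential catastrophe at the loss of 8 billion present lives (an illustrative figure, not one from the post, and it ignores future generations entirely).

```python
# Back-of-the-envelope comparison of the first two options above.
# ASSUMPTION (illustrative only): an existential catastrophe costs
# 8 billion present lives; the value of future generations is ignored.
LIVES_LOST_IN_CATASTROPHE = 8e9

# Option 1: $5,000 saves one child from malaria.
malaria_cost_per_life = 5_000 / 1

# Option 2: $200 buys a one-in-a-trillion reduction in catastrophe risk.
expected_lives_saved = 1e-12 * LIVES_LOST_IN_CATASTROPHE  # 0.008 lives
xrisk_cost_per_life = 200 / expected_lives_saved          # $25,000

print(f"malaria: ${malaria_cost_per_life:,.0f} per life saved")
print(f"x-risk:  ${xrisk_cost_per_life:,.0f} per expected present life saved")
```

On these contestable numbers the two options land within a factor of five of each other, and giving future generations any weight at all would tip the comparison toward the second.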

As long as there exist cost-effective interventions to throw money at, EA is funding constrained.

  1. ^

    "It feels like there are three pieces per week on EA Forum with the thesis that an increase in EA funding could be counterintuitively bad and nobody ever [writes] a post with the boring but more correct-sounding thesis that it’s good. I guess my slightly spicy EA take is that there's too much complacency about not being funding constrained, and it would actually be really useful to raise dramatically more money."

  2. ^

    Actually, I can't find GiveWell's marginal cost-effectiveness estimates, but my sense is that they've found interventions with average cost-effectiveness better than $4,500 per child saved and that scale without much cost increase. [Update: see comments.]

  3. ^

    Open Philanthropy's last dollar project.

  4. ^

    Assuming you can get at least 8% expected real returns per year, that the wealth of the rest of the world grows at no more than 4% per year, and that influence is proportional to your share of the world's wealth.
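
A quick numerical check of this footnote's arithmetic, as a minimal sketch (the 8% and 4% figures are exactly the assumptions stated above):

```python
import math

# Your wealth grows at 8% real per year; the rest of the world's at 4%
# (the worst case allowed above). Influence tracks share of world wealth,
# so relative influence grows by a factor of 1.08 / 1.04 each year.
years_to_double = math.log(2) / math.log(1.08 / 1.04)
print(f"{years_to_double:.1f} years")  # ~18.4 years, i.e. under 20
```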

Comments

my sense is that they've found interventions with average cost-effectiveness better than $4,500 per child saved and that scale without much cost increase.

Seems plausible, but on the other hand GiveWell decided to hold on to money instead of spending it immediately, apparently because of local scaling limits:

In 2021, we may need to direct as much as $560 million. While we have an excellent team of 22 researchers working on this full time, we haven’t been able to hire quickly enough to match our incredible growth in funds raised.

This year, we expect to identify $400 million in 8x or better opportunities. If our fundraising projections hold, we may have $160 million (or more) that we’re unable to spend at our current bar.

FWIW: GiveWell actually already had some opportunities in the pipeline that they were still working on (e.g. Dispensers for Safe Water). Given the funding needs of their top charities right now, it looks very likely they'll have more room for funding than they can fill this year (unless there's unprecedented growth in donations, which seems unlikely given currently projected economic conditions). At the GiveDirectly bar of funding (10%–30% as cost-effective), there's nowhere near enough funding for the foreseeable future.

Yeah, I don't know much about this; if someone has a good justification for the marginal cost-effectiveness of global health & development interventions, I'd love to see it.

Update: GiveWell funds some interventions at more like $10K/life, which naively suggests that the marginal cost per life is about $10K. But maybe those interventions had side benefits, like gaining information or enabling other interventions in the future, and so had greater all-things-considered effectiveness.

And:

  • For $10,000,000,000,000, you can buy most of the major tech companies and semiconductor manufacturers.

That would really, really help us make AI go well. Until we can do that, more funding is astronomically valuable. (And $10T is more than 100 times what EA has.)

Do you know of any estimates of the impact of more funding for AI safety? For instance, how much would an additional $1,000 increase the odds of the AI control problem being solved?

I don't know of particular estimates. I do know that different (smart, reasonable, well-informed) people would give very different answers -- at least one would even say that the marginal AI safety researcher has negative expected value.

Personally, I'm optimistic that even if you're skeptical of AI safety research in general, you can get positive expected value by (as a lower bound) doing something like giving money to particular researchers whose judgment you trust, for them to pass on to researchers they think are promising.

My guess is that the typical AI-concerned community leader would say at least a one-in-10-billion chance per $1,000.
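
Taking that guess at face value, a naive linear extrapolation (a sketch only: the one-in-10-billion figure is itself a rough guess, and returns to funding would almost certainly diminish long before the largest budgets here) looks like:

```python
# Naive linear extrapolation of "a one-in-10-billion chance per $1,000".
PROB_PER_DOLLAR = 1e-10 / 1_000  # 1e-13 per dollar

for budget in (1e6, 1e9, 1e12):
    print(f"${budget:,.0f} -> probability shift of about {budget * PROB_PER_DOLLAR:.0e}")
# $1,000,000         -> about 1e-07
# $1,000,000,000     -> about 1e-04
# $1,000,000,000,000 -> about 1e-01 (linearity surely breaks down well before this)
```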
