
jackva

Climate Research Lead @ Founders Pledge
3147 karma · Joined Apr 2018 · Working (6–15 years)

Comments (246)

but that a priori we should assume diminishing returns in the overall spending, otherwise the government would fund the philanthropic interventions.

I think this is fundamentally the crux -- many of the most valuable philanthropic actions in domains with large government spending will likely be about challenging / advising / informationally lobbying the government in a way that governments cannot self-fund.

Indeed, when additional government funding does not reduce risk (i.e. does not reduce the importance of the problem) but is affectable, there are probably cases where you should get more excited about philanthropic funding as a lever as public funding increases.

Yeah, that's true, though in Luke's treatment both are discussed and described as roughly equal -- there's no indication given that either should be more promising on priors and, as you say, they will often overlap.

(Last comment from me on this for time reasons)

  • I think if you look at philanthropic neglectedness, the total sums across types of capital are not a good proxy. E.g., as far as I understand the nuclear risk landscape, it is both true that government spending is quite large but also that there is almost no civil society spending. This means that additional philanthropic funding should be expected to be quite effective on neglectedness grounds. Many obvious things are not done.
  • The numbers on nuclear risk spending by 80k are entirely made up and not presented otherwise -- they do not cite a source and make no effort to justify the estimate; this is clearly a wild guess.
  • If one constructed a similar number for AI risk, it could also be in the billions given it would presumably include stuff like the costs of government bureaucracies involved in tech regulation, emerging legislation etc.

I am fairly convinced your basic point will stand, but it seems important not to overplay the degree to which nuclear risk is not neglected, and not to underplay the degree to which government actors and others are now paying attention to AI risk (obviously, this also needs to be quality-discounted, but this discounting does not reduce the value much for nuclear in your estimate).
 

I can't open the GDoc on AI safety research.

But, in any case, I do not think this works, because philanthropic, private, and government dollars are not fungible, as all groups have different advantages and things they can and cannot do.

If we are looking at all resources, then 80M for AI safety research also seems an underestimate, as this presumably does not include the safety and alignment work at companies?
 

Nuclear risk philanthropy is about 30M/y -- it seems you are comparing the overall nuclear risk effort to the philanthropic effort for AI?

In terms of philanthropic effort AI risk strongly dominates nuclear risk reduction.

Sorry for not being super clear in my comment, it was hastily written. Let me try to clarify:

I agree with your point that we might not need to invest in govt "do something" under your assumptions (your (1)).

I think the point I disagree with is the implicit suggestion that we are doing much of what would be covered by (1). I think your view is already the default view.

  • In my perception, when I look at what we as a community are funding and staffing, > 90% of this is only about (2) -- think tanks and other Beltway-type work that is focused on making actors do the right thing, not just raising salience or, alternatively, having these clear conversations.
  • Somewhat casually put, but to make the point: I think your argument would change more if Pause AI sat on 100m to organize AI protests but we did not fund CSET/FLI/GovAI etc.
  • Note that even saying "AI risk is something we should think about as an existential risk" is more about "what to do" than "do something" -- it is saying "now that there is this attention to AI driven by ChatGPT, let us make sure that AI policy is not only framed as, say, a consumer protection or misinformation-in-elections problem, but also as an existential risk issue of the highest importance."


This is more of an aside, but I think by default we err on the side of too much "not getting deeply involved in policy, being afraid to make mistakes," and this itself seems very risky to me. Even if we have until 2030 before really critical decisions are made, the policy and relationships built now will shape what we can do then (this was laid out more eloquently by Ezra Klein in his 80k podcast episode on AI risk).

 

Thanks, Jamie! Indeed quite helpful to know that there's nothing obvious I am missing.

Yes, agree on the last point -- I am just surprised this has not been done, as EA grantmakers frequently face this decision, I think.

Is there a process for more time-sensitive grants (where a decision would be needed earlier)?

This seems right to me on labs (conditional on your view being correct), but I am wondering about the government piece -- it is clear and unavoidable that government will intervene (indeed, it already is), that AI policy will emerge as a field between now and 2030, and that early decisions will likely have long-lasting effects. So wouldn't it be extremely important, also on your view, to affect now how government acts?

Thanks, good shout!

From what I've seen, their work does not quite fit what I am looking for -- it is not comparative, and it focuses on left-leaning protest movements, which is narrower than what I am trying to get at here.
