Matt Putz

Program Associate @ Open Philanthropy

Comments

I work at Open Philanthropy, and I recently let Gavin know that Open Phil is planning to recommend a grant of $5k to Arb for the second project on your list: Overview of AI Safety in 2024 (they had already raised ~$10k by the time we came across it). Thanks for writing this post, Austin — it brought the funding opportunity to our attention.

Like other commenters on Manifund, I believe this kind of overview is a valuable reference for the field, especially for newcomers. 

I wanted to flag that this project would have been eligible for our RFP for work that builds capacity to address risks from transformative AI. I worry that not all potential applicants are aware of the RFP, so I'll take this opportunity to mention that its scope is quite broad, including funding for:

  • Training and mentorship programs
  • Events
  • Groups
  • Resources, media, and communications
  • Almost any other type of project that builds capacity to address risks from advanced AI (in the sense of increasing the number of careers devoted to these problems, supporting people doing this work, and sharing knowledge related to this work).

More details at the link above. People might also find this page helpful; it lists all currently open application programs at Open Phil.

Can you say more about the 20% per year discount rate for community building? 

In particular, is the figure meant to refer to time or money? I.e. does it mean that

  1. you would trade at most 0.8 marginal hours spent on community building in 2023 for 1 marginal hour spent in 2024?
  2. you would trade at most 0.8 marginal dollars spent on community building in 2023 for 1 marginal dollar spent on community building in 2024?
  3. something else? (possibly not referring to marginal resources?)

(For money, a 20% discount rate seems very high to me, barring very short timelines or something similar. It would presumably imply that you think Open Phil should be spending much more on community building now, until the marginal dollar no longer has such high returns?)
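For concreteness, here's a minimal sketch of the arithmetic as I understand it (this is just my reading of the figure, with $V_t$ and $r$ introduced purely for illustration, and assuming the rate compounds annually over marginal resources):

$$V_t = (1 - r)^{\,t - 2023} \cdot V_{2023}, \qquad r = 0.2$$

so a marginal unit of community-building resources in 2024 would be worth $V_{2024} = 0.8 \cdot V_{2023}$, and after five years $0.8^{5} \approx 0.33$, i.e. a marginal dollar (or hour) in 2028 would be worth only about a third of one in 2023.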

Minor nitpick: 

I would've found it more helpful to see Haydn's and Esben's judgments listed separately.

"Need" is a very strong word, so I'm voting no. It might sometimes be marginally advantageous, though.

Thanks for writing this up! I was going to apply anyway, but a post like this might have gotten me to apply last year (which I didn't, though it would've been smart). It also contained some useful sections that I didn't know about yet!

This is so useful! I love this kind of post and will buy many things from this one in particular.

Probably a very naive question, but why can't you just take a lot of DHA **and** a lot of EPA to get both supplements' benefits? Especially if your diet means you're likely deficient in both (which is presumably true for vegans, and maybe vegetarians?).

Assuming the Reddit folk wisdom about DHA inducing depression is wrong (which it might not be; I don't want to dismiss it), I don't understand from the rest of what you wrote why this wouldn't work. Why is there a trade-off?

This seems really exciting!

I skimmed some sections, so I might have missed it if you brought this up, but I think one thing that might be tricky about this project is the optics of where your own funding comes from. E.g., it might look bad if most (or any?) of your funding came from Open Phil and Dustin Moskovitz and Cari Tuna were then ranked very highly (which they probably should be!). In worlds where this project is successful and gathers some public attention, that kind of thing seems quite likely to come up.

So I think that, conditional on this being a good idea at all, it may be an unusually good funding opportunity for smaller earning-to-givers. Unfortunately, the flip side is that fundraising for it may be somewhat harder than for other EA projects.
