ElliotTep

1972 karma

Comments: 66

I had in mind the EA Animal Welfare Fund, where small orgs make up a reasonable part of its giving portfolio (I don't have exact numbers off the top of my head).

FWIW the main point I wanted to make in this post is that individuals should not be reaching out to Anthropic staff who don't actively indicate they want to be pitched directly. Part of our strategy is to have a high-trust call to action, but this is mostly based on conversations we've had with Anthropic staff themselves. 

I'm not particularly against newer funds coming on to the scene and agree with a lot of the comments in this post about the pros of doing so.

As a small nitpick, some of these major funds do give a lot of smaller donations to smaller, less established organizations, so I wouldn't say major funds = money goes to major orgs.

I interpreted this as the challenge of setting up a foundation with one purpose in mind, and then the people you hire executing something different because of the values they bring to the table. In general, I'd guess that people who work in philanthropic spaces skew left-wing, and so whatever mandate you set will end up skewing more left-wing than you intend (if you yourself are not left-wing). 

Apologies, by that I mean a few Anthropic staff said one thing that was missing from the donor advisor space was recommendations of what % of their donations to allocate across cause areas, so this is something I tried to make happen by advocating for a few other organisations and individuals to do this.

Hi Abraham, I'm curious what you think about this. One difference between FTX and this situation is that FTX hired grantmakers to do the disbursing work. My impression is that most Anthropic staff don't have the time or expertise to set this up themselves, even if it was a model like a giving circle, nor do they want to. 

One challenge in recreating FTX's level of willingness to fund ambitious projects is that Anthropic donors would either need to want to spend the time setting up foundations individually, or someone with the right expertise would need to set up their own fund and join the fray on more speculative work. 

FWIW my vague impression (I have less visibility into other cause areas) is that as funds anticipate an influx of funding into the space, funding more ambitious and speculative bets seems to be a part of the conversation (while hopefully reducing the downsides that came with FTX funding). 

Hi Nick, thanks for engaging. I agree that in writing this, there is a level of scrutiny I've opened myself up to. I'll respond to some of the main points:

  1. I agree that everything I've said in this post conveniently aligns with my job. But I've said these things not to gatekeep, rather because I think they are true and have significant implications for the future of funding in EA.
  2. I endeavour to provide services to Anthropic staff that sit at the intersection of valuable to them AND good for the world. For example, I've spent a fair bit of time advocating for recommended default splits across cause areas based on feedback from a few Anthropic staff. We've also developed resources on some of the main fund options in the animal advocacy space and run an event in SF to ask questions of the fund managers.
  3. The default preference to defer to funds has come from Anthropic staff communicating that for most of them, that's their preference due to lacking the time or expertise. If individuals at Anthropic have wanted to donate to individual organisations, we've been happy to make introductions or specific recommendations.
  4. I agree there is a collective action problem at the level of funds, and how that is navigated is important. I just think that it is a much smaller pool of pitches than at the organisation level. FWIW there have been ongoing efforts among the funders in FAW to coordinate to reduce the collective action problem. 

These posts need warnings that if you have any important work to do in the next hour not to click on them. Watching these videos is too damn tempting! Well done to the team as always! 

Is the point here that you are still ultimately interested in outcomes, but that you think that the current focus on explicitly measuring and project planning hurts more than it helps, and that curiosity and a thriving intellectual scene where people are more willing to run experiments will achieve better outcomes than more explicit attempts to do so? 

Hi Nick, what else did you find compelling from the text? I personally didn't think boycotting ChatGPT would have the potential to get the same public appeal as big historical successes, but maybe I'm just too cynical. 
