I wanted to share this update from Good Ventures (Cari and Dustin’s philanthropy), which seems relevant to the EA community.
Tl;dr: “while we generally plan to continue increasing our grantmaking in our existing focus areas via our partner Open Philanthropy, we have decided to exit a handful of sub-causes (amounting to less than 5% of our annual grantmaking), and we are no longer planning to expand into new causes in the near term by default.”
A few follow-ups on this from an Open Phil perspective:
- I want to apologize to directly affected grantees (who've already been notified) for the negative surprise here, and for our part in not better anticipating it.
- While this represents a real update, we remain deeply aligned with Good Ventures (they’re expecting to continue to increase giving via OP over time), and grateful for how many of the diverse funding opportunities we’ve recommended they’ve been willing to take on.
- An example of a new potential focus area that OP staff had been interested in exploring that Good Ventures is not planning to fund is research on the potential moral patienthood of digital minds. If any readers are interested in funding opportunities in that space, please reach out.
- Good Ventures has told us they don’t plan to exit any overall focus areas in the near term. But this update is an important reminder that such a high degree of reliance on one funder (especially on the GCR side) represents a structural risk. I think it’s important to diversify funding in many of the fields Good Ventures currently funds, and that doing so could make the funding base more stable both directly (by diversifying funding sources) and indirectly (by lowering the time and energy costs to Good Ventures from being such a disproportionately large funder).
- Another implication of these changes is that going forward, OP will have a higher bar for recommending grants that could draw on limited Good Ventures bandwidth, and so our program staff will face more constraints in terms of what they’re able to fund. We always knew we weren’t funding every worthy thing out there, but that will be even more true going forward. Accordingly, we expect marginal opportunities for other funders to look stronger going forward.
- Historically, OP has been focused on finding enough outstanding giving opportunities to hit Good Ventures’ spending targets, with a long-term vision that once we had hit those targets, we’d expand our work to support other donors seeking to maximize their impact. We’d already gotten a lot closer to GV’s spending targets over the last couple of years, but this update has accelerated our timeline for investing more in partnerships and advising other philanthropists. If you’re interested, please consider applying or referring candidates to lead our new partnerships function. And if you happen to be a philanthropist looking for advice on how to invest >$1M/year in new cause areas, please get in touch.
I don't know the full list of sub-areas, so I can't speak with confidence, but the ones I have seen defunded so far seem to me like exactly the kind of work that attracted Jed, Vitalik, and Jaan in the first place. I expect their absence will erode the degree to which the world's smartest and most ethical people want to be involved.
To be more concrete, I think funding organizations like the Future of Humanity Institute, LessWrong, MIRI, Joe Carlsmith, as well as open investigation of extremely neglected questions like wild animal suffering, invertebrate suffering, decision theory, mind uploads, and macrostrategy research (like the extremely influential Eternity in Six Hours paper) played major roles in people like Jaan, Jed and Vitalik directing resources towards things I consider extremely important, and Open Phil has at points in the past successfully supported those programs, to great positive effect.
Inasmuch as the other resources you are hoping for are things like:
I highly doubt that the changes you made will achieve this, and am indeed reasonably confident they will harm it (insofar as your aim is simply to have fewer people annoyed at you, my guess is that the path you have chosen, showing vulnerability to reputational threats, will overall increase the unpleasantness of your life, though I hold that view less strongly).
In general, I think people place a lot of value on intellectual integrity and on standing up for one's values. Building communities that can navigate extremely complicated domains requires people to be able to follow arguments to their conclusions wherever those may lead, which over the course of an intellectual career practically always means ending up in places that are socially shunned, taboo, or reputationally costly in just the way that seems to me to be at the core of these changes.
Also, to be clear, my current (admittedly very limited) sense of your implementation is that it is more of a blacklist than a simple redirection of resources toward fewer priority areas. Lightcone Infrastructure obviously works on AI Safety, but apparently not in a way that would allow Open Phil to grant to us (despite what seem to me undeniably large effects, in myriad ways, on thousands of people working on AI Safety).
Based on the email I received, and things I've picked up through the grapevine, you did not implement something best described as "reducing the number of core priority areas," but rather something better described as "blacklisting various specific methods, associations, or conclusions that people might arrive at in pursuit of the same aims you have." That is what makes me much more concerned about the negative effects here.
The people I know who are working on digital minds are clearly doing so because of their models of how AI will play out, and this is their best guess at the best way to make the outcomes of that better. I do not know what they will find in their investigation, but it sure seems directly relevant to specific technical and strategic choices we will need to make, especially when pursuing projects like AI Control as opposed to AI Safety.
AI risk is too complicated of a domain to enshrine what conclusions people are allowed to reach. Of course, we still need to have standards, but IMO those standards should be measured in intellectual consistency, accurate predictions, and a track record of improving our ability to take advantage of new opportunities, not in distant second-order effects on the reputation of one specific actor, and their specific models about political capital and political priorities.