If you know people who could do good work in the space, please point them to our RFP! As for being anti-helpful in some cases, I'm guessing those were cases where we thought the opportunity wasn't strong despite being right-of-center (which is a point in favor, in my opinion), but I'm not sure.
Replying to just a few points…
I agree about tabooing "OP is funding…"; my team is undergoing that transition now, leading to some inconsistencies in our own usage, let alone that of others.
Re: "large negative incentive for founders and organizations who are considering working more with the political right." I'll note that we've consistently been able to help such work find funding, because (as noted here), the bottleneck is available right-of-center opportunities rather than available funding. Plus, GV can and does directly fund lots of work that "engages with the right" (your phrasing), e.g. Horizon fellows and many other GV grantees regularly engage with Republicans, and seem likely to do even more of that on the margin given the incoming GOP trifecta.
Re: "nothing has changed in the last year." No, a lot has changed, but my quick-take post wasn't about "what has changed," it was about "correcting some misconceptions I'm encountering."
Re: "De-facto GV was and is likely to continue to be 95%+ of the giving that OP is influencing." This isn't true, including specifically for my team ("AI governance and policy").
I also don't think this was ever true: "One was also able to roughly assume that if OP decides to not recommend a grant to GV, that most OP staff do not think that grant would be more cost-effective than other grants referred to GV." There's plenty of internal disagreement even among the AI-focused staff about which grants are above our bar for recommending, and funding recommendation decisions have never been made by majority vote.
Good Ventures did indicate to us some time ago that they don't think they're the right funder for some kinds of right-of-center AI policy advocacy, though (a) the boundaries are somewhat fuzzy and pretty far from the linked comment's claim about an aversion to opportunities that are "even slightly right of center in any policy work," (b) I think the boundaries might shift in the future, and (c) as I said above, OP regularly recommends right-of-center policy opportunities to other funders.
Also, I don't actually think this should affect people's actions much, because my team has been looking for right-of-center policy opportunities for years (and is continuing to do so), and the bottleneck is "available opportunities that look high-impact from an AI GCR perspective," not "available funding." If you want to start or expand a right-of-center policy group aimed at AI GCR mitigation, you should do it and apply here! I can't guarantee we'll think it's promising enough to recommend to the funders we advise, but there are millions (maybe tens of millions) available for this kind of work; we've simply found only a few opportunities that seem above our bar for expected impact on AI GCR, despite years of searching.
Recently, I've encountered an increasing number of misconceptions, in rationalist and effective altruist spaces, about what Open Philanthropy's Global Catastrophic Risks (GCR) team does or doesn't fund and why, especially re: our AI-related grantmaking. So, I'd like to briefly clarify a few things:
I hope these clarifications are helpful, and lead to fruitful discussion, though I don't expect to have much time to engage with comments here.
Re: why our current rate of spending on AI safety is "low." At least for now, the main reason is lack of staff capacity! We're putting a ton of effort into hiring (see here) but are still not finding as many qualified candidates for our AI roles as we'd like. If you want our AI safety spending to grow faster, please encourage people to apply!
I'll also note that GCRs was the original name for this part of Open Phil, e.g. see this post from 2015 or this post from 2018.
Holden has been working on independent projects, e.g. related to RSPs; the AI teams at Open Phil no longer report to him and he doesn't approve grants. We all still collaborate to some degree, but new hires shouldn't e.g. expect to work closely with Holden.
We fund a lot of groups and individuals, and they have a lot of different (and sometimes contradictory) policy opinions, so the short answer is "yes." In general, I really did mean the "tentative" in my 12 tentative ideas for US AI policy, and the other caveats near the top are also genuine.
That said, we hold some policy intuitions more confidently than others, and if someone disagreed pretty thoroughly with our overall approach and couldn't persuade us that their alternative approach would be better for x-risk reduction, then they might not be a good fit for the team.
Yudkowsky's message is "If anyone builds superintelligence, everyone dies." Zvi's version is "If anyone builds superintelligence under anything like current conditions, everyone probably dies."
Yudkowsky contrasts those framings with common "EA framings" like "It seems hard to predict whether superintelligence will kill everyone or not, but there's a worryingly high chance it will, and Earth isn't prepared," and seems to think the latter framing is substantially driven by concerns about what can be said "in polite company."
Obviously I can't speak for all of EA or all of Open Phil, and this post is my personal view rather than an institutional one, since no single institutional view exists. But for the record, my inside view since 2010 has been "If anyone builds superintelligence under anything close to current conditions, probably everyone dies (or is severely disempowered)," and I think the difference between me and Yudkowsky has less to do with social effects on our speech and more to do with differing epistemic practices, i.e. about how confident one can reasonably be about the effects of poorly understood future technologies emerging in future, poorly understood circumstances. (My all-things-considered view, which includes various reference classes and partial deference to many others who think about the topic, is more agnostic and hasn't consistently been above the "probably" line.)
Moreover, I think those who believe some version of "If anyone builds superintelligence, everyone dies" should be encouraged to make their arguments loudly and repeatedly; the greatest barrier to actually-risk-mitigating action right now is the lack of political will.
That said, I think people should keep in mind that: