This is a special post for quick takes by lukeprog. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Yudkowsky's message is "If anyone builds superintelligence, everyone dies." Zvi's version is "If anyone builds superintelligence under anything like current conditions, everyone probably dies."

Yudkowsky contrasts those framings with common "EA framings" like "It seems hard to predict whether superintelligence will kill everyone or not, but there's a worryingly high chance it will, and Earth isn't prepared," and seems to think the latter framing is substantially driven by concerns about what can be said "in polite company."

Obviously I can't speak for all of EA, or all of Open Phil, and this post is my personal view rather than an institutional one since no single institutional view exists, but for the record, my inside view since 2010 has been "If anyone builds superintelligence under anything close to current conditions, probably everyone dies (or is severely disempowered)," and I think the difference between me and Yudkowsky has less to do with social effects on our speech and more to do with differing epistemic practices, i.e. about how confident one can reasonably be about the effects of poorly understood future technologies emerging in future, poorly understood circumstances. (My all-things-considered view, which includes various reference classes and partial deference to many others who think about the topic, is more agnostic and hasn't consistently been above the "probably" line.)

Moreover, I think those who believe some version of "If anyone builds superintelligence, everyone dies" should be encouraged to make their arguments loudly and repeatedly; the greatest barrier to actually-risk-mitigating action right now is the lack of political will.

That said, I think people should keep in mind that:

  • Public argumentation can only get us so far when the evidence for the risks and their mitigations is this unclear, when AI has automated so little of the economy, when AI failures have led to so few deaths, etc.
  • Most concrete progress on worst-case AI risks — e.g. arguably the AISIs network, the draft GPAI code of practice for the EU AI Act, company RSPs, the chip and SME export controls, or some lines of technical safety work — comes from dozens of people toiling away mostly behind the scenes for years, not from splashy public communications (though many of the people involved were influenced by AI risk writings years before). Public argumentation is a small portion of the needed work to make concrete progress. It may be necessary, but it’s far from sufficient.

Most concrete progress on worst-case AI risks — e.g. arguably the AISIs network, the draft GPAI code of practice for the EU AI Act, company RSPs, the chip and SME export controls, or some lines of technical safety work

My best guess (though very much not a confident guess) is that the aggregate of these efforts is net-negative, and I think that is correlated with the work having happened in backrooms, often in contexts where people were unable to talk about their honest motivations. It sure is really hard to tell, but I really want people to consider the hypothesis that a bunch of these behind-the-scenes policy efforts have been backfiring, especially ex post with a more Republican administration.

The chip and SME export controls currently seem to be one of the drivers of the escalating U.S.-China arms race; the RSPs are, I think, largely ineffectual and have slowed the arrival of regulation that is not reliant on lab supervision; and the overall EU AI Act seems very bad, though the effect of the marginal help with drafting is of course much harder to estimate.

Missing from this list: the executive order, which I think has in retrospect revealed itself to be a major driver of the polarization of AI-risk concerns, by strongly conflating near-term risks with extinction risks. It also did a lot of great stuff, though my best guess is that we'll overall regret it (but on this I feel the least confident).

I agree that a ton of concrete political implementation work needs to be done, but I think the people working in the space who have chosen to do that work in a way that doesn't actually engage in public discourse have made mistakes, and this has had large negative externalities. 

See also: https://www.commerce.senate.gov/services/files/55267EFF-11A8-4BD6-BE1E-61452A3C48E3

Again, really not confident here, and I agree that there is a lot of implementation work to be done that is not glorious and flashy, but I think the way a bunch of it has been done, in a kind of conspiratorial and secretive fashion, has been counterproductive.[1]

Ultimately, as you say, the bottleneck for things happening is political will and buy-in that AI systems pose a serious existential risk, and I think that means a lot of implementation and backroom work is blocked and bottlenecked on that public argumentation happening. And when people try to push forward anyway, they often end up forced to conflate existential risk with highly politicized short-term issues that aren't very correlated with the actual risks, and this backfires when the political winds change and people update.

  1. ^

"It seems hard to predict whether superintelligence will kill everyone or not, but there's a worryingly high chance it will, and Earth isn't prepared," and seems to think the latter framing is substantially driven by concerns about what can be said "in polite company."

Funnily enough, I think this is true in the opposite direction. There is massive social pressure in EA spaces to take AI x-risk and the doomer arguments seriously. I don't think it's uncommon for someone who secretly suspects it's all a load of nonsense to diplomatically say a statement like the above, in "polite EA company".

Like you: I urge people who think AI x-risk is overblown to make their arguments loudly and repeatedly. 

It's easy for both to be true at the same time, right? That is, skeptics tone it down within EA, and believers tone it down when dealing with people *outside* EA.

Also, at the risk of stating the obvious, people occupying the ends of a position (within a specific context) will frequently feel that their perspectives are unfairly maligned or censored.

If the consensus position is that minimum wage should be $15/hour, both people who believe that it should be $0 and people who believe it should be $40/hour may feel social pressure to moderate their views; it takes active effort to reduce pressures in that direction. 

As someone who leans on the x-risk-skeptical side, especially regarding AI, I'll offer my anecdote that I don't think my views have been unfairly maligned or censored much.

I do think my arguments have largely been ignored, which is unfortunate. But I don't personally feel the "massive social pressure" that titotal alluded to above, at least in a strong sense.

I think your "vibe" is skeptical and most of your writings express skepticism, but I think your object-level x-risk probabilities are fairly close to the median? People like titotal and @Vasco Grilo🔸 have their probabilities closer to lifelong risk of death from a lightning strike than from heart disease.

Good point, but I still think that many of my beliefs and values differ pretty dramatically from the dominant perspectives often found in EA AI x-risk circles. I think these differences in my underlying worldview should carry at least as much weight as whether my bottom-line estimates of x-risk align with the median estimates in the community. To elaborate:

On the values side:

  1. Willingness to accept certain tradeoffs that are ~taboo in EA:
    I am comfortable with many scenarios where AI risk increases by a non-negligible amount if this accelerates AI progress. In other words, I think the potential benefits of faster AI progress can often outweigh the costs of the corresponding increase in existential risk.
  2. Relative indifference to human disempowerment:
    With some caveats, I am largely comfortable with human disempowerment, and I don't think the goal of AI governance should be to keep humans in control. To me, the preference for prioritizing human empowerment over other outcomes feels like an arbitrary form of speciesism—favoring humans simply because we are human, rather than due to any solid moral reasoning.

On the epistemic side:

  1. Skepticism of AI alignment's central importance to AI x-risk:
    I am skeptical that AI alignment is very important for reducing x-risk from AI. My primary threat model for AI risk doesn’t center on the idea that an AI with a misaligned utility function would necessarily pose a danger. Instead, I think the key issue lies in whether agents with differing values—be they human or artificial—will have incentives to cooperate and compromise peacefully or whether their environment will push them toward conflict and violence.
  2. Doubts about the treacherous turn threat model:
    I believe the “treacherous turn” threat model is significantly overrated. (For context, this model posits that an AI system could pretend to be aligned with human values until it becomes sufficiently capable to act against us without risk.) I'll note that both Paul Christiano and Eliezer Yudkowsky have identified this as their main threat model, but it is not my primary threat model.

people like titotal and @Vasco Grilo🔸 have their probabilities closer to lifelong risk of death from a lightning strike than from heart disease.

Right. Thanks for clarifying, Linch. I guess the probability of human extinction over the next 10 years is 10^-6, which is roughly my probability of death from a lightning strike during the same period. "the odds of being struck by lightning in a given year are less than one in a million [I guess the odds are not much lower than this], and almost 90% of all lightning strike victims survive" (10^-6 = 10^-6*10*(1 - 0.9)).
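Spelling out that parenthetical arithmetic (a quick sanity check under the assumptions Vasco states: a per-year strike probability of roughly 10^-6 and a ~90% survival rate):

$$
P(\text{death by lightning over 10 years}) \approx \underbrace{10^{-6}/\text{yr}}_{\text{strike probability}} \times 10\ \text{yr} \times \underbrace{(1 - 0.9)}_{\text{fatality fraction}} = 10^{-6},
$$

which matches the 10^-6 he assigns to human extinction over the same period.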

Like you: I urge people who think AI x-risk is overblown to make their arguments loudly and repeatedly.

Agreed, titotal. In addition, I encourage people to propose public bets to whoever has extreme views (if you are confident they will pay you back), and ask them if they are trying to get loans in order to increase their donations to projects decreasing AI risk, which makes sense if they do not expect to pay the interest in full due to high risk of extinction.

and ask them if they are trying to get loans in order to increase their donations to projects decreasing AI risk, which makes sense if they do not expect to pay the interest in full due to high risk of extinction.

Fraud is bad.

In any case, people already don't have enough worthwhile targets for donating money to, even under short timelines, so it's not clear what good taking out loans would do.  If it's a question of putting one's money where one's mouth is, I personally took a 6-figure paycut in 2022 to work on reducing AI x-risk, and also increased my consumption/spending.

That's not fraud, without more -- Vasco didn't suggest that anyone obtain loans that they did not intend to repay, or could not repay, in a no-doom world.

Every contract has an implied term that future obligations are void in the event of human extinction. There's no shame in not paying one's debts because extinction happened.

You cannot spend the money you obtain from a loan without losing the means to pay it back. You can borrow a tiny bit against your future labor income, but the normal recourse when that fails is to declare personal bankruptcy, and so lenders have little assurance there.

(This has been discussed many dozens of times on both the EA Forum and LessWrong. There exist no loan structures as far as I know that allow you to substantially benefit from predicting doom.)

Hello Habryka. Could you link to a good overview of why taking loans does not make sense even if one thinks there is a high risk of human extinction soon? Daniel Kokotajlo said:

Idk about others. I haven't investigated serious ways to do this [taking loans],* but I've taken the low-hanging fruit -- it's why my family hasn't paid off our student loan debt for example, and it's why I went for financing on my car (with as long a payoff time as possible) instead of just buying it with cash.

*Basically I'd need to push through my ugh field and go do research on how to make this happen. If someone offered me a $10k low-interest loan on a silver platter I'd take it.

I should also clarify that I am open to bets about less extreme events. For example, global unemployment rate doubling or population dropping below 7 billion in the next few years.

I do actually have trouble finding a good place to link to. I'll try to dig one up in the next few days.

Thanks for clarifying, Jason.

Fraud is bad.

I think people like me proposing public bets to whoever has extreme views or asking them whether they have considered loans should be transparent about their views. In contrast, fraud is "the crime of obtaining money or property by deceiving people".

I read Vasco as suggesting exactly that - what is your understanding of what he meant, if not that?

Hi Rebecca,

I did not have anything in particular in mind about what the people asking for loans would do without human extinction soon. In general, I think it makes sense for people to pay their loans. However, since I strongly endorse expected total hedonistic utilitarianism, I do not put an astronomical weight on respecting contracts. So I believe not paying a loan is fine if the benefits are sufficiently large.

I think the difference between me and Yudkowsky has less to do with social effects on our speech and more to do with differing epistemic practices, i.e. about how confident one can reasonably be about the effects of poorly understood future technologies emerging in future, poorly understood circumstances. 

This isn't expressing disagreement, but I think it's also important to consider the social effects of our speaking in line with different epistemic practices, i.e.,

  • When someone says "AI will kill us all," do people understand us as expressing 100% confidence in extinction, or do they interpret it as mere hyperbole and rhetoric, and infer that what we actually mean is that AI will potentially kill us all or have other drastic effects?
  • When someone says "There's a high risk AI kills us all or disempowers us," do people understand this as us expressing very high confidence that it kills us all, or as saying it almost certainly won't kill us all?

Recently, I've encountered an increasing number of misconceptions, in rationalist and effective altruist spaces, about what Open Philanthropy's Global Catastrophic Risks (GCR) team does or doesn't fund and why, especially re: our AI-related grantmaking. So, I'd like to briefly clarify a few things:

  • Open Philanthropy (OP) and our largest funding partner Good Ventures (GV) can't be or do everything related to GCRs from AI and biohazards: we have limited funding, staff, and knowledge, and many important risk-reducing activities are impossible for us to do, or don't play to our comparative advantages.
    • Like most funders, we decline to fund the vast majority of opportunities we come across, for a wide variety of reasons. The fact that we declined to fund someone says nothing about why we declined to fund them, and most guesses I've seen or heard about why we didn't fund something are wrong. (Similarly, us choosing to fund someone doesn't mean we endorse everything about them or their work/plans.)
    • Very often, when we decline to do or fund something, it's not because we don't think it's good or important, but because we aren't the right team or organization to do or fund it, or we're prioritizing other things that quarter.
    • As such, we spend a lot of time working to help create or assist other philanthropies and organizations who work on these issues and are better fits for some opportunities than we are. I hope in the future there will be multiple GV-scale funders for AI GCR work, with different strengths, strategies, and comparative advantages — whether through existing large-scale philanthropies turning their attention to these risks or through new philanthropists entering the space.
  • While Good Ventures is Open Philanthropy's largest philanthropic partner, we also regularly advise >20 other philanthropists who are interested to hear about GCR-related funding opportunities. (Our GHW team also does similar work partnering with many other philanthropists.) On the GCR side, we have helped move tens of millions of non-GV money to GCR-related organizations in just the past year, including some organizations that GV recently exited. GV and each of those other funders have their own preferences and restrictions we have to work around when recommending funding opportunities.
    • Among the AI funders we advise, Good Ventures is among the most open and flexible funders.
    • We're happy to see funders enter the space even if they don’t share our priorities or work with us. When more funding is available, and funders pursue a broader mix of strategies, we think this leads to a healthier and more resilient field overall.
  • Many funding opportunities are a better fit for non-GV funders, e.g. due to funder preferences, restrictions, scale, or speed. We've also seen some cases where an organization can have more impact if they're funded primarily or entirely by non-GV sources. For example, it’s more appropriate for some types of policy organizations outside the U.S. to be supported by local funders, and other organizations may prefer support from funders without GV/OP’s past or present connections to particular grantees, AI companies, etc. Many of the funders we advise are actively excited to make use of their comparative advantages relative to GV, and regularly do so.
  • We are excited for individuals and organizations that aren't a fit for GV funding to apply to some of OP’s GCR-related RFPs (e.g. here, for AI governance). If we think the opportunity is strong but a better fit for another funder, we'll recommend it to other funders.
    • To be clear, these other funders remain independent of OP and decline most of our recommendations, but in aggregate our recommendations often lead to target grantees being funded.
  • We believe reducing AI GCRs via public policy is not an inherently liberal or conservative goal. Almost all the work we fund in the U.S. is nonpartisan or bipartisan and engages with policymakers on both sides of the aisle. However, at present, it remains the case that most of the individuals in the current field of AI governance and policy (whether we fund them or not) are personally left-of-center and have more left-of-center policy networks. Therefore, we think AI policy work that engages conservative audiences is especially urgent and neglected, and we regularly recommend right-of-center funding opportunities in this category to several funders.
  • OP's AI teams spend almost no time directly advocating for specific policy ideas. Instead, we focus on funding a large ecosystem of individuals and organizations to develop policy ideas, debate them, iterate them, advocate for them, etc. These grantees disagree with each other very often (a few examples here), and often advocate for different (and sometimes ~opposite) policies.
  • We think it's fine and normal for grantees to disagree with us, even in substantial ways. We've funded hundreds of people who disagree with us in a major way about fundamental premises of our GCRs work, including about whether AI poses GCR-scale risks at all (example).
  • I think frontier AI companies are creating enormous risks to humanity, I think their safety and security precautions are inadequate, and I think specific reckless behaviors should be criticized. AI company whistleblowers should be celebrated and protected. Several of our grantees regularly criticize leading AI companies in their official communications, as do many senior employees at our grantees, and I think this happens too infrequently.
  • Relatedly, I think substantial regulatory guardrails on frontier AI companies are needed, and organizations we've directed funding to regularly propose or advocate policies that ~all frontier AI companies seem to oppose (alongside some policies they tend to support).
  • I'll also take a moment to address a few misconceptions that are somewhat less common in EA or rationalist spaces, but seem to be common elsewhere:
    • Discussion of OP online and in policy media tends to focus on our AI grantmaking, but AI represents a minority of our work. OP has many focus areas besides AI, and has given far more to global health and development work than to AI work.
    • We are generally big fans of technological progress. See e.g. my post about the enormous positive impacts from the industrial revolution, or OP's funding programs for scientific research, global health R&D, innovation policy, and related issues like immigration policy. Most technological progress seems to have been beneficial, sometimes hugely so, even though there are some costs and harms along the way. But some technologies (e.g. nuclear weapons, synthetic pathogens, and superhuman AI) are extremely dangerous and warrant extensive safety and security measures rather than a "move fast and break [the world, in this case]" approach.
    • We have a lot of uncertainty about how large AI risk is, exactly which risks are most worrying (e.g. loss of control vs. concentration of power), on what timelines the worst-case risks might materialize, and what can be done to mitigate them. As such, most of our funding in the space has been focused on (a) talent development, and (b) basic knowledge production (e.g. Epoch AI) and scientific investigation (example), rather than work that advocates for specific interventions.

I hope these clarifications are helpful, and lead to fruitful discussion, though I don't expect to have much time to engage with comments here.

Therefore, we think AI policy work that engages conservative audiences is especially urgent and neglected, and we regularly recommend right-of-center funding opportunities in this category to several funders.

Should the reader infer anything from the absence of a reference to GV here? The comment thread that came to mind when reading this response was significantly about GV (although there was some conflation of OP and GV within it). So if OP felt it could recommend US "right-of-center"[1] policy work to GV, I would be somewhat surprised that this well-written post didn't say that.

Conditional on GV actually being closed to right-of-center policy work, I express no criticism of that decision here. It's generally not cool to criticize donors for declining to donate to stuff that is in tension or conflict with their values, and that seems to be the case here. However, where the funder is as critical to an ecosystem as GV is here, I think fairly high transparency about the unwillingness to fund a particular niche is necessary to allow the ecosystem to adjust. For example, learning that GV is closed to a niche area that John Doe finds important could switch John from object-level work to earning to give. And people considering moving to object-level work need to clearly understand if the 800-pound gorilla funder will be closed to them.

  1. ^

    I place this in quotes because the term is ambiguous.

Good Ventures did indicate to us some time ago that they don't think they're the right funder for some kinds of right-of-center AI policy advocacy, though (a) the boundaries are somewhat fuzzy and pretty far from the linked comment's claim about an aversion to opportunities that are "even slightly right of center in any policy work," (b) I think the boundaries might shift in the future, and (c) as I said above, OP regularly recommends right-of-center policy opportunities to other funders.

Also, I don't actually think this should affect people's actions much because: my team has been looking for right-of-center policy opportunities for years (and is continuing to do so), and the bottleneck is "available opportunities that look high-impact from an AI GCR perspective," not "available funding." If you want to start or expand a right-of-center policy group aimed at AI GCR mitigation, you should do it and apply here! I can't guarantee we'll think it's promising enough to recommend to the funders we advise, but there are millions (maybe tens of millions) available for this kind of work; we've simply found only a few opportunities that seem above-our-bar for expected impact on AI GCR, despite years of searching.

Can you say what the "some kinds" are? 

I think it might be a good idea to taboo the phrase "OP is funding X" (at least when talking about present day Open Phil). 

Historically, OP would have used the phrase "OP is funding X" to mean "referred a grant to X to GV" (which was approximately never rejected). One was also able to roughly assume that if OP decides to not recommend a grant to GV, that most OP staff do not think that grant would be more cost-effective than other grants referred to GV (and as such, the word people used to describe OP not referring a grant to GV was "rejecting X" or "defunding X").

Of course, now that the relationship between OP and GV has substantially changed, and the trust has broken down somewhat, the term "OP is funding X" is confusing (including, IMO, in your comment: in your last few bullet points you talk about how "OP has given far more to global health than AI", when, to avoid confusing people, it would be better to say "OP has recommended far more grants to global health", since OP itself has not actually given away any money directly, and in the rest of your comment you use "recommend").

I think the key thing for people to understand is why it no longer makes sense to talk about "OP funding X", and where it makes sense to model OP grant-referrals to GV as still closely matching OP's internal cost-effectiveness estimates.[1]

For organizations and funders trying to orient towards the funding ecosystem, the most important thing is understanding what GV is likely to fund on behalf of an OP recommendation. So when people talk about "OP funding X" or "OP not funding X", that is what they usually refer to (and that is also again how OP has historically used those words, and how you have used those words in your comment). I expect this usage to change over time, but it will take a while (and I would ask you to be gracious and charitable when trying to understand what people mean when they conflate OP and GV in discussions).[2]

Now having gotten that clarification out of the way, my guess is that most of the critiques that you have seen about OP funding are basically accurate when seen through this lens (though I don't know what critiques you are referring to, since you aren't being specific). As an example, as Jason says in another comment, it does look like GV has a very limited appetite for grants to right-of-center organizations, and since (as you say yourself) the external funders reject the majority of grants you refer to them, this de facto leads to a large reduction of funding, and a large negative incentive for founders and organizations who are considering working more with the political right.

I think your comment is useful, and helps people understand some of how OP is trying to counteract the ways GV's withdrawal from many crucial funding areas has affected things, which I am glad about. I do also think your comment has far too much of the vibe of "nothing has changed in the last year" and "you shouldn't worry too much about which areas GV does or doesn't want to fund". De-facto GV was and is likely to continue to be 95%+ of the giving that OP is influencing, and the dynamics between OP and non-GV funders are drastically different than the dynamics historically between OP and GV.

I think a better intuition pump for people trying to understand the funding ecosystem would be a comment that is scope-sensitive in the relevant ways. I think it would start with saying:

Yes, over the last 1-2 years our relationship to GV has changed, and I think it no longer really makes sense to think about OP 'funding X'. These days, especially in the catastrophic risk space, it makes more sense to think of OP as a middleman between grantees and other foundations and large donors. This is a large shift, and I think understanding how that shift has changed funding allocation is of crucial importance when trying to predict which projects in this space are underfunded, and what new projects might be able to get funding.

95%+ of recommendations we make are to GV. When GV does not want to fund something, it is up to a relatively loose set of external funders we have weaker relationships with to make the grant, and whether that happens will hinge on whether those external funders have appetite for that kind of grant, which depends heavily on their more idiosyncratic interests and preferences. Most grants that we do not refer to GV, but would like to see funded, do not ultimately get funded by other funders.[3]

[Add the rest of your comment, ideally explaining how GV might differ from OP here[4]]

  1. ^

    And another dimension to track is "where OP's cost-effectiveness estimates are likely to be wrong". I think due to the tricky nature of the OP/GV relationship, I expect OP to systematically be worse at making accurate cost-effectiveness estimates where GV has strong reputation-adjacent opinions, because of course it is of crucial importance for OP to stay in sync with GV, and repeated prolonged disagreements are the kind of thing that tend to cause people and organizations to get out of sync.

  2. ^

    Of course, people might also care about the opinions of OP staff, as people who have been thinking about grantmaking for a long time, but my sense is that in as much as those opinions do not translate into funding, that is of lesser importance when trying to identify neglected niches and funding approaches (but still important).

  3. ^

    I don't know how true this is and of course you should write what seems true to you here. I currently think this is true, but also "60% of grants referred get made" would not be that surprising. And also of course this is a two-sided game where OP will take into account whether there are any funders even before deciding whether to evaluate a grant at all, and so the ground truth here is kind of tricky to establish.

  4. ^

    For example, you say that OP is happy to work with people who are highly critical of OP. That does seem true! However, my honest best guess is that it's much less true of GV, and being publicly critical of GV and Dustin is the kind of thing that could very much influence whether OP ends up successfully referring a grant to GV; to some degree being critical of OP also makes receiving funding from GV less likely, though much less so. That is of crucial importance for people to know when trying to decide how open and transparent to be about their opinions.

Replying to just a few points…

I agree about tabooing "OP is funding…"; my team is undergoing that transition now, leading to some inconsistencies in our own usage, let alone that of others.

Re: "large negative incentive for founders and organizations who are considering working more with the political right." I'll note that we've consistently been able to help such work find funding, because (as noted here), the bottleneck is available right-of-center opportunities rather than available funding. Plus, GV can and does directly fund lots of work that "engages with the right" (your phrasing), e.g. Horizon fellows and many other GV grantees regularly engage with Republicans, and seem likely to do even more of that on the margin given the incoming GOP trifecta.

Re: "nothing has changed in the last year." No, a lot has changed, but my quick-take post wasn't about "what has changed," it was about "correcting some misconceptions I'm encountering."

Re: "De-facto GV was and is likely to continue to be 95%+ of the giving that OP is influencing." This isn't true, including specifically for my team ("AI governance and policy").

I also don't think this was ever true: "One was also able to roughly assume that if OP decides to not recommend a grant to GV, that most OP staff do not think that grant would be more cost-effective than other grants referred to GV." There's plenty of internal disagreement even among the AI-focused staff about which grants are above our bar for recommending, and funding recommendation decisions have never been made by majority vote.

Re: "nothing has changed in the last year." No, a lot has changed, but my quick-take post wasn't about "what has changed," it was about "correcting some misconceptions I'm encountering."

Makes sense. I think it's easy to point out ways things are off, but in this case, IMO the most important thing that needs to happen in the funding ecosystem is people grappling with the huge changes that have occurred, and I think a lot of OP communication has been actively pushing back on that (not necessarily intentionally; I just think it's a tempting and recurring error mode for established institutions to react to people freaking out with a "calm down" attitude, even when that's inappropriate, cf. the CDC and pandemics and many past instances of similar dynamics).

In particular, I am confident the majority of readers of your original comment interpreted what you said as meaning that GV has no substantial dispreference for right-of-center grants, which I think was substantially harmful to the epistemic landscape (though I am glad that further prodding by me and Jason cleared that up).

Re: "De-facto GV was and is likely to continue to be 95%+ of the giving that OP is influencing." This isn't true, including specifically for my team ("AI governance and policy").

I would take bets on this! It is of course important to assess counterfactualness of recommendations from OP. If you recommend a grant a funder would have made anyways, it doesn't make any sense to count that as something OP "influenced". 

With that adjustment, I would take bets that more than 90% of influence-adjusted grants from OP in 2024 will have been made by GV. (I don't think it's true in "AI governance and policy", where I can imagine it being substantially lower; I have much less visibility into that domain. My median for all of OP is 95%, but that doesn't imply my betting odds, since I want at least a bit of profit margin.)

Happy to refer to some trusted third-party arbiter for adjudicating.

I’m confused by the wording of your bet - I thought you had been arguing that more than 90% are by GV, not ‘more than 90% are by a non-GV funder’.

Sorry, just a typo!

I'd rather not spend more time engaging here, but see e.g. this.

Sure, my guess is that OP gets around 50%[1] of the credit for that and GV is about 20% of the funding in the pool, making the remaining portion a ~$10M/yr grant ($20M/yr for 4 years of non-GV funding[2]). GV gives out ~$600M[3] in grants per year recommended by OP, so to get to >5% you would need the equivalent of 3 projects of this size per year, which I haven't seen (and don't currently think exist).

Even at 100% credit, which seems like a big stretch, my guess is you don't get over 5%. 

To substantially change the implications of my sentence I think you need to get closer to 10%, which seems implausible from my viewpoint. It seems pretty clear the right number is around 95% (and IMO, given that, it's bad form to just respond with a "this was never true" when it's clearly and obviously been true in some past years, and it's at the very least very close to true this year).
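To make the back-of-the-envelope arithmetic above explicit (a rough sketch using only the figures stated in this thread; the 50% credit share and the ~$20M/yr of non-GV funding are, as noted, guesses):

$$
\underbrace{\$20\text{M/yr}}_{\text{non-GV funding}} \times \underbrace{0.5}_{\text{OP credit}} = \$10\text{M/yr}, \qquad \frac{\$10\text{M}}{\$600\text{M} + \$10\text{M}} \approx 1.6\%, \qquad \frac{3 \times \$10\text{M}}{\$600\text{M} + 3 \times \$10\text{M}} \approx 4.8\%
$$

So even the equivalent of three such projects per year would leave the non-GV share just under 5% of OP-influenced giving.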

  1. ^

    Mostly chosen for Schelling-ness. I can imagine it being higher or lower. It seems like lots of other people outside of OP have been involved, and the choice of area seems heavily determined by what OP could get buy-in for from other funders, seeming somewhat more constrained than other grants, so I think a lower number seems more reasonable.

  2. ^

    I have also learned to really not count your chickens before they are hatched with projects like this, so I think one should discount this funding by an expected 20-30% for a 4-year project like this, since funders frequently drop out and leadership changes, but we can ignore that for now.

  3. ^

    https://www.goodventures.org/our-portfolio/grantmaking-approach/

I also don't think this was ever true: "One was also able to roughly assume that if OP decides to not recommend a grant to GV, that most OP staff do not think that grant would be more cost-effective than other grants referred to GV." There's plenty of internal disagreement even among the AI-focused staff about which grants are above our bar for recommending, and funding recommendation decisions have never been made by majority vote.

I used the double negative here very intentionally. Funding recommendations don't get made by majority vote, and there isn't such a thing as "the Open Phil view" on a grant, but up until 2023 I had long and intense conversations with staff at OP who said that it would be very weird and extraordinary if OP rejected a grant that most of its staff considered substantially more cost-effective than your average grant. 

That of course stopped being true recently (and I also think past OP staff overstated a bit the degree to which it was true previously, but it sure was something that OP staff actively reached out to me about and claimed was true when I disputed it). You saying "this was never true" is in direct contradiction to statements made by OP staff to me up until late 2023 (bar what people claimed were very rare exceptions).

I'll note that we've consistently been able to help such work find funding, because (as noted here), the bottleneck is available right-of-center opportunities rather than available funding.

I don't currently believe this, and think you are mostly not exposed to most people who could be doing good work in the space (which is downstream of a bunch of other choices OP and GV made), and also overestimate the degree to which OP is helpful in getting the relevant projects funding (I know of 1-2 projects in this space which did ultimately get funding, where OP was a bit involved, but my sense is it was overall slightly anti-helpful).

If you know people who could do good work in the space, please point them to our RFP! As for being anti-helpful in some cases, I'm guessing those were cases where we thought the opportunity wasn't strong despite it being right-of-center (which is a point in favor, in my opinion), but I'm not sure.

I hope in the future there will be multiple GV-scale funders for AI GCR work, with different strengths, strategies, and comparative advantages

(Fwiw, the Metaculus crowd prediction on the question ‘Will there be another donor on the scale of 2020 Good Ventures in the Effective Altruist space in 2026?’ currently sits at 43%.)

[1] Several of our grantees regularly criticize leading AI companies in their official communications
[2] organizations we've directed funding to regularly propose or advocate policies that ~all frontier AI companies seem to oppose

Could you give examples of these?
