Is there a write-up on why the “abundance and growth” cause area is actually a relatively efficient way to spend money (instead of a way for OpenPhil to be(come) friends with everyone who’s into abundance & growth)? (These are good things to work on, but seem many orders of magnitude worse than other ways to spend money.)
(The cited $14.4 of “social return” per $1 in the US seems incredibly unlikely to be comparable to the best GiveWell interventions or even GiveDirectly.)
Prior discussion here, especially a long comment by Alexander Berger. I copied over one notable quote from that below.
I'm not aware of any convincing public justification for spending money in this area as a better choice than spending in traditional cause areas, but I also don't see evidence that abundance and growth is trying to compete with traditional EA cause areas for funding from the broader public.
In terms of scale, while this is a significant expansion of Open Phil’s overall work in the space, it’s a modest expansion of Good Ventures’ (from ~$15M to ~$20M/year). The remaining funding is coming from other donors. As we wrote in our annual review last week:
One implication of our growing work with other donors is that it’s increasingly incorrect to think about Open Philanthropy as a single unified funder making top-down decisions. Increasingly, our resources come from different partners who are devoted to different causes and have different preferences and limitations for their giving. Their philanthropic dollars are not fungible, and we would be doing them a disservice if we treated them as if they were... it’s clearly less true than in the past (not that it was ever perfectly true) that the distribution of grants we advise across causes reflects our leadership’s unconstrained recommendations.
I’ve been meaning to write a longer post about my concerns with this cause area, including the high level of political risk it exposes the EA movement to, and why we should be wary of that post-FTX. For example, I think it was unwise to sponsor a conference which invited a guy championing ‘deportation abundance’. And that’s not even the most controversial conference they sponsored this year (the author here has already formed an association with effective altruism, though thankfully didn’t notice who funded the conference). (I will get to the rest of Yarrow’s comment later, but this was a bad memory of mine; I had read that Open Philanthropy was going to fund the Abundance festival well before it happened, and assumed they had funded WelcomeFest because the two shared speakers and an explicit pro-abundance position.)
(I don’t have this critique fully formed enough to share it on the forum in much more detail)
Did Good Ventures or Open Philanthropy (now Coefficient Giving) sponsor WelcomeFest? What was the other conference you're referring to?
I did a brief search, and I couldn't find evidence of this. Are you sure you're getting that right? I don't know what other conference you're referring to, so I couldn't check that.
I also skimmed the list of grants here. I don't recognize most of the names, but nothing jumped out to me as looking like a conference.
[Edited on Nov. 20, 2025 at 3:50 AM Eastern to add:
To save the reader the suspense (or the effort of scrolling down), Coefficient Giving did not sponsor WelcomeFest, but did sponsor another conference, Abundance 2025, which, to me, appears harmless and inoffensive, inasmuch as anything contentious in American politics today can be.
Some invitees may have some harmful or offensive views, but that will be true of any U.S. conference about politics or policy where a diversity of viewpoints representative of the country are allowed.]
(Disclaimer that I'm Canadian, so you may feel free to discount or downweight my opinions on U.S. politics as you like. Canada is in an unusual situation with regard to the U.S., where everything in U.S. politics casts a long shadow over Canada, so Canadians are unusually keyed into events in U.S. politics.)[1] The unfortunate thing about U.S. politics, especially now, is that it's an ugly, messy business that involves doing deals and forming coalitions with people you'd rather not associate with, in an ideal world where you had the freedom to choose that kind of thing. Democrats have to do deals with Republicans. Democrats have to build a coalition strong enough to resist authoritarianism, illiberalism, and democratic backsliding that includes people far apart from each other on the political spectrum, who have meaningful, substantive, and sometimes bitter disagreements, who in many cases have serious, legitimate grievances with each other. It's unfortunate.
And, it should go without saying, to win elections going forward, Democrats have to win the votes of people who voted for Trump.
I think it's completely legitimate to level this sort of critique against Manifold's Manifest conference. First off, that's a conference mainly for the Bay Area rationalist community and somewhat for the Bay Area EA community, and not a conference about U.S. national politics. So, it's not about coalition building, winning over Republican voters, doing deals with Republican lawmakers, or anything like that.
Second, and more importantly, it's an entirely different matter to want no association with someone like Curtis Yarvin (who threw an afterparty for the Manifest conference). Yarvin says things I think the median Republican voter would find repugnant and crazy. I can't imagine the median Republican would have anything but rage or incredulity for the idea that America should become a "neo-monarchy". Yes, Republican voters have been surprisingly tolerant of Trump's illiberalism, and, yes, the Republican Party has both a recent problem with and a long history of racism, but people like Yarvin are still on the margins of the party, not at the median.
I think if Open Philanthropy (now Coefficient Giving) or Good Ventures were funding something super controversial and alarming to a lot of people, like, I don't know, research into genetically engineering babies with enhanced abilities, then it would be incumbent on the effective altruist community to give some kind of response to that. In that hypothetical, it would be important to clarify to the public that the community is a separate entity from Dustin Moskovitz's and Cari Tuna's organizations, and to clarify that this community doesn't decide and can't control what they fund. However, that's not what is happening here.
Coefficient Giving's work in this area is split into two parts, housing policy reform and metascience (or "innovation policy", as they put it, but I prefer metascience). Housing policy reform is a popular, liberal, centre-left, mainstream idea in U.S. politics. This summer, the California State Assembly passed two bills that enact exactly the sort of housing policy reform that Coefficient Giving is trying to support. These bills were popular among California voters. 74% of voters expressed support for the bills in a poll, with 14% against and 11% unsure. Governor Gavin Newsom, who played a key role in the passage of the housing policy reform bills, has a 54% approval rating among Californians, compared to a 26% approval rating for Trump.
You can agree or disagree with housing policy reform, but it's not a reputational risk for Coefficient Giving or for EA. It's popular. People like it. People like the politicians who champion it. And people especially like the results: increased housing affordability.[2]
What about the other half of Coefficient Giving's "Abundance & Growth" focus area, metascience? I can't imagine how metascience would pose reputational risks for anyone. Currently, metascience is not a partisan or polarized issue, and I pray it stays that way. The core idea of metascience is doing science on science: running experiments on different ways of doing science, particularly in terms of how research funding is allocated. Different institutions have different models for funding science. Compare, say, the NSF with DARPA. Nobody is saying the NSF should become like DARPA. What they are saying is that there should be experimentation with different funding models to find out what's most effective.
Here's a quote from Ezra Klein and Derek Thompson's book Abundance which explains just one of the reasons why proponents of metascience think there is probably room for improvement:
To appreciate the explosion of scientific paperwork requirements, imagine if every scientist working in America contracted a chronic fatigue disorder that made it impossible for them to work for half of the year. We would consider this to be a national tragedy and an emergency. But this make-believe disorder is not so dissimilar to the burden we place on scientists today when it comes to paperwork. Today’s scientists spend up to 40 percent of their time working on filling out research grants and follow-up administrative documents, rather than on direct research. Funding agencies sometimes take seven months or longer to review an application or request a resubmission.
“Folks need to understand how broken the system is,” said John Doench, the director of research and development in functional genomics at the Broad Institute. “So many really, really intelligent people are wasting their time doing really, really uninteresting things: writing progress reports, or coming up with modular budgets five years in advance of the science, as if those numbers have any meaning. Universities have whole floors whose main job is to administer these NIH grants. Why are we doing this? Because they’re afraid that I’m going to buy a Corvette with the grant money?”
Bernie Sanders was recently asked about abundance liberalism in an interview with the New York Times. I think Sanders intended his response to be dismissive or critical, but he actually ended up acknowledging that Klein and Thompson are correct about their core argument. Sanders said:
Leonhardt: I know. Let’s talk about another debate that has gotten people excited — and I’m really curious about your view: the abundance debate. Which is this idea that one of the things that government needs to do and progressives need to do is clear out bureaucracy so that our society can make more stuff — homes, clean energy. What do you think of the abundance movement?
Sanders: Well, it’s got a lot of attention among the elite, if I may say so.
Leonhardt: Yes.
Sanders: Look, if the argument is that we have a horrendous bureaucracy? Absolutely correct. It is terrible. Over the years, I brought a lot of money into the state of Vermont. It is incredible, even in a state like Vermont — which is maybe better than most states — how hard it is to even get the bloody money out! Oh, my God! We’ve got 38 meetings! We’ve got to talk about this. Unbelievable.
I worked for years to bring two health clinics that we needed into the state of Vermont. I wanted to renovate one and build another one. You cannot believe the level of bureaucracy to build a bloody health center. It’s still not built. All right? So I don’t need to be lectured on the nature of bureaucracy. It is horrendous, and that is real.
But that is not an ideology. That is common sense. Any manager — you’re a corporate manager, you’re a mayor, you’re a governor — you’ve got to get things done. And the bureaucracy — federal bureaucracy, many state bureaucracies — makes that very, very difficult. But that is not an ideology.
It’s good government. That’s what we should have. Ideology is: Do you create a nation in which all people have a standard of living? Do you have the courage to take on the billionaire class? Do you stand with the working class? That’s ideology. Breaking through bureaucracy and creating efficiencies? That’s good government.
Leonhardt: But it would be a meaningful change if states were able to reduce bureaucracy. It may not be an ideology, but it doesn’t happen today.
Sanders: Get things done!
Leonhardt: And you agree that we should do more of that?
Sanders: Absolutely.
Leonhardt: That we should have policy changes to simplify things, to deliver —
Sanders: I did my best when I was mayor — we’re a small city of 40,000 people — to break through the bureaucracy. And I was a good mayor. So there’s no question that you have people who it seems to be their function in life is to make sure things don’t happen. We should not be paying people to do that.
I take that as a ringing endorsement from Bernie Sanders for abundance liberalism. That's actually one of the strongest endorsements of the Abundance thesis I've heard from any politician, possibly the strongest. Sanders is saying: what Klein and Thompson are arguing is so obviously correct, it's common sense.
It was intended as a criticism, I think, but Sanders was essentially saying: you couldn't be more wrong if you don't see the truth in Klein and Thompson's thesis about inefficient bureaucracy. If you don't realize this is a real, horrendous problem in government, well, clearly, you've never been a mayor or a governor.
Sanders is of course correct that the idea of good government, of housing affordability, of metascience, of public infrastructure like high-speed rail built on budget and on time (by in-house, government-employed engineers, rather than private contractors), etc.[3] is not a full political ideology. And abundance liberalism is not supposed to be a full political ideology. It's a set of ideas that is supposed to fit within the context of American liberalism. A complement to other ideas, not a replacement.
Some people have levelled the critique at Klein and Thompson: but economic populist policies are more popular with voters in polls than abundance policies. Klein and Thompson's response: why not do both? They're compatible, and politicians should do what their voters want them to do. For example, there's no reason a city or a state can't make it much easier to build housing, both affordable housing and market-rate, and also increase the funding it puts toward affordable housing, or mandate housing developers to build a certain ratio of affordable housing to market-rate housing — as long as you make it easier for them to build housing in the first place. (Ezra Klein has specifically endorsed this idea.)
I think, as with many big ideas, abundance liberalism is a ball that many different people, sometimes with quite different political orientations from each other, want to take and run with in their own direction. Bernie Sanders' or Zohran Mamdani's version of abundance might take a different shape than, say, for a moderate Democratic governor of a purple state. That's normal. That's politics. (It's not perfect or ideal, but it's the world we live in, and the one we've got to work with.)
I'm not particularly bothered if conservatives like the one you quoted want to "troll the libs" by misapplying the term "abundance" to things like deportations — I mean, it annoys me, but it doesn't make me think abundance liberalism is a bad idea. Internet trolls always try to twist everything good and ruin it. (This is part of why I think Twitter is a waste of time, there's just so much deliberate provocation and trying to be edgy or attention-grabbing.) I don't know what conference you were referring to that he was invited to [edit: it was Abundance 2025], but he works for a conservative policy think tank, and this gets back to my original point that policy conferences or political conferences will probably have to include people from across the political spectrum, from both major U.S. parties, like it or not.[4]
Abundance liberalism can, in theory, be taken in a direction that people like Ezra Klein and Derek Thompson, who coined the term, wouldn't like and would never endorse. But so what? Anything could, and people try to do that with almost everything. It's on us to be mindful and discerning. If we throw out every good idea in the world the second somebody tries to do something bad with it, we'll have no good ideas. I don't buy the idea that Coefficient Giving's association with abundance liberalism is a reputational risk for EA because a) it's popular (not just with voters, but with Democratic politicians from Gavin Newsom to Zohran Mamdani, and arguably even Bernie Sanders agrees with it in his own begrudging way), b) it's a good idea (e.g. look at measures of housing affordability in places that have reduced bureaucracy and made it easier to build),[2] and c) just because some people want to take it in a bad direction or tarnish its good name doesn't mean they'll succeed — they probably won't.
You don't have to agree that it's a good idea. You don't have to agree that it's as popular as I'm making out — although I'd invite you to look at the polling for the California housing bills. But I really don't see a plausible way this could be a reputational risk for EA. It's politics, and, yeah, politics is controversial, but this is very mainstream, acceptable politics, getting funded by a large philanthropic organization that the EA community doesn't control, which is currently in the process of broadening its donor base and its focus areas beyond effective altruism or what the EA community would choose to prioritize. What's the big whoop?
If you want to know my political orientation, I'm LGBT, I voted for the New Democratic Party (NDP) in the most recent Canadian federal election, I enjoyed the economist Thomas Piketty's book Capital in the Twenty-First Century, and I'm a big fan of Ezra Klein, so whatever that tells you.
In Minneapolis, Minnesota: "Using a synthetic control approach we find that the reform lowered housing cost growth in the five years following implementation: home prices were 16% to 34% lower, while rents were 17.5% to 34% lower relative to a counterfactual Minneapolis constructed from similar metro areas."
In Austin, Texas: "The median asking rent in Austin dropped 10.7% year over year to $1,420 in March — $379 below its record high."
These are all examples taken from Ezra Klein and Derek Thompson's Abundance book. It's particularly important to note that they advocate for the government of California to employ its own engineers in-house — government employees, not private contractors — to complete its long-languishing high-speed rail project.
This is just one example of several strongly anti-neoliberal stances Klein and Thompson take in the book. Another example is their strong support of government science funding (see the chapter about metascience). A third example is their strong advocacy of industrial policy, particularly around sustainable energy. In addition to these specific anti-neoliberal stances, the book also includes a section explicitly criticizing neoliberalism.
I bring this up because one of the most common critiques of the book I've seen online is that it's "neoliberal". This is why you should read books, rather than read tweets about books from people who haven't read them. I largely believed these criticisms before I read the book and then was furious when, upon reading it, I found out I had been misled by people who didn't read the book.
I don't know if this analogy will help or hurt, but an analogy that makes sense in my head is falling birth rates. Falling birth rates is also a ball different people of different political persuasions can run with in different directions. From a feminist and welfare state/social democratic perspective, you can see falling birth rates — particularly in conjunction with people saying they want to have kids, but it's too difficult — and think about how the government can better support parents or prospective parents, particularly from the angle of gender equality. Women often say they want to have kids, but are daunted by taking on the additional care work and domestic work of parenting when they already have a career — which might be impacted by having a kid. This can be a concern for men, too, but unequally so, because of the unequal burden of parenting and domestic work that falls on women. What policies could conceivably improve this situation and allow women who want to have kids to do so? This is an incredibly liberal, progressive, social democratic perspective on the issue.
On the other hand, some conservatives have expressed strange ideas about how to address falling birth rates, like trying to make people more religious. Even assuming that people becoming more religious would make them have more kids, I don't know how you make people more religious. I especially don't know how you make them more religious not because God exists and you want them to have a good relationship with him, but because you want them to have more babies. In any case, this is an entirely opposite response to the feminist, pro-government response I outlined above.
Some liberals or people on the left argue that liberals/the left shouldn't even discuss declining birth rates because to do so is to automatically support regressive political responses, like an attempted return to historical levels of religiosity or restrictions on abortion. I think this is incredibly misguided. Ignoring an issue that affects people's lives in a big way, or pretending that issue doesn't exist, is not an acceptable political response. That is a betrayal of the public, of the people, by politicians. That is also the kind of thing that loses politicians elections, and gives power to opposing politicians who have more regressive policy ideas, like banning abortion.
I’ll look at this properly later but just wanted to confirm that I got it wrong about WelcomeFest. I’d read a tweet about Open Philanthropy sponsoring Abundance 2025 around the same time WelcomeFest was happening, and conflated the two because they had similar speakers and an explicit pro-abundance position.
Okay, yes, Open Philanthropy is listed as one of the sponsors of the Abundance 2025 conference that took place in Washington, D.C. in September. Is this a problem for any reason? Was there anything about that conference that was troubling or controversial? What’s the reputational risk, here?
(I’m not taking a position here on whether I think Abundance 2025 should have invited speakers it explicitly disagrees with, or whether my impression is that Abundance 2025 endorses or disendorses his views—just correcting you on that specific point)
Yes, I believed you when you said he was invited to a conference related to abundance. I was just saying he doesn’t represent abundance liberalism.
First, he’s a conservative, so he isn’t even a liberal in the first place. Second, you very helpfully linked to that book review where he says Klein and Thompson’s Abundance book is "fundamentally misguided" and that "a ‘politics of abundance’ is an oxymoron".
This confirms what I said above that this guy is just "trolling the libs" by intentionally misusing the word "abundance". This should not be a relevant consideration for whether Coefficient Giving wants to support policy reform related to abundance liberalism. But I think your point is just about sponsoring the conference.
If you have political conferences or policy conferences where you invite conservatives and Republicans, it’s going to be pretty much impossible to avoid inviting people who have offensive or problematic views, since that is core to the Republican Party and mainstream American conservatism right now. I don’t see how associating with Republicans or conservatives in some way is avoidable if a philanthropic organization like Coefficient Giving wants to be involved in politics or policy. Everyone in politics/policy has to associate with them in some way, including Democratic lawmakers.
And it doesn’t seem like there’s any good alternative.
I'm giving a ∆ to this overall, but I should add that conservative AI policy think tanks like FAI are probably overall accelerating the AI race, which should be a worry for both AI x-risk EAs and near-term AI ethicists.
Okay, thanks, so FAI — the Foundation for American Innovation. What's the relation between FAI and Coefficient Giving? Has Coefficient Giving given grant money to FAI?
Oh, you must just be referring to the fact that FAI "co-hosted" the Abundance 2025 conference. I actually have no idea what the list of "co-hosts" on the website means — there are 15 of them. I have no context for what this means.
You disapprove even of those grants related to AI safety?
For me, it's all very theoretical because AI capabilities currently aren't very consequential for good or for ill, and the returns to scaling compute and data seem to be very much in decline. So, I don't buy that either immediate-term, mundane AI safety or near-term AI x-risk is a particularly serious concern.
There are some immediate-term, mundane concerns with how chatbots talk to users with certain kinds of mental health problems, and things of that nature, but these are comparatively small problems in the grand scheme of things. Social media is probably 10x to 1,000x more problematic.
Uh huh, you got me on a technicality. Let me clarify that I see the social problems associated with social media, including the ML-based recommender systems they use, as far more consequential than the social problems associated with LLM-based chatbots.
The recommender systems are one part of why social media is problematic, but not nearly the whole story.
I think looking at the problems of social media through the lens of "AI safety" would be too limiting and not helpful.
I suspect that part of the theory of impact here might not run through any individual grant item (i.e., liberalized zoning laws leading to economic growth through increased housing construction in some particular city), but rather through a variety of bigger-picture considerations that look something like:
The overall state / quality of US politics is extremely important, because the US is the most powerful country in the world, etc. Improving the state of US politics even a little (i.e., by making it more likely that smart, thoughtful people will be in power, make good decisions, implement successful reforms, etc) seems like an important point of leverage for many very important causes (consider USAID cuts, AI chip export controls to China, and foreign policy especially concerning great power relations, nuclear nonproliferation, preserving democracy and broad human influence over the future, continued global economic growth, etc).
Of course "fighting for influence over US politics" is gonna seem less appealing once you take into account the fact that it is in a certain sense the least-neglected possible cause, has all sorts of deranging / polarizing / etc side-effects, and so forth. But maybe, even considering all these things, influencing US politics still seems very worthwhile. (This seems plausible to me.)
Promoting the abundance movement seems like a decent idea for both improving the US Democratic party (in terms of focusing it on smarter, more impactful ideas) and perhaps making the Democrats more likely to win elections (which is great if you think Dems are better than the current Republican party), and maybe even improving the Republican party too (if the abundance agenda proves to be a political winner and the right is forced to compete by adopting similar policies). And, as a plus, promoting this pro-growth, liberal/libertarian agenda seems a little less polarizing than most other conceivable ways of engaging with US politics.
People have wondered for a long time if, in addition to direct work on x-risks, one should consider intermediate "existential risk-factors" like great power war. It seems plausible to me that "trying to make the United States more sane" is a pretty big factor in many valuable goals -- global health & development, existential risk mitigation, flourishing long-term futures, and so forth.
Hm. Interesting. I didn't know this was an Open Philanthropy focus area. Webpage here.
I read the book Abundance by Ezra Klein and Derek Thompson earlier this year and loved it. It's one of my favourite non-fiction books I've read recently. (Since then, other people have taken up the "Abundance" label, but I haven't kept track of who they are, how similar/different their views are to Klein and Thompson's in the book, or whether I agree with them.)
I wouldn't say Open Phil's "Abundance & Growth" focus area is necessarily many orders of magnitude worse than global health/global poverty or conventional global catastrophic risks like pandemics. (Whether you think AGI-based global catastrophic risks are many orders of magnitude more cost-effective to focus on than "Abundance & Growth" depends on disputed assumptions I almost certainly strongly disagree with you about.)
The two parts of the "Abundance & Growth" focus area are currently housing policy reform, i.e. YIMBYism, and innovation policy, which seems closely related to metascience, about which there is a chapter in Klein and Thompson's Abundance book.
Housing policy reform is intrinsically very important. It's also important because of what it means for U.S. politics. Democrats need to get a handle on all aspects of affordability, especially housing affordability. The Trump administration's and Republican Party's scary tilt toward illiberalism and authoritarianism needs strong challengers. Housing affordability in particular and affordability in general is a reason Democrats aren't more popular than they are, and a reason they haven't been able to mount as strong a challenge to Trump's illiberal/authoritarian tactics as I wish they could have so far. Much not only in the U.S. but around the world depends on whether the U.S. stays a full liberal democracy. The United States has dropped considerably in comparative assessments of countries' level of freedom or democracy. This worries me, and although the effects are hard to quantify rigorously, obviously they are huge. USAID was one of the first casualties of Trump's current administration.
Metascience and innovation policy seem highly uncertain, but also extremely worth trying. The metascience chapter in the Abundance book was probably the most exciting. If the speed of progress in science and technology can be significantly increased by policy reform or institutional reform, or by creating new institutions, then the benefits are also hard to quantify rigorously but also surely must be huge.
So, overall, I think I tentatively support Open Philanthropy getting into these two areas. It, of course, depends on what exactly they're doing, though.
At the beginning of November, I learned about a startup called Red Queen Bio, which automates the development of viruses and the related lab equipment. They work together with OpenAI, and OpenAI is their lead investor.
On November 13, they publicly announced their launch. On November 15, I saw that and made a tweet about it: "Automated virus-producing equipment is insane. Especially if OpenAI, of all companies, has access to it." (The tweet got 1.8k likes and 497k views.)
In the tweet, I said that there is, potentially, literally a startup, funded by and collaborating with OpenAI, with equipment capable of printing arbitrary RNA sequences, potentially including viruses that could infect humans, connected to the internet or managed by AI systems.
I asked whether we trust OpenAI to have access to this kind of equipment, and said that I’m not sure what to hope for here, except government intervention.
The only inaccuracy that was pointed out to me was that I mentioned that they were working on phages, and they denied working on phages specifically.
At the same time, people close to Red Queen Bio publicly confirmed the equipment they’re automating would be capable of producing viruses (saying that this equipment is a normal thing to have in a bio lab and not too expensive).
A few days later, Hannu Rajaniemi, a Red Queen Bio co-founder and fiction author, responded to me in a quote tweet and in comments:
This inaccurate tweet has been making the rounds so wanted to set the record straight.
We use AI to generate countermeasures and run AI reinforcement loops in safe model systems that help train a defender AI that can generalize to human threats
The question of whether we can do this without increasing risk was a foundational question for us before starting Red Queen. The answer is yes, with certain boundaries in place. We are also very concerned about AI systems having direct control over automated labs and DNA synthesis in the future.
They did not answer any of the explicitly asked questions, which I repeated several times:
- Do you have equipment capable of producing viruses?
- Are you automating that equipment?
- Are you going to produce any viruses?
- Are you going to design novel viruses (as part of generating countermeasures or otherwise)?
- Are you going to leverage AI for that?
- Are OpenAI or OpenAI’s AI models going to have access to the equipment or software for the development or production of viruses?
It seems pretty bad that this startup is not being transparent about their equipment and the level of possible automation. It’s unclear whether they’re doing gain-of-function research. It’s unclear what security measures they have or are going to have in place.
I would really prefer that AIs, and especially the models of OpenAI (a company known for prioritizing convenience over security), not have ready access to equipment that can synthesize viruses or software that can aid virus development.
My instantaneous, knee-jerk reaction (so take it with a grain of salt) is that the Red Queen Bio co-founder’s responses are satisfactory and reassuring. Your concerns are based on an unsourced rumour and speculation, which are always in unlimited supply and don’t warrant a response from a company in every case.
You also don’t seem to be updating rationally on the responses you are receiving, but just doubling down on your original hunch, which by now seems like it’s probably false.
Not all tweets merit a response, so it doesn’t matter whether they continue to answer your questions or not.
Horizon Institute for Public Service is not x-risk-pilled
Someone saw my comment and reached out to say it would be useful for me to make a quick take/post highlighting this: many people in the space have not yet realized that Horizon people are not x-risk-pilled.
(Edit: some people reached out to me to say that they've had different experiences with a minority of Horizon people.)
"Is Horizon x-risk pilled?" feels like a misguided question. The organization doesn't claim to be, and it would also be problematic if the organization were acting in an x-risk-pilled-way but but deceitful about it. I'm certainly confident that some Horizon people/fellows are personally x-risk-pilled, and some are not.
For x-risk-focused donors, I think the more reasonable question is: how much should we expect 'expertise and aptitude around emerging tech policy' (as Horizon interprets it) to correlate with the outcomes those donors care about? One could reasonably conclude that that correlation is low or even negative. But it's also not like there's a viable counterfactual 'X-risk-pilled Institute for Public Service' that would achieve a similar level of success at placing fellows.
(I'd guess you might directionally agree with this and just think the correlation isn't that high, but figured I'd comment to at least add the nuance).
Relatedly, @MichaelDickens shallow-reviewed Horizon just under a year ago—see here.[1] Tl;dr: Michael finds that Horizon’s work isn’t very relevant to x-risk reduction; Michael believes Horizon is net-negative for the world (credence: 55%).
(On the other hand, it was Eth, Perez and Greenblatt—i.e., people whose judgement I respect—who recommended donating to Horizon in that post Mikhail originally commented on. So, I overall feel unsure about what to think.)
I've seen a number of people I respect recommend Horizon, but I've never seen any of them articulate a compelling reason why they like it. For example in that comment you linked in the footnote, I found the response pretty unpersuasive (which is what I said in my follow-up comment, which got no reply). Absence of evidence is evidence of absence, but I have to weigh that against the fact that so many people seem to like Horizon.
A couple weeks ago I tried reaching out to Horizon to see if they could clear things up, but they haven't responded. Although even if they did respond, I made it apparent that the answer I'm looking for is "yes Horizon is x-risk-pilled", and I'm sure they could give that answer even if it's not true.
I do not believe Anthropic as a company has a coherent and defensible view on policy. It is known that they said things they didn't stand behind while hiring people (they claim to have had good internal reasons for changing their minds, but people took jobs there because of impressions Anthropic created and later chose not to uphold). It is known among policy circles that Anthropic's lobbyists are similar to OpenAI's.
From Jack Clark, a billionaire co-founder of Anthropic and its chief of policy, today:
Dario is talking about countries of geniuses in datacenters in the context of competition with China and a 10-25% chance that everyone will literally die, while Jack Clark is basically saying, "But what if we're wrong about betting on short AI timelines? Security measures and pre-deployment testing will be very annoying, and we might regret them. We'll have slower technological progress!"
This is not invalid in isolation, but Anthropic is a company that was built on the idea of not fueling the race.
Do you know what would stop the race? Getting policymakers to clearly understand the threat models that many of Anthropic's employees share.
It's ridiculous and insane that, instead, Anthropic is arguing against regulation because it might slow down technological progress.
What if we’re right about AI timelines? What if we’re wrong? Recently, I’ve been thinking a lot about AI timelines and I find myself wanting to be more forthright as an individual about my beliefs that powerful AI systems are going to arrive soon – likely during this Presidential Administration. But I’m struggling with something – I’m worried about making short-timeline-contingent policy bets.
So far, the things I’ve advocated for are things which are useful in both short and long timeline worlds. Examples here include:
Building out a third-party measurement and evaluation ecosystem.
Encouraging governments to invest in further monitoring of the economy so they have visibility on AI-driven changes.
Advocating for investments in chip manufacturing, electricity generation, and so on.
Pushing on the importance of making deeper investments in securing frontier AI developers.
All of these actions are minimal “no regret” actions that you can do regardless of timelines. Everything I’ve mentioned here is very useful to do if powerful AI arrives in 2030 or 2035 or 2040 – it’s all helpful stuff that either builds institutional capacity to see and deal with technology-driven societal changes, or equips companies with resources to help them build and secure better technology.
But I’m increasingly worried that the “short timeline” AI community might be right – perhaps powerful systems will arrive towards the end of 2026 or in 2027. If that happens we should ask: are the above actions sufficient to deal with the changes we expect to come? The answer is: almost certainly not!
[Section that Mikhail quotes.]
Loudly talking about and perhaps demonstrating specific misuses of AI technology: If you have short timelines you might want to ‘break through’ to policymakers by dramatizing the risks you’re worried about. If you do this you can convince people that certain misuses are imminent and worthy of policymaker attention – but if these risks subsequently don’t materialize, you could seem like you’ve been Chicken Little and claimed the sky is falling when it isn’t – now you’ve desensitized people to future risks. Additionally, there’s a short- and long-timeline risk here where by talking about a specific misuse you might inspire other people in the world to pursue this misuse – this is bound up in broader issues to do with ‘information hazards’.
These are incredibly challenging questions without obvious answers. At the same time, I think people are rightly looking to people like me and the frontier labs to come up with answers here. How we get there is going to be, I believe, by being more transparent and discursive about these issues and honestly acknowledging that this stuff is really hard and we’re aware of the tradeoffs involved. We will have to tackle these issues, but I think it’ll take a larger conversation to come up with sensible answers.
In context, Jack Clark seems to be arguing that he should be considering short-timeline, 'regretful' actions more seriously.
In its RSP, Anthropic committed to define ASL-4 by the time they reach ASL-3.
With Claude 4 released today, they have reached ASL-3. They haven’t yet defined ASL-4.
Turns out, they have quietly walked back on the commitment. The change happened less than two months ago and, to my knowledge, was not announced on LW or other visible places unlike other important changes to the RSP. It’s also not in the changelog on their website; in the description of the relevant update, they say they added a new commitment but don’t mention removing this one.
Anthropic’s behavior is not at all the behavior of a responsible AI company. Trained a new model that reaches ASL-3 before you can define ASL-4? No problem, update the RSP so that you no longer have to, and basically don’t tell anyone. (Did anyone not working for Anthropic know the change happened?)
When their commitments go against their commercial interests, we can’t trust their commitments.
You should not work at Anthropic on AI capabilities.
[This comment is no longer endorsed by its author]
This is false. Our ASL-4 thresholds are clearly specified in the current RSP—see "CBRN-4" and "AI R&D-4". We evaluated Claude Opus 4 for both of these thresholds prior to release and found that the model was not ASL-4. All of these evaluations are detailed in the Claude 4 system card.
The original commitment was (IIRC!) about defining the thresholds, not about mitigations. I didn’t notice ASL-4 when I briefly checked the RSP table of contents earlier today and I trusted the reporting on this from Obsolete. I apologized and retracted the take on LessWrong, but forgot I posted it here as well; want to apologize to everyone here, too, I was wrong.
(Haven’t thought about this really, might be very wrong, but have this thought and seems good to put out there.) I feel like putting 🔸 at the end of social media names might be bad. I’m curious what the strategy was.
The willingness to do this might be anti-correlated with status: it might be a less important part of the identity of more prominent people. (E.g., would you expect Sam Harris, who is a GWWC pledger, to do this?)
I’d guess that ideally, we want people to associate the GWWC pledge with role models (+ know that people similar to them take the pledge, too).
Anti-correlation with status might mean that people will identify the pledge with average though altruistic Twitter users, not with cool people they want to be more like.
You won’t see a lot of e/accs putting the 🔸 in their names. There might be downsides to a group of people being perceived as clearly delineated, with this as an almost political identity; it seems bad to have directionally political markers that might do mind-killing things both to people with the 🔸 and to people who might argue with them.
How do effectiveness estimates change if everyone saved dies in 10 years?
“Saving lives near the precipice”
Has anyone made comparisons of the effectiveness of charities conditional on the world ending in, e.g., 5-15 years?
[I’m highly uncertain about this, and I haven’t done much thinking or research]
For many orgs and interventions, the impact estimates would likely be very different from the default ones made by, e.g., GiveWell. I’d guess the ordering of the most effective non-longtermist charities might change a lot as a result.
It would be interesting to see how the ordering changes once at least some estimates account for the world ending in n years.
Maybe one could start by updating GiveWell’s estimates: e.g., for DALYs, one would need to recalculate the values in GiveWell’s spreadsheets that are derived from distributions which get capped or otherwise changed if the world ends (e.g., life expectancy); for estimates of the relative value of averting deaths at certain ages, one would need to estimate and subtract something representing the fact that the deaths still come at (age + n). The second-order and long-term effects would also be different, but estimating the impact there is probably more time-consuming.
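To make that concrete, here is a minimal sketch of the truncation step (my own toy construction with made-up numbers, not GiveWell’s actual spreadsheet logic):

```python
# Toy model: expected life-years gained from averting a death at `age`,
# conditional on the world ending at year `n` with probability `p_end`.
# (Illustrative only; GiveWell's real models are far more detailed.)
def expected_years_gained(age: float, life_expectancy: float,
                          n: float, p_end: float) -> float:
    years_if_world_continues = max(life_expectancy - age, 0.0)
    # If the world ends at year n, the averted death is only delayed
    # until (age + n), so at most n of those years are actually gained.
    years_if_world_ends = min(years_if_world_continues, n)
    return p_end * years_if_world_ends + (1 - p_end) * years_if_world_continues

# Averting a death at age 5 (life expectancy 70) is worth 65 life-years
# by default, but only 0.5 * 10 + 0.5 * 65 = 37.5 in expectation if the
# world ends in 10 years with probability 0.5:
print(expected_years_gained(age=5, life_expectancy=70, n=10, p_end=0.5))
```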
It seems like a potentially important question since many people have short AGI timelines in mind. So it might be worthwhile to research that area to give people the ability to weigh different estimates of charities’ impacts by their probabilities of an existential catastrophe.
Please let me know if someone already has worked this out or is working on this or if there’s some reason not to talk about this kind of thing, or if I’m wrong about something.
I think this could be an interesting avenue to explore. One very basic way to (very roughly) do this is to model p(doom) effectively as a discount rate. This could be an additional user input on GiveWell's spreadsheets.
So, for example, if your p(doom) is 20% in 20 years, then you could increase the discount rate by roughly 1% per year.
[Technically this will be somewhat off, since (I'm guessing) most people's p(doom) doesn't increase at a constant rate in the way a fixed discount rate does.]
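For what it's worth, the exact conversion is easy to compute under the constant-rate assumption; a small sketch (the function name is my own):

```python
# Find the constant annual discount rate r that is equivalent to a
# cumulative p(doom) over a horizon: solve (1 - r) ** years == 1 - p_doom.
def equivalent_annual_rate(p_doom: float, years: float) -> float:
    return 1.0 - (1.0 - p_doom) ** (1.0 / years)

# A 20% p(doom) over 20 years works out to about 1.1% per year,
# close to the "roughly 1%" rule of thumb above:
print(equivalent_annual_rate(0.20, 20))  # ~0.0111
```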
I think discounting QALYs/DALYs due to the probability of doom makes sense if you want a better estimate of QALYs/DALYs; but it doesn’t help with estimating the relative effectiveness of charities and doesn’t help to allocate the funding better.
(It would be nice to input the distribution of the world ending in the next n years and get the discounted values. But it’s the relative cost of ways to save a life that matters; we can’t save everyone, so we want to save the most lives and reduce suffering the most, and answering the question of how to do that means understanding what our actions lead to, so we can compare our options. Knowing how many people you’re saving is instrumental to saving the most people from the dragon. If it costs at least $15,000 to save a life, you don’t stop saving lives because that’s too much; human life is much more valuable. If we succeed, you can imagine spending stars on saving a single life. And if we don’t, we’d still like to reduce suffering the most and let as many people as we can live for as long as humanity lives; for that, we need estimates of the relative value of different interventions conditional on the world ending in n years with some probability.)
Is there a write up on why the “abundance and growth” cause area is an actually relatively efficient way to spend money (instead of a way for OpenPhil to be(come) friends with everyone who’s into abundance & growth)? (These are good things to work on, but seem many orders of magnitude worse than other ways to spend money.)
(The cited $14.4 of “social return” per $1 in the US seems incredibly unlikely to be comparable to the best GiveWell interventions or even GiveDirectly.)
Prior discussion here, especially a long comment by Alexander Berger. I copied over one notable quote from that below.
I'm not aware of any convincing public justification for spending monies in this area as a better choice than spending in traditional cause areas, but I also don't see evidence that abundance and growth is trying to compete with traditional EA cause areas for funding from the broader public.
I’ve been meaning to write a longer post about my concerns with this cause area, including the high levels of political risk it exposes the EA movement to, and why we should be wary of that post-FTX. For example, I think it was unwise to sponsor a conference which invited a guy championing ‘deportation abundance’.
And that’s not even the most controversial conference they sponsored this year (the author here has already formed an association with effective altruism, also thankfully didn’t notice who funded the conference).(I will get to the rest of Yarrow’s comment later but this was a bad memory of mine; I had read that Open Philanthropy were going to fund Abundance festival well before it happened, and assumed they had funded WelcomeFest due to sharing speakers and the abundance ideology).(I don’t have this critique fully formed enough to share it on the forum in much more detail)
Did Good Ventures or
Open PhilanthropyCoefficient Giving sponsor WelcomeFest? What was the other conference you're referring to?I did a brief search, and I couldn't find evidence of this. Are you sure you're getting that right? I don't know what other conference you're referring to, so I couldn't check that.
I also skimmed the list of grants here. I don't recognize most of the names, but nothing jumped out to me as looking like a conference.
[Edited on Nov. 20, 2025 at 3:50 AM Eastern to add:
To save the reader the suspense (or the effort of scrolling down), Coefficient Giving did not sponsor WelcomeFest, but did sponsor another conference, Abundance 2025, which, to me, appears harmless and inoffensive, inasmuch as anything contentious in American politics today can be.
Some invitees may have some harmful or offensive views, but that will be true of any U.S. conference about politics or policy where a diversity of viewpoints representative of the country are allowed.]
(Disclaimer that I'm Canadian, so you may feel free to discount or downweight my opinions on U.S. politics as you like. Canada is in an unusual situation with regard to the U.S., where everything in U.S. politics casts a long shadow over Canada, so Canadians are unusually keyed into events in U.S. politics.)[1] The unfortunate thing about U.S. politics, especially now, is that it's an ugly, messy business that involves doing deals and forming coalitions with people you'd rather not associate with, in a ideal world where you had the freedom to choose that kind of thing. Democrats have to do deals with Republicans. Democrats have to build a coalition strong enough to resist authoritarianism, illiberalism, and democratic backsliding that includes people far apart from each other on the political spectrum, who have meaningful, substantive, and sometimes bitter disagreements, who in many cases have serious, legitimate grievances with each other. It's unfortunate.
And, it should go without saying, to win elections going forward, Democrats have to win the votes of people who voted for Trump.
I think it's completely legitimate to level this sort of critique against Manifold's Manifest conference. First off, that's a conference mainly for the Bay Area rationalist community and somewhat for the Bay Area EA community, and not a conference about U.S. national politics. So, it's not about coalition building, winning over Republican voters, doing deals with Republican lawmakers, or anything like that.
Second, and more importantly, it's an entirely different matter to want no association with someone like Curtis Yarvin (who threw an afterparty for the Manifest conference). Yarvin says things I think the median Republican voter would find repugnant and crazy. I can't imagine the median Republican would have anything but rage or incredulity for the idea that America should become a "neo-monarchy". Yes, Republican voters have been surprisingly tolerant of Trump's illiberalism, and, yes, the Republican Party has both a recent problem with and a long history of racism, but people like Yarvin are still on the margins of the party, not at the median.
I think if
Open PhilanthropyCoefficient Giving or Good Ventures were funding something super controversial and alarming to a lot of people, like, I don't know, research into genetically engineering babies with enhanced abilities, then it would be incumbent on the effective altruist community to give some kind of response to that. In that hypothetical, it would be important to clarify to the public that the community is a separate entity from Dustin Moskovitz's and Cari Tuna's organizations, and to clarify that this community doesn't decide and can't control what they fund. However, that's not what is happening here.Coefficient Giving's work in this area is split into two parts, housing policy reform and metascience (or "innovation policy", as they put it, but I prefer metascience). Housing policy reform is a popular, liberal, centre-left, mainstream idea in U.S. politics. This summer, the California State Assembly passed two bills that enact exactly the sort of housing policy reform that Coefficient Giving is trying to support. These bills were popular among California voters. 74% of voters expressed support for the bills in a poll, with 14% against and 11% unsure. Governor Gavin Newsom, who played a key role in the passage of the housing policy reform bills, has a 54% approval rating among Californians, compared to a 26% approval rating for Trump.
You can agree or disagree with housing policy reform, but it's not a reputational risk for Coefficient Giving or for EA. It's popular. People like it. People like the politicians who champion it. And people especially like the results, which is increased housing affordability.[2]
What about the other half of Coefficient Giving's "Abundance & Growth" focus area, metascience? I can't imagine how metascience would pose reputational risks for anyone. Currently, metascience is not a partisan or polarized issue, and I pray it stays that way. The core idea of metascience is doing science on science: running experiments on different ways of doing science, particularly in terms of how research funding is allocated. Different institutions have different models for funding science. Compare, say, the NSF with DARPA. Nobody is saying the NSF should become like DARPA. What they are saying is that there should be experimentation with different funding models to find out what's most effective.
Here's a quote from Ezra Klein and Derek Thompson's book Abundance which explains just one of the reasons why proponents of metascience think there is probably room for improvement:
Bernie Sanders was recently asked about abundance liberalism in an interview with the New York Times. I think Sanders intended his response to be dismissive or critical, but he actually ended up acknowledging that Klein and Thompson are correct about their core argument. Sanders said:
I take that as a ringing endorsement from Bernie Sanders for abundance liberalism. That's actually one of the strongest endorsements of the Abundance thesis I've heard from any politician, possibly the strongest. Sanders is saying: what Klein and Thompson are arguing is so obviously correct, it's common sense.
It was intended as a criticism, I think, but Sanders was essentially saying: you couldn't be more wrong if you don't see the truth in Klein and Thompson's thesis about inefficient bureaucracy. If you don't realize this is a real, horrendous problem in government, well, clearly, you've never been a mayor or a governor.
Sanders is of course correct that the idea of good government, of housing affordability, of metascience, of public infrastructure like high-speed rail built on budget and on time (by in-house, government-employed engineers, rather than private contractors), etc.[3] is not a full political ideology. And abundance liberalism is not supposed to be a full political ideology. It's a set of ideas that is supposed to fit in within the context of American liberalism. A complement to other ideas, not a replacement.
Some people have levelled the critique at Klein and Thompson: but economic populist policies are more popular with voters in polls than abundance policies. Klein and Thompson's response: why not do both? They're compatible, and politicians should do what their voters want them to do. For example, there's no reason a city or a state can't make it much easier to build housing, both affordable housing and market-rate, and also increase the funding it puts toward affordable housing, or mandate housing developers to build a certain ratio of affordable housing to market-rate housing — as long as you make it easier for them to build housing in the first place. (Ezra Klein has specifically endorsed this idea.)
I think, as with many big ideas, abundance liberalism is a ball that many different people, sometimes with quite different political orientations from each other, want to take and run with in their own direction. Bernie Sanders' or Zohran Mamdani's version of abundance might take a different shape than, say, for a moderate Democratic governor of a purple state. That's normal. That's politics. (It's not perfect or ideal, but it's the world we live in, and the one we've got to work with.)
I'm not particularly bothered if conservatives like the one you quoted want to "troll the libs" by misapplying the term "abundance" to things like deportations — I mean, it annoys me, but it doesn't make me think abundance liberalism is a bad idea. Internet trolls always try to twist everything good and ruin it. (This is part of why I think Twitter is a waste of time, there's just so much deliberate provocation and trying to be edgy or attention-grabbing.) I don't know what conference you were referring to that he was invited to, [edit: it was Abundance 2025] but he works for a conservative policy think tank, and this gets back to my original point that policy conferences or political conferences will probably have to include people from across the political spectrum, from both major U.S. parties, like it or not.[4]
Abundance liberalism can, in theory, be taken in a direction that people like Ezra Klein and Derek Thompson, who coined the term, wouldn't like and would never endorse. But so what? Anything could, and people try to do that with almost everything. It's on us to be mindful and discerning. If we throw out every good idea in the world the second somebody tries to do something bad with it, we'll have no good ideas. I don't buy the idea that Coefficient Giving's association with abundance liberalism is a reputational risk for EA because a) it's popular (not just with voters, but with Democratic politicians from Gavin Newsom to Zohran Mamdani, and arguably even Bernie Sanders agrees with it in his own begrudging way), b) it's a good idea (e.g. look at measures of housing affordability in places that have reduced bureaucracy and made it easier to build),[2] and c) just because some people want to take it in a bad direction or tarnish its good name doesn't mean they'll succeed — they probably won't.
You don't have to agree that it's a good idea. You don't have to agree that it's as popular as I'm making out — although I'd invite you to look at the polling for the California housing bills. But I really don't see a plausible way this could be a reputational risk for EA. It's politics, and, yeah, politics is controversial, but this is very mainstream, acceptable politics, getting funded by a large philanthropic organization that the EA community doesn't control, which is currently in the process of broadening its donor base and its focus areas beyond effective altruism or what the EA community would choose to prioritize. What's the big whoop?
If you want to know my political orientation, I'm LGBT, I voted for the New Democratic Party (NDP) in the most recent Canadian federal election, I enjoyed the economist Thomas Piketty's book Capital in the Twenty-First Century, and I'm a big fan of Ezra Klein, so whatever that tells you.
In Minneapolis, Minnesota: "Using a synthetic control approach we find that the reform lowered housing cost growth in the five years following implementation: home prices were 16% to 34% lower, while rents were 17.5% to 34% lower relative to a counterfactual Minneapolis constructed from similar metro areas."
In Austin, Texas: "The median asking rent in Austin dropped 10.7% year over year to $1,420 in March — $379 below its record high."
These are all examples taken from Ezra Klein and Derek Thompson's Abundance book. It's particularly important to note that they advocate for the government of California to employ its own engineers in-house — government employees, not private contractors — to complete its long-languishing high-speed rail project.
This is just one example of several strongly anti-neoliberal stances Klein and Thompson take in the book. Another example is their strong support of government science funding (see the chapter about metascience). A third example is their strong advocacy of industrial policy, particularly around sustainable energy. In addition to these specific anti-neoliberal stances, the book also includes a section explicitly criticizing neoliberalism.
I bring this up because one of the most common critiques of the book I've seen online is that it's "neoliberal". This is why you should read books, rather than read tweets about books from people who haven't read them. I largely believed these criticisms before I read the book and then was furious when, upon reading it, I found out I had been misled by people who didn't read the book.
I don't know if this analogy will help or hurt, but an analogy that makes sense in my head is falling birth rates. Falling birth rates are also a ball that different people of different political persuasions can run with in different directions. From a feminist and welfare state/social democratic perspective, you can see falling birth rates — particularly in conjunction with people saying they want to have kids, but it's too difficult — and think about how the government can better support parents or prospective parents, particularly from the angle of gender equality. Women often say they want to have kids, but are daunted by taking on the additional care work and domestic work of parenting when they already have a career — which might be impacted by having a kid. This can be a concern for men, too, but unequally so, because of the unequal burden of parenting and domestic work that falls on women. What policies could conceivably improve this situation and allow women who want to have kids to do so? This is an incredibly liberal, progressive, social democratic perspective on the issue.
On the other hand, some conservatives have expressed strange ideas about how to address falling birth rates, like trying to make people more religious. Even assuming that people becoming more religious would make them have more kids, I don't know how you make people more religious. I especially don't know how you make them more religious not because God exists and you want them to have a good relationship with him, but because you want them to have more babies. In any case, this is an entirely opposite response to the feminist, pro-government response I outlined above.
Some liberals or people on the left argue that liberals/the left shouldn't even discuss declining birth rates because to do so is to automatically support regressive political responses, like an attempted return to historical levels of religiosity or restrictions on abortion. I think this is incredibly misguided. Ignoring an issue that affects people's lives in a big way, or pretending that issue doesn't exist, is not an acceptable political response. That is a betrayal of the public, of the people, by politicians. That is also the kind of thing that loses politicians elections, and gives power to opposing politicians who have more regressive policy ideas, like banning abortion.
I’ll look at this properly later but just wanted to confirm that I got it wrong about WelcomeFest. I’d read a tweet about Open Philanthropy sponsoring Abundance 2025 around the same time WelcomeFest was happening, and conflated the two due to having similar speakers and an explicit pro-abundance position.
Okay, yes, Open Philanthropy is listed as one of the sponsors of the Abundance 2025 conference that took place in Washington, D.C. in September. Is this a problem for any reason? Was there anything about that conference that was troubling or controversial? What’s the reputational risk, here?
The ‘deportation abundance’ guy, Charles Lehman, was not merely associating abundance with deportations in a stray tweet—he was a speaker on a panel at Abundance 2025. He himself claims not to be associated with the abundance coalition.
(I’m not taking a position here on whether I think Abundance 2025 should have invited speakers it explicitly disagrees with, or whether my impression is that Abundance 2025 endorses or disendorses his views—just correcting you on that specific point)
Yes, I believed you when you said he was invited to a conference related to abundance. I was just saying he doesn’t represent abundance liberalism.
First, he’s a conservative, so he isn’t even a liberal in the first place. Second, you very helpfully linked to that book review where he says Klein and Thompson’s Abundance book is "fundamentally misguided" and that "a ‘politics of abundance’ is an oxymoron".
This confirms what I said above that this guy is just "trolling the libs" by intentionally misusing the word "abundance". This should not be a relevant consideration for whether Coefficient Giving wants to support policy reform related to abundance liberalism. But I think your point is just about sponsoring the conference.

If you have political conferences or policy conferences where you invite conservatives and Republicans, it's going to be pretty much impossible to avoid inviting people who have offensive or problematic views, since that is core to the Republican Party and mainstream American conservatism right now. I don't see how associating with Republicans or conservatives in some way is avoidable if a philanthropic organization like Coefficient Giving wants to be involved in politics or policy. Everyone in politics/policy has to in some way, including Democratic lawmakers. And it doesn't seem like there's any good alternative.
I'm giving a ∆ to this overall, but I should add that conservative AI policy think tanks like FAI are probably overall accelerating the AI race, which should be a worry for both AI x-risk EAs and near-term AI ethicists.
FAIR, as in the Meta AI group? Which FAIR?
Sorry, I don't know where I got that R from.
Okay, thanks, so FAI — the Foundation for American Innovation. What's the relation between FAI and Coefficient Giving? Has Coefficient Giving given grant money to FAI?

Oh, you must just be referring to the fact that FAI "co-hosted" the Abundance 2025 conference. I actually have no idea what the list of "co-hosts" on the website means — there are 15 of them. I have no context for what this means.
Yes.
You disapprove even of those grants related to AI safety?
For me, it's all very theoretical because AI capabilities currently aren't very consequential for good or for ill, and the returns to scaling compute and data seem to be very much in decline. So, I don't buy that either immediate-term, mundane AI safety or near-term AI x-risk is a particularly serious concern.
There are some immediate-term, mundane concerns with how chatbots talk to users with certain kinds of mental health problems, and things of that nature, but these are comparatively small problems in the grand scheme of things. Social media is probably 10x to 1,000x more problematic.
Social media recommendation algorithms are typically based on machine learning and generally fall under the purview of near-term AI ethics.
Uh huh, you got me on a technicality. Let me clarify that I see the social problems associated with social media, including the ML-based recommender systems they use, as far more consequential than the social problems associated with LLM-based chatbots.
The recommender systems are one part of why social media is problematic, but not nearly the whole story.
I think looking at the problems of social media through the lens of "AI safety" would be too limiting and not helpful.
I suspect that part of the theory of impact here might not run through any individual grant item (i.e., liberalized zoning laws leading to economic growth through increased housing construction in some particular city), but rather through a variety of bigger-picture considerations that look something like:
People have wondered for a long time if, in addition to direct work on x-risks, one should consider intermediate "existential risk-factors" like great power war. It seems plausible to me that "trying to make the United States more sane" is a pretty big factor in many valuable goals -- global health & development, existential risk mitigation, flourishing long-term futures, and so forth.
Hm. Interesting. I didn't know this was an Open Philanthropy focus area. Webpage here.
I read the book Abundance by Ezra Klein and Derek Thompson earlier this year and loved it. It's one of my favourite non-fiction books I've read recently. (Since then, other people have taken up the "Abundance" label, but I haven't kept track of who they are, how similar/different their views are to Klein and Thompson's in the book, or whether I agree with them.)
I wouldn't say Open Phil's "Abundance & Growth" focus area is necessarily many orders of magnitude worse than global health/global poverty or conventional global catastrophic risks like pandemics. (Whether you think AGI-based global catastrophic risks are many orders of magnitude more cost-effective to focus on than "Abundance & Growth" depends on disputed assumptions I almost certainly strongly disagree with you about.)
The two parts of the "Abundance & Growth" focus area are currently housing policy reform, i.e. YIMBYism, and innovation policy, which seems closely related to metascience, about which there is a chapter in Klein and Thompson's Abundance book.
Housing policy reform is intrinsically very important. It's also important because of what it means for U.S. politics. Democrats need to get a handle on all aspects of affordability, especially housing affordability. The Trump administration's and Republican Party's scary tilt toward illiberalism and authoritarianism needs strong challengers. Housing affordability in particular and affordability in general is a reason Democrats aren't more popular than they are, and a reason they haven't been able to mount as strong a challenge to Trump's illiberal/authoritarian tactics as I wish they could have so far. Much not only in the U.S. but around the world depends on whether the U.S. stays a full liberal democracy. The United States has dropped considerably in comparative assessments of countries' level of freedom or democracy. This worries me, and although the effects are hard to quantify rigorously, obviously they are huge. USAID was one of the first casualties of Trump's current administration.
Metascience and innovation policy seem highly uncertain, but also extremely worth trying. The metascience chapter in the Abundance book was probably the most exciting. If the speed of progress in science and technology can be significantly increased by policy reform or institutional reform, or by creating new institutions, then the benefits are likewise hard to quantify rigorously but surely must be huge.
So, overall, I think I tentatively support Open Philanthropy getting into these two areas. It, of course, depends on what exactly they're doing, though.
At the beginning of November, I learned about a startup called Red Queen Bio that automates the development of viruses and related lab equipment. They work together with OpenAI, and OpenAI is their lead investor.
On November 13, they publicly announced their launch. On November 15, I saw that and made a tweet about it: Automated virus-producing equipment is insane. Especially if OpenAI, of all companies, has access to it. (The tweet got 1.8k likes and 497k views.)
In the tweet, I said that there is, potentially, literally a startup, funded by and collaborating with OpenAI, with equipment capable of printing arbitrary RNA sequences, potentially including viruses that could infect humans, connected to the internet or managed by AI systems.
I asked whether we trust OpenAI to have access to this kind of equipment, and said that I’m not sure what to hope for here, except government intervention.
The only inaccuracy that was pointed out to me was that I mentioned that they were working on phages, and they denied working on phages specifically.
At the same time, people close to Red Queen Bio publicly confirmed the equipment they’re automating would be capable of producing viruses (saying that this equipment is a normal thing to have in a bio lab and not too expensive).
A few days later, Hannu Rajaniemi, a Red Queen Bio co-founder and fiction author, responded to me in a quote tweet and in comments:
They did not answer any of the explicitly asked questions, which I repeated several times:
It seems pretty bad that this startup is not being transparent about their equipment and the level of possible automation. It’s unclear whether they’re doing gain-of-function research. It’s unclear what security measures they have or are going to have in place.
I would really prefer for AIs, and especially for the models of OpenAI (a company known for prioritizing convenience over security), not to have ready access to equipment that can synthesize viruses or software that can aid virus development.
My instantaneous, knee-jerk reaction (so take it with a grain of salt) is that the Red Queen Bio co-founder’s responses are satisfactory and reassuring. Your concerns are based on an unsourced rumour and speculation, which are always in unlimited supply and don’t warrant a response from a company in every case.
You also don’t seem to be updating rationally on the responses you are receiving, but just doubling down on your original hunch, which by now seems like it’s probably false.
Not all tweets merit a response, so it doesn’t matter whether they continue to answer your questions or not.
Horizon Institute for Public Service is not x-risk-pilled
Someone saw my comment and reached out to say it would be useful for me to make a quick take/post highlighting this: many people in the space have not yet realized that Horizon people are not x-risk-pilled.
(Edit: some people reached out to me to say that they've had different experiences with a minority of Horizon people.)
"Is Horizon x-risk pilled?" feels like a misguided question. The organization doesn't claim to be, and it would also be problematic if the organization were acting in an x-risk-pilled-way but but deceitful about it. I'm certainly confident that some Horizon people/fellows are personally x-risk-pilled, and some are not.
For x-risk-focused donors, I think the more reasonable question is: how much should we expect 'expertise and aptitude around emerging tech policy' (as Horizon interprets it) to correlate with the outcomes those donors care about? One could reasonably conclude that the correlation is low or even negative. But it's also not like there's a viable counterfactual 'X-risk-pilled Institute for Public Service' that would achieve a similar level of success at placing fellows.
(I'd guess you might directionally agree with this and just think the correlation isn't that high, but figured I'd comment to at least add the nuance).
Relatedly, @MichaelDickens shallow-reviewed Horizon just under a year ago—see here.[1] Tl;dr: Michael finds that Horizon’s work isn’t very relevant to x-risk reduction; Michael believes Horizon is net-negative for the world (credence: 55%).
(On the other hand, it was Eth, Perez and Greenblatt—i.e., people whose judgement I respect—who recommended donating to Horizon in that post Mikhail originally commented on. So, I overall feel unsure about what to think.)
See also ensuing discussion here.
I've seen a number of people I respect recommend Horizon, but I've never seen any of them articulate a compelling reason why they like it. For example in that comment you linked in the footnote, I found the response pretty unpersuasive (which is what I said in my follow-up comment, which got no reply). Absence of evidence is evidence of absence, but I have to weigh that against the fact that so many people seem to like Horizon.
A couple weeks ago I tried reaching out to Horizon to see if they could clear things up, but they haven't responded. Although even if they did respond, I made it apparent that the answer I'm looking for is "yes Horizon is x-risk-pilled", and I'm sure they could give that answer even if it's not true.
I do not believe Anthropic as a company has a coherent and defensible view on policy. It is known that they said words they didn't hold while hiring people (they claim to have had good internal reasons for changing their minds, but people went to work for them because of impressions Anthropic created and then chose not to uphold). It is known among policy circles that Anthropic's lobbyists are similar to OpenAI's.
From Jack Clark, a billionaire co-founder of Anthropic and its chief of policy, today:
Dario is talking about countries of geniuses in datacenters in the context of competition with China and a 10-25% chance that everyone will literally die, while Jack Clark is basically saying, "But what if we're wrong about betting on short AI timelines? Security measures and pre-deployment testing will be very annoying, and we might regret them. We'll have slower technological progress!"
This is not invalid in isolation, but Anthropic is a company that was built on the idea of not fueling the race.
Do you know what would stop the race? Getting policymakers to clearly understand the threat models that many of Anthropic's employees share.
It's ridiculous and insane that, instead, Anthropic is arguing against regulation because it might slow down technological progress.
Hi Mikhail, could you clarify what this means? “It is known that they said words they didn't hold while hiring people”
I think the context of the Jack Clark quote matters:
In context, Jack Clark seems to be arguing that he should be considering short-timeline 'regretful actions' more seriously.
In its RSP, Anthropic committed to defining ASL-4 by the time they reach ASL-3.
With Claude 4 released today, they have reached ASL-3. They haven’t yet defined ASL-4.
Turns out, they have quietly walked back on the commitment. The change happened less than two months ago and, to my knowledge, was not announced on LW or in other visible places, unlike other important changes to the RSP. It’s also not in the changelog on their website; in the description of the relevant update, they say they added a new commitment but don’t mention removing this one.
Anthropic’s behavior is not at all the behavior of a responsible AI company. Trained a new model that reaches ASL-3 before you can define ASL-4? No problem, update the RSP so that you no longer have to, and basically don’t tell anyone. (Did anyone not working for Anthropic know the change happened?)
When their commitments go against their commercial interests, we can’t trust their commitments.
You should not work at Anthropic on AI capabilities.
This is false. Our ASL-4 thresholds are clearly specified in the current RSP—see "CBRN-4" and "AI R&D-4". We evaluated Claude Opus 4 for both of these thresholds prior to release and found that the model was not ASL-4. All of these evaluations are detailed in the Claude 4 system card.
The thresholds are pretty meaningless without at least a high-level standard, no?
The RSP specifies that CBRN-4 and AI R&D-5 both require ASL-4 security. Where is ASL-4 itself defined?
The original commitment was (IIRC!) about defining the thresholds, not about mitigations. I didn’t notice ASL-4 when I briefly checked the RSP table of contents earlier today and I trusted the reporting on this from Obsolete. I apologized and retracted the take on LessWrong, but forgot I posted it here as well; want to apologize to everyone here, too, I was wrong.
(Haven’t thought about this really, might be very wrong, but have this thought and seems good to put out there.) I feel like putting 🔸 at the end of social media names might be bad. I’m curious what the strategy was.
The willingness to do this might be anti-correlated with status. The pledge might be a less important part of the identity of more prominent people. (E.g., would you expect Sam Harris, who is a GWWC pledger, to do this?)
I’d guess that ideally, we want people to associate the GWWC pledge with role models (+ know that people similar to them take the pledge, too).
Anti-correlation with status might mean that people will identify the pledge with average though altruistic Twitter users, not with cool people they want to be more like.
You won’t see a lot of e/accs putting the 🔸 in their names. There might be downsides to a group of people being perceived as clearly delineated and as having an almost political identity; it seems bad to have directionally political markers that might do mind-killing things both to people with 🔸 and to people who might argue with them.
How do effectiveness estimates change if everyone saved dies in 10 years?
“Saving lives near the precipice”
Has anyone made comparisons of the effectiveness of charities conditional on the world ending in, e.g., 5-15 years?
[I’m highly uncertain about this, and I haven’t done much thinking or research]
For many orgs and interventions, the impact estimates could be very different from the default ones made by, e.g., GiveWell. I’d guess the ranking of the most effective non-longtermist charities might change a lot as a result.
It would be interesting to see how that ranking changes once at least some estimates account for the world ending in n years.
Maybe one could start with updating GiveWell’s estimates: e.g., for DALYs, one would need to recalculate the values in GiveWell’s spreadsheets derived from the distributions that are capped or changed as a result of the world ending (e.g., life expectancy); for estimates of relative values of averting deaths at certain ages, one would need to estimate and subtract something representing that the deaths still come at (age+n). The second-order and long-term effects would also be different, but it’s possibly more time-consuming to estimate the impact there.
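To make that concrete, here's a minimal sketch of the capping idea. Everything in it (the function, the flat 70-year life expectancy, the ages, the 10-year horizon) is an illustrative assumption, not taken from GiveWell's actual spreadsheets:

```python
# Minimal sketch (illustrative assumptions, not GiveWell's actual model):
# the value of averting a death, with remaining life capped at n years
# because the world is assumed to end then.

def dalys_averted(age, life_expectancy=70, years_until_end=None):
    """DALYs averted by preventing a death at `age`.

    If `years_until_end` is set, the beneficiary only lives until the
    hypothesized end of the world, so the averted death effectively
    still comes at (age + years_until_end).
    """
    remaining = max(life_expectancy - age, 0)
    if years_until_end is not None:
        remaining = min(remaining, years_until_end)
    return remaining

# Business as usual: averting an infant death ~70 DALYs, a death at 50 ~20.
print(dalys_averted(0), dalys_averted(50))    # 70 20

# Conditional on the world ending in 10 years, both are worth 10 DALYs:
print(dalys_averted(0, years_until_end=10),
      dalys_averted(50, years_until_end=10))  # 10 10
```

Note how the cap compresses the usual age gradient: interventions whose value mostly comes from decades of future life lose relative value fastest.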
It seems like a potentially important question since many people have short AGI timelines in mind. So it might be worthwhile to research that area to give people the ability to weigh different estimates of charities’ impacts by their probabilities of an existential catastrophe.
Please let me know if someone already has worked this out or is working on this or if there’s some reason not to talk about this kind of thing, or if I’m wrong about something.
I think this could be an interesting avenue to explore. One very basic way to (very roughly) do this is to model p(doom) effectively as a discount rate. This could be an additional user input on GiveWell's spreadsheets.
So for example, if your p(doom) is 20% in 20 years, then you could increase the discount rate by roughly 1% per year.
[Technically this will be somewhat off since (I'm guessing) most people's p(doom) doesn't increase at a constant rate, in the way a fixed discount rate does.]
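For what it's worth, the arithmetic checks out under a constant-hazard assumption. A minimal sketch (assuming survival declines geometrically, which, per the caveat above, most people's actual p(doom) doesn't):

```python
# Convert a cumulative p(doom) over some horizon into the constant
# annual discount-rate bump described above (constant-hazard assumption).

def annual_hazard(p_doom, years):
    """Constant yearly probability of doom implied by a cumulative p_doom."""
    return 1 - (1 - p_doom) ** (1 / years)

print(annual_hazard(0.20, 20))  # ~0.0111, i.e. roughly +1% per year
```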
I think discounting QALYs/DALYs due to the probability of doom makes sense if you want a better estimate of QALYs/DALYs; but it doesn’t help with estimating the relative effectiveness of charities and doesn’t help to allocate the funding better.
(It would be nice to input the distribution of the world ending in the next n years and get the discounted values. But it’s the relative cost of ways to save a life that matters; we can’t save everyone, so we want to save the most lives and reduce suffering the most, and figuring out how to do that means we need to understand what our actions lead to so we can compare our options. Knowing how many people you’re saving is instrumental to saving the most people from the dragon. If it costs at least $15,000 to save a life, you don’t stop saving lives because that’s too much; human life is much more valuable. If we succeed, you can imagine spending stars on saving a single life. And if we don’t, we’d still like to reduce the suffering the most and let as many people as we can live for as long as humanity lives; for that, we need estimates of the relative value of different interventions conditional on the world ending in n years with some probability.)
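A toy comparison may help illustrate why the conditional estimates matter for rankings, not just totals. The two interventions, their costs per life saved, and the beneficiary ages below are all invented for illustration:

```python
# Toy example (all numbers invented): intervention A saves infants,
# intervention B saves 60-year-olds. Under business as usual, A wins on
# DALYs per dollar; conditional on the world ending in 10 years, B wins.

def dalys(age, life_expectancy=70, horizon=None):
    remaining = max(life_expectancy - age, 0)
    return min(remaining, horizon) if horizon is not None else remaining

cost_a, age_a = 5000, 0    # A: $5,000 per infant life saved (hypothetical)
cost_b, age_b = 1500, 60   # B: $1,500 per adult life saved (hypothetical)

for horizon in (None, 10):
    label = "business as usual" if horizon is None else "world ends in 10y"
    ratio_a = dalys(age_a, horizon=horizon) / cost_a
    ratio_b = dalys(age_b, horizon=horizon) / cost_b
    best = "A" if ratio_a > ratio_b else "B"
    print(f"{label}: A={ratio_a:.4f}, B={ratio_b:.4f} DALYs/$ -> prefer {best}")
```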
Wrote a post: https://forum.effectivealtruism.org/posts/hz2Q8GgZ28YKLazGb/saving-lives-near-the-precipice-we-re-doing-it-wrong