Thank you so much for flagging this! Very much agreed that this is an important correction; the update that the US doesn't dominate biosecurity spending in this way is significant and, I think, a welcome one. I will certainly amend.

Thanks for the tag @OscarD, this is awesome! I'd basically hoped to build something like this, but then additionally to convert incidence at detection into some measure of expected value based on the detection architecture (e.g. as economic gains or QALYs). That was way too ambitious for me at the time haha, but I am still thinking about it.
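For what it's worth, the shape of the conversion I had in mind looks roughly like the minimal Python sketch below. To be clear, this is an illustration only: every name, parameter, and number is hypothetical, and it leans on crude proportionality assumptions (deaths scale linearly with incidence at detection, and the detection architecture only changes when you detect, not how you respond).

```python
def expected_qalys_averted(
    incidence_at_detection: float,    # cumulative infections when this system first flags the outbreak
    baseline_incidence: float,        # cumulative infections at detection under the status-quo system
    growth_factor_to_control: float,  # further outbreak growth after detection, assumed equal for both systems
    infection_fatality_rate: float,   # IFR of the pathogen
    qalys_lost_per_death: float,      # average QALYs lost per fatality
) -> float:
    # Infections averted by flagging the outbreak earlier, assuming (crudely)
    # that post-detection growth is the same under either detection system.
    infections_averted = (baseline_incidence - incidence_at_detection) * growth_factor_to_control
    deaths_averted = infections_averted * infection_fatality_rate
    return deaths_averted * qalys_lost_per_death

# Hypothetical example: detection at 1,000 cumulative infections rather than
# 50,000, 10x further growth before control, 1% IFR, 30 QALYs lost per death.
print(expected_qalys_averted(1_000, 50_000, 10, 0.01, 30))  # -> 147000.0
```

The hard part, of course, is that almost every input there is itself a function of the detection architecture and the threat model, which is why I found it so much harder than just estimating incidence at detection.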

I definitely want to play with this in much more detail and look into how it's coded; I'll try to get back with some hopefully helpful feedback here.

Yeah, great question! Lots of these categories were things I thought about but ultimately had difficulty getting good estimates for, so I don't have good answers here. But I can say a little more about what my impressions were for each.

1. AI misuse is tough because I think lots of the work here is bucketed (usually implicitly) into AI safety spending, which I wasn't looking at. That said, I struggled to find work explicitly focused on AI-bio that wasn't EA (usually OP) funded (e.g. RAND, CLTR), so I think I capture a lot of this in my "GCBR Priority Areas" bucket. At least as far as work that identifies as tackling this problem goes, it's some percentage of that bucket (i.e. probably in the $1m-$10m order of magnitude, could be a fair bit less), but I don't think this reflects how much is going towards biorisk from AI in total, which is much harder to get data on.

2. Yeah, synthesis screening I definitely bucket implicitly into my "GCBR Priority Areas" category. I didn't attempt to break these down further because it'd be so much more work, though here are some thoughts:

Synthesis screening is hard to get data on because I couldn't find out how the International Gene Synthesis Consortium (IGSC) is funded, and I think historically it represents most of the biosecurity and pandemic prevention work here. My best guess (at barely more than 50% confidence) is that the member firms directly bear the costs. If that's true, then the philanthropically funded work I could find outside this space is NTI | bio / IBBIS and SecureDNA / MIT Media Lab. NTI spent ~$4.8m on their bio programs and has received OP funding; MIT Media Lab have received a number of OP grants, and SecureDNA list OP as their only philanthropic collaborator. This means the spend per year is probably in the $1m-$10m order of magnitude, most of which comes from EA. Though yes, the IGSC remains a big uncertainty of mine.

3. I think breaking down disease surveillance into pathogen-agnostic early detection, 'broad-spectrum', pathogen-specific, and GCBR-specific work is pretty tough, mostly because lots of the funded work is the same across these (e.g. improving epidemiological modelling capabilities; improving bioinformatics databases; developing sequencing technology), and there is a lot of work on all of the above. Certainly, when it comes to funders or projects identifying as being focused on GCBRs (including using the 'stealth' vs 'wildfire' terminology), I could not find anything that wasn't affiliated with EA, which places an upper bound at something like 4-5% of the early detection spend. But for the reasons I've stated, I think this is a very poor estimate of how much money actually contributes towards reducing GCBRs, and I have no good numbers.

4. Outside of government funding and government-funded sources like universities, I could find no non-EA funding on P4E/resilience work. My impression is that EA represents most of the far-UVC work and thinking about pandemic-proof PPE (given the clearly attributable work at EA-aligned/EA-funded orgs like SecureBio, Gryphon, and Amodo, and little work outside them). But I think this is much shakier when it comes to resilience and backup plans, which would fall under much more general resilience work. On that I'm just way less sure.

5. Medical countermeasures are very similar to disease surveillance: the "therapeutics" category ideally captures these (including the development of platform technologies at places like CEPI), but delineating GCBR-oriented countermeasures from the rest was both pretty difficult and, I think, ultimately unhelpful. Lots of pathogen-agnostic work here isn't even done for pandemic-related reasons (e.g. building tools against neglected tropical diseases). Work that identifies as being focused on GCBRs, however, is essentially all EA. So whilst I think we can apply the same heuristic as for GCBR-specific early detection (at most 4-5% of the funding here), I'd be even less confident that these estimates represent the actual contribution towards reducing GCBRs.

Hopefully this is useful!

I'm not hugely confident, but yes, >80% of GCBR-specific funding within philanthropy coming from EA seems right to me.

It's generally quite hard to find GCBR-specific work outside EA that isn't from policy institutions such as the Nuclear Threat Initiative, the Bipartisan Commission on Biodefense, or the Johns Hopkins Center for Health Security, all of which, as far as I can tell, are recipients of Open Philanthropy funds. Other work here just seems much more EA-aligned (e.g. CSER, CLTR, FLI).

Additionally, it seems likely that even insofar as these institutions care about GCBRs, EA (particularly Open Philanthropy) has been at least somewhat influential in driving this. Certainly, of the foundations that fund the Center for Health Security, Open Philanthropy is the only one with an express mandate for GCBRs. Only the Bill & Melinda Gates Foundation and the Rockefeller Foundation are larger funders than OP here, though I couldn't find much evidence of Gates money going directly to CHS, and the Rockefeller funding I could find supported work on the COVID-19 response. Most of the other foundations are just smaller. The Bipartisan Commission on Biodefense's most GCBR-relevant work (particularly the Apollo Program for Biodefense and other work in 2021) was produced around the same time they received a series of grants from Open Philanthropy.

I'd imagine most GCBR-specific funding overall comes from governments (the US government in particular). But yes, as far as I can tell, EA probably represents >80% of philanthropic funding towards GCBRs.

Thank you! Yes I've been in touch with Christian Ruhl :)

Appreciate the kind words!

I think I'd push back somewhat, although my wording was definitely sloppy.

I think it's worth establishing my frame here, because I reckon I'm not using neglectedness in the more conventional sense of "how much biorisk reduction is left on the table?". I generally think it's quite hard to make judgements about neglectedness this way in bio, for two main reasons: first, many interventions in bio are only applicable to a particular subset of threat models and pathogen characteristics, and can be hugely sensitive to geographic/local context amongst other contingencies. Second, there are no great models (that I could find!) of the distribution of threats across threat models and pathogen characteristics. So when I talk about neglectedness, I mean something more like "how many plausible combinations of threat models, pathogen characteristics, and other contingencies are being missed".

"My view is that many players and funding sources means that fewer important funding opportunities will be missed"

So I think this could turn out to be right empirically, but it's not trivially true in this instance if most funders centre on a narrow subset of scenarios (e.g. naturally emergent pandemics; respiratory transmission; flu-like illness). EAs focus on quite specific scenarios (e.g. genetically engineered pandemics; respiratory transmission; high case-fatality rates), but this still leaves a number of possibilities that could contribute towards reducing threats from GCBRs and that other funders could be interested in: for example, smallpox; antimicrobial-resistant strains of various diseases; or even genetically engineered diseases that might not directly be GCBRs. A key assumption here is that work on these can be doubly relevant or have spillover effects even for models that are more GCBR-specific. Hence I conclude that many opportunities "could" be missed: the failure mode looks like a bioinformatics company working on the attribution of genetically engineered pathogens neglecting funding from the much better-funded antimicrobial resistance landscape, even though there's a lot of overlap in methods and the extra resources could drive forward work on both.

"I was struck by how little philanthropy has been directed towards tech development for biosecurity, mitigating GCBRs, and policy advocacy for a range of topics from regulating dual-use research of concern (DURC) to mitigating risks from bioweapons."

This is definitely poor wording, poor grammar, and an important omission on my part, haha. What I want to stress, though, is "tech development for a range of topics" / "mitigating GCBRs for a range of topics": by "for a range of topics" I mean a particular subset of misuse-based concerns to which vaccine R&D, health system readiness, and pathogenesis research are less applicable. A naive example would be "wildfire" cases with unforeseen transmissibility and case fatality, such that countermeasures or general health systems strengthening would probably be less effective than focusing on prevention.

My surprise ultimately comes from the fact that I think people both in and outside EA (admittedly noting I don't have lots of experience in either) do internalise the sheer heterogeneity here. I don't think levels of funding/concern have ever tracked especially well which threats we should be most concerned about. But in turn, I guess I was taken aback to still see these gaps (and hopefully opportunities!) on both ends.

Really appreciate you picking this up! Edit access would be great - I've sent a request and will then incorporate the necessary fixes.