Hi there!
I currently lead the biosecurity grantmaking program at Effective Giving. Before that, I worked in various research roles focused on pandemic preparedness and biosecurity.
Joshua TM
Thanks for sharing this and congrats on a very longstanding research effort!
Are you able to provide more details on the backgrounds of the “biorisk experts”? For example, the kinds of organisations they work for, their seniority (e.g. years of professional experience), or their prior engagement with global catastrophic biological risks specifically (as opposed to pandemic preparedness or biorisk management more broadly).
I ask because I’m wondering about potential selection effects with respect to level of concern about catastrophe/extinction from biology. Without knowing your sampling method, I could imagine that you disproportionately reached people who worry more about catastrophic and extinction risks than the typical “biorisk expert” does.
Hi!
This is Joshua; I work on the biosecurity program at the philanthropic advisor Effective Giving. In 2021, we recommended two grants supporting UNIDIR's work on biological risks, including this report on stakeholder perspectives on the Biological Weapons Convention, which you might find interesting.
To be clear, I definitely think there's a spectrum of attitudes towards security, centralisation, and other features of hazard databases, so I think you're pointing to an important area of substantive disagreement!
Yes, benchtop devices have significant ramifications!
But that's consistent with my comment, which was just meant to emphasise that I don't read Diggans and Leproust as advocating for a fully “public” hazard database, as slg's comment could be read to imply.
Hi slg — great point about synthesis screening being a very concrete example where approaches to security can make a big difference.
One quibble I have: Your hyperlink seems to suggest that Diggans and Leproust advocate for a fully “public” database of annotated hazard sequences. But I think it’s worth noting that although they do use the phrase “publicly available” a couple of times, they also pretty explicitly discuss the idea of having such a database be accessible to synthesis providers only, which is a much smaller set and seems to carry significantly lower risks for misuse than truly public access. Relevant quote:
“Sustained funding and commitment will be required to build and maintain a database of risk-associated sequences, their known mechanisms of pathogenicity and the biological contexts in which these mechanisms can cause harm. This database (or at a minimum a screening capability making use of this database), to have maximum impact on global DNA synthesis screening, must be available to both domestic and international providers.”
Also worth noting the parenthetical about having providers use a screening mechanism with access to the database, without having direct access to it themselves, which seems like a nod to some of the features of, e.g., SecureDNA’s approach.
Hi Nadia, thanks for writing this post! It's a thorny topic, and I think people are doing the field a real service when they take the time to write about problems as they see them. I particularly appreciate that you wrote candidly about challenges involving influential funders.
Infohazards truly are a wicked problem, with lots of very compelling arguments pushing in different directions (hence the lack of consensus you alluded to), and it's frustratingly difficult to devise sound solutions. But I think infohazards are just one of many factors contributing to the overall opacity in the field that causes some of these epistemic problems, and I'm a bit more hopeful about other ways of reducing that opacity. For example, if the field had more open discussions about things that are not very infohazardous (e.g., comparing strategies for pursuing well-defined goals, such as maintaining the norm against biological weapons), I suspect it would mitigate the consequences of not being able to discuss certain topics (e.g. detailed threat models) openly. Of course, that just raises the question of what is and isn't an infohazard (which itself may be infohazardous...), but I do think there are some areas where we could pretty safely move in the direction of more transparency.
I can't speak for other organisations, but I think my organisation (Effective Giving, where I lead the biosecurity grantmaking program) could do a lot to be more transparent just by overcoming obstacles to transparency that are unrelated to infohazards. These include the (time) costs of disseminating information; concerns about how transparency might affect certain key relationships, e.g. with prospective donors whom we might advise in the future; and public relations considerations more generally. These are all very real obstacles, but they generally seem more tractable than the infohazard issue.
I think we (again, just speaking for Effective Giving's biosecurity program) have a long way to go, and I'd personally be quite disappointed if we didn't manage to move in the direction of sharing more of our work during my tenure. This post was a good reminder of that, so thanks again for writing it!
Thanks for doing this survey and sharing the results, super interesting!
Regarding
maybe partly because people who have inside views were incentivised to respond, because it’s cool to say you have inside views or something
Yes, I definitely think there's a lot of potential for social desirability bias here! And I think this can happen even if the responses are anonymous, as people might want to avoid the cognitive dissonance that comes with admitting to "not having an inside view." One might even go so far as to frame the results as "Who do people claim to defer to?"
Hi Elika,
Thanks for writing this, great stuff!
I would probably frame some things a bit differently (more below), but I think you raise some solid points, and I definitely support the general call for nuanced discussion.
I have a personal anecdote that really speaks to your "do your homework" point. When doing research for our 2021 article on dual-use risks (thanks for referencing it!), I was really excited about our argument for implementing "dual-use evaluation throughout the research life cycle, including the conception, funding, conduct, and dissemination of research." The idea that effective dual-use oversight requires intervention at multiple points felt solid, and some feedback we'd gotten on presentations of our work gave me the impression that this was a fairly novel framing.
It totally wasn't! The NSABB called for this kind of oversight throughout the research cycle (at least) as early as 2007,[1] and, in hindsight, it was pretty naïve of me to think that this simple idea was new. In general, it's been a pretty humbling experience to read more of the literature and realise just how many of the arguments I thought were novel, based on their appearance in recent op-eds and tweets, can be found in discussions from 10, 20, or even 50 years ago.
Alright, one element of your post that I would've framed differently: You put a lot of emphasis on the instrumental benefits of nuanced discussion in the form of building trust and credibility, but I hope readers of your post also realise the intrinsic value of being more nuanced.
E.g., from the summary:
"[what you say] does impact how much you are trusted, whether or not you are invited back to the conversation, and thus the potential to make an impact"
And the very last sentence:
"Always make sure ‘you’re invited back to the table’."
This is a great point, and I really do think it's possible to burn bridges and lose respect by coming across as ignorant or inflammatory. But getting the nuanced details wrong is also a recipe for getting solutions wrong! As you say, proper risk-benefit analysis for concrete dual-use research is almost always difficult, given that the research in question very often has some plausible upside for pandemic preparedness or health more generally.
And even if you know what specific research to draw red lines around, implementation is riddled with challenges: How do you design rules that won't be obsolete with scientific advances? How do you make criteria that won't restrict research that you didn't intend to restrict? How do you avoid inadvertent attention hazards from highlighting the exact kinds of research that seem the most risky? Let's say you've defined the perfect rules. Who should be empowered to make the tough judgment calls on what to prohibit? If you're limiting access to certain knowledge, who gets to have that access? And so on, and so on.
I do think there's value in strongly advocating for more robust dual-use oversight or lab biosafety, and (barring infohazard concerns) I think op-eds aimed at both policymakers and the general public can be helpful. It's just that I think such advocacy should be more in the tone of "Biosecurity is important, and more work on it is urgently needed" and less "Biosecurity Is Simple, I Would Just Ban All GOF."
Bottom line, I especially like the parts of your post that encourage people to be more nuanced, not just sound more nuanced.
From Casadevall 2015: "In addition to defining the type of research that should elicit heightened concern, the NSABB recommended that research be examined for DURC potential throughout its life span, from experimental conception to final dissemination of the results."
This is a very welcome contribution to a professional field (i.e., the GCBR-focused parts of the pandemic preparedness and biosecurity space) that can often feel opaque and poorly coordinated — sincere thanks to Max and everyone else who helped make it!