I know of many historical bioinfohazard scenarios where efforts to selectively disclose research findings were unsuccessful.
For example, there was a huge controversy in 2011 when researchers made a strain of avian influenza (which is super deadly… to birds) that could spread between ferrets (not birds! concerning!). This was going to be published in Science, and the NSABB (the National Science Advisory Board for Biosecurity, the US government body set up to keep an eye on this sort of thing) recommended that the paper’s methods be redacted, with some scheme to forward the full details to researchers who needed them. After three months (and a fair bit of brouhaha[1]) the NSABB reversed its position and the research was published in full.
My goal here isn’t to argue about whether that final decision was correct[2]. This is just not an example that we can call a success along the dimension of trying to selectively disclose research.
Are there cases where selective disclosure was successful? I know of two, from the book Biosecurity Dilemmas. I quote the book’s citations for these stories below.
Partially redacted publication of a novel botulinum toxin in 2013
In this case, scientists who worked in a public health lab diagnosing infant botulism found a strain that they couldn’t neutralize. From Why Scientists Held Back Details on a Unique Botulinum Toxin[3]:
Scientists have discovered the first new form of botulinum toxin in over 40 years, but they're taking the unusual step of keeping key details about it secret.
…
The researchers published two reports describing their work online in The Journal of Infectious Diseases. The information in those reports is deliberately incomplete, to prevent anyone from using it as the recipe for a potent new bioweapon.
...
Normally, the journal would require that the scientists disclose the genetic sequences needed to make the toxin. In this case, however, the researchers didn't want to do that because of the security risk. The journal's editors ultimately agreed that they could go ahead and publish but withhold the information until new treatments were developed.
The researchers didn’t want to disclose in full, and they succeeded.
While the selective disclosure was successful, it may not have been the right strategy overall: other researchers later found the novel toxin was susceptible to a standard antitoxin (Maslanka et al., 2016). Quoting the discussion of this case in Information Hazards in Biotechnology, a 2018 Risk Analysis paper by Lewis et al.:
At first glance, this is an example of scientists attempting to act responsibly and reduce the overall biorisk posed by botulinum toxin by avoiding an information hazard. A secondary impact of this decision, however, may have increased risk. ... By restricting access to this toxin and the sequence used to create it, the number of research groups able to work on developing appropriate medical countermeasures was also severely restricted. ... When strains of the toxin-producing organism were shared with other labs, their assessment of both the difference in sequence and the efficacy of antitoxins contradicted the earlier findings (Maslanka et al., 2016). This suggests that being overcautious with information hazards can also complicate effective risk assessment.
Non-publication and selective disclosure of a method for barcoding lab strains in 2001
Of course, there could be vast multitudes of life scientists discovering problematic things and then disclosing them very selectively. How would we know?
One example of this that we do know about (or at least… that I know about from Biosecurity Dilemmas) involves researchers deciding not to publish, and instead circulating a whitepaper to some government agencies. (The quote below includes a bonus “let’s just toss that novel pathogen in the autoclave” anecdote that wasn’t mentioned in the book).
From Hey, You've Got to Hide Your Work Away by David Malakoff in Science, October 2013:
Not long after letters laced with anthrax spores killed five Americans in September 2001, a research team led by genome scientist Harold "Skip" Garner came up with an idea for probing such crimes. But the solution gave him pause. During a study that used new gene technologies to analyze several of the world's deadliest pathogens, the researchers realized that a unique genetic "barcode" could be discreetly inserted into laboratory strains, potentially enabling forensic scientists to track a bioweapon or escaped pathogen back to its source. It was just the kind of tagging that could have helped investigators identify the source of the weaponized anthrax spores tucked into the deadly letters, says Garner, now with the Virginia Bioinformatics Institute at the Virginia Polytechnic Institute and State University in Blacksburg. But publishing the trick might also aid evildoers, he realized. "It was information that might be misused, to figure out how to evade detection," Garner recalled recently. "We had to ask: 'Is it wise to widely share this?'"
...
A dearth of worrisome manuscripts doesn't mean people aren't making worrisome discoveries; researchers may simply be sitting on sensitive results. In a paper to be published later this year by the Saint Louis University Journal of Health Law & Policy, David Franz, former commander of the U.S. Army Medical Research Institute of Infectious Diseases in Frederick, Maryland, recalls that, in the 1990s, scientists there unintentionally created a virus strain that was resistant to a potential treatment. After a discussion, "we decided to put the entire experiment into the autoclave," Franz tells Science. "That was it. We didn't hear anyone say: 'Wow, we could get a paper in Science or Nature.' "
Garner took a similarly cautious approach with his barcoding technology. "We wrote up a white paper for some of the government agencies, but didn't distribute it widely," he says. "Seemed better that way."
That seems like successful selective disclosure. I don’t know if it was “better that way” overall. Were the government agencies able to use the technique, or is it just buried in someone’s inbox? I have no idea, but my uncertainty about this is why I’ve been saying “selective disclosure” rather than “responsible disclosure” throughout this post.
Do you know of other examples?
I’d be especially interested in successful examples where you also think the decision to selectively disclose was correct!
I know the medical device hacking community has done some work on responsible disclosure (cf. this 2020 DEF CON Biohacking Village panel), but the situation of a hacker trying to disclose a vulnerability in a company’s product seems fairly different from that of a researcher trying to disclose a vulnerability in the world.
For a more complete (and more entertaining) summary of this case, I recommend the Medium post What the AI Community Can Learn From Sneezing Ferrets and a Mutant Virus Debate. ↩︎
The researchers involved definitely thought it was important to publish their findings in full! The benefits described for this research included convincing governments that H5N1 could cause future pandemics and that they should stockpile vaccines, as well as (in my opinion rather speculative) improvements to international surveillance of influenza strains. ↩︎
An October 2013 NPR News piece by Nell Greenfieldboyce. This was the citation in the book, don't @ me. ↩︎
First off, I want to say thanks for your Forum contributions, Tessa. I'm consistently upvoting your comments, and appreciate the Wiki contributions as well.
I'm pretty confident that information hazards are, or plausibly will be, an important concern, but in these and other cases I tend to be at least strongly tempted by openness, which does seem to make it harder to advocate for responsible disclosure: "You should strongly consider selectively disclosing dangerous information; it's just that I think all of these contentious examples should be open."
Aw, it's always really nice to hear that people are enjoying the words I fling out onto the internet!
Often both the benefits and risks of a given bit of research are pretty speculative, so evaluation of specific cases depends on one's underlying beliefs about potential gains from openness and potential harms from new life sciences insights. My hope is that there are opportunities to limit the risks of disclosure while still getting the benefits of openness, which is why I want to sketch out some of the selective-disclosure landscape between "full secrecy by default" (paranoid?) and "full openness by default" (reckless?).
If you'd like to read a strong argument against openness in one particular contentious case, I recommend Gregory Koblentz's 2018 paper A Critical Analysis of the Scientific and Commercial Rationales for the De Novo Synthesis of Horsepox Virus. From the paper: