I support both ideas - adding to/expanding the UVGI page and starting a discussion of a new far-UVC page on wikitalk, where there is already one mention of starting a separate page.
I think it's warranted given the distinct biophysical effects of this spectral band as well as the development of far-UVC as a commercial technology and area of broader interest.
Thanks! This is helpful because it clarifies a few areas where we disagree.
If a bioterrorist is already capable of understanding and actually carrying out the detailed instructions in an article like this, then I'm not sure that an LLM would add that much to his capacities.
I think future LLMs will likely still be very helpful for such people, since there are more steps to being an effective bioterrorist than just understanding, e.g., existing reverse genetics protocols. I don't want to say much more on that point. That said, I'm personally less concerned about LLMs enhancing the capabilities of people who are already experts in some of these domains than about LLMs enhancing the abilities of non-experts.
Conversely, handing a detailed set of instructions like that to the average person poses virtually no risk, because they wouldn't have the knowledge or ability to actually do anything with it.
I disagree. I think future LLMs will enhance the ability of average people to do something with biology. I expect LLMs will get much better at generating protocols, recommending upskilling strategies, providing lab tutorials, interpreting experimental results, and so on, and they will do all of those things in a much more accessible manner. Also, keep in mind that Fig. 1 in our paper shows there is more than one path to obtaining the 1918 virus.
I also think there is an underappreciated point here about LLMs making it more likely for people to attempt bioterrorism in the first place. If a malicious actor looking to cause mass harm spends a couple of hours in conversation with an uncensored LLM, and learns that biology is a feasible path towards doing that... then I expect more people to try – even if it takes significant time and money.
There are much easier and simpler ways that are already widely discoverable: 1) Make chlorine gas by mixing bleach and ammonia (or vinegar); 2) Make sarin gas via instructions that were easily findable in this 1995 article:
These examples are indeed nasty ways to cause harm to people, and they do sound significantly easier. However, the scale of harm you can cause with infectious or otherwise exponential biology is far beyond that of targeted chemical weapons attacks. The potential harm is large enough that the statement "hardly anyone wants to carry out such attacks" doesn't seem a sufficient reason not to be concerned.
Hi Stuart,
Thanks for your feedback on the paper. I was one of the authors, and I wanted to emphasize a few points.
The central claim of the paper is not that current open-source models like Llama-2 provide more uplift to those looking to obtain bioweapons than traditional search engines or even print text. While I think this is likely true, given how helpful the models were for planning and assessing feasibility, they can also mislead users and hallucinate key details. I myself am quite uncertain about how these factors trade off against, e.g., using Google – you can bet on that very question here. A controlled study like the one RAND is running could help address this question.
Instead, we are much more concerned about the capabilities of future models. As LLMs improve, they will offer more streamlined access to knowledge than traditional search. I think this is already apparent in the fact that people routinely use LLMs for information they could have obtained online or in print. Weaknesses in current LLMs, like hallucinating facts, are priority issues for AI companies to solve, and I feel pretty confident we will see a lot of progress in this area.
Nevertheless, based on the response to the paper, it’s apparent that we didn’t communicate the distinction between current and future models clearly enough, and we’re making revisions to address this.
The paper argues that because future LLMs will be much more capable and because existing safeguards can be easily removed, we need to worry about this issue now. That includes thinking about policies that incentivize AI companies to develop safe AI models that cannot be tuned to remove safeguards. The nice thing about catastrophe insurance is that if robust evals (there is much more work to do in this area) demonstrate that an open-source LLM is safe, then coverage will be far cheaper. That said, we still have a lot more work to do to understand how regulation can effectively limit the risks of open-source AI models, partly because the issue of model weight proliferation has been so neglected.
I’m curious about your thoughts on some of the questions below, since I think they are at the crux of figuring out where we agree and disagree.
Thanks again for your input!
I am starting to worry that the possibility of Russia using conventional or, perhaps more likely, tactical nuclear weapons in the Ukraine conflict is real. My concern is largely based on this article by Francesca Giovannini: A hurting stalemate? The risks of nuclear weapon use in the Ukraine crisis.
For Mr. Putin, losing the war with Ukraine in any form seems like a non-option, given his domestic situation and the possibility that his regime could come to an end.
The article outlines three assumptions made by those who don't think Russia will use nuclear weapons: "that Russia has a strong interest in not destroying Ukraine, because Putin wants to occupy it; that even though Putin is a thug, he is not a crazy enough thug to break a taboo against the use of nuclear weapons in war, a taboo that has held for 75 years; and that there are plenty of other options that the Russians can exercise in subduing Ukraine."
However, I think all three of these assumptions are suspect.
Regarding the first assumption, we have already seen Russia step up its attacks on civilian infrastructure in the last two days. For Putin, a victory of some kind (like regime change) is paramount, and the longer the invasion takes, with weapons and aid flowing to Ukrainian fighters, the harder such a victory becomes to achieve.
Regarding the second assumption, there are reasons to suspect Putin is willing to break the nuclear taboo. He has already threatened the possibility of not renewing treaties with the US to limit the number of nuclear weapons, he has sidelined many of his advisors, and he is an authoritarian clinging to power and trying to restore Russia's status as a superpower. He may see all options as being on the table.
Finally, with regard to the third assumption, it's true that Russia's conventional forces are vastly stronger than Ukraine's, but we have seen the fierce resistance of Ukrainian fighters, and a prolonged occupation would require many more troops. The use of a tactical nuclear weapon would test Ukraine's and NATO's resolve and signal his willingness to do whatever it takes to win the war.
Very interesting post, thanks for taking the time to write it up. It sounds like the BIS is moving in the right direction on regulating some sensitive biotech stuff. While it seems like there are some obvious things to add to this list (e.g. dangerous pathogens), it also seems like a lot of the dual-use stuff would be non-obvious to regulate this way given that many of the technologies that could be used to engineer a pathogen also have legitimate scientific and industrial applications.
I've included some of my shallow notes on this topic from investigating the Chinese bioeconomy.
For example, one area that has received particular scrutiny regarding import-export controls is surveillance technology used in China. In a 2018 letter to Secretary of Commerce Wilbur Ross inquiring about the sale of surveillance technology to Chinese police, Senator Marco Rubio (R-FL) and U.S. Representative Chris Smith (R-NJ) wrote:
Recently, Human Rights Watch and other organizations have identified Thermo-Fisher Scientific, a Massachusetts based company, as selling DNA sequencers with advanced microprocessors under the Applied Biosystems (ABI) Genetic Analyzer brand to the Chinese Ministry of Public Security and its Public Security bureaus across China.
The letter goes on to request answers to the following questions:
1) Given that most crime control and detection and surveillance equipment, software and technology are controlled under the Export Administration Regulations, what factors are being used to determine the suitability of an export to an agent of state security? How did Thermo-Fisher surmount a presumption of denial to sell their product to the Chinese government?
2) What other product licenses have been sought under Export Administration Regulations sections 742.7, 742.13, 744.17(c), or other sections, to sell to agencies of China’s state security?
3) In light of recent reports, how are you—in coordination with the Department of State—reviewing the export of items being used by Chinese military and police end-users for surveillance, detection, and censorship, to determine whether more scrutiny is needed over the proliferation of “dual-use” information, software, and communication technologies? Are new legislation or new authorities needed to revisit/revise export control regulations so they are consistent with the rapid evolution of technology? Is software or technology which could be used for the purpose of domestic repression, subject to export controls with respect to Chinese end-users of concern?
4) In addition to possible export controls, is there any discussion currently underway to, at the very least, restrict the end-users of such technologies, in this case Xinjiang Public Security and related entities?
In a response letter, Secretary Ross stated that the rules did not apply to Thermo Fisher because
"The items [gene sequencers] are low-technology products that are available from worldwide sources, including indigenous Chinese sources, and have numerous legitimate end-uses,"
including in education, medical research, and forensics.
This, I suspect, is the biggest challenge with export controls: where do you set the bar for a technology or product to join the export control list when it also has applications in science or industry? Another consideration here is that US biotech firms that sell products and services in China are likely to be hurt by too far-reaching an export control list (for more on these considerations, I recommend the report Two Worlds, Two Bioeconomies: The Impacts of Decoupling US–China Trade and Technology Transfer).
With regard to foreign investment, the U.S. has been pretty active in response to Chinese investment in biotech. The Committee on Foreign Investment in the United States (CFIUS), which sits under the Department of the Treasury, is the body in charge of reviewing transactions involving foreign investment in the United States that could raise national security concerns. The Foreign Investment Risk Review Modernization Act of 2018 (FIRRMA) expanded CFIUS's purview to cover even non-controlling foreign investments in companies that hold certain critical technologies or collect sensitive data on US citizens. Under FIRRMA, Chinese investment in U.S. biotech dropped sharply.
The article presents two biotech case studies where CFIUS has intervened:
After receiving $5 million in funding from the Department of Defense, San Francisco start-up Twist Bioscience, makers of synthetic DNA, decided to expand manufacturing through a Chinese subsidiary. This prompted an amendment to Congress’s annual defense policy bill to ensure grant recipients of the Pentagon’s Defense Advanced Research Projects Agency are prohibited from partnering with entities subject to foreign company or government control. The main concern cited was the threat of the Chinese government stealing intellectual property and trade secrets from American companies (O’Keefe, 2019).
In 2017, PatientsLikeMe, a Cambridge-based online service that helps patients find people with similar health conditions, sold a majority stake to China’s iCarbonX. The goal was to combine the Chinese firm’s artificial intelligence technology with PatientsLikeMe’s customers and data sets. Currently, around 700,000 people use the website, which has generated tens of millions of data points about disease. CFIUS is now forcing a divestiture because the company collects potentially sensitive data on users who set up profiles, which it deems a danger to US national security. With this decision, PatientsLikeMe not only loses its principal financier, but also a critical technology partner (Farr, 2019).
Under its authority, CFIUS could, and probably does, try to prevent foreign actors from gaining bioweapon-enabling capabilities by means of technology transfer through foreign investment, but the same challenges around dual-use research are present.
Nice post. I would also add that Sam's podcast with Toby Ord discussed many EA-related concepts, including the GWWC pledge. I signed up directly as a result of that podcast, and I would expect there may have been a similar spike to the one seen after the Will MacAskill episodes.
We've done a fairly thorough investigation at the NAO into air sampling as an alternative to wastewater surveillance. We currently have a preprint on the topic here and a much more in-depth draft we hope to publish soon.