https://www.lesswrong.com/posts/ztXsmnSdrejpfmvn7/propaganda-or-science-a-look-at-open-source-ai-and
Linkpost from LessWrong.
The claims from the piece which I most agree with are:
- Academic research does not show strong evidence that existing LLMs increase bioterrorism risk.
- Policy papers are making overly confident claims about LLMs and bioterrorism risk, and are citing papers that do not support that level of confidence.
I'd like to see better-designed experiments, conducted as part of evals by groups like the UK and US AI Safety Institutes, that generate high-quality evidence on whether or not future frontier models increase bioterrorism risk.
Disclaimer: I am practically a layman on this topic.
My threat model is that creating bioweapons requires a series of steps that are getting easier and easier to do, and LLMs are significantly accelerating one of these steps.
In that sense, open-sourcing LLMs does contribute to increased biorisk, but restricting open-source LLMs solely to curb that increase seems like a disproportionate response on its own.
For example, the internet certainly made terrorism easier to conduct, but many people would consider heavily restricting the internet solely to curb terrorism a disproportionate response.