https://www.lesswrong.com/posts/ztXsmnSdrejpfmvn7/propaganda-or-science-a-look-at-open-source-ai-and
Linkpost from LessWrong.
The claims from the piece which I most agree with are:
- Academic research does not show strong evidence that existing LLMs increase bioterrorism risk.
- Policy papers are making overly confident claims about LLMs and bioterrorism risk, and are citing papers that do not support claims made with that level of confidence.
I'd like to see better-designed experiments aimed at generating high-quality evidence on whether future frontier models increase bioterrorism risk, as part of evaluations conducted by groups like the UK and US AI Safety Institutes.
Thorstad is mostly writing about X-risk from bioterror, which is slightly different from biorisk as a broader category. I suspect Thorstad is also skeptical about the latter, but that is not what the blog posts are mostly focused on. It could be that frontier AI models will make bioterror easier, and that this could kill a large number of people in a bad pandemic, even if X-risk from bioterror remains tiny.