https://www.lesswrong.com/posts/ztXsmnSdrejpfmvn7/propaganda-or-science-a-look-at-open-source-ai-and
Linkpost from LessWrong.
The claims from the piece which I most agree with are:
- Academic research does not show strong evidence that existing LLMs increase bioterrorism risk.
- Policy papers are making overly confident claims about LLMs and bioterrorism risk, citing papers that do not support claims made with that level of confidence.
I'd like to see better-designed experiments aimed at generating high-quality evidence on whether future frontier models increase bioterrorism risk, as part of evals conducted by groups like the UK and US AI Safety Institutes.
Kevin Esvelt's team released this paper earlier this month: