https://www.lesswrong.com/posts/ztXsmnSdrejpfmvn7/propaganda-or-science-a-look-at-open-source-ai-and
Linkpost from LessWrong.
The claims from the piece which I most agree with are:
- Academic research does not show strong evidence that existing LLMs increase bioterrorism risk.
- Policy papers are making overly confident claims about LLMs and bioterrorism risk, and are citing papers that do not support claims made with that level of confidence.
I'd like to see better-designed experiments aimed at generating high-quality evidence on whether or not future frontier models increase bioterrorism risk, as part of evals conducted by groups like the UK and US AI Safety Institutes.
I am not sure why you have received downvotes on this post. I also think that anything about which strong claims are made, and which has large impacts (the perceived AI+bio risk is possibly a significant reason for the UK and US governments' moves on AI policy), should be backed up by evidence.

Perhaps we simply have not had time to conduct these studies yet; if so, I think it was fair to use strong statements about AI+bio to make potential risks salient. But as AI policy and societal awareness gain more and more traction, I think we need to go back and revisit these assumptions. The only "evidence" I have found so far is some less-than-reliable interpretations of Metaculus results on the overlap of AI and bio.