The COVID-19 pandemic was likely due to a lab leak in Wuhan. The question is still up for public debate, but the topic will likely be settled when the US intelligence community reports on its efforts to gather information about what happened at the Wuhan Institute of Virology between September and December of 2019 and about suspicious activities there around that time.
However, even in the remote chance that this particular pandemic didn't happen as a downstream consequence of gain-of-function research, we had good reason to believe that the research was extremely dangerous.
Marc Lipsitch, who should be on the radar of the EA community given that he spoke at EA Boston in 2017, wrote in 2014 about the risks of gain-of-function research:
A simulation model of an accidental infection of a laboratory worker with a transmissible influenza virus strain estimated about a 10 to 20% risk that such an infection would escape control and spread widely (7). Alternative estimates from simple models range from about 5% to 60%. Multiplying the probability of an accidental laboratory-acquired infection per lab-year (0.2%) or full-time worker-year (1%) by the probability that the infection leads to global spread (5% to 60%) provides an estimate that work with a novel, transmissible form of influenza virus carries a risk of between 0.01% and 0.1% per laboratory-year of creating a pandemic, using the select agent data, or between 0.05% and 0.6% per full-time worker-year using the NIAID data.
If we make the conservative assumption that 20 full-time people are working on gain-of-function research and take his lower bound of risk (0.05% per full-time worker-year), that gives us a 1% chance per year of gain-of-function research causing a pandemic. These are conservative numbers, and there's a good chance that the real risk is higher than that.
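To make the arithmetic explicit, here is a minimal sketch in Python. The input probabilities are Lipsitch's; the 20-worker headcount and the assumption that each worker-year is an independent trial are mine, added purely for illustration:

```python
# Step 1 (Lipsitch's multiplication): P(lab-acquired infection per
# full-time worker-year) x P(infection leads to global spread).
p_infection_per_worker_year = 0.01    # 1% per full-time worker-year (NIAID data)
p_global_spread = (0.05, 0.60)        # 5% to 60%, from simple models

risk_per_worker_year = [p_infection_per_worker_year * p for p in p_global_spread]
print([f"{r:.2%}" for r in risk_per_worker_year])   # ['0.05%', '0.60%']

# Step 2 (my extrapolation): 20 full-time workers, each worker-year
# treated as an independent trial -- a simplifying assumption.
workers = 20
for p in risk_per_worker_year:
    combined = 1 - (1 - p) ** workers
    print(f"per-worker-year risk {p:.2%} -> annual pandemic risk {combined:.1%}")
```

At the lower bound this reproduces the roughly 1% annual figure quoted above; the upper bound comes out around 11%, which is why the 1% number reads as conservative.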
When the US moratorium on gain-of-function research was lifted in 2017, the big EA organizations that promise their donors to act against Global Catastrophic Risks were asleep and didn't react. That's despite those organizations counting pandemics as a possible Global Catastrophic Risk.
Looking back, it seems like this was easy mode, given that a person in the EA community had done the math. Why didn't the big EA organizations listen more?
Given that they got this so wrong, why should we believe that their other analyses of Global Catastrophic Risks aren't also extremely flawed?
(Going entirely from Twitter etc., not having read the original papers or grant proposals myself.)
I don't think what the WIV did was a central example of "gain-of-function" research, at least as Marc Lipsitch uses the term. My understanding is that Shi Zhengli (obviously not an unbiased source) from the WIV claims that their work isn't gain-of-function because they were studying intermediate hosts rather than deliberately trying to make pathogens more virulent or transmissible.*
My own opinion is that "GoF" has become ill-defined and quite political, especially these days, so we have to be really careful about precisely what we mean when we say "GoF".
I realize that this sounds like splitting hairs, but the definitional limits are important: Lipsitch's 2014 paper(s) about the dangers of GoF were predicated on a narrow definition of GoF (the clearest-cut cases/worst offenders), while the claims about a lab escape, if true, rest on a broader model of GoF.
(Two caveats:
1) I want to be clear that I personally think that, whether it's called GoF or not, studying transmission from intermediate hosts is likely a bad idea at current levels of lab safety.
2) I don't feel particularly qualified to judge this.)
*I wanted to find the source but couldn't after 3 minutes of digging. Sorry.