The COVID-19 pandemic was likely due to a lab leak in Wuhan. The question is still up for public debate, but it will likely be settled when the US intelligence community reports on its attempts to gather information about what happened at the Wuhan Institute of Virology between September and December of 2019 and about suspicious activities there around that time.
However, even in the remote chance that this particular pandemic did not happen as a downstream consequence of gain-of-function research, we had good reason to believe that the research was extremely dangerous.
Marc Lipsitch, who should be on the radar of the EA community given that he spoke at EA Boston in 2017, wrote in 2014 about the risks of gain-of-function research:
A simulation model of an accidental infection of a laboratory worker with a transmissible influenza virus strain estimated about a 10 to 20% risk that such an infection would escape control and spread widely (7). Alternative estimates from simple models range from about 5% to 60%. Multiplying the probability of an accidental laboratory-acquired infection per lab-year (0.2%) or full-time worker-year (1%) by the probability that the infection leads to global spread (5% to 60%) provides an estimate that work with a novel, transmissible form of influenza virus carries a risk of between 0.01% and 0.1% per laboratory-year of creating a pandemic, using the select agent data, or between 0.05% and 0.6% per full-time worker-year using the NIAID data.
If we make the conservative assumption that 20 full-time people are working on gain-of-function research and take his lower bound of risk, that gives us a 1% chance per year of gain-of-function research causing a pandemic. These are conservative numbers, and there's a good chance that the real odds are higher than that.
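To make the arithmetic explicit, here is a minimal back-of-the-envelope sketch in Python. It uses Lipsitch's quoted per-worker-year figures (1% chance of a laboratory-acquired infection, 5% to 60% chance such an infection spreads globally) and the illustrative assumption of 20 full-time workers; the 20-worker figure and the resulting bounds are approximations, not a precise model.

```python
# Back-of-the-envelope estimate of the yearly chance that gain-of-function
# work seeds a pandemic, based on the figures quoted from Lipsitch (2014).

# Probability of a laboratory-acquired infection per full-time worker-year (NIAID data)
p_infection_per_worker_year = 0.01

# Probability that such an infection escapes control and spreads globally
p_spread_low, p_spread_high = 0.05, 0.60

# Risk per full-time worker-year
risk_low = p_infection_per_worker_year * p_spread_low    # 0.0005, i.e. 0.05%
risk_high = p_infection_per_worker_year * p_spread_high  # 0.006,  i.e. 0.6%

# Illustrative assumption: 20 full-time workers doing this research
n_workers = 20

# Yearly probability of at least one pandemic-causing escape:
# 1 - (1 - p)^n, which is close to n * p when p is small
yearly_low = 1 - (1 - risk_low) ** n_workers    # ~1% per year
yearly_high = 1 - (1 - risk_high) ** n_workers  # ~11% per year

print(f"Lower bound: {yearly_low:.1%} per year")
print(f"Upper bound: {yearly_high:.1%} per year")
```

The lower bound works out to roughly 1% per year, which is the figure used above; taking the quoted upper bound instead pushes the estimate an order of magnitude higher.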
When the US moratorium on gain-of-function research was lifted in 2017, the big EA organizations that promise their donors to act against Global Catastrophic Risk were asleep and didn't react. That's despite those organizations counting pandemics as a possible Global Catastrophic Risk.
Looking back, it seems like this was easy mode, given that a person in the EA community had already done the math. Why didn't the big EA organizations listen more?
Given that they got this so wrong, why should we believe that their other analyses of Global Catastrophic Risk aren't also extremely flawed?
They do their best to gather data, predict events on the basis of that data, and give recommendations. However, data is not perfect, models are not a perfect representation of reality, and recommendations are not necessarily unanimous. To err is human, and mistakes are possible, especially when the foundations of the applied processes contain errors.
Sometimes people simply do not have enough information, and certainly nobody can gather information if the data does not exist. Still, a decision needs to be made, at least between action and inaction, and a data-supported expert guess is better than a random guess.
Given the choice, would you prefer that nobody carried out the analysis, with no possibility of improvement? Or would you still let the experts do their job, with a reasonable expectation that most of the time the problems are solved and the human condition improves?
What if their decision had only a 10% chance of being better than a decision taken without carrying out any analysis? Would you still seek expert advice to improve the odds of success, if that was your only option?