The COVID-19 pandemic was likely due to a lab leak in Wuhan. This is still up for public debate, but the topic will likely be settled when the US intelligence community reports on its attempts to gather information about what happened at the Wuhan Institute of Virology between September and December of 2019 and about the suspicious activities there around that time.
However, even in the remote chance that this particular pandemic didn't happen as a downstream consequence of gain-of-function research, we had good reason to believe that the research was extremely dangerous.
Marc Lipsitch, who should be on the radar of the EA community given that he spoke at EA Boston in 2017, wrote in 2014 about the risks of gain-of-function research:
A simulation model of an accidental infection of a laboratory worker with a transmissible influenza virus strain estimated about a 10 to 20% risk that such an infection would escape control and spread widely (7). Alternative estimates from simple models range from about 5% to 60%. Multiplying the probability of an accidental laboratory-acquired infection per lab-year (0.2%) or full-time worker-year (1%) by the probability that the infection leads to global spread (5% to 60%) provides an estimate that work with a novel, transmissible form of influenza virus carries a risk of between 0.01% and 0.1% per laboratory-year of creating a pandemic, using the select agent data, or between 0.05% and 0.6% per full-time worker-year using the NIAID data.
If we make the conservative assumption that 20 full-time people work on gain-of-function research and take his lower bound of 0.05% per full-time worker-year, that gives us a 1% chance per year of gain-of-function research causing a pandemic. These are conservative numbers, and there's a good chance that the real risk is higher than that.
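As a quick sanity check, here is a minimal sketch of that arithmetic. The 20 full-time workers and the 0.05% lower bound are the assumptions stated above; the ten-year cumulative figure is my own extrapolation, not a number from Lipsitch's paper:

```python
# Back-of-the-envelope check of the figures above (not from the paper itself).
risk_per_worker_year = 0.0005    # Lipsitch's lower bound: 0.05% per full-time worker-year
full_time_workers = 20           # assumed number of full-time GOF researchers worldwide

# Simple linear estimate used in the text: 20 * 0.05% = 1% per year
annual_risk_linear = full_time_workers * risk_per_worker_year

# Treating worker-years as independent gives almost the same number
annual_risk_independent = 1 - (1 - risk_per_worker_year) ** full_time_workers

# My own extrapolation: cumulative risk over a decade of research at this scale
decade_risk = 1 - (1 - annual_risk_independent) ** 10

print(f"Annual risk (linear):      {annual_risk_linear:.2%}")       # 1.00%
print(f"Annual risk (independent): {annual_risk_independent:.2%}")  # ~1.00%
print(f"Risk over 10 years:        {decade_risk:.2%}")              # ~9.5%
```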
When the US moratorium on gain-of-function research was lifted in 2017, the big EA organizations that promise their donors to act against Global Catastrophic Risks were asleep and didn't react. That's despite those organizations counting pandemics as a possible Global Catastrophic Risk.
Looking back, it seems like this was easy mode, given that a person in the EA community had done the math. Why didn't the big EA organizations listen more?
Given that they got this so wrong, why should we believe that their other analyses of Global Catastrophic Risks aren't also extremely flawed?
I realise the article excerpt you quoted is not meant as an accurate estimate. Marc and Thomas also say:
So it looks like the calculation above was just an illustrative example, and EA did not have sufficient data to come to a conclusion. Is there any other part of the article that leads you to believe the authors had strong faith in their numbers?
What did EA get wrong exactly? I guess they made rational decisions in a situation of extreme uncertainty.
Statistical estimation with little historical data is likely to be inaccurate. A virus leak had never turned into a pandemic before.
Furthermore, even accurate estimates will sometimes be followed by a bad outcome. If you throw 100 dice enough times, you will eventually roll all 1s.
They do their best to gather data, predict events on the basis of that data, and give recommendations. However, data is not perfect, models are not a perfect representation of reality, and recommendations are not necessarily unanimous. To err is human, and mistakes are possible, especially when the foundations of the applied processes contain errors.
Sometimes people just do not have enough information, and certainly nobody can gather information if data does not exist. Still a decis...