Some people have proposed various COVID-19-related questions (or solicited collections of such questions) that I think would help inform EAs’ efforts and prioritisation both during and after the current pandemic. In particular, I've seen the following posts: 1, 2, 3, 4.
Here I wish to solicit a broader category of questions: any questions which it would be valuable for someone to research, or at least theorise about, that the current pandemic in some way “opens up” or will provide new evidence about, and that could inform EAs’ future efforts and priorities. These are not necessarily questions about how to help with COVID-19 specifically, and some may inform EA efforts even outside the broad space of existential risks. I’ve provided several examples to get things started.
I'd guess that most of these questions are probably best addressed at least a few months from now, partly because more and clearer evidence will be available by then. But we could start now by collecting the questions and thinking about how we could later investigate them.
If you have ideas for refining or investigating any of the questions here, ideas for spin-off or additional related questions, or some tentative “answers” already, please share them in the comments.
(I'd consider the 4 posts linked to above to also count as good examples of the sort of question I’m after.)
What lessons can be drawn from these events for how much to trust governments, mainstream experts, news sources, EAs, rationalists, mathematical modelling by people without domain-specific expertise, etc.? What lessons can be drawn for debates about inside vs outside views, epistemic modesty, etc.?
E.g., I think these events should probably update me somewhat further towards:
But I'm still wary of extreme versions of those conclusions. I also worry about something like a "stopped clock is right twice a day" situation: perhaps this was something of a fluke, and "early warnings" from the EA/rationalist community would typically not turn out to seem so prescient.
(I believe there’s been a decent amount of discussion of this sort of thing on LessWrong.)