Thanks for the clarification. Now I understand your point. I'm speaking just for myself here, since it's easier and I'm not consulting with my team...
I disagree that quantification always produces more reliable judgments and think it's context-dependent, but that's a very fundamental disagreement which perhaps we can discuss in person one day. I'll just say that in this particular case, we are very transparent about our systematic approach to collecting and aggregating the data we have to draw conclusions (incorporating probability and impact estimates), and we will be providing all of the data for others to scrutinize. As I said in our post, we are also happy to present it right away in webinar form, so that others can draw their own conclusions or produce quantitative aggregations of them if they'd like.
And in fact our data -- on the harm and containment of threats and the effectiveness and practicality of tactical response -- has informed and will continue to inform the BOTECs that others in the EA space are already doing (in many cases without the underlying evidence, because that evidence takes time to conceptualize, sift through, and digest). And even without quantifying, the data can directly help grantmakers or practitioners decide whether to invest in a specific tactic to counter a specific threat, a decision those folks have to make every day. For example, say a statewide organization is anticipating a specific threat, like security forces at the polls, and is trying to organize a program in response. Our table and analysis can help it understand what tactics exist to counter that threat and whether any of them are likely to be more effective than others.
Perhaps part of the misunderstanding comes down to a title issue. I shouldn't have used a superlative. I should have just said "what are high-leverage tactics," which is actually what I meant.
Thanks for engaging with this and taking the time to share your questions and concerns!
I think there’s a foundational point worth clarifying first. Based on extensive engagement and explicit discussions about this in the pro-democracy space over the last few years, we actually think that the central gap in the space right now is empirical data. We’re missing basic evidence about what threats have actually materialized, what their measurable impacts have been, and which interventions have been studied rigorously. Without that groundwork, any probability estimates are going to be largely guesswork. That’s why we started here. We wanted to systematically gather what we know about where these threats have occurred (in the US and internationally), what their documented effects have been, and where there’s evidence that specific tactics actually work. That aggregated evidence (both qualitative and quantitative) is what’s been missing. Once we have a clearer picture of it, layering in probability estimation becomes much more meaningful, because the estimates are then built on a foundation of actual evidence about threats and interventions.
Regarding the points you found unclear
On what we’re optimizing for: we’re not optimizing in the technical sense; we’re filtering. We’re asking two questions: does the evidence suggest any of these threats are less severe than they appear, and which tactics have the strongest evidence base and are implementable before 2026? That’s designed to quickly narrow from a large universe of possibilities to the most consequential ones.
On how we aggregate evidence: for threats, we use a specific rubric -- demonstrable impact at meaningful scale, an escalating or uncontained trajectory, and no strong institutional safeguards. For tactics, we’re transparent about the qualitative nature of the assessment. We list which filters each tactic passes and describe how promising it looks based on the evidence, rather than aggregating across different evidence types into a single score (which would give a false sense of precision about our conclusions).
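To make the rubric concrete, here is a purely illustrative sketch in Python (the field and function names are hypothetical; our actual process is a qualitative document, not code). It shows a threat staying on the shortlist only when all three rubric conditions hold, and a tactic carrying a list of passed filters plus a qualitative summary instead of a single aggregated score:

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    name: str
    demonstrable_impact_at_scale: bool  # documented harm at meaningful scale
    escalating_or_uncontained: bool     # trajectory is escalating or uncontained
    strong_safeguards: bool             # strong institutional safeguards already exist

    def passes_filter(self) -> bool:
        # A threat is kept only if it has real impact, a worrying
        # trajectory, and no strong safeguard already containing it.
        return (self.demonstrable_impact_at_scale
                and self.escalating_or_uncontained
                and not self.strong_safeguards)

@dataclass
class Tactic:
    name: str
    filters_passed: list[str] = field(default_factory=list)
    evidence_summary: str = ""  # qualitative description, deliberately not a score

threat = Threat("certification challenges", True, True, False)
print(threat.passes_filter())  # True: kept for further analysis
```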
On what motivates the criteria: the criteria exist to help us quickly filter down to what matters most. We’re being explicit about our choices, and we also want to eventually make this user-friendly so people can adjust the criteria based on their own priorities -- theirs may not be the same as ours. And to be clear, "whether or not something happened over the last 30 years" is not one of our filters. We simply search for similar events in the US during the last 30 years so that we have data on what the impact of such events could be on voter participation, election certification, and/or the peaceful transfer of power.
And as we say in the limitations, we think these criteria make sense five months out from the election, but they do involve intuition and qualitative analysis. When we share the full document, others might see the same data and draw different conclusions.
Comparison to GCR
On the broader methodological concern about checklist approaches and tail risk: we're not sure how apt the GCR analogy is. Most of the threats we’re assessing (voter intimidation, disinformation, certification challenges) have real-world precedent, either in the US or in other countries that have experienced democratic backsliding. The authoritarian playbook being employed here isn’t an unknown: it has been documented in other contexts, and its mechanisms are reasonably well understood. That doesn’t mean we’ll catch everything, but it does mean we’re not operating in the same kind of deep uncertainty where tail-risk considerations become essential.
And relatedly (though maybe not super important to discuss), we’re puzzled by the claim that a step-by-step approach can’t produce reliable recommendations in this context. We don’t understand the specific failure mode being identified. Could you give a concrete example of how our specific framework (assessing historical precedent, current trajectory, institutional safeguards, and evidence for counter-tactics) would lead us to a wrong recommendation? That would help us understand the concern much better and improve our methodology in the next phase of work.
Hi Emmanuel, thanks for asking! For the most part that fund will move money outside the US due to legal restrictions. At the moment, there is not a single fund that smaller donors (correct me if that's an incorrect assumption about you!) can donate to that will make donations per our recommendations. I know there are different attempts to set that type of fund/democracy-focused effective giving org up, but as far as I know nothing exists yet. If you send me a DM, I will try to ping you if I get any updates.