I think it's very valuable that you posted this, probably for yourself, but also for others (like me) to see what kinds of methodological traps and limitations arise in this kind of effort. Thanks!
You mentioned this "big question": How effective is the GHSI at actually driving change in the biosecurity field?
The GHSI is a really nice tool created by big orgs, so I think it's necessary and useful to break this down into subquestions like you did. IMO, while researching these questions further (who engages with the index, and how? Is it legible and actionable?), it's also good to take a step back and consider the limits of the whole approach, such as the fact that analyzing at the national level raises issues, given how international dynamics emerge during such times of crisis (Moeen mentioned this in the GDoc's comments).
Your comments here and there make sense to me. I feel like this is quite straightforward in theory, but harder to do in practice.
I do observe that some orgs are leagues above others in communicating, and I feel like the two main reasons for this are:
- the org's willingness to allocate resources to professional communication work
- the extent to which the org's activity lends itself to communication (e.g., most orgs working with cute animals have an advantage here).
Extremely interesting article, and I'd love to see other posts exploring your assumptions!
I recently had a chance to meet the leader of a private foundation in Europe (raising and donating several million per year). Interestingly, they also mentioned TBP, and I'm now wondering whether they did so to position themselves in opposition to some notion of highly demanding grantmaking.
I do think TBP and EA are compatible, to some degree. We should not confuse (1) "having a very high bar for anticipated effectiveness" with (2) "having a very high bar for evidence of impact". It is quite simple to apply for a grant from most EA grantmakers. In my (certainly limited) experience, if you want your grant to be renewed (and, presumably, increased), you'll probably have to provide significant evidence, and I think that's fair enough.
I suppose non-EA funders might:
- Actually have little knowledge of EA or the EA funding landscape
- Be discouraged by the depth of analysis they see from GiveWell
- Be annoyed or discouraged by EA's frequent, strong claim of "making decisions based on evidence" (btw, this claim is advertised so often that I'd assume it can get conflated with a reliance on frequent reports from, and control over, grantees).
Also, maybe it'd be worth distinguishing different cases, in particular:
The context might vary and make me reconsider in certain instances, but I generally think it's important to say that some ways of acting are orders of magnitude more effective than others. So yes, insist on "more" rather than "the most possible", but with an emphasis on the fact that there are resources to help guide you toward options that are likely to be immensely more impactful than most actions.