As noted by Sam Nolan here:
GiveWell's cost-effectiveness analyses (CEAs) of top charities are often considered the gold standard. However, they still have room for improvement. One such improvement is the quantification of uncertainty.
Why does GiveWell not provide lower and upper estimates for the cost-effectiveness of its top charities?
This doesn't answer the question, but I would like to quickly mention a few benefits of having confidence/credible intervals, or of otherwise quantifying uncertainty. All of these comments are fairly general; they are not specific criticisms of GiveWell's work.
In direct response to Hazelfire's comment: I think that even if the uncertainty spans only one order of magnitude (he mentioned 2-3, which seems reasonable to me), it could have a really large effect on resource allocation. The bar for funding is currently 8x relative to GiveDirectly, IIRC, which is roughly one order of magnitude, so gaining a better understanding of the uncertainty could be really important. For instance, we could learn that some interventions which are currently above the bar are not very clearly so, whereas other interventions which seem to be just under the bar could turn out to be fairly certain, and thus perhaps a very safe bet.
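To make this concrete, here is a minimal Monte Carlo sketch of the point above. All numbers are made up for illustration (the 8x bar is from the comment; the point estimates, the lognormal uncertainty model, and the 90%-interval span are assumptions, not GiveWell's actual estimates or methodology):

```python
import math
import random

random.seed(0)

BAR = 8.0  # assumed funding bar, in multiples of GiveDirectly's cost-effectiveness

def prob_above_bar(median, span_orders, n=100_000):
    """Estimate the probability that true cost-effectiveness clears the bar,
    modelling uncertainty as lognormal around the median, with the central
    90% interval spanning `span_orders` orders of magnitude (an assumption)."""
    # A central 90% interval covers ~3.29 standard deviations in log10-space.
    sigma = span_orders / 3.29
    hits = sum(10 ** random.gauss(math.log10(median), sigma) > BAR
               for _ in range(n))
    return hits / n

# Hypothetical intervention A: point estimate 10x, i.e. above the bar.
# Hypothetical intervention B: point estimate 7x, i.e. just below the bar.
# Both carry one order of magnitude of uncertainty.
p_a = prob_above_bar(10.0, span_orders=1.0)
p_b = prob_above_bar(7.0, span_orders=1.0)
print(f"P(A clears the bar) ~ {p_a:.2f}")
print(f"P(B clears the bar) ~ {p_b:.2f}")
```

With these assumed numbers, the intervention whose point estimate sits above the bar clears it only around 60% of the time, while the one nominally below the bar still clears it roughly 40% of the time: the point estimates alone overstate how different the two options are.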
I think that all of these effects could have a large influence on GiveWell's recommendations, donors' choices, and future research, and could directly lead to more accurate point estimates (an improvement which could potentially be fairly big).
Thanks for the feedback!
I do think further quantifying the uncertainty would be valuable. That being said, for GiveWell's top charities, it seems that studying and including factors which are currently not being modelled is more important than quantifying the uncertainty of the factors which are already modelled. For example, I think the effect on population size remains largely understudied.