
I'm writing this post as a work-in-progress guide to thinking about different kinds of uncertainty in the context of building a cost-effectiveness analysis (CEA) model. I'm publishing it as-is to check whether it is generally understood and whether it seems important and relevant; if there's interest, I'll write a more complete and polished post on the topic.

When estimating a variable, we have both "epistemic" and "statistical" (or "aleatoric") uncertainty about its value. Statistical uncertainty can be thought of as inherent randomness (e.g. the number of heads in 10 coin flips). Epistemic uncertainty, by contrast, is due to our lack of knowledge (e.g. how many coin flips did people do in total yesterday?).

For example, in a randomized controlled trial (RCT) we try to estimate the effect of a treatment on some outcome variable. That effect is usually dependent on many particular factors, say: the financial and cultural aspects of the population involved, the time of day the treatment was administered, the prevalence of a specific disease in a particular location, etc. With unlimited knowledge, we could account for all of these factors and estimate the effect as a function of them. However, as this isn't possible, we can instead try to model the effect as a random variable with some statistical uncertainty.

If we conduct that RCT well, have a large sample size, and draw uniformly from our target population, then we can find a good fit for the distribution of the effect which can then be used to predict the effect of a future large-scale program over the same population. In this case, we have very little epistemic uncertainty about the effect, but the statistical uncertainty is still present and irreducible.

If we tried to apply the results of that RCT to a different population, we would have to account for the epistemic uncertainty as well. For example, if we conducted the RCT in a rural area of a developing country and wanted to apply the results to an urban area of the same country, we would have to make educated guesses about the effect of the treatment in the new population. This is epistemic uncertainty territory.

Relevance to CEAs

Guesstimate, Squiggle, Dagger, and similar programs work by running a Monte Carlo simulation: the computation is run many times, with each input variable sampled randomly from its given distribution. This gives us many samples for each variable in the computation, which together approximate its distribution. We care about these distributions, and not just their expected values, because they let us quantify and show the uncertainty involved in the computation. The uncertainty we want to express is generally the epistemic uncertainty about the expected cost-effectiveness.
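To make this concrete, here is a minimal sketch of the kind of computation these tools run, written in Python with NumPy. The variable names, distributions, and numbers are invented for illustration and don't come from any real CEA.

```python
# A minimal Monte Carlo CEA sketch: sample each uncertain input, push the
# samples through the model, and read off a distribution for the output.
import numpy as np

rng = np.random.default_rng(seed=0)
n_samples = 100_000

# Each uncertain input is sampled from the distribution we chose for it
# (the choices below are made up for illustration).
cost_per_person = rng.lognormal(mean=np.log(5.0), sigma=0.3, size=n_samples)  # dollars
effect_per_person = rng.normal(loc=0.02, scale=0.005, size=n_samples)         # outcome units

# The model itself is ordinary arithmetic, applied elementwise to the samples.
cost_effectiveness = effect_per_person / cost_per_person

# The samples approximate the output's distribution, so we can report
# an interval rather than just a point estimate.
print("mean:", cost_effectiveness.mean())
print("90% interval:", np.percentile(cost_effectiveness, [5, 95]))
```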

Examples:

  • If we think charities generally have a 60% chance of success, that is statistical uncertainty. We wouldn't want to model it as a Bernoulli distribution, but rather use the 60% directly as a constant multiplicative factor. Better yet, we could use a beta distribution instead of a constant to express how unsure we are about the exact chance of success (see the sketch after this list).
  • Say our proposed charity to promote regulating hat sizes in summer is only cost-effective if there are no existing regulations in place, but we haven't yet put in the time to check. In this case, we are epistemically uncertain and should use a Bernoulli distribution.
  • We find a paper reporting an RCT or a meta-analysis on an intervention of interest, with an average effect size of 2.3 and a 90% CI of [1.8, 2.9]. When we use that variable in our calculation, what should our uncertainty estimate be? (One rough option is sketched after this list.)
    • First, while the effect size itself is definitely a random variable that's statistically uncertain, we can think of the average effect as an epistemically unknown constant. So in our model, we should generally use the average effect rather than model how the effect is distributed across individuals.
    • Second, and a bit of a tangent, the definition of a confidence interval in frequentist statistics is weird. In that setting, we assume there is a true number, the actual average effect size, and we estimate it from random samples drawn from some underlying random process whose parameters we don't know. The confidence interval is computed from these samples by a procedure designed so that, if we ran it again and again on independent sets of samples, roughly 90% of the resulting intervals would include the true parameter.
      • This is different from how we usually think of our credible intervals - as expressing the belief that the true result has a 90% chance of being inside our interval. 
      • I'm not yet sure how best to handle this, but it seems like in many cases you can expect the authors to have arrived at a similar-looking credible interval.
    • Third, this doesn’t take into account external validity (and only partially addresses internal validity). I’m not yet sure about the best approach for representing external/internal validity in the model as epistemic uncertainty. 
  • Generally, we have epistemic uncertainty around summary statistics, which themselves are aggregations of statistical uncertainty.
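As a concrete illustration of the first and third examples above, here is a rough sketch in Python. All the specific numbers and distribution parameters (the 10-unit impact, the Beta(6, 4), the normal approximation to the reported CI) are assumptions made purely for illustration.

```python
# A sketch of two modelling choices from the examples above: how to treat a
# "60% chance of success", and how to turn a reported effect size with a
# 90% CI into an input distribution. All numbers are invented.
import numpy as np

rng = np.random.default_rng(seed=0)
n = 100_000
impact_if_successful = 10.0  # made-up impact units

# (a) Chance of success.
# Statistical uncertainty only: use 0.6 directly as a constant factor.
impact_constant = 0.6 * impact_if_successful

# Modelling the coin flip itself as a Bernoulli draw: samples are 0 or 10,
# which adds spread that does not reflect any uncertainty in our beliefs.
impact_bernoulli = rng.binomial(n=1, p=0.6, size=n) * impact_if_successful

# Epistemic uncertainty about the success probability itself: a Beta
# distribution centred on 0.6 (Beta(6, 4) is chosen purely for illustration).
impact_beta = rng.beta(a=6, b=4, size=n) * impact_if_successful

# (b) Reported average effect of 2.3 with a 90% CI of [1.8, 2.9].
# One rough option is to treat the unknown average effect as approximately
# normal, with the standard deviation chosen so the 90% interval matches.
sd = (2.9 - 1.8) / (2 * 1.645)
effect = rng.normal(loc=2.3, scale=sd, size=n)

print("constant:", impact_constant)
print("bernoulli mean/sd:", impact_bernoulli.mean(), impact_bernoulli.std())
print("beta mean/sd:", impact_beta.mean(), impact_beta.std())
print("effect 90% interval:", np.percentile(effect, [5, 95]))
```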

Relevant links

  • Wikipedia page
  • Paper on the topic in the context of ERAs and Bayesian networks - link.
  • Examinations of modeling uncertainty in CEAs that are interesting and have links to more fun stuff: 1, 2, 3, 4.

Comments

I think that such work on fundamental tools is very important for improving the EA toolkit - thank you Edo!

Executive summary: The post discusses different types of uncertainty in cost-effectiveness analyses (CEAs), specifically epistemic uncertainty due to lack of knowledge vs. statistical uncertainty inherent to the data.

Key points:

  1. Epistemic uncertainty arises when generalizing results to new contexts, while statistical uncertainty is irreducible randomness.
  2. Monte Carlo simulation is used in CEAs to quantify uncertainty by sampling input variables.
  3. Frequentist confidence intervals are defined differently from the credible intervals we use to express epistemic beliefs.
  4. Care is needed when using summary statistics like effect sizes in models, as they contain both types of uncertainty.
  5. Modeling external validity as epistemic uncertainty is an open challenge.
  6. Overall, we have epistemic uncertainty about summary statistics, which are themselves aggregations of statistical uncertainty.

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
