This is a short post on an idea from statistics and its relevance to EA. The initial post highlighted the fact that expectations cannot always be sensibly thought of as representative values from distributions.
Probability served three ways
Suppose an event is reported to have some probability p. We’re all aware at this point that, in practice, such a number comes from some fitted model, even if the model is fitted inside someone’s head. This means it comes with uncertainty. However, it can be difficult to visualize what uncertainty in a probability means.
Luckily, we can also model probabilities directly. A sample from a Beta distribution can be used as the parameter of a Bernoulli coin toss. The following three Beta distributions all have the same expectation: 0.5.

The interpretation here is:
1. The probability is either very high or very low - we don’t know which.
2. The probability is uniformly distributed - it could be anywhere.
3. We’re fairly sure the probability is right in the middle.
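
To make the three flavours concrete, here is a minimal sketch of the two-stage model in Python. The specific parameters - Beta(0.1, 0.1), Beta(1, 1) and Beta(20, 20) - are my own assumptions, chosen to match the three shapes described above while sharing the same expectation; the actual parameters aren't stated in the text.

```python
# Sketch of the three flavours of "the same probability".
# Parameters are assumed (not given in the post); each Beta(a, a) is
# symmetric, so all three share the expectation 0.5.
from scipy import stats

flavours = {
    "1. very high or very low": stats.beta(0.1, 0.1),
    "2. could be anywhere":     stats.beta(1.0, 1.0),
    "3. right in the middle":   stats.beta(20.0, 20.0),
}

for name, dist in flavours.items():
    # Two-stage model: draw p from the Beta, then toss a Bernoulli coin with it.
    p = dist.rvs()
    toss = stats.bernoulli(p).rvs()
    print(f"{name}: E[p] = {dist.mean():.2f}, sampled p = {p:.3f}, toss = {toss}")
```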
Suppose we encounter in some discussion a point estimate of a probability - for example, p = 0.42. Perhaps it is described as an expectation, or perhaps the idea of expectation isn't stated explicitly - either way, no other uncertainty information is given. It is natural to wonder: which flavour of p = 0.42 are we talking about?
Implication for planning
Suppose a highly transmissible new disease infallibly kills some subset of humans. Or malevolent aliens. Or whatever is salient for the reader. Interpret the p in our example as the probability that an arbitrary human is in the affected group a year from now.
- Under distribution 1, there is roughly a 32% probability that more than 99% of people are affected.
- Under distribution 3, there is roughly a 47% probability that the proportion of the population affected is between 45% and 55%.
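
Those two figures can be reproduced directly from the Beta tail probabilities. Again, the parameters below are my assumed stand-ins for distributions 1 and 3, not values given in the post.

```python
# Tail probabilities under the assumed parameters for distributions 1 and 3.
from scipy import stats

dist1 = stats.beta(0.1, 0.1)    # assumed "very high or very low" flavour
dist3 = stats.beta(20.0, 20.0)  # assumed "right in the middle" flavour

# Probability that more than 99% of people are affected (distribution 1): ~0.32
print(dist1.sf(0.99))
# Probability that 45%-55% of people are affected (distribution 3): ~0.47
print(dist3.cdf(0.55) - dist3.cdf(0.45))
```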
I’m going to baldly assert that knowing which distribution we face should alter our response to it. Despite the coincidence of expectations. Which distribution represents the worst x-risk? Which would it be easiest to persuade people to take action on?
Does it even make sense to put a probability distribution on a probability? Now that is a big philosophical question.
One answer is that there is no difference between 'orders' of random variables in Bayesian statistics. You've either observed something or you haven't. If you haven't, then you figure out what distribution the variable has.
The relationship between that distribution and the real world is a matter of how assiduously you apply the scientific method when constructing the model.
A probability reported without a distribution, e.g. p=0.42, isn't the same as a probability that has no distribution. It could be taken as the assertion that the distribution on the probability is a delta function at 0.42 - which is to say the reporter is claiming to be perfectly certain what the probability is.
There is no end to how meta we could go, but the utility of going one order up here is to see that it can actually flip our preferences.
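
One way to see the flip concretely is to attach a loss function and compare expected losses across the three flavours. The threshold loss below - "catastrophe only if more than 99% of people are affected" - is an assumption invented for illustration, as are the Beta parameters. Under a loss that is linear in the proportion affected, the three distributions are indistinguishable (same expectation); under the threshold loss, distribution 1 is by far the worst.

```python
# Same expectation, very different expected loss under a nonlinear loss.
# Both the Beta parameters and the threshold loss are illustrative assumptions.
from scipy import stats

flavours = {
    "1. very high or very low": stats.beta(0.1, 0.1),
    "2. could be anywhere":     stats.beta(1.0, 1.0),
    "3. right in the middle":   stats.beta(20.0, 20.0),
}

for name, dist in flavours.items():
    linear_loss = dist.mean()       # loss proportional to the fraction affected
    threshold_loss = dist.sf(0.99)  # loss only if more than 99% are affected
    print(f"{name}: linear loss = {linear_loss:.2f}, "
          f"P(catastrophe) = {threshold_loss:.3f}")
```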