Given 2 distributions X1 and X2 which are independent estimates of the distribution X, this can be estimated with the inverse-variance method from:
- E(X) = (E(X1)/V(X1) + E(X2)/V(X2)) / (1/V(X1) + 1/V(X2)), and V(X) = 1 / (1/V(X1) + 1/V(X2)).
Under which conditions is this a good approach? For example, for which types of distributions? These questions might be relevant for determining:
- A posterior distribution based on distributions for the prior and estimate.
- A distribution which combines estimates of different theories.
Some notes:
- The inverse-variance method minimises the variance of a weighted mean of X1 and X2.
- Calculating E(X) and V(X) according to the above formulas would result in a mean and variance equal to those derived in this analysis from Dario Amodei, which explains how to combine X1 and X2 following a Bayesian approach if these follow normal distributions.
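To illustrate the note above, here is a minimal sketch (with made-up numbers) showing that the inverse-variance combination and the Bayesian normal-normal update produce the same mean and variance, since the two formulas are algebraically identical:

```python
# Inverse-variance combination of two independent estimates of X.
# mu1, v1, mu2, v2 are illustrative numbers, not from the post.
mu1, v1 = 1.0, 4.0  # E(X1), V(X1)
mu2, v2 = 3.0, 1.0  # E(X2), V(X2)

# Precision-weighted mean and combined variance.
w1, w2 = 1 / v1, 1 / v2
mean_combined = (w1 * mu1 + w2 * mu2) / (w1 + w2)  # → 2.6
var_combined = 1 / (w1 + w2)                       # → 0.8

# Bayesian update for normals (normal prior mu1, v1; normal
# likelihood mu2, v2 with known variance) — same expressions,
# so the results coincide exactly.
posterior_mean = (mu1 / v1 + mu2 / v2) / (1 / v1 + 1 / v2)
posterior_var = 1 / (1 / v1 + 1 / v2)
```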
I was thinking about cases in which X1 and X2 are non-linear functions of arrays of Monte Carlo samples generated from distributions of different types (e.g. loguniform and lognormal). To calculate E(X1), I can simply compute the mean of the elements of X1. I was looking for a similar simple formula to combine X1 and X2, without having to work with the original distributions used to compute X1 and X2.
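A sketch of the sample-based shortcut I have in mind (NumPy assumed; the loguniform and lognormal inputs and the non-linear functions are hypothetical placeholders, not from any particular model):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical example: X1 is a non-linear function of loguniform
# samples, X2 is derived from lognormal samples.
u = np.exp(rng.uniform(np.log(0.1), np.log(10), n))  # loguniform on [0.1, 10]
x1 = u ** 0.5                                        # arbitrary non-linear function
x2 = rng.lognormal(mean=0.0, sigma=0.5, size=n) + 1  # another arbitrary estimate

# Work only with the sample arrays, never the original distributions.
m1, v1 = x1.mean(), x1.var()
m2, v2 = x2.mean(), x2.var()

# Inverse-variance combination of the two estimates.
w1, w2 = 1 / v1, 1 / v2
mean_combined = (w1 * m1 + w2 * m2) / (w1 + w2)
var_combined = 1 / (w1 + w2)
```

By construction the combined mean lies between the two sample means, and the combined variance is smaller than either input variance.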
A concrete simple example would be combining the following: