Given 2 distributions $X_1$ and $X_2$ which are independent estimates of the distribution $X$, this can be estimated with the inverse-variance method from:
- $\mu = \frac{\mu_1/\sigma_1^2 + \mu_2/\sigma_2^2}{1/\sigma_1^2 + 1/\sigma_2^2}$ and $\sigma^2 = \left(\frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2}\right)^{-1}$, where $\mu_i$ and $\sigma_i^2$ are the mean and variance of $X_i$.
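A minimal sketch of this combination (the function name is my own):

```python
def inverse_variance_combine(mu1, var1, mu2, var2):
    """Combine two independent estimates (mu1, var1) and (mu2, var2)
    of the same quantity by inverse-variance weighting."""
    w1, w2 = 1.0 / var1, 1.0 / var2   # weight each estimate by its precision
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    var = 1.0 / (w1 + w2)             # combined variance is never larger than either input
    return mu, var

# With equal variances the result is the simple average of the means.
mu, var = inverse_variance_combine(10.0, 4.0, 14.0, 4.0)
print(mu, var)  # 12.0 2.0
```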
Under which conditions is this a good approach? For example, for which types of distributions? These questions might be relevant for determining:
- A posterior distribution based on distributions for the prior and estimate.
- A distribution which combines estimates of different theories.
Some notes:
- The inverse-variance method minimises the variance of a weighted mean of $X_1$ and $X_2$.
- Calculating $\mu$ and $\sigma^2$ according to the above formula would result in a mean and variance equal to those derived in this analysis from Dario Amodei, which explains how to combine $X_1$ and $X_2$ following a Bayesian approach if these follow normal distributions.
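The equivalence in the last note (assuming normal X1 and X2) can be checked numerically: multiply the two PDFs pointwise, renormalise, and compare the resulting moments with the inverse-variance formula. The grid and parameter values below are my own choices.

```python
import numpy as np

mu1, var1 = 10.0, 4.0   # moments of X1
mu2, var2 = 13.0, 9.0   # moments of X2

x = np.linspace(-50.0, 70.0, 200001)

def normal_pdf(x, mu, var):
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

# Bayesian combination for normals: pointwise product of densities, renormalised.
p = normal_pdf(x, mu1, var1) * normal_pdf(x, mu2, var2)
p /= p.sum()

post_mean = (x * p).sum()
post_var = ((x - post_mean) ** 2 * p).sum()

# Inverse-variance prediction.
w1, w2 = 1 / var1, 1 / var2
iv_mean = (w1 * mu1 + w2 * mu2) / (w1 + w2)
iv_var = 1 / (w1 + w2)

print(post_mean, iv_mean)  # both ≈ 10.92
print(post_var, iv_var)    # both ≈ 2.77
```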
If you assume both X1 and X2 are normal, then the only difference between them comes from their moments, so you can use the inverse-variance formula. But that leans directly on the formula for the product of normal distributions: the product of two normal PDFs is, up to normalisation, another normal PDF with exactly the inverse-variance mean and variance. The product of two arbitrary PDFs does not have such a clean form. So while I don't have a rigorous argument for this, I would be shocked if you could do the same for any two PDFs X1 and X2 with no change to the formula.
I do not know if this is really necessary for the uses you name, though. Bayes' rule determines the posterior distribution regardless of whether it follows an inverse-variance formula or not.
I will try to illustrate what I mean with an example:
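One concrete version of such an example (my own sketch, using exponential rather than normal distributions): combine two exponential PDFs by pointwise product and renormalisation, and compare the resulting mean with the inverse-variance prediction. The two disagree.

```python
import numpy as np

lam1, lam2 = 1.0, 2.0   # Exp(lam) has mean 1/lam and variance 1/lam**2

x = np.linspace(0.0, 50.0, 500001)

# Product of the two exponential densities, renormalised on the grid.
# Analytically this is proportional to Exp(lam1 + lam2), with mean 1/(lam1 + lam2).
p = lam1 * np.exp(-lam1 * x) * lam2 * np.exp(-lam2 * x)
p /= p.sum()
product_mean = (x * p).sum()  # ≈ 1/3

# Inverse-variance prediction from the moments of the two exponentials.
mu1, var1 = 1 / lam1, 1 / lam1 ** 2
mu2, var2 = 1 / lam2, 1 / lam2 ** 2
w1, w2 = 1 / var1, 1 / var2
iv_mean = (w1 * mu1 + w2 * mu2) / (w1 + w2)  # = 0.6

print(product_mean, iv_mean)  # ≈ 0.333 vs 0.6: the formula no longer matches
```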
Meanwhile, I have realised the inverse-variance method minimises the variance of a weighted mean of X1 and X2 (and have updated the question above to r...