This is a linkpost for https://confusopoly.com/2019/04/03/the-optimizers-curse-wrong-way-reductions/.
Summary
I spent about two and a half years as a research analyst at GiveWell. For most of my time there, I was the point person on GiveWell’s main cost-effectiveness analyses. I’ve come to believe there are serious, underappreciated issues with the methods the effective altruism (EA) community at large uses to prioritize causes and programs. While effective altruists approach prioritization in a number of different ways, most approaches involve (a) roughly estimating the possible impacts funding opportunities could have and (b) assessing the probability that possible impacts will be realized if an opportunity is funded.
I discuss the phenomenon of the optimizer’s curse: when assessments of activities’ impacts are uncertain, engaging in the activities that look most promising will tend to have a smaller impact than anticipated. I argue that the optimizer’s curse should be extremely concerning when prioritizing among funding opportunities that involve substantial, poorly understood uncertainty. I further argue that proposed Bayesian approaches to avoiding the optimizer’s curse are often unrealistic. I maintain that it is a mistake to try to understand all uncertainty in terms of precise probability estimates.
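A minimal simulation sketch of that selection effect (made-up numbers): many opportunities have the same true impact, we observe noisy but unbiased estimates of each, and we always fund whichever looks best. The funded option's estimate then systematically overstates its true impact.

```python
import numpy as np

rng = np.random.default_rng(0)

n_options = 20        # candidate funding opportunities
true_impact = 10.0    # every option has exactly the same true impact
noise_sd = 3.0        # uncertainty in our impact estimates
n_trials = 10_000

overestimates = []
for _ in range(n_trials):
    # Unbiased but noisy estimates of each option's impact
    estimates = true_impact + noise_sd * rng.standard_normal(n_options)
    best_looking = estimates.max()        # we fund whatever looks best
    overestimates.append(best_looking - true_impact)

# Each estimate is unbiased on its own; the selected one is not.
print(np.mean(overestimates))   # about +5.6 here, not 0
```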
I go into a lot more detail in the full post.
Well, it does not change the ordering of options. You're kind of doing a wrong-way reduction here: you're taking the question of which project I should support and "reducing" it to literal quantitative estimation of effectiveness. The optimizer's curse only matters when comparing better-understood projects to worse-understood projects, but you are talking about "prioritizing among funding opportunities that involve substantial, poorly understood uncertainty".
We can specify a prior distribution.
Well, no, but it's better if you do. That Deutsch quote seems to say that it could allow people to take bad reasons and overstate them; that sounds like a problem with thinking in general. And there is no reason to assume that probabilistic decision-makers will overestimate rather than underestimate. There have been many times when I had a vague, scarcely grounded prejudice or suspicion based on personal ignorance, and deeper analysis of reliable sources showed that I was correct and underconfident. If you think your vague suspicions aren't useful, then just don't trust them! Every system of thinking is going to bottom out in "be rational, don't be irrational" at some point, so this is not a problem with probabilism in particular.
The reason it's better is that it allows for greater rigor and accuracy. For instance, look at how this post revolves around the optimizer's curse. Here's a question: how are you going to adjust for the optimizer's curse if you don't use probability (implicitly or explicitly)? And if people weren't using probabilistic decision theory, no one would have discovered the optimizer's curse in the first place!
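And the usual probabilistic adjustment for the curse isn't mysterious: shrink each noisy estimate toward a prior before comparing. A rough sketch with made-up numbers, assuming normally distributed estimates and prior:

```python
def posterior_mean(estimate, est_sd, prior_mean, prior_sd):
    """Normal-normal shrinkage: weight the naive estimate by its precision."""
    w = prior_sd**2 / (prior_sd**2 + est_sd**2)
    return prior_mean + w * (estimate - prior_mean)

# Hypothetical naive cost-effectiveness estimates (same units for both)
naive = {"well_studied_charity": 12.0, "speculative_charity": 30.0}
sds   = {"well_studied_charity": 2.0,  "speculative_charity": 20.0}

prior_mean, prior_sd = 5.0, 5.0   # what typical opportunities look like

for name, est in naive.items():
    print(name, round(posterior_mean(est, sds[name], prior_mean, prior_sd), 1))

# well_studied_charity 11.0  -- barely shrunk
# speculative_charity   6.5  -- shrunk hard; it no longer looks best
# Note the adjustment only changes the ordering because the two options'
# uncertainties differ; with equal uncertainty the ranking is preserved.
```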
Hey! I didn't consent to being included in your post!!!
Here's what it means, formally: given that I have an equal desire to be right about the existence of God and the nonexistence of God, and given some basic assumptions about my money and my desire for money, I would accept a bet at odds of at most 50:1 that an all-powerful God exists.
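Spelled out, the arithmetic behind that odds figure is just an expected-value condition (my working, staking one unit at payout odds of a:1):

\[
\underbrace{p \cdot a}_{\text{payout if God exists}} \;-\; \underbrace{(1 - p)}_{\text{stake lost otherwise}} \;\ge\; 0
\quad\Longleftrightarrow\quad
p \;\ge\; \frac{1}{a + 1},
\]

so being willing to take odds of at most \(a = 50\):1 corresponds to a credence of at least \(1/51 \approx 2\%\).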
But in Bayesian decision theory, they aren't on the same footing: they have very different levels of robustness. Poorly grounded probabilities are less robust, and that matters for how readily we update away from them. Is the notion of robustness inadequate for solving some problem here? In the Norton paper that you cite later on this point, I ctrl-F'd for "robust" and found nothing.
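To make "robustness" concrete (a toy example with made-up numbers, not something drawn from Norton): two agents can report the same 50% credence while having very different amounts of evidence behind it, and the poorly grounded one should move much further on the same new data.

```python
# Two agents both report a 50% credence, with very different robustness.
# Beta(a, b) prior: mean = a / (a + b); the sum a + b tracks prior evidence.
well_grounded   = (50, 50)   # 0.5 backed by a lot of evidence
poorly_grounded = (1, 1)     # 0.5 as a near-uniform "no idea"

successes, failures = 8, 2   # the same new evidence, shown to both agents

for label, (a, b) in [("well-grounded", well_grounded),
                      ("poorly grounded", poorly_grounded)]:
    post_mean = (a + successes) / (a + b + successes + failures)
    print(label, round(post_mean, 2))

# well-grounded   0.53  -- barely moves
# poorly grounded 0.75  -- updates far away from 0.5 on the same data
```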
All of your suggestions make perfect sense under standard Bayesian probability and decision theory. As stated, they are kind of platitudinous. Moreover, it's not clear to me that abandoning these principles in favor of some deeper concept of ignorance actually helps motivate any of your recommendations. Why, exactly, is it important that I embrace model skepticism, for instance, just because I have decided to abandon probabilities? Does abandoning probabilities reduce the variance in the usefulness of different models? It can't, actually, because without probabilities that variance is going to be undefined.
In practice, I haven't done things with multiple quantitative models because (a) models are tough to build, and (b) a good model accommodates all kinds of uncertainty anyway. It's never been the case that I've found some new information or ideas, decided to update my model, and then realized "uh oh, I can't do this in this model." I can always just add new calculations for the new considerations; the model becomes a bit kludgy but still seems more accurate. So yes, this is good in theory, but the practical value seems very limited. To be sure, I haven't really tried it yet.
If we want to test the accuracy of a model, we need to test a statistically significant number of the things the model predicts. It's not sufficient for us to donate to AMF, see that AMF seems to work pretty well (or not), and then judge GiveWell accordingly. We need to see whether GiveWell's ordering of multiple charities holds.
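Concretely, a check of the ordering could look something like this (hypothetical numbers; rank correlation is just one plausible yardstick):

```python
from scipy.stats import spearmanr

# Hypothetical data: predicted cost-effectiveness ranks for six charities
# vs. impact measured after the fact (e.g. outcome per $1k donated).
predicted_rank  = [1, 2, 3, 4, 5, 6]                # 1 = predicted best
measured_impact = [9.1, 7.5, 8.0, 4.2, 3.9, 4.5]    # made-up outcomes

rho, p_value = spearmanr(predicted_rank, measured_impact)
print(rho, p_value)

# A strongly negative rho (low rank number going with high measured impact),
# across enough charities to be statistically meaningful, is evidence the
# ordering holds; a single data point like AMF tells us almost nothing.
```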
Testing works well in some contexts. In others it's just unrealistic.
Improving social capacity tends to work better when society is trusted to actually do the right thing.
But these are exactly the things that you are objecting to. Where do you think probability estimates of deeply uncertain things come from? If there's some disagreement here about the actual reliability of things like intuition and tradition, it hasn't been made explicit. Instead, you've just said that such things should not be expressed in the form of quantitative probabilities.
I interned for a VC, albeit a small and unknown one. Sure, they don't do Bayesian calculations, if you want to be really precise. But they make extensive use of quantitative estimates all the same. If anything, they are cruder than ...