Meet your new EA elevator pitch.
I have provided a massive win for EA marketing, and if Big EA doesn't retroactively fund me for it I'm gonna quit donating and try to get my kidney back.
When people ask me about EA, or many other aspects of my worldview and behavior, I'm gonna say "I'm all about the three M's". To me, the three M's are the most resilient and useful load-bearing elements of our worldview. I expect discussions based on this elevator pitch to add more value faster to the average passerby than anything else in 80k or CEA introductory literature, and it's not even close.
Pardon my "we/our", my more heterodox friends.
Measurement
If you know almost nothing, almost anything will tell you something. - Douglas Hubbard
By default, we accept the risks (or even costs) of the McNamara fallacy because we are overall uncompelled by the alternatives. In the physical sciences, measuring instruments come with datasheets that bound the expected error, and may even suggest a probability distribution that you can expect your errors to form. We would prefer it if this aspect of datasheets were established or possible for every quantity we care about. When we elicit numbers from the world, we do not assume that we're compressing what matters well, but we aim to subject our expected errors to appropriate rigor.
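As a toy illustration of what that rigor might look like, here's a minimal Python sketch with a made-up "datasheet": the reported value and the error spread below are both hypothetical, and the normal error model is just an assumption for the example.

```python
import random

# Hypothetical "datasheet" for an estimate, in the spirit of an instrument's spec sheet.
# Both numbers are made up for illustration.
MEASURED_VALUE = 42.0  # what the instrument (or survey, or model) reports
ERROR_STDDEV = 1.5     # datasheet-style error model: errors ~ Normal(0, 1.5)

def simulate_true_value() -> float:
    """Draw one plausible 'true' value given the reported value and the error model."""
    return MEASURED_VALUE + random.gauss(0.0, ERROR_STDDEV)

# Monte Carlo: look at the spread of plausible true values
# rather than trusting the point estimate alone.
samples = sorted(simulate_true_value() for _ in range(100_000))
low, high = samples[2_500], samples[97_500]  # rough 95% interval
print(f"point estimate: {MEASURED_VALUE}, ~95% interval: ({low:.2f}, {high:.2f})")
```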
Multiplication
That really sums up our product. - Dustin Moskovitz
We believe that multiplication is a morally relevant and justified operation. At the end of the day, we do not think that multiplication is subject to the composition fallacy. We estimate the probability of a bednet preventing a counterfactually fatal case of malaria, and we look up the price of a bednet, and it is only by the joys of multiplication that we provide an estimate for the cost of saving a life.
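For concreteness, here's a minimal sketch of that arithmetic in Python, with made-up illustrative numbers (nothing below is GiveWell's actual estimate):

```python
# Hypothetical numbers purely to illustrate the arithmetic.
price_per_bednet = 5.00           # USD per net, illustrative
p_life_saved_per_bednet = 0.0025  # P(a net prevents a counterfactually fatal case), illustrative

# Expected lives saved per dollar, and its reciprocal: the cost to save a life.
lives_saved_per_dollar = p_life_saved_per_bednet / price_per_bednet
cost_per_life_saved = price_per_bednet / p_life_saved_per_bednet

print(f"expected lives saved per dollar: {lives_saved_per_dollar:.6f}")
print(f"estimated cost to save a life:   ${cost_per_life_saved:,.0f}")
```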
This one's a little cheeky, because expected value theory relies just as much on addition as it does on multiplication.
Maximization
How can less be more? It's impossible. More is more. - Yngwie Malmsteen
By default, we prefer views that admit some notion of optimization, and are not overall deeply compelled by disciplines or traditions that emphasize the failure modes or blind spots of optimization. We are vigilant about Goodhart, and we may stumble into the low-slack mistake class from time to time. But at the end of the day, we do not think these considerations are quite damning enough to support an all-things-considered argument against maximization.
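For the Goodhart worry specifically, here's a toy sketch of the peril, with entirely made-up functions, just to show how hard-maximizing a proxy can come apart from the true goal:

```python
# Toy Goodhart illustration (hypothetical setup):
# the true value of an action x, and an easy-to-measure proxy that tracks it only imperfectly.
def true_value(x: float) -> float:
    return x - 0.1 * x**2  # diminishing, then negative, returns

def proxy_metric(x: float) -> float:
    return x               # the proxy just keeps going up

candidates = [i * 0.5 for i in range(41)]          # x in [0, 20]
best_by_proxy = max(candidates, key=proxy_metric)  # hard maximization on the proxy
best_by_truth = max(candidates, key=true_value)

print(f"proxy-maximizing choice: x={best_by_proxy}, true value {true_value(best_by_proxy):.1f}")
print(f"truth-maximizing choice: x={best_by_truth}, true value {true_value(best_by_truth):.1f}")
```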
Having informally pitched EA in many different ways to many different people, I've noticed that the strongest counter-reaction I tend to get is re: maximization (except when I talk to engineers). So nowadays I replace "maximize" with "more", and proactively talk about scenarios where maximization is perilous, where you can be misled by modeling when you're maximization-oriented, etc. (Actually I tend to personalize my pitches, but that's not scalable, of course.)
You say "I expect discussions based on this elevator pitch to add more value faster to the average passerby", which I interpret as meaning you haven't yet tried this pitch elsewhere, so I'd be curious to hear an update on how that goes. If it works, I might incorporate some elements :)
It's experience-driven. I haven't done the full "it's a catchy alliteration!" on anybody really, but this is based on speed at arriving at cruxes. One M at a time, also, is the move.
I think the intuition that Goodhart (and other things) washes out all the value from the three M's, even though it's broadly correct/useful, tends toward nihilism/defeatism or not trying at all if it isn't challenged or corrected for.
I like this. But even if I didn't appreciate the simplicity of it, I think I would applaud you for this line:
🤣