Ah, that's why it's for draft amnesty week ;-) Somewhere inside this dense post there is a simpler one waiting to get out, but I figured this was worth posting. Right now it's in the form of "my own calculations for myself", and it's neither that comprehensible nor the model of good transdisciplinary communication to which I aspire. I'm trying to collaborate with a colleague of mine to write that shorter version. (And to improve the app. Thanks for the bug report @Henry Stanley 🔸 !)
Yes, I sidestepped the details of relative valuation entirely here by collapsing the calculation of “impact” into “donation-equivalent dollars.” That move smuggles in multiple subjective factors — specifically, it incorporates a complex impact model and a private valuation of impacts. We’ll all have different “expected impacts,” insofar as anyone thinks in those terms, because we each have different models of what will happen in the counterfactual paths, not to mention differing valuations of those outcomes.
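To make that move concrete, here's a minimal sketch of the collapse, assuming a single benchmark charity as the unit of account; every number, name, and the benchmark itself are illustrative placeholders, not values from my actual model:

```python
# Toy sketch: collapsing a subjective impact model into donation-equivalent
# dollars. Every number here is an illustrative placeholder, not an estimate.

# Private valuation: how many "units of good" I think a marginal dollar to my
# benchmark charity buys. This is exactly the kind of subjective factor that
# gets smuggled in by the collapse.
GOOD_PER_DOLLAR = 1.0 / 5000  # e.g. one unit of good per $5,000 donated

def donation_equivalent(expected_good_units: float) -> float:
    """The donation to the benchmark charity I'd value equally to this impact."""
    return expected_good_units / GOOD_PER_DOLLAR

# Subjective expected impacts (in "units of good") for two counterfactual paths.
paths = {"direct work": 4.0, "earning to give": 2.5}

for name, units in paths.items():
    print(f"{name}: ~${donation_equivalent(units):,.0f} donation-equivalent")
```

Two people running this with their own `GOOD_PER_DOLLAR` and their own expected units will get mutually incomparable dollar figures, which is rather the point.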
One major thing I took away from researching this is that I don’t think enough about substitutability when planning my career (“who else would do this?”), and I suppose part of that involves modelling comparative advantage. This holds even relative to my private risk/reward model. But thinking in these terms isn’t natural: my estimated impact in a cause area depends on how much difference I can make relative to others who might do it — which itself requires modelling the availability and willingness of others to do each thing.
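A minimal sketch of the kind of replacement model I mean (the function and all the numbers are hypothetical, just to show the structure):

```python
# Toy sketch of substitutability-adjusted impact. The point is that
# counterfactual impact is the *difference* you make over the next-best
# person, not your gross output. All numbers are made up.

def counterfactual_impact(my_output: float,
                          next_best_output: float,
                          p_filled_without_me: float) -> float:
    """Expected counterfactual impact of taking a role.

    If the role would likely be filled anyway, I only add the gap between my
    output and the replacement's; if it would go unfilled, I add my full output.
    """
    filled = p_filled_without_me * (my_output - next_best_output)
    unfilled = (1 - p_filled_without_me) * my_output
    return filled + unfilled

# Crowded role: strong replacement pool, almost certainly filled without me.
print(counterfactual_impact(10.0, 9.0, 0.95))  # ~1.45: mostly just the gap

# Neglected role: weak pool, often goes unfilled.
print(counterfactual_impact(10.0, 9.0, 0.20))  # ~8.2: mostly my full output
```

The availability-and-willingness modelling lives in `next_best_output` and `p_filled_without_me`, and that is exactly the part I neglect.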
Another, broader philosophical question worth unpacking is whether these impact areas are actually fungible. I lean toward the view that expected-value reasoning makes sense at the margins (ultimately, I have a constrained budget of labour and capital, and I must make a concrete decision about how to spend it, so if Bayes didn't exist, I'd be forced to invent him). But I don't think it's a given that these valuations can be taken seriously globally, even within a single individual. Perhaps animal welfare and AI safety involve fundamentally different moral systems and valuations?
Still, substitutability matters at the margins. If you move into AI safety instead of animal welfare, ideally that would free up someone else, whose skills better match animal welfare, to take the animal welfare role despite their own preference for AI safety. That isn't EtG per se, but it could still represent a globally welfare-improving trade in the "impact labour market".
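This is just the standard comparative-advantage arithmetic; a sketch with made-up impact scores:

```python
from itertools import permutations

# Toy sketch of a welfare-improving substitution trade in the "impact labour
# market". The people, causes, and scores are all hypothetical.
impact = {
    "alice": {"ai_safety": 8.0, "animal_welfare": 7.0},
    "bob":   {"ai_safety": 6.0, "animal_welfare": 2.0},
}
causes = ["ai_safety", "animal_welfare"]

# Evaluate both one-person-per-cause assignments.
for people in permutations(impact):
    total = sum(impact[person][cause] for person, cause in zip(people, causes))
    print(dict(zip(causes, people)), "->", total)

# alice on ai_safety, bob on animal_welfare -> 10.0
# alice on animal_welfare, bob on ai_safety -> 13.0
# Alice has the absolute advantage in both causes, yet total impact is higher
# when she takes animal welfare (her comparative advantage) and Bob, whose
# skills transfer badly to animal welfare, takes AI safety.
```

Preferences don't appear in the arithmetic at all, which is why the trade can be improving even when it cuts against what each person would choose alone.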
If we take that metaphor seriously, though, I doubt the market is very efficient. Do we make these substitution trades as much as we should? The labour market is already weird; the substitutability of enthusiasm and passion is inscrutable; and the transactions are costly. Would it be interesting or useful to make it more efficient somehow? Would we benefit from better mechanisms to coordinate on doing good — something beyond the coarse, low-bandwidth signal of job boards? What might that look like?
I guess I should flag that I'm up for collaborations, and that this post (including the code that generates the diagrams) can be edited on GitHub, so please feel free to dive in and improve it.