Sometimes people seek to offset their harmful behaviours. The counterfactual impact of donations is often used in these offsetting calculations. This seems mistaken.
Assume the following situation:
A $1 donation to an animal product reduction charity spares one animal from being born into factory farming.
Alice, Charles, and Mike cooperate on this project, and the participation of each is indispensable for the outcome. So each of them has a counterfactual impact of 1 animal.
If each of them assumed they had offset one previous instance of animal product consumption through this project, the single animal spared would be counted three times, as the sketch below illustrates. For this reason, counterfactual values of donations shouldn't be used in offsetting calculations.
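To make the triple counting concrete, here is a minimal Python sketch. The `value` function is just a stand-in matching the example above: the project spares one animal only if all three participants take part.

```python
players = ["Alice", "Charles", "Mike"]

def value(coalition):
    """Animals spared by a coalition: 1 only if everyone participates."""
    return 1 if set(coalition) == set(players) else 0

def counterfactual(player):
    """Value with the player minus value without them."""
    others = [p for p in players if p != player]
    return value(players) - value(others)

credits = {p: counterfactual(p) for p in players}
print(credits)                # {'Alice': 1, 'Charles': 1, 'Mike': 1}
print(sum(credits.values()))  # 3 animals of credit for 1 animal spared
```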
A better but still unsatisfactory approach would be to look at Shapley values. Here is a case where the Shapley value still fails:
Two people cooperate on a project to spare one animal from being born. The participation of either one alone is sufficient for the project to succeed. The counterfactual value of each participant is 0, while the Shapley value of each is 0.5.
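For comparison, here is a minimal brute-force sketch of that Shapley calculation, averaging each player's marginal contribution over all orders in which the players could join (the player names are placeholders):

```python
from itertools import permutations

players = ["A", "B"]

def value(coalition):
    """Animals spared: 1 as soon as at least one person participates."""
    return 1 if len(coalition) >= 1 else 0

def shapley(player):
    """Average marginal contribution of `player` over all join orders."""
    total = 0.0
    orders = list(permutations(players))
    for order in orders:
        joined = []
        for p in order:
            before = value(joined)
            joined.append(p)
            if p == player:
                total += value(joined) - before
    return total / len(orders)

for p in players:
    cf = value(players) - value([q for q in players if q != p])
    print(p, "counterfactual:", cf, "Shapley:", shapley(p))
    # each player: counterfactual 0, Shapley 0.5
```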
Maybe min(Shapley value, counterfactual value) would be a better benchmark for offsetting, but I'm not sure of this.
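Reusing `value` and `shapley` from the sketch above, the proposed benchmark is just the pointwise minimum:

```python
def offset_credit(player):
    """min(Shapley value, counterfactual value) for one player."""
    cf = value(players) - value([q for q in players if q != player])
    return min(shapley(player), cf)

# Two-person redundant case: min(0.5, 0) = 0 for each participant.
# Three-person indispensable case: min(1/3, 1) = 1/3 each, summing to 1,
# which at least avoids the triple counting.
```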
How much difference does this make?
Many effective charities tend to do institutional work, and institutional work often involves a lot of people. In animal advocacy, welfare policies require mass support from the public: a petition easily gets more than 100,000 signatures, and 7.5 million people voted for Prop 12 in California.
However, any specific supporter from the public is less critical than the donor. Many projects wouldn't start at all without donor support, whereas Prop 12 would still have passed with one fewer vote.
Nonetheless, quite a lot of veto players are involved in institutional animal welfare work. Assuming there are 8 distinct individuals or coalitions with the power to kill a typical animal welfare project, the Shapley value might be an order of magnitude lower than the counterfactual value.
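As a rough illustration (the figure of 8 veto players is the assumption above): in a game where all n players are indispensable, each player's Shapley value is 1/n by symmetry, while each counterfactual value is 1.

```python
# Rough illustration, assuming 8 indispensable veto players as above.
n_veto_players = 8
counterfactual_value = 1.0            # the project fails without any one of them
shapley_value = 1.0 / n_veto_players  # symmetric split of the total credit

print(shapley_value)  # 0.125 -- roughly an order of magnitude below 1.0
```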
I am wondering if assigning "moral credit" for offset purposes is too complex to do with an algorithm and instead requires context-specific application of judgment. A few possible examples:
Motivated reasoning is always a risk, and any moral-credit-granting analysis is more likely to be underinclusive (and thus to over-grant the available moral credit to the influences that were identified) than the reverse. In some or even many cases, it may be necessary to adjust the required offset upward beyond what even min(counterfactual value, Shapley value) would suggest, to account for these factors.
Thanks for this comment. It felt awkward to include all veto players in the Shapley value calculation while writing the post, and now I can see why. For offsetting, we're interested in making every single individual weakly better off in expectation compared to the counterfactual where you don't exist (or don't move your body, etc.), so that no one can complain about your existence. So instances of doing harm can only be offset by doing good. Meanwhile, the Shapley value doesn't distinguish between doing and allowing, so it assigns credit to everyone who could have prevented an outcome, even if they haven't done any good.