Thanks for writing this! I think this is a good proposal worth seriously considering and engaging with (although I'm undecided on whether I'd endorse it all-things-considered).
One other consideration is that you may want to effectively negate the benefits obtained through fraud, to show that "crime doesn't pay". That means decreasing each cause's budget by the amount that was granted/donated to it. Since FTX and associates were disproportionately funding longtermist interventions and presumably value them more than other causes, if we don't pay out disproportionately from our longtermist budget, FTX and associates get away with shifting the share and amount of funding towards longtermism, which is still a win for them. (Of course, this doesn't necessarily mean the share of longtermist funding from outside FTX should decrease overall, since there are also reasons to increase it now.)
Another consideration is that we should pay more than the benefits, to account for fraud that isn't caught.
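To make these two points concrete, here is a toy sketch of what "negate the benefits per cause, scaled up for fraud that isn't caught" could look like. All amounts, cause names, and the detection probability below are invented for illustration and are not taken from the discussion above.

```python
# Hypothetical amounts a fraudulent donor granted to each cause area (made up).
fraudulent_grants = {
    "longtermism": 100_000_000,
    "global_health": 10_000_000,
    "animal_welfare": 5_000_000,
}

# Rough, assumed guess at the probability that fraud of this kind is caught at all.
DETECTION_PROBABILITY = 0.5

def budget_reductions(grants: dict, p_detect: float) -> dict:
    """Reduce each cause's budget by the amount it received, scaled up so that
    fraud doesn't pay in expectation even when it is only sometimes detected."""
    return {cause: amount / p_detect for cause, amount in grants.items()}

print(budget_reductions(fraudulent_grants, DETECTION_PROBABILITY))
# -> longtermism's budget is reduced by $200M, global_health's by $20M, etc.,
#    so in expectation the fraud does not shift the funding mix towards the
#    donor's preferred causes.
```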
Both good points.
I think it would depend. For many charities, the ultimate cost of this sort of "strict liability" policy is borne by the intended beneficiaries. I would be hesitant, in certain cases, to extend it beyond what I think morality requires.
For a grad student receiving a micro-grant, asking them to return funds already earned is too much, and expecting significant vetting from them is unrealistic and inefficient.
The potential value, I think, would be for midsize+ organizations with a diverse donor base. They could put, say, 5% of each year's donations into a buffer, releasing 1% each year to programs as time passes without any red flags.
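As a rough illustration of that buffer idea, here is a minimal sketch under one possible reading of the 5%/1% figures (1% of the original gift released per clean year); the numbers and function name are illustrative, not part of the proposal.

```python
from typing import Optional

HOLDBACK_RATE = 0.05    # assumed fraction of each gift held back in the buffer
ANNUAL_RELEASE = 0.01   # assumed fraction of the original gift released per clean year

def release_schedule(gift: float, years: int, red_flag_year: Optional[int] = None) -> list:
    """Return the amount released to programs in each year after the gift."""
    buffer = gift * HOLDBACK_RATE
    released = []
    for year in range(1, years + 1):
        if red_flag_year is not None and year >= red_flag_year:
            released.append(0.0)  # freeze the remaining buffer once a red flag appears
            continue
        payout = min(gift * ANNUAL_RELEASE, buffer)
        buffer -= payout
        released.append(payout)
    return released

# A $1M gift with no red flags: roughly $10,000 released each year for five years, then nothing.
print(release_schedule(1_000_000, years=6))
# The same gift with a red flag in year 3: only the first two tranches are released.
print(release_schedule(1_000_000, years=6, red_flag_year=3))
```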
Very few nonprofits could absorb a reversal of a megadonor's gifts.
I should maybe have been more explicit in stating the actual policy proposal:
I don't think paying back necessarily needs to be done on the level of an individual project/grant. Insofar as the EA community is, well, a community, it might be viable to take responsibility on the level of the community.
For instance, in the discussion I linked to on Twitter, the suggestion was that EAs would set up a fund for the victims of FTX that they could donate to.
This would presumably still create lots of community-wide incentives, as well as incentives for the leaders of EA, because nobody wants their community to waste a lot of resources due to having worked with bad actors. But it would also be much less burdensome for individual grant recipients.
Hello there,
I am not so sure this is a great suggestion. Traditional institutions, like banks and the government bodies that approve non-profits, can filter out some of the bad actors, ideally addressing the issue before donations are made or funds are accepted. Verifying that a donor has no legal issues or conflicts of interest and has not committed a crime removes perhaps 90% of the chance that someone will attempt a Robin Hood style of philanthropy. Just my thoughts on how to tackle the what-if situation.
All the best,
Miguel
Hm, my understanding is that there is no traditional institution that will issue a "yep, this person is good" document that works across contexts, including, e.g., for people who work in crypto, so any approval process would require a lot of personal judgement?
That said, I don't disagree with the notion of using preexisting approval systems like criminal records; my suggestion is more about making sure that one does in fact use them in the correct proportions, and in particular about credibly committing to doing so in the future.
Hey, it's me.
I think I agree with this policy. The idea of internalising externalities feels very neat/elegant, and I think it creates the right incentives for all involved parties.