CAF (the Charities Aid Foundation) charges a fee for its services, which seems crucial when deciding between GAYE/Payroll Giving and Gift Aid. From the intro email when I registered to do GAYE:
For direct CAF Give As You Earn donors, we take a 4% fee of your total donation to cover our costs (the fee will never be more than £10 per pay period).
Many employers pay this fee for their employees, and you should contact your payroll team to confirm if this is the case.
My employer doesn’t cover it, so I’m looking for an alternative method.
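For concreteness, here's a minimal sketch of the fee as I read that email (the 4% rate and £10 cap come from the quote above; the £250 break-even point just follows from them):

```python
def caf_gaye_fee(donation_per_period: float) -> float:
    """CAF Give As You Earn fee as described in the intro email:
    4% of the donation, capped at £10 per pay period."""
    return min(0.04 * donation_per_period, 10.00)

# The cap binds once 4% exceeds £10, i.e. above £250 per pay period,
# so the effective fee rate falls as the donation grows:
for d in [50, 100, 250, 500, 1000]:
    fee = caf_gaye_fee(d)
    print(f"£{d:>4} donated -> £{fee:.2f} fee ({fee / d:.1%})")
```

So the 4% bites hardest on small regular donations; for anyone giving over £250 per pay period the fee is a flat £10.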
The paper says:
Permissivism can take multiple forms. For instance, it might permit both fanatical and antifanatical preferences. Or it might permit (or even, its name notwithstanding, require) incomplete preferences that are neither fanatical nor anti-fanatical. But apart from noting its existence, we will say no more about the permissivist alternative for now, returning to it only in the concluding section.
The takeaway, I think, is that those who find fanaticism counterintuitive should favor not anti-fanaticism but permissivism. More specifically, they should favor a version of permissivism that permits incomplete preferences that are neither fanatical nor anti-fanatical.
Thanks for the helpful summary. I feel it's worth pointing out that these arguments (which seem strong!) defend only fanaticism per se, not the stronger claim that is used or assumed when people argue for longtermism: that we ought to follow expected value maximization. It's a stronger ask in the sense that we're asked to take bets not on arbitrarily high payoffs, which can be 'gamed' to be high enough to be worth taking, but 'only' on specific astronomically high payoffs, derived from (as it were) empirically determined information, facts about the universe that ultimately put upper bounds on the payoffs. That said, it's helpful to have these arguments to show that 'longtermism depends on being fanatical' is not a knock-down argument against longtermism. Here's one example of that link being made: "...the case for longtermism may depend either on plausible but non-obvious empirical claims or on a tolerance for Pascalian fanaticism" (Tarsney, 2019).
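To make that distinction explicit, here is a crude formalization in expected-value terms (my own gloss, not from the paper). Fanaticism is an existential claim: for any probability $p > 0$, however small, and any guaranteed payoff $v$, some payoff $V$ is large enough that the gamble wins:

$$\forall p > 0,\ \forall v < \infty,\ \exists V:\ p \cdot V > v.$$

The longtermist argument instead needs the inequality to hold at the actual, empirically bounded payoff:

$$p \cdot V_{\max} > v,$$

where $V_{\max}$ is fixed by facts about the universe (reachable resources, achievable population sizes, and so on). The first claim lets you crank $V$ up indefinitely; the second can fail even if you grant fanaticism, whenever $V_{\max}$ or $p$ turns out too small. That, presumably, is what Tarsney's "plausible but non-obvious empirical claims" are doing.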
I think high X-risk makes working on X-risk more valuable only if you believe you can have a durable effect on the level of X-risk. Here's MacAskill talking about the hinge-of-history hypothesis (which is closely related to the 'time of perils' hypothesis):
Or perhaps extinction risk is high, but will stay high indefinitely, in which case in expectation we do not have a very long future ahead of us, and the grounds for thinking that extinction risk reduction is of enormous value fall away.
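A toy model (my numbers, not MacAskill's) makes this quantitative. If extinction risk is a constant $r$ per century, the number of centuries we survive is geometrically distributed, so the expected length of the future is

$$\mathbb{E}[\text{centuries}] = \sum_{t=1}^{\infty} t \, r (1-r)^{t-1} = \frac{1}{r}.$$

At a persistent $r = 20\%$ per century, that's an expected five centuries, nowhere near the astronomical futures the case for extinction-risk reduction relies on. The huge expected values only come back if risk later falls to near zero, i.e. if the 'time of perils' actually ends.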
A lot of it seems to be saying “you can’t put a price on x” and then going ahead and putting a price on x anyway, by saying we should prefer to fund x over y.
Her conception of the good can include magnificence and meaning and abundance. But how can we make that available for everyone without the kinds of reasoning decried as ‘optimization’?
I feel like the people saying “you can’t put a price on a beautiful holy site” are trying to avoid saying “you can, and the holy site is worth more than the lives the money could have saved”. It’s not impossible that Notre Dame is worth the lives unsaved (it draws millions of visitors a year), but the claim can’t be assessed, let alone refuted, unless they are honest about how they’re valuing it.
They seem to be missing the mood that our problems are larger than the resources we have to fix them, and so they end up advocating that we not face the uncomfortable triage questions.
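For what it's worth, the honest version of the calculation is easy to set up. The sketch below uses purely illustrative placeholder numbers (the cost, cost-per-life, visitor count, and horizon are all assumptions, not researched estimates), just to show what being explicit about the valuation would look like:

```python
# Back-of-envelope: what an honest valuation of a restoration project
# would have to claim. Every number here is an illustrative placeholder.
restoration_cost = 800e6      # assumed total donated, in USD
cost_per_life_saved = 5_000   # assumed cost to save a life via top charities
visitors_per_year = 12e6      # assumed annual visitors to the site
horizon_years = 100           # assumed horizon over which visits accrue

lives_forgone = restoration_cost / cost_per_life_saved
break_even_per_visit = restoration_cost / (visitors_per_year * horizon_years)

print(f"Lives the money could have saved elsewhere: {lives_forgone:,.0f}")
print(f"Implied minimum value per visit to break even: ${break_even_per_visit:.2f}")
```

On these made-up inputs, funding the restoration amounts to claiming that each of over a billion visits across the next century is worth at least about $0.67 relative to lives saved elsewhere, a claim one can actually argue about, which is the point.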
(My comments are inspired by / plagiarised from https://x.com/trevposts/status/1865495961612542233)