David M

Research Software Engineer @ Imperial College London
1964 karma · Joined · Working (6-15 years)

Comments (219)

Topic contributions (3)

CAF charges a fee for its services. This seems crucial when deciding between GAYE/Payroll Giving and Gift Aid. From the intro email I received when I registered for GAYE:

> For direct CAF Give As You Earn donors, we take a 4% fee of your total donation to cover our costs (the fee will never be more than £10 per pay period).
>
> Many employers pay this fee for their employees, and you should contact your payroll team to confirm if this is the case.
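
In other words (my arithmetic, not CAF's), the fee per pay period works out to

$$\text{fee} = \min(0.04 \times \text{donation},\ \pounds 10),$$

so a £100 monthly donation incurs a £4 fee, and anything above £250 per pay period hits the £10 cap, for an effective rate below 4%.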

My employer doesn’t cover it so I’m looking for an alternative method.

In 2022 I applied to the marketing department of 80,000 Hours. After a compensated two- or three-day work test (I can't remember which), which ultimately did not get me the job, I was offered a feedback call. I requested written feedback by email instead and received a detailed response.

The paper says:

> Permissivism can take multiple forms. For instance, it might permit both fanatical and antifanatical preferences. Or it might permit (or even, its name notwithstanding, require) incomplete preferences that are neither fanatical nor anti-fanatical. But apart from noting its existence, we will say no more about the permissivist alternative for now, returning to it only in the concluding section.

> The takeaway, I think, is that those who find fanaticism counterintuitive should favor not anti-fanaticism but permissivism. More specifically, they should favor a version of permissivism that permits incomplete preferences that are neither fanatical nor anti-fanatical.

Now I want to know what the hell permissivism is!

Thanks for the helpful summary. I feel it's worth pointing out that these arguments (which seem strong!) defend only fanaticism per se, not the stronger claim that is used or assumed when people argue for longtermism: that we ought to maximize expected value. It's a stronger ask in the sense that we're asked to take bets not on arbitrarily high payoffs, which can be 'gamed' upward until they're worth taking, but 'only' on specific, astronomically high payoffs derived from empirically determined facts about the universe, facts that ultimately put upper bounds on the payoffs. That said, it's helpful to have these arguments to show that 'longtermism depends on being fanatical' is not a knock-down argument against longtermism. Here's one example of that link being made: "…the case for longtermism may depend either on plausible but non-obvious empirical claims or on a tolerance for Pascalian fanaticism" (Tarsney, 2019).
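
To make the distinction concrete (my gloss, not the paper's notation): write $\langle p, V \rangle$ for a gamble with probability $p$ of payoff $V$, and $O$ for a modest sure thing. Then, roughly:

$$\text{Fanaticism:}\quad \forall p > 0\ \exists V:\ \langle p, V \rangle \succ O$$

$$\text{EV maximization:}\quad \langle p, V \rangle \succ O \iff p\,V > u(O)$$

Fanaticism only guarantees that *some* payoff would be large enough, which is why it can be 'gamed'; EV maximization commits you to the specific, empirically bounded bets actually on offer.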

Hi Ben, I'm curious whether this public-facing report is out yet, and if not, where someone reading this in the future could look to check (so you don't have to field the same question repeatedly)?

> I appreciate the push to get a public-facing version of the report published - I'm on it!

I found your description of applying effort to a really difficult task, and eventually making the hard decision to cut your losses, inspiring and moving. Thank you to CEAP’s founders, funders, and other supporters.

I think high X-risk makes working on X-risk more valuable only if you believe you can have a durable effect on the level of X-risk. Here's MacAskill discussing the hinge-of-history hypothesis (which is closely related to the 'time of perils' hypothesis):

> Or perhaps extinction risk is high, but will stay high indefinitely, in which case in expectation we do not have a very long future ahead of us, and the grounds for thinking that extinction risk reduction is of enormous value fall away.
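
A sketch of why persistence matters (my arithmetic, not MacAskill's): if per-century extinction risk $r$ stays constant forever, the expected number of future centuries is

$$\sum_{t=1}^{\infty} (1-r)^t = \frac{1-r}{r},$$

which is just 9 centuries at $r = 10\%$. The astronomical stakes behind extinction-risk reduction come from scenarios where $r$ eventually falls to near zero, i.e. where your effect on the level of risk is durable.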

If you're attending the Leaders Forum or are a 'key figure in EA', you're probably an EA, even if you don't admit it to yourself.
