(Posting in a personal capacity unless stated otherwise.) I help allocate Open Phil's resources to improve the governance of AI, with a focus on avoiding catastrophic outcomes. Formerly: co-founder of the Cambridge Boston Alignment Initiative, which supports AI alignment/safety research and outreach programs at Harvard, MIT, and beyond; co-president of Harvard EA; Director of Governance Programs at the Harvard AI Safety Team and MIT AI Alignment; and occasional AI governance researcher. I'm also a proud GWWC pledger and vegan.
I basically agree with this (and might put the threshold higher than $100, probably much higher for people actively pursuing policy careers), with the following common exceptions:
It seems pretty low-cost to donate to a candidate from Party X if...
I don't know the weeds of the moral parliament view, but my suspicion is that this argument operates at too low a level of ethical views (that is, it's "not meta enough"): it's still just a utilitarian frame with empirical uncertainty. The kind of "credences on different moral views" I have in mind is more like this:
I want my moral actions to be guided by some mix of roughly:

- 25% bullet-biting utilitarianism (in which case, insects are super important in expectation);
- 25% virtue ethics (in which case they're a small consideration -- you don't want to go out of your way to hurt them, but you're not obligated to do much in particular, and you should be way more focused on people or other animals who you have relationships with and obligations towards);
- 15% some kind of "stewardship of humanity" (where you maybe just want to avoid actively being a monster but should be focused elsewhere);
- 10% libertarianism (where it's quite unclear how you'd treat insects); and
- 25% spread across other views, which mostly just points towards not being super-fanatical about any of the others.

So something like 30% of me thinks insect suffering is a big deal, which is enough for me to take it seriously but not enough for me to drop the stuff that more like 75% of me thinks is a big deal; in other words, I think it's moderately important.
I don't know what my actual numbers are, and I'm not sure each of these views is really what the respective philosophy would say about insect welfare; I'm just saying, it's easy in this kind of framework to wind up having lots of moderate priorities that each seem extremely important on certain ethical views.
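To make that concrete, here's a minimal sketch of the kind of credence-weighted bookkeeping I mean (the credences mirror the illustrative numbers above, the per-view importance scores are made up, and this isn't a principled way to compare across moral theories; it just shows how moderate overall priorities fall out):

```python
# Illustrative only: the credences mirror the numbers above, and the per-view
# "importance of insect suffering" scores (0 = ignore, 1 = top priority) are
# made up for the sake of the example.

credences = {
    "bullet-biting utilitarianism": 0.25,
    "virtue ethics": 0.25,
    "stewardship of humanity": 0.15,
    "libertarianism": 0.10,
    "other views": 0.25,
}

insect_importance = {
    "bullet-biting utilitarianism": 1.0,
    "virtue ethics": 0.2,
    "stewardship of humanity": 0.1,
    "libertarianism": 0.1,
    "other views": 0.1,
}

# Credence-weighted priority: high under one view, low under most others,
# so the aggregate lands in the middle rather than at either extreme.
weighted = sum(credences[view] * insect_importance[view] for view in credences)
print(f"Credence-weighted priority: {weighted:.2f}")  # 0.35 with these made-up numbers
```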
I think it's reasonable to say "I put some credence on moral views that imply insect suffering is very important and some credence on moral views that imply it's not important; all things considered, I think it's moderately important."
A couple of other comments are gesturing at this, but the same logic could be applied to all kinds of things: existential risk is probably "either" extremely important or not at all important if you plug different empirical and ethical views into a formula and trust the answer; likewise present-day global health, or political polarization, or developed-world mental health, etc. Eventually, you can either (1) go all in on a particular ethical and meta-ethical theory, (2) be inconsistent, or (3) combine all these considerations into a balanced whole, in which case a lot of things that pencil out as "extremely important" on some views probably wind up as moderately high priorities. I don't think it's obvious that (3) is right, but this post does not make an argument that (1) is right, and I think the burden of proof is on the side arguing explicitly against moderation and intuitive conclusions.
One reason to think (3) is right is to look at the track records. You say you "cannot be a moderate Christian." I don't think religious fundamentalists have morally outperformed religious moderates. There are lots of people who take religious values seriously but not fanatically; some of the leaders of the world's greatest social movements used a lot of religious thinking and rhetoric without trying to follow every letter of the Bible.
I definitely agree there are plenty of ways we should reach elites and non-elites alike that aren't statistical models of timelines, and insofar as the resources going towards timeline models (in terms of talent, funding, bandwidth) are fungible with the resources going towards other things, maybe I agree that more effort should be going towards the other things (but I'm not sure -- I really think the timeline models have been useful for our community's strategy and for informing other audiences).
But also, they only sometimes create a sense of panic; I could see specificity being helpful for people getting out of the mode of "it's vaguely inevitable, nothing to be done, just gotta hope it all works out." (Notably the timeline models sometimes imply longer timelines than the vibes coming out of the AI companies and Bay Area house parties.)
There's a grain of this that I agree with: people excessively plan around a median year for AGI rather than around a distribution over various events, and planning around that kind of distribution leads to more robust, higher-expected-value actions (and perhaps less angst).
However, I strongly disagree with the idea that we already know "what we need." Off the top of my head, here are several ways that narrowing the error bars on timelines -- which I'll operationalize as "the distribution over the timing of the most important decisions with respect to building transformative AI" -- would be incredibly useful:
I also strongly disagree with the framing that the important thing is us knowing what we know. Yes, people who have been immersed in AI content for years often believe that very scary and/or awesome AI capabilities are coming within the decade. But most people, including most of the people who might take the most important actions, are not in this category and do not share this view (or at least don't seem to have internalized it). Work that provides an empirical grounding for AI forecasts has already been very useful in bringing attention to AGI and its risks from a broader set of people, including in governments, who would otherwise be focused on any one of the million other problems in the world.
Giving now vs. giving later is, in practice, a thorny tradeoff. I think the considerations below roughly balance out, so my currently preferred policy is to split my donations 50-50, i.e. give 5% of my income away this year and save/invest the other 5% for a bigger donation later. (None of this is financial/tax advice! Please do your own thinking too.)
In favor of giving now (including giving a constant share of your income every year/quarter/etc, or giving a bunch of your savings away soon):
In favor of giving later:
Are you a US resident who spends a lot of money on rideshares + food delivery/pickup? If so, consider the following:
I think the opposite might be true: when you apply it to broad areas, you're likely to mistake low neglectedness for a signal of low tractability, when you should instead just ask "are there good opportunities at current margins?" When you start looking at individual solutions, it becomes quite relevant whether they have already been tried. (This point has already been made here.)
- Would it be good to solve problem P?
- Can I solve P?
What is gained by adding the third thing? If the answer to #2 ("Can I solve P?") is "yes," then why does it matter if the answer to #3 (roughly, "how many others are working on it?") is "a lot," and likewise in the opposite case, where the answers are "no" and "very few"?
Edit: actually yeah the "will someone else" point seems quite relevant.
Having a savings target seems important. (Not financial advice.)
I sometimes hear people in/around EA rule out taking jobs due to low salaries (sometimes implicitly, sometimes a little embarrassedly). Of course, it's perfectly understandable not to want to take a significant drop in your consumption. But in theory, people with high salaries could be saving up so they can take high-impact, low-paying jobs in the future; it just seems like, by default, this doesn't happen. I think it's worth thinking about how to set yourself up to be able to do it if you do find yourself in such a situation; you might find it harder than you expect.
(Personal digression: I also notice my own brain paying a lot more attention to my personal finances than I think is justified. Maybe some of this traces back to some kind of trauma response to being unemployed for a very stressful ~6 months after graduating: the sense that I could always be a little more financially secure. A couple of weeks ago, while meditating, it occurred to me that my brain was probably reacting to not knowing how I'm doing relative to my goal, because 1) I didn't actually know what my goal was, and 2) I didn't really have a sense of what I was spending each month. In IFS terms, I think the "social and physical security" part of my brain wasn't trusting that the rest of my brain was competently handling the situation.)
So, I think people in general would benefit from having an explicit target: once I have X in savings, I can feel financially secure. This probably means explicitly tracking your expenses, both now and in a "making some reasonable, not-that-painful cuts" budget, and gaming out the most likely scenarios where you'd need to use a large amount of your savings, beyond the classic 3 or 6 months of expenses in an emergency fund. For people motivated by EA principles, the most likely scenarios might be for impact reasons: maybe you take a public-sector job that pays half your current salary for three years, or maybe you'd need to self-fund a new project for a year; how much would it cost to maintain your current level of spending, or a not-that-painful budget-cut version? Then you could target that amount (in addition to the emergency fund, so you'd still have that at the end of the period); once you have that, you could feel more secure/spend less brain space on money, donate more of your income, and be ready to jump on a high-impact, low-paying opportunity.
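As a rough illustration of the arithmetic (all numbers hypothetical, and again not financial advice), a minimal sketch:

```python
# Hypothetical numbers only -- plug in your own tracked expenses and scenarios.

monthly_spending_after_cuts = 4000   # "not-that-painful cuts" budget, per month
emergency_fund_months = 6

# Scenario: three years in a public-sector job paying roughly half your current salary.
scenario_months = 36
scenario_monthly_take_home = 3000

emergency_fund = emergency_fund_months * monthly_spending_after_cuts
monthly_gap = max(0, monthly_spending_after_cuts - scenario_monthly_take_home)
scenario_gap = scenario_months * monthly_gap

# Target = cover the scenario's shortfall and still have the emergency fund at the end.
savings_target = emergency_fund + scenario_gap
print(f"Savings target: ${savings_target:,}")  # $60,000 with these made-up numbers
```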
Of course, you can more easily hit that target if you can bring down your expenses -- you both lower the required amount in savings and you save more each month. So, maybe some readers would also benefit from cutting back a bit, though I think most EAs are pretty thrifty already.
(This is hardly novel -- Ben Todd was publishing related stuff on 80k in 2015. But I guess I had to rediscover it, so posting here in case anyone else could use the refresher.)