(Posting in a personal capacity unless stated otherwise.) I help allocate Open Phil's resources to improve the governance of AI with a focus on avoiding catastrophic outcomes. Formerly: co-founder of the Cambridge Boston Alignment Initiative, which supports AI alignment/safety research and outreach programs at Harvard, MIT, and beyond; co-president of Harvard EA; Director of Governance Programs at the Harvard AI Safety Team and MIT AI Alignment; and occasional AI governance researcher. I'm also a proud GWWC pledger and vegan.
I don't know the weeds of the moral parliament view, but my suspicion is that this argument relies on too low a level of ethical views (that is, it's "not meta enough"). That's still just a utilitarian frame with empirical uncertainty. The kind of "credences on different moral views" I have in mind is more like:
I want my moral actions to be guided by some mix of like, 25% bullet-biting utilitarianism (in which case insects are super important in expectation), 25% virtue ethics (in which case they're a small consideration -- you don't want to go out of your way to hurt them, but you're not obligated to do much in particular, and you should be way more focused on people or other animals who you have relationships with and obligations towards), 15% some kind of "stewardship of humanity" (where you maybe just want to avoid actively being a monster but should be focused elsewhere), 10% libertarianism (where it's quite unclear how you'd treat insects), and 25% spread across other views, which mostly just points towards not being super-fanatical about any of the others. So something like 30% of me thinks insect suffering is a big deal, which is enough for me to take it seriously but not enough for me to drop the stuff that more like 75% of me thinks is a big deal; in other words, I think it's moderately important.
I don't know what my actual numbers are, and I'm not sure each of these characterizations is really what the respective philosophy would say about insect welfare; I'm just saying that in this kind of framework, it's easy to wind up with lots of moderate priorities that each seem extremely important on certain ethical views.
I think it's reasonable to say "I put some credence on moral views that imply insect suffering is very important and some credence on moral views that imply it's not important; all things considered, I think it's moderately important."
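To make the aggregation arithmetic concrete, here's a minimal sketch; every credence and 0-10 score below is invented for illustration, and treating scores as comparable across views is itself a contestable assumption rather than a canonical way to handle moral uncertainty.

```python
# Toy credence-weighted prioritization under moral uncertainty.
# All credences and 0-10 "importance" scores are made up for illustration.

credences = {
    "bullet-biting utilitarianism": 0.25,
    "virtue ethics": 0.25,
    "stewardship of humanity": 0.15,
    "libertarianism": 0.10,
    "other views": 0.25,
}

# How much each view cares about insect suffering, on an arbitrary 0-10 scale.
insect_suffering_importance = {
    "bullet-biting utilitarianism": 10,  # super important in expectation
    "virtue ethics": 2,                  # a small consideration
    "stewardship of humanity": 1,        # mostly focused elsewhere
    "libertarianism": 1,                 # quite unclear
    "other views": 2,                    # mostly anti-fanaticism
}

weighted = sum(credences[v] * insect_suffering_importance[v] for v in credences)
print(f"Credence-weighted importance: {weighted:.1f} / 10")
# -> about 3.8/10: extremely important on one view, moderately important overall.
```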
A couple of other comments are gesturing at this, but this logic could be applied to all kinds of things: existential risk is probably "either" extremely important or not at all important if you plug different empirical and ethical views into a formula and trust the answer; likewise for present-day global health, or political polarization, or developed-world mental health, etc. Eventually, you can either (1) go all in on a particular ethical and meta-ethical theory, (2) be inconsistent, or (3) combine all these considerations into a balanced whole, in which case a lot of things that pencil out as "extremely important" on some views probably wind up as moderately high priorities. I don't think it's obvious that (3) is right, but this post does not make an argument that (1) is right, and I think the burden of proof is on the side arguing explicitly against moderation and intuitive conclusions.
One reason to think (3) is right is to look at track records. You say you "cannot be a moderate Christian," but I don't think religious fundamentalists have morally outperformed religious moderates. There are lots of people who take religious values seriously but not fanatically; some of the leaders of the world's greatest social movements drew on a lot of religious thinking and rhetoric without trying to follow every letter of the Bible.
I definitely agree there are plenty of ways we should reach elites and non-elites alike that aren't statistical models of timelines, and insofar as the resources going towards timeline models (in terms of talent, funding, bandwidth) are fungible with the resources going towards other things, maybe I agree that more effort should be going towards the other things (but I'm not sure -- I really think the timeline models have been useful for our community's strategy and for informing other audiences).
But also, they only sometimes create a sense of panic; I could see specificity being helpful for people getting out of the mode of "it's vaguely inevitable, nothing to be done, just gotta hope it all works out." (Notably the timeline models sometimes imply longer timelines than the vibes coming out of the AI companies and Bay Area house parties.)
There's a grain of this that I agree with: people excessively plan around a median year for AGI rather than around a distribution over various events, and planning around that kind of distribution leads to more robust, higher-expected-value actions (and perhaps less angst).
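To illustrate the median-vs-distribution point, here's a minimal sketch; the timeline distribution and payoff function are entirely made up, and the only claim is structural: optimizing against the full distribution can favor different (often more robust) actions than optimizing against the median year.

```python
import random

random.seed(0)

# Made-up distribution over the year the most important decisions happen.
YEARS = [2027] * 2 + [2030] * 4 + [2035] * 3 + [2045] * 1  # median = 2030

def payoff(plan_ready_year, critical_year):
    # A plan is worthless if it's ready too late, and pays a small
    # opportunity cost for every year it's ready before it's needed.
    if plan_ready_year > critical_year:
        return 0.0
    return 1.0 - 0.01 * (critical_year - plan_ready_year)

def expected_payoff(plan_ready_year, n=100_000):
    return sum(payoff(plan_ready_year, random.choice(YEARS)) for _ in range(n)) / n

# Planning for the median year vs. planning for an earlier, more robust year:
for ready in [2030, 2027]:
    print(f"Plan ready by {ready}: expected payoff ~{expected_payoff(ready):.2f}")
```

Under these made-up numbers, the plan that's ready before the median year does better in expectation, because missing an early world costs everything while being early costs little.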
However, I strongly disagree with the idea that we already know "what we need." Off the top of my head, here are several ways in which narrowing the error bars on timelines -- which I'll operationalize as "the distribution of the most important decisions with respect to building transformative AI" -- would be incredibly useful:
I also strongly disagree with the framing that what matters is just whether we know these things. Yes, people who have been immersed in AI content for years often believe that very scary and/or awesome AI capabilities are coming within the decade. But most people, including most of the people who might take the most important actions, are not in this category and do not share this view (or at least don't seem to have internalized it). Work that provides an empirical grounding for AI forecasts has already been very useful in bringing attention to AGI and its risks from a broader set of people, including in governments, who would otherwise be focused on any one of the million other problems in the world.
Giving now vs. giving later is, in practice, a thorny tradeoff. I think the considerations below add up to roughly a wash, so my currently preferred policy is to split my donations 50-50, i.e. give 5% of my income away this year and save/invest the other 5% for a bigger donation later. (None of this is financial/tax advice! Please do your own thinking too.)
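For concreteness, here's a minimal sketch of the arithmetic behind that policy; the income, return, discount rate, and horizon are all made-up assumptions, not recommendations.

```python
# Toy "give now vs. invest and give later" comparison.
# Income, return, discount rate, and horizon are made-up assumptions.

income = 100_000
give_now = 0.05 * income       # half of a 10% pledge, donated this year
invest_later = 0.05 * income   # the other half, saved/invested

annual_return = 0.05    # assumed real return on investments
annual_discount = 0.05  # assumed yearly decline in the value of a marginal
                        # donated dollar (better-funded field, value drift, etc.)
years = 10

future_donation = invest_later * (1 + annual_return) ** years
in_todays_impact = future_donation / (1 + annual_discount) ** years

print(f"Donate now:                    ${give_now:,.0f}")
print(f"Donate in {years} years (nominal):  ${future_donation:,.0f}")
print(f"  ...in today's impact terms:  ${in_todays_impact:,.0f}")
# If your discount rate on future giving roughly matches your expected return,
# the two halves come out about even -- hence splitting 50-50.
```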
In favor of giving now (including giving a constant share of your income every year/quarter/etc, or giving a bunch of your savings away soon):
In favor of giving later:
Are you a US resident who spends a lot of money on rideshares + food delivery/pickup? If so, consider the following:
I think the opposite might be true: when you apply neglectedness to broad areas, you're likely to mistake low neglectedness for a signal of low tractability; you should instead just ask "are there good opportunities at current margins?" When you start looking at individual solutions, it becomes quite relevant whether they have already been tried. (This point has already been made here.)
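As a minimal sketch of the "current margins" point -- with an invented diminishing-returns curve and invented funding numbers -- the takeaway is only that marginal value depends on the shape of the returns curve, not on total resources already spent:

```python
# Toy illustration that "lots of resources already" (low neglectedness) doesn't
# by itself mean "bad opportunities at current margins." All numbers invented.

def marginal_value(current_funding_musd, scale, saturation_musd):
    # Diminishing returns: total value = scale * ln(1 + funding / saturation),
    # so the marginal value of the next $1M is scale / (saturation + funding).
    return scale / (saturation_musd + current_funding_musd)

# A huge, crowded area can still beat a tiny, neglected one at the margin.
crowded = marginal_value(current_funding_musd=1000, scale=10_000, saturation_musd=100)
neglected = marginal_value(current_funding_musd=1, scale=50, saturation_musd=10)

print(f"Marginal value per extra $1M, crowded area:   {crowded:.2f}")
print(f"Marginal value per extra $1M, neglected area: {neglected:.2f}")
```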
- Would it be good to solve problem P?
- Can I solve P?
What is gained by adding the third thing? If the answer to #2 is "yes," then why does it matter if the answer to #3 is "a lot," and likewise in the opposite case, where the answers are "no" and "very few"?
Edit: actually yeah the "will someone else" point seems quite relevant.
Fair enough on the "scientific research is super broad" point, but I think this also applies to other fields that I hear described as "not neglected," including US politics.
I'm not talking about AI safety polling; I agree that was highly neglected. My understanding, reinforced by some people who have looked into the actually-practiced political strategies of modern campaigns, is that it's just a stunningly under-optimized field with a lot of low-hanging fruit, possibly because it's hard to decouple political strategy from other political beliefs (and because of selection effects whereby especially soldier-mindset people go into politics).
I basically agree with this (and might put the threshold higher than $100, probably much higher for people actively pursuing policy careers), with the following common exceptions:
It seems pretty low-cost to donate to a candidate from Party X if...