
Some fraction of people who don't work on AI risk cite "wanting to have more certainty of impact" as their main reason. But I think many of them are running the same risk anyway: namely, that what they do won't matter because transformative AI will make their work irrelevant, or dramatically lower its value.

This is especially obvious if they work on anything that primarily returns value after a number of years: e.g. building an academic career (or any career where most of the impact is realized later), working toward policy changes, some kinds of movement building, etc.

But it also applies somewhat to things like nutrition or vaccination or even preventing deaths, where most value is realized later (through better life outcomes, or living an extra 50 years). This category does still have certainty of impact; it's just that the amount of impact might be cut by whatever fraction of worlds are upended in some way by AI. And this might affect what they should prioritize... e.g. they should prefer saving old lives over young ones, if the interventions are pretty close on naive effectiveness measures.

Why should they prefer saving old lives over young ones? How does transformative AI affect that? Even if transformative AI quickly cures aging, I don't understand why it would be preferable to save old lives over young ones in advance of transformative AI, all else being equal.

Assuming two interventions are around similarly effective in life-years saved, interventions saving old lives must (necessarily) save more lives in the short run: e.g. saving 4 lives at 10 life-years each vs. saving 1 life at 40 life-years.

Uh huh… I doubt you’d find shovel-ready projects, though.

I don't know what you mean? You can look at existing interventions that primarily help very young people (e.g. neonatal or childhood vitamin supplementation) vs. comparably effective interventions that target adults or older people (e.g. cash grants, schistosomiasis treatment).

There are multiple GiveWell charities in both categories, so this is just saying you should weight toward the ones that target older folks by maybe a factor of 2x or more, vs. what GiveWell says (their estimates assume the world won't change much).
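A minimal sketch of that discounting logic, in Python (the 50% probability, 15-year horizon, and life-year figures are illustrative assumptions, not numbers from this thread):

```python
# Illustrative sketch of discounting life-years beyond an "AI horizon".
# ASSUMPTIONS (not from the thread): a 50% chance the world is upended by
# transformative AI within 15 years, and that life-years realized after
# that point no longer count toward the intervention's value.

def expected_life_years(remaining: float, p_upended: float, horizon: float) -> float:
    """Expected life-years from saving one life with `remaining` years left."""
    if remaining <= horizon:
        return remaining  # all value is realized before the horizon either way
    # With prob. (1 - p_upended) the full remainder counts; otherwise only
    # the years up to the horizon do.
    return (1 - p_upended) * remaining + p_upended * horizon

p, horizon = 0.5, 15.0

# Naively equal interventions: 4 old lives x 10 years vs. 1 young life x 40 years.
old_lives = 4 * expected_life_years(10, p, horizon)   # 4 * 10.0 = 40.0
young_life = 1 * expected_life_years(40, p, horizon)  # 0.5*40 + 0.5*15 = 27.5

print(old_lives / young_life)  # ~1.45x in favor of the old-lives intervention
```

Under these assumptions, two interventions that look equally effective on naive life-year counts come apart by roughly the kind of factor mentioned above; a higher probability of upheaval or a shorter horizon pushes the ratio toward 2x or more.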

Still wondering why I never see moral circle expansion advocates make the argument I made here.

That argument seems to avoid the suffering-focused objection that moral circle expansion doesn't address, and might even worsen, the worst suffering scenarios for the future (e.g. threats in multipolar futures). Namely, the argument I linked says that despite potentially increasing suffering risk, moral circle expansion also increases the value of good futures enough to be worth it.

TBC, I don't hold this view, because I believe we need a solid "great reflection" to achieve the best futures anyway, and such a reflection is extremely likely to produce the relevant moral circle expansion.
