E.g., what is the expected effect on existential risk of donating to one of GiveWell's top charities?
I've asked myself this question several times over the last few years, but I've never put a lot of thought into it. I've always just assumed that at the very least it would not increase existential risk.
Have any analyses been done on this?
I've been thinking lately that nuclear risk is probably a more pressing x-risk than AI at the moment and for the near term, which would make nuclear non-proliferation the more pressing cause area. Nuclear weapons already exist, and the American/Russian situation has been slowly deteriorating for years, whereas we are (likely) decades away from needing to solve AI-race global coordination problems.
I am not asserting that AI coordination isn't critically important. I am asserting that if we nuke ourselves first, it probably won't matter.
You really don't need to give so many disclaimers for the view that nuclear war is an important global catastrophic risk, or that the instantaneous risk from existing nuclear arsenals is much higher than from future technologies (which pose ~0 instantaneous risk); everyone should agree with both. Nor for thinking that nuclear interventions might have better returns today.
You might be interested in reading OpenPhil's shallow investigation of nuclear weapons policy, and their preliminary prioritization spreadsheet of GCRs.
OpenPhil doesn't provide recommendati...