AI entirely aside, has anyone seen any recent and published net assessments of existential/catastrophic(ish) risks? Why are there so few?
Specifically for this forum, what would very recent assessments say about new opportunities for moral actors?
A lot of risk assessment and subsequent mitigation work has historically been done inside governments but is not widely shared (for a few decades at least). X-risk work is not immune to this: geopolitical constraints have shifted, and actions that once seemed politically impossible may now be on the table. Some climate stabilisation plans might fit into a paragraph if acts of war or genocide are no longer dealbreakers.
In a Trump2 world (with other illiberal shifts globally), what are organisations or individuals now more able and incentivised to do, in their own interests and for their own reasons? Are there good new solutions in addition to new bad ones?
Is anyone (else) thinking about this, without the security clearances that keep things secret?
(I’m posting this now as a bunch of people will shortly be nearby for EAG London)
My team has published estimates for precursors of x-risk and posts weekly updates that usually contain forecasts. This could be of interest; I'm not sure if it's the kind of thing you are asking about.
It's close to that: as risks go up and down, what adjacent-possible innovations could push those risks down (or up!) further? What do you see in your updates that could move solutions from "possible" to "done"? Are there any public assessments of what could be made better in the world?
e.g. your asteroid estimate is 0.02%/decade, but NASA's DART mission shows that when one shows up we can redirect it, provided we have been watching the sky carefully enough to spot it with sufficient lead time (it's a bit more complicated than that, but not much). Humanity has gone from asteroids being an effectively inevitable (~100%) extinction event over a long enough time horizon to the risk being largely "solved" in scientific terms (i.e. ~0% if we survey systematically and keep a spare DART-style mission in a cupboard somewhere, at an engineering cost of $x). Does anyone look at that kind of transition for more risks?
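(A minimal sketch of the "inevitable over a long enough horizon" point, assuming the 0.02%/decade figure stays constant and decades are independent; the horizons are purely illustrative:)

```python
# Illustrative only: how a constant 0.02%/decade impact probability compounds
# over long horizons, assuming independence between decades.
p_per_decade = 0.0002  # 0.02% per decade, taken from the forecast above

for years in (1_000, 100_000, 10_000_000):
    decades = years // 10
    p_at_least_one = 1 - (1 - p_per_decade) ** decades
    print(f"{years:>12,} years: P(at least one impact) ~ {p_at_least_one:.1%}")
# ~2% over a millennium, ~86% over 100,000 years, ~100% over 10 million years
```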
Volcanoes or aliens aren't in that category, but AMR etc. appears on some risk lists (not in your scope). Is there another risk that was catastrophic last decade but could become mitigated next decade?
So, in your team's case, does anyone take your forecasts/updates, put them next to tech/innovation/change assessments, and say "nobody could have done a DART mission five years ago, but now this thing is possible..."?