
Matt Boyd

377 karma

Bio

Health, technology and catastrophic risk - New Zealand https://adaptresearchwriting.com/blog/

Comments (51)

The framework of 'Prevention', 'Detection', and 'Response' as described in the EOI form leaps from an early warning system to sophisticated technological response approaches. 

However, there is a step in between, best described as 'exclusion/elimination', where jurisdictions with favourable geographical and governance conditions can keep a threat out entirely. This approach could be used by the many island nations of the world, which together account for a large fraction of the human population. I find that continent dwellers underplay the role of islands in solutions and funding decisions; Japan, Australia, Taiwan, and New Zealand alone make up several percentage points of the global population.

Jurisdictions that used an exclusion/elimination strategy had net negative age-standardised cumulative excess mortality over 2020-21 during the Covid-19 pandemic (see the papers below). With pre-planned border protocols it is possible to keep a pathogen out once early detection flags the risk. Exclusion was also cost-effective: there was broadly no statistical difference in GDP growth (admittedly a blunt metric, but ripe now for more in-depth analysis) between jurisdictions that did and didn't use this approach.
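
To make that comparison concrete, here is a minimal sketch (in Python) of the kind of group comparison described above. All numbers are invented placeholders for illustration, not data from our papers, and the real analyses use age-standardised mortality and more careful statistical transformations:

```python
# Illustrative sketch only: the growth figures below are invented
# placeholders, not data from the cited papers.
from scipy.stats import mannwhitneyu

# Hypothetical cumulative 2020-21 GDP growth (%) by control strategy.
exclusion_elimination = [1.8, 2.4, -0.5, 3.1, 0.9]
other_strategies = [0.7, -1.2, 2.0, 1.1, -0.3]

# Non-parametric test, since jurisdiction-level growth is rarely normal.
stat, p = mannwhitneyu(exclusion_elimination, other_strategies,
                       alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.3f}")
# A large p-value here is what "broadly no statistical difference" means.
```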

Unfortunately, a lot of too-eager science done early in the pandemic incorrectly flagged factors such as democracy, exclusion, border closures, and preparedness (e.g. as measured by GHSI scores) as positively correlated with deaths. Now that the dust has settled, analyses using gold-standard metrics and appropriate statistical transformations, rather than messy in-the-heat-of-the-moment data, show these factors are all protective. Much more research is needed to reverse the harmful science that was done in haste, and to provide decision makers with robust information for planning future response strategies.

I would strongly advocate for including a work stream on 'strategies', not just 'technologies', in response, e.g. exclusion/elimination alongside your other projects. See two accepted and forthcoming peer-reviewed papers of ours which give the flavour of this issue:
1. Boyd, M., Baker, M. G., Kvalsvig, A., & Wilson, N. (2025). Impact of Covid-19 Control Strategies on Health and GDP Growth Outcomes in 193 Sovereign Jurisdictions. Forthcoming in PLOS Global Public Health. Preprint: https://www.medrxiv.org/content/10.1101/2025.04.08.25325452v1
2. Boyd, M., Baker, M., & Wilson, N. (2025). Global Health Security Index and Covid-19 pandemic mortality 2020–2021: A comparative study of islands and non-islands across 194 jurisdictions. Forthcoming in BMJ Open. Preprint: https://www.medrxiv.org/content/10.1101/2024.09.02.24312964v2

We are currently working on the interaction of democratic institutions, governance quality, and exclusion/elimination, which is a critical piece of the puzzle too.

More than happy to chat.

Many thanks Vasco, and thanks for the additional data for context too. I think a big chunk of the UN GAR 2025's '$2 trillion' cost impact was attributed to things like ecosystem destruction from droughts, which that report argued had not been properly costed in previous calculations. I take your point that the death rate from equivalent disasters is lower today than in the past (with correspondingly lower monetised harm). Cheers!

Thanks for posting this interesting write-up! I know you said you posted only as part of the Amnesty, but I've found the information you've compiled here useful to inform other ongoing projects. 

79% agree: More tractable, necessary precondition

Similarly to Owen's comment, I also think that AI and nuclear risk interact in important ways (there are various pathways to destabilisation that do not necessarily depend on AGI). It seems that many (most?) pathways from AI risk to extinction lead via other GCRs, e.g. pandemic, nuclear war, great power war, global infrastructure failure, or catastrophic food production failure. So I'd suggest quite a bit more hedging, with focus on these risks, rather than putting all resources into 'solving AI', in case that fails and we need to deal with these other risks.

Thanks for posting this. I'll comment on the bit about New Zealand's food production under nuclear winter conditions. Although the cited paper concludes there is potential for production to feed NZ's population, this depends on there being sufficient liquid fuel to run agricultural equipment, and NZ imports 100% of its refined fuels. Trade in fuel would almost certainly collapse in a major nuclear war. Without diesel, or imported fertiliser and agrichemicals, yields would be much lower, and distribution would be difficult too. See this paper: https://onlinelibrary.wiley.com/doi/abs/10.1111/risa.14297 Ideally, places like NZ would establish the capacity to produce fuel locally, e.g. biofuels, in case of this scenario. If fuel were restricted to agriculture and food transport, with optimised cropping, surprisingly little biofuel would be needed. This kind of contingency planning could avert famine, and any associated disease and potential conflict. I agree that the existential risk is very low, but it is probably slightly higher when considering these factors.
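
As a rough illustration of the 'surprisingly little biofuel' point, here is a back-of-envelope sketch. Every figure in it is an invented placeholder, not a value from the cited paper, which contains the real analysis:

```python
# Back-of-envelope sketch only: all figures are invented placeholders,
# not values from the cited paper.
normal_diesel_demand_ml = 3_300  # hypothetical total annual diesel use (megalitres)
agriculture_share = 0.06         # hypothetical share used by agriculture
food_transport_share = 0.04      # hypothetical share used to move food

essential_ml = normal_diesel_demand_ml * (agriculture_share + food_transport_share)
print(f"Essential diesel: ~{essential_ml:.0f} ML/yr, "
      f"{agriculture_share + food_transport_share:.0%} of normal demand")
# Even with these made-up shares, restricting fuel to food production and
# distribution shrinks the volume local biofuel would need to replace
# by roughly an order of magnitude.
```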

Interesting juxtaposition: 

It promotes the idea of spending considerable state resources, i.e. tax payer money, for building massive computing clusters in the US, while at the same time denying the knowledge required to build AI models from non-American people and companies.

With the following:

I'm Leopold Aschenbrenner. I recently founded an investment firm focused on AGI.

As you say, the whole set of writings has a propaganda (marketing) tone to it, and a somewhat naive worldview. The temporal coincidence of the essays and the investment startup is likely not accidental. I'm surprised it got the attention it did, given, as you say, the red-flag writing style of which we are taught to be skeptical. Any presentation of these essays should be placed alongside the kind of systematic skepticism of e.g. Gary Marcus et al., so readers can draw their own conclusions.

This all seems extremely optimistic. I don't see the words 'environment', 'externalities', 'biodiversity', or 'pollution' mentioned at all, let alone 'geopolitics', 'fragmentation', 'onshoring', 'deglobalisation', or 'supply chains'. And nothing about energy demands or the cost of extraction. Is this based on upbeat consultancies' biased models that always conclude things are good for business? I'll be extremely surprised if this 'lower bound' scenario isn't in fact the upper bound.

Hopefully everyone who thinks that AI is the most pressing issue takes the time to write (or collaborate and write) their best solution in 2,000 words and submit it to the UN's recent consultation call: https://dig.watch/updates/invitation-for-paper-submissions-on-worldwide-ai-governance It's a chance to put AI in the same global governance basket as biological and nuclear weapons, and offers potentially high leverage from a relatively small task (deadline 30 September).

It is difficult to interpret a lot of this, as it seems to be a debate between potentially biased pacifists and a potentially biased military blogger. As with many disagreements, the truth is likely somewhere in the middle (as Rodriguez noted). We need new independent studies on this, divorced from the existing pedigrees. That said, much of the catastrophic risk from nuclear war may lie in the likely catastrophic trade disruptions, which alone could lead to famines, given that nearly two-thirds of countries are net food importers and almost no country produces its own liquid fuel to run agricultural equipment.
