If I were committed to allocating $20M starting in 2026, the key uncertainties I would want resolved before deciding how to give fall into two clusters: questions about the near-term tractability of global health interventions, and questions about the risk landscape surrounding increasingly capable AI systems. What I find most neglected is research on how these two domains interact.
1. How do we compare the marginal value of global health scaling vs. AI safety risk mitigation, given uncertainty around timelines and tractability?
Much of the Global Health and Development community has built strong methodologies for cost-effectiveness modeling and iteration. AI safety, by contrast, remains dominated by high uncertainty and expert priors. I would want to understand whether some AI governance or alignment work can be made more measurable, such that it can be compared, even imperfectly, to global health opportunities like malaria control, vaccine delivery, or unconditional cash transfers. This feels especially relevant given @Open Philanthropy's portfolio across both spaces.
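To make the comparison concrete, here is a minimal toy sketch of the kind of common-scale model I have in mind: Monte Carlo sampling that puts a global health grant and an AI risk grant on the same expected-lives-saved axis. Every parameter value below is an illustrative assumption, not an estimate.

```python
import random

# Toy Monte Carlo comparison of two hypothetical $1M grants on a
# common "expected lives saved" scale. All parameters are
# illustrative assumptions, not real estimates.

N = 100_000
GRANT = 1_000_000  # dollars

def sample_global_health():
    # Assume cost per life saved is lognormally distributed with a
    # median near $5,000 (roughly the ballpark of published
    # malaria-net estimates).
    cost_per_life = random.lognormvariate(mu=8.5, sigma=0.3)
    return GRANT / cost_per_life

def sample_ai_safety():
    # Assume the grant shaves a tiny, highly uncertain sliver off
    # catastrophic-risk probability, with ~8 billion lives at stake.
    # Both numbers are pure assumptions.
    lives_at_stake = 8e9
    risk_reduction = random.lognormvariate(mu=-20, sigma=2)  # ~2e-9 median
    return lives_at_stake * risk_reduction

gh = sorted(sample_global_health() for _ in range(N))
ai = sorted(sample_ai_safety() for _ in range(N))

for name, xs in [("global health", gh), ("AI safety", ai)]:
    mean = sum(xs) / len(xs)
    median = xs[N // 2]
    print(f"{name}: mean {mean:,.0f} lives, median {median:,.0f} lives")
```

The outputs mean nothing in themselves (the AI-side parameters are doing all the work), but forcing both grants into one distribution makes the disagreements explicit and debatable, which is exactly what most AI safety grantmaking currently lacks.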
2. What are the highest-leverage AI-related opportunities for improving global health outcomes?
Even before AGI, frontier systems will shape how health systems operate. We are already seeing early benefits (diagnostics, forecasting, logistics) and risks (deepfake misinformation, automated biodesign, widening inequalities). I would want rigorous analysis of whether supporting AI policy capacity in LMICs, or designing AI systems specifically to serve low-resource contexts, could outperform traditional global health spending on a per-dollar basis. Evidence-based delivery in the style of @Evidence Action may become relevant here.
3. What is the risk profile of AI deployed in global health, and does it create new systemic vulnerabilities?
For example, how does reliance on AI-driven surveillance or epidemiological systems change the risk of catastrophic misuse or failure? Could global health deployments inadvertently increase existential biorisk? This feels like a question currently falling between cause areas, and I am not sure any actor is systematically prioritizing it.
4. What are the optimal philanthropic strategies if AI timelines shorten significantly?
Should a donor pivot from long-term global health institution-building to technical alignment research or policy advocacy? Or is it more valuable to improve the resilience and welfare of the populations who will otherwise be least protected from AI-driven shocks? This is a core strategic question for anyone trying to maximize expected value across time.
If I had to name the most important meta-question: what frameworks allow us to compare uncertain, systemic, long-tail risk reduction (e.g., alignment, global governance) with concrete, short-timeline health and development interventions, without resorting to hand-waving or relying purely on moral intuitions?
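Part of why this is hard: a naive expected-value comparison is decided almost entirely by a single unobservable parameter. A quick sensitivity sketch, again with made-up numbers:

```python
# How the naive expected-value verdict flips with the assumed
# probability that a $1M AI grant averts catastrophe.
# All figures are illustrative assumptions.

LIVES_AT_STAKE = 8e9         # roughly the global population
GH_LIVES_PER_MILLION = 200   # ~$5,000 per life saved

for exponent in range(-12, -5):
    p_avert = 10.0 ** exponent
    ai_lives = p_avert * LIVES_AT_STAKE
    verdict = "AI wins" if ai_lives > GH_LIVES_PER_MILLION else "health wins"
    print(f"p = 1e{exponent}: {ai_lives:>10,.2f} expected lives -> {verdict}")
```

The verdict flips around p ≈ 2.5e-8, a quantity nobody can estimate to within several orders of magnitude. Any honest cross-cause framework has to confront that sensitivity directly rather than bury it inside a point estimate.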
I think that expanding cross-cause prioritization frameworks to include AI safety explicitly, and especially to explore interactions between AI and global health, is a major gap in current EA work. If I had $20M and time to wait, the research agenda I’d want to commission would sit precisely at that intersection.