This week, we are releasing new research on advanced artificial intelligence (AI), the opportunities and risks it presents, and the role donations can play in positively steering its development.
As with our previous research investigating areas such as nuclear risks and catastrophic biological risks, our report on advanced AI provides a comprehensive overview of the landscape, outlining for the first time how donations can cost-effectively reduce these risks.
You can find the technical report as a PDF here, or read a condensed version here.
In brief, the key points from our report are:
- General, highly capable AI systems are likely to be developed in the next couple of decades, and may emerge within the next few years.
- Such AI systems will radically upend the existing order, presenting a wide range of risks, up to and including catastrophic threats.
- AI companies, funded by big tech, are racing to build these systems without the caution or restraint appropriate to the stakes at play.
- Governments are under-resourced, ill-equipped, and vulnerable to regulatory capture by big tech companies, leaving a worrying gap in our defenses against dangerous AI systems.
- Philanthropists can and must step in where governments and the private sector are missing the mark.
- We recommend special attention to funding opportunities to (1) boost global resilience, (2) improve government capacity, (3) coordinate major global players, and (4) advance technical safety research.
Funding Recommendations
Alongside this report, we are sharing some of our latest recommended high-impact funding opportunities. The Centre for Long-Term Resilience, the Institute for Law and AI, the Effective Institutions Project, and FAR AI are four promising organizations we have recently evaluated and recommend for further funding, covering our four respective focus areas. We are in the process of evaluating more organizations and hope to release further recommendations.
Furthermore, Founders Pledge’s Global Catastrophic Risks Fund supports critical work on these issues. If you would like to make progress on a range of catastrophic risks, including those from advanced AI, please consider donating to the Fund!
About Founders Pledge
Founders Pledge is a global non-profit empowering entrepreneurs to do the most good possible with their charitable giving. We equip members with everything needed to maximize their impact, from evidence-led research and advice on the world’s most pressing problems, to comprehensive infrastructure for global grant-making, alongside opportunities to learn and connect. To date, our members have pledged over $10 billion to charity and donated more than $950 million. We’re grateful to be funded by our members and other generous donors. founderspledge.com
I haven't seen the phrase "Advanced Artificial Intelligence" in use before. How does AAI differ from Frontier AI, AGI, and Artificial Superintelligence?
Thank you.
Separately, I just read your executive summary on the nuclear threat, something I think is particularly serious and worthy of effort. It read to me as though the report suggests that there is such a thing as a limited nuclear exchange. If that's correct, I would offer that you're doing more harm than good by promoting that view, which unfortunately some politicians and military officers share.
If you have not yet read, or listened to, Nuclear War: A Scenario by Annie Jacobsen, I highly encourage you to do so. Your budget for finding ways to pr...