This week, we are releasing new research on advanced artificial intelligence (AI), the opportunities and risks it presents, and the role donations can play in positively steering its development.
As with our previous research investigating areas such as nuclear risks and catastrophic biological risks, our report on advanced AI provides a comprehensive overview of the landscape, outlining for the first time how donations can cost-effectively reduce these risks.
You can find the technical report as a PDF here, or read a condensed version here.
In brief, the key points from our report are:
- General, highly capable AI systems are likely to be developed in the next couple of decades, and could emerge within the next few years.
- Such AI systems will radically upend the existing order - presenting a wide range of risks, up to and including catastrophic threats.
- AI companies - funded by big tech - are racing to build these systems without appropriate caution or restraint given the stakes at play.
- Governments are under-resourced, ill-equipped, and vulnerable to regulatory capture by big tech companies, leaving a worrying gap in our defenses against dangerous AI systems.
- Philanthropists can and must step in where governments and the private sector are missing the mark.
- We recommend special attention to funding opportunities to (1) boost global resilience, (2) improve government capacity, (3) coordinate major global players, and (4) advance technical safety research.
Funding Recommendations
Alongside this report, we are sharing some of our latest recommended high-impact funding opportunities. The Centre for Long-Term Resilience, the Institute for Law and AI, the Effective Institutions Project, and FAR AI are four promising organizations we have recently evaluated and recommend for further funding, covering our four focus areas respectively. We are in the process of evaluating more organizations and hope to release further recommendations.
Furthermore, Founders Pledge’s Global Catastrophic Risks Fund supports critical work on these issues. If you would like to support progress on a range of catastrophic risks - including those from advanced AI - please consider donating to the Fund!
About Founders Pledge
Founders Pledge is a global non-profit empowering entrepreneurs to do the most good possible with their charitable giving. We equip members with everything needed to maximize their impact, from evidence-led research and advice on the world’s most pressing problems, to comprehensive infrastructure for global grant-making, alongside opportunities to learn and connect. To date, our members have pledged over $10 billion to charity and donated more than $950 million. We’re grateful to be funded by our members and other generous donors. founderspledge.com
(I have not read the full report yet, I'm merely commenting on a section in the condensed report.)
This argument seems wrong to me. While AI does pose negative externalities—like any technology—it does not seem unusual among technologies in this specific respect (beyond the fact that both the positive and negative effects will be large). Indeed, if AI poses an existential risk, that risk is borne by both the developers and general society. Therefore, it's unclear whether there is actually an incentive for developers to dangerously "race" if they are fully rational and informed of all relevant facts.
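To make this concrete, here is a toy expected-value sketch of the point (my own notation and stylized assumptions, not taken from the report). Suppose a developer chooses how fast to push ahead, $s$, earning a private benefit $B(s)$ while raising the probability of catastrophe $p(s)$. Let $D_{\mathrm{dev}}$ be the harm the developer itself suffers in a catastrophe and $D_{\mathrm{others}}$ the harm to everyone else. The developer maximizes

$$U_{\mathrm{dev}}(s) = B(s) - p(s)\,D_{\mathrm{dev}},$$

while the social objective is

$$W(s) = B(s) - p(s)\,\bigl(D_{\mathrm{dev}} + D_{\mathrm{others}}\bigr).$$

The wedge between the private and social optimum comes only from the $p(s)\,D_{\mathrm{others}}$ term. If the catastrophe in question is existential, $D_{\mathrm{dev}}$ is already about as large as anything the developer cares about, so a fully rational, fully informed developer has a strong private reason for caution even before the harm to others is counted; on this stylized picture, the extra incentive to race created by the externality may be small.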
In my opinion, the main risk of AI does not come from negative externalities, but rather from a more fundamental knowledge problem: we cannot easily predict the results of deploying AI widely over long time horizons. This problem is real, but it does not by itself imply that individual AI developers are incentivized to act irresponsibly in the way described by the article; instead, it implies that developers may act unwisely out of ignorance of the full consequences of their actions.
These two concepts—negative externalities and the knowledge problem—should be carefully distinguished, as they have different implications for how to regulate AI optimally. If AI poses large negative externalities (and these are not outweighed by its positive externalities), then the solution could look like a tax on AI development, or regulation with a similar effect. On the other hand, if the problem posed by AI is that it is difficult to predict how AI will impact the world in the coming decades, then the solution plausibly looks more like investigating how AI will likely unfold and affect the world.