Over the long run, technology has improved the human condition. Nevertheless, the economic gains from technological innovation have not arrived equitably or smoothly. While innovation often produces great wealth, it has also often been disruptive to labor, society, and world order. In light of ongoing advances in artificial intelligence (“AI”), we should prepare for the possibility of extreme disruption, and act to mitigate its negative impacts. This report introduces a new policy lever to this discussion: the Windfall Clause.
What is the Windfall Clause?
The Windfall Clause is an ex ante commitment by AI firms to donate a significant amount of any eventual extremely large profits. By “extremely large profits,” or “windfall,” we mean profits that a firm could not earn without achieving fundamental, economically transformative breakthroughs in AI capabilities. It is unlikely, but not implausible, that such a windfall could occur; as such, the Windfall Clause is designed to address a set of low-probability future scenarios which, if they come to pass, would be unprecedentedly disruptive. By “ex ante,” we mean that we seek to have the Clause in effect before any individual AI firm has a serious prospect of earning such extremely large profits. “Donate” means, roughly, that the donated portion of the windfall will be used to benefit humanity broadly.
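To make the commitment concrete, one natural way to formalize "donate a significant amount of any eventual extremely large profits" is a marginal-rate schedule applied to profits measured as a share of gross world product (GWP), so that ordinary profits trigger no obligation and only genuinely transformative profits do. The sketch below is purely illustrative; the brackets, rates, and GWP figure are placeholder assumptions of ours, not a recommendation from the report.

```python
def windfall_obligation(profits: float, gross_world_product: float) -> float:
    """Hypothetical marginal-rate windfall schedule (illustrative only).

    Profits are measured as a fraction of gross world product (GWP), and
    each bracket's marginal rate applies only to profits within it.
    """
    # (bracket upper bound as a fraction of GWP, marginal donation rate)
    brackets = [
        (0.001, 0.00),          # below 0.1% of GWP: no obligation
        (0.01, 0.05),           # 0.1%-1% of GWP: 5% marginal rate
        (0.10, 0.20),           # 1%-10% of GWP: 20% marginal rate
        (float("inf"), 0.50),   # above 10% of GWP: 50% marginal rate
    ]
    share = profits / gross_world_product
    obligation = 0.0
    lower = 0.0
    for upper, rate in brackets:
        if share <= lower:
            break
        portion = min(share, upper) - lower  # share of GWP falling in this bracket
        obligation += portion * rate * gross_world_product
        lower = upper
    return obligation

# Example: a firm earning $2 trillion against an assumed $100 trillion GWP
print(windfall_obligation(profits=2e12, gross_world_product=100e12))  # -> $245 billion
```

Under these made-up numbers, a firm earning 2% of GWP would owe $245 billion of its $2 trillion in profits; the marginal structure keeps the obligation at zero in ordinary scenarios while scaling it up steeply in transformative ones.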
Motivations
Properly enacted, the Windfall Clause could address several potential problems with AI-driven economic growth. The distribution of profits could compensate those who lose their jobs, through no fault of their own, to advances in technology, mitigate potential increases in inequality, and smooth the economic transition for the most vulnerable. It provides AI labs with a credible, tangible mechanism to demonstrate their commitment to pursuing advanced AI for the common global good. Finally, it provides a concrete suggestion that may stimulate other proposals and discussion about how best to mitigate AI-driven disruption.
Motivations Specific to Effective Altruism
Most EA resources devoted to AI to date have focused on extinction risks from AI. One might wonder whether the problems addressed by the Windfall Clause are really as pressing as those risks.
However, a long-term future in which advanced forms of AI, such as artificial general intelligence (AGI) or transformative AI (TAI), arrive but primarily benefit a small portion of humanity is still highly suboptimal. Failure to ensure that advanced AI benefits all could "drastically curtail" the potential of Earth-originating intelligent life. Intentional or accidental value lock-in could result if, for example, a TAI does not cause extinction but is programmed to primarily benefit the shareholders of the corporation that develops it. The Windfall Clause thus represents a legal response to this sort of scenario.
Limitations
There remain significant unresolved issues regarding the exact content of an eventual Windfall Clause, and the way in which it would be implemented. We intend this report to spark a productive discussion, and recommend that these uncertainties be explored through public and expert deliberation. Critically, the Windfall Clause is only one of many possible solutions to the problem of concentrated windfall profits in an era defined by AI-driven growth and disruption. In publishing this report, our hope is not only to encourage constructive criticism of this particular solution, but more importantly to inspire open-minded discussion about the full set of solutions in this vein. In particular, while a potential strength of the Windfall Clause is that it does not initially require governmental intervention, we acknowledge and thoroughly support public solutions as well.
Next steps
We hope to contribute an ambitious and novel policy proposal to an already rich discussion on this subject. More important than this policy itself, though, we look forward to continuously contributing to a broader conversation on the economic promises and challenges of AI, and how to ensure AI benefits humanity as a whole. Over the coming months, we will be working with the Partnership on AI and OpenAI to push such conversations forward. If you work in economics, political science, or AI policy and strategy, please contact me to get involved.
I imagined so; but the idea just kept coming to mind, and since I hadn't seen it explicitly stated, I thought it would be worth mentioning.
I agree that, with current legislation, this is likely so.
But let me share a thought: even though we don't have a hedge for when one company succeeds so thoroughly that it ends up dominating the whole market (ruining all competitors in the process), we do have some compensation schemes, based on specific legislation, for when a company fails, such as deposit insurance. The economic literature usually presents deposit insurance as a public good (it decreases the odds of a bank run and so increases macroeconomic stability), but it was only accepted by the industry because it solved a lemons problem. Even today, the "green swan" (see section 2) talk in finance often appeals to the risk of losses in a future global crisis (the Tragedy of the Horizon argument). My impression is that an innovation in financial regulation often starts by convincing banks and institutions that it's in their general self-interest, and only later becomes compulsory, to prevent free-riding.
(So, yeah, if tech companies get together with the excuse of protecting their investors (and everyone else in the process) in case someone dominates the market, that's collusion; if banks do so, it's CSR.)
(Epistemic status for the claims about deposit insurance: I should have done a more thorough investigation of the economic history, but I lack the time; the argument is consistent, and I did have first-hand experience with the creation of a depositor insurance fund for credit unions. That is, it didn't mitigate systemic risk; it just addressed depositors' risk aversion.)