Over the long run, technology has improved the human condition. Nevertheless, the economic progress from technological innovation has not arrived equitably or smoothly. While innovation often produces great wealth, it has also often been disruptive to labor, society, and world order. In light of ongoing advances in artificial intelligence (“AI”), we should prepare for the possibility of extreme disruption, and act to mitigate its negative impacts. This report introduces a new policy lever to this discussion: the Windfall Clause.
What is the Windfall Clause?
The Windfall Clause is an ex ante commitment by AI firms to donate a significant amount of any eventual extremely large profits. By “extremely large profits,” or “windfall,” we mean profits that a firm could not earn without achieving fundamental, economically transformative breakthroughs in AI capabilities. It is unlikely, but not implausible, that such a windfall could occur; as such, the Windfall Clause is designed to address a set of low-probability future scenarios which, if they come to pass, would be unprecedentedly disruptive. By “ex ante,” we mean that we seek to have the Clause in effect before any individual AI firm has a serious prospect of earning such extremely large profits. “Donate” means, roughly, that the donated portion of the windfall will be used to benefit humanity broadly.
Motivations
Properly enacted, the Windfall Clause could address several potential problems with AI-driven economic growth. The distribution of profits could compensate those rendered faultlessly unemployed by advances in technology, mitigate potential increases in inequality, and smooth the economic transition for the most vulnerable. The Clause would also give AI labs a credible, tangible mechanism for demonstrating their commitment to pursuing advanced AI for the common global good. Finally, it offers a concrete proposal that may stimulate other proposals and discussion about how best to mitigate AI-driven disruption.
Motivations Specific to Effective Altruism
Most EA AI resources to date have focused on extinction risks from AI. One might wonder whether the problems addressed by the Windfall Clause are really as pressing as those.
However, a long-term future in which advanced forms of AI like AGI or TAI arrive but primarily benefit a small portion of humanity would still be highly suboptimal. Failure to ensure that advanced AI benefits everyone could "drastically curtail" the potential of Earth-originating intelligent life. Intentional or accidental value lock-in could result if, for example, a TAI does not cause extinction but is programmed to primarily benefit the shareholders of the corporation that develops it. The Windfall Clause thus represents a legal response to this sort of scenario.
Limitations
There remain significant unresolved issues regarding the exact content of an eventual Windfall Clause, and the way in which it would be implemented. We intend this report to spark a productive discussion, and recommend that these uncertainties be explored through public and expert deliberation. Critically, the Windfall Clause is only one of many possible solutions to the problem of concentrated windfall profits in an era defined by AI-driven growth and disruption. In publishing this report, our hope is not only to encourage constructive criticism of this particular solution, but more importantly to inspire open-minded discussion about the full set of solutions in this vein. In particular, while a potential strength of the Windfall Clause is that it initially does not require governmental intervention, we acknowledge and are thoroughly supportive of public solutions.
Next steps
We hope to contribute an ambitious and novel policy proposal to an already rich discussion on this subject. More important than this policy itself, though, we look forward to continuously contributing to a broader conversation on the economic promises and challenges of AI, and how to ensure AI benefits humanity as a whole. Over the coming months, we will be working with the Partnership on AI and OpenAI to push such conversations forward. If you work in economics, political science, or AI policy and strategy, please contact me to get involved.
(Epistemic status: there must be some flaw, but I can't find it.)
Sure. But let me be clearer: what drew my attention is that there seems to be no downside for a company in doing this ASAP. My whole point:
First, consider the “simple” example where a signatory company promises to donate 10% of its profits from a revolutionary AI system in 2060, a scenario with an estimated probability of about 1%; the present value of this obligation would currently amount to US$650 million (in 2010 dollars). This seems like a lot. However, I contend that, given investors’ hyperbolic discounting, they probably wouldn’t be very concerned about it: it’s an unlikely event, forty years away. Moreover, I’ve checked with some accountants, and this obligation would today probably be classified as a contingent liability of remote possibility, which, under IAS 37, means it wouldn’t impact the company’s balance sheet and wouldn’t even have to be disclosed in its annual report. So I doubt such an obligation would negatively affect a company’s market value and profits in the short term; actually, since there’s no “bad marketing” here, it could very well increase them.
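The present-value arithmetic behind this example can be sketched as follows. Note that the windfall magnitude and discount rate below are hypothetical stand-ins (the comment states only the 10% donation rate, the ~1% probability, and the US$650 million result), chosen merely to show how a figure of that order could arise:

```python
def contingent_pv(windfall_profit, donation_rate, probability, years, discount_rate):
    """Expected present value of a contingent future donation obligation:
    (probability of the scenario) x (donated share of the windfall),
    discounted back to the present at a constant annual rate."""
    expected_donation = probability * donation_rate * windfall_profit
    return expected_donation / (1 + discount_rate) ** years

# Hypothetical inputs (NOT from the comment): a ~$7.5 trillion windfall
# in 2060, discounted at 5%/yr over the 50 years from a 2010 baseline.
pv = contingent_pv(
    windfall_profit=7.5e12,  # hypothetical windfall profit in 2060
    donation_rate=0.10,      # promise to donate 10% of it
    probability=0.01,        # ~1% chance the scenario occurs
    years=50,                # 2010 -> 2060
    discount_rate=0.05,      # hypothetical annual discount rate
)
print(f"Present value: ${pv / 1e6:.0f} million")
```

Under these illustrative assumptions the expected present value comes out in the neighborhood of US$650 million; different (equally defensible) discount rates or windfall sizes would move the figure substantially, which is part of why the obligation looks so cheap to commit to today.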
Second (the previous argument was meant to get here): would it violate some sort of fiduciary duty? Even if the Clause doesn’t affect present investors, it could affect future ones: supposing the Clause is enforced, can those investors complain? That’s where things get messy for me. If fiduciary duty assumes a person-affecting conception of duties (as law usually does), I believe they can’t. First, if the Clause were public, any investor who bought company shares after the promise would have done so in full knowledge, and so couldn’t complain; and if the Clause didn’t affect the company’s market value in 2019, even older investors would face the objection “but you could have sold your shares without loss.” Also, given the precise event “this company made this discovery in such-and-such a way,” it’s quite likely that the promise figures in the causal chain that led this precise company to this result; it certainly didn’t prevent it. Thus, even future investors wouldn’t be in a position to complain.
There must be some flaw in this reasoning, but I can’t find it.
(Could we convince start-ups to sign this until it becomes trendy?)