
(Written up quite quickly — policy suggestions are intended to be illustrative, not detailed proposals. Feedback greatly appreciated!)

Introduction

In the past few days (!) we've seen a number of tools and frameworks built around language models, delivering impressive new capabilities. The ease and speed with which these improvements have been built suggest a significant capability overhang within foundation models. Extrapolating, it seems likely that further overhang remains, such that post-hoc enhancements (i.e. scaffolding built around existing models) could significantly increase the reliability and usefulness of these models even without major improvements to the models themselves. As this usefulness becomes more apparent, an influx of attention and money into AI productization may make it increasingly difficult to shape the trajectory of technological development and to prevent race dynamics. While I'd love to see this technology developed and used for good, I do not expect global capital markets to handle the moral hazard associated with AI risk particularly well. Even for relatively low credences in catastrophic AI risk, the prospect of trillions of new dollars being poured indiscriminately into developing profitable AI use cases should be cause for concern.

In an earlier post, I outlined the case for capping AI-generated profits above some relatively high threshold — more or less, a broad, government-enforced Windfall Clause. TL;DR: this seems minimally disruptive in the short run, fits easily into existing tax frameworks, disincentivizes AGI development on the margin, and provides some democratic channels for wealth distribution if AI companies become wildly profitable through the development of AGI (in worlds where this does not cause calamity).
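To make the shape of such a cap concrete, here is a minimal sketch of a threshold-based schedule in Python. The threshold and rate are hypothetical placeholders chosen for illustration, not figures from the earlier post:

```python
# Minimal sketch of a threshold-based "windfall" tax on AI-attributed profits.
# The $10B threshold and 80% rate are hypothetical placeholders, not proposals.

def windfall_tax(ai_profit_usd: float,
                 threshold_usd: float = 10e9,   # profit below this is taxed as usual
                 windfall_rate: float = 0.8) -> float:
    """Extra tax owed on AI-attributed profit above the threshold."""
    excess = max(0.0, ai_profit_usd - threshold_usd)
    return excess * windfall_rate

# Example: a firm reporting $50B in AI-attributed profit
print(f"${windfall_tax(50e9) / 1e9:.0f}B")  # 0.8 * (50 - 10) = $32B
```

The point is only that a cap like this slots naturally into existing marginal-rate tax machinery; the actual threshold and rate are exactly the kind of detail that would need real policy work.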

However, this is a fairly light-handed policy, sitting at the opposite end of the spectrum from proposals like the one in Eliezer Yudkowsky's TIME piece. The response to that proposal (at least on my Twitter timeline) has been very divided: I've seen a lot of people appalled by the draconian surveillance measures and implicit violence required, while others have voiced support, arguing that the cost of allowing capabilities to progress without significant improvements to our safety toolset is far higher than the cost to our personal freedoms.

Here, I'd like to extend my policy proposal to something more impactful. The overarching goal of these policies is to demonetize AI development, in the hope that the remaining incentives to develop AI are differentially focused on safety and leave actors better able to coordinate. To unpack the motivations further:

  • Policymakers are familiar with the premise of taxation as a tool to help actors internalize externalities. These kinds of proposals (appropriately posed) could fall well within the Overton window on policy.
  • Profit motives are at the core of current race dynamics, and eliminating them has the potential to greatly improve our outlook. Even in a world where AI has been made unprofitable to investors, there may still be pressure to pursue risky capability breakthroughs (e.g. researchers chasing high-profile publications). However, I suspect that pressure would be considerably weaker.
  • While this direction is less airtight than the kind of lockdown proposed by Yudkowsky, profit incentives clearly play a very large role in AI development and deployment. The hope is that we could still capture a large share of the safety benefits of a total moratorium at significantly lower social cost and with less norm-breaking.
  • Concerns around personal liberties are much less salient if we restrict these policies to acting only on corporations. Under the existing policy regime, we already routinely place heavy restrictions on companies and shift their incentives.
  • In cooling off investment in the broader AI ecosystem, we would want to ensure that we do not shut down existing safety work. To this end, revenues from new taxes could directly fund non-profit research in safety, alignment, and safe-but-useful tools (e.g. Khanmigo). Given how few safety researchers there currently are, it should be easy to increase total spending on this type of research with such revenues.

 

Policy options

To reiterate, I propose that we look at options to significantly reduce the amount of profit-seeking investment flowing into AI development. Note that I am explicitly omitting non-profit R&D, primarily because the cost-benefit of regulating and taxing it seems far less certain than it does for profit-seeking entities.

Taxing profits generated through the use of AI

  • A tax on AI-generated profits could be applied either to companies building or distributing AI themselves, or to any business using AI models of certain classes. The rate could be scaled as desired, making this a flexible tool for influencing the market.
  • The intent here is both to slow the roll-out of AI into every corner of the economy and to minimize investment dollars chasing profit and competing for market share.
    • Rapid, widespread roll-out seems likely to make future capability advances propagate more suddenly.
    • Investment competing for profit and market share plays a significant role in unsafe race dynamics.
  • Using tax revenues to directly fund grants for non-profit safety research would help offset the job losses in safety research that may result from decreased investment. I expect that the revenue generated from even a relatively modest tax could fund the salaries of all current safety researchers working at for-profit companies.
    • To provide context, the most recent estimate I could find put the number of active safety researchers in 2022 at around 400. Assuming these researchers are paid on average $400k/year,[1] funding their salaries would cost around $160M, or roughly 0.04% of current US corporate tax revenues (see the back-of-the-envelope sketch after this list).
    • Of course, the full picture is a lot more complicated than this (see the second point under Concerns), but given that these new AI tools are likely to produce a lot of new wealth, even modest taxes have the potential to increase safety funding considerably.
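Spelling out that back-of-the-envelope calculation (a rough sketch only: the researcher count and salary come from the text above, while the US corporate tax revenue figure of roughly $425B in FY2022 federal receipts is my own approximate input):

```python
# Back-of-the-envelope check on the funding estimate above.
num_safety_researchers = 400        # rough 2022 estimate cited in the text
avg_salary_usd = 400_000            # assumed average salary (see footnote 1)
us_corporate_tax_revenue = 425e9    # approx. FY2022 US federal corporate tax receipts

total_salary_cost = num_safety_researchers * avg_salary_usd
share_of_corporate_tax = total_salary_cost / us_corporate_tax_revenue

print(f"${total_salary_cost / 1e6:.0f}M")  # -> $160M
print(f"{share_of_corporate_tax:.2%}")     # -> 0.04%
```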

 

Making it difficult to use advanced AI models in commercial products

  • An alternative to taxation could be direct regulation intended to prevent commercial investment in unsafe directions. 
    • For instance, the FTC could stipulate that the burden is on companies to prove that their AI usage meets a set of stringent safety criteria. I'd like to see rules about the amount of compute and data used to train a model, the model's access to the internet, security procedures for preventing model weights from being leaked and spread, etc. My focus here is on LLMs, but I could imagine additional rules for other types of AI, e.g. drug discovery models.
    • In addition to focusing on safety criteria, I could imagine that policies enforcing strict consumer privacy laws might make it difficult for companies to legally market models trained on public data. This seems like a riskier bet, but plausibly one that is easier to find support for.
  • Again, this could apply to the companies producing AI, and/or any companies using AI in their products.
  • I suspect that it would be difficult (and probably undesirable) to prevent the usage of current-generation models. While these products may create their own challenges, they are probably not major existential risks.
    • I have a weakly-held belief that addressing these new challenges (such as AI-generated disinformation, security and privacy concerns, etc.) will provide society with a comparatively low-stakes way to come to terms with the power of these models. Concrete examples of AI causing significant social impacts could help coordinate the world in addressing future issues.
    • That said, it also seems quite possible that the social impacts of AI will harm our ability to enact good policies more than they help in making the situation salient.


Concerns

There are of course a number of uncertainties here:

  • In general, enacting a policy that does more good than harm is a tough bar to clear. Even leading experts in the field of AI are unable to reach agreement on core questions like "should we slow down AI progress?" Since the details probably matter a lot here, it seems quite possible that asking civil servants and politicians with less complete knowledge of the situation to enact a policy could go poorly.
  • Demonetizing the AI industry completely will be very disruptive, and the net effect could well be negative. If public funding cannot sustain salaries and a productive environment for safety researchers to work in, these policies have the potential to seriously slow the trajectory of safety research.
    • Moving toward a publicly funded model of AI safety research also brings with it the usual concerns about centralized grantmaking.
  • Unilaterally implementing these policies will not prevent AI race dynamics from continuing elsewhere. 
  • The more disruptive the policy, the more we will need international coordination to prevent tax and regulatory evasion. 
  • There are arguments for having a single entity take the lead on AGI development, for example in order to take a pivotal action that brings humanity out of the current period of instability. Currently, that entity seems to be OpenAI, but it is unclear how the situation would change if new policies severely cut off its funding.

Conclusion

The policy options I presented above are off-the-cuff proposals, and they need a lot of refinement. As with most policy, the devil is in the details. However, for people who are seriously concerned about the future of AI development, reducing or eliminating the force of profit motives seems like a useful way to improve coordination and buy ourselves some time. Regardless of whether any policy like this is passed, I believe we are in desperate need of better ways to communicate the core problems of alignment, and the uncertainties researchers still have, to policymakers and voters. Sam Bowman's new paper, Eight Things to Know about Large Language Models, is a stellar example of giving policymakers the information they need to make good decisions, and I hope to see more work in this direction.

As a final request for feedback (particularly from people with real-world policy experience):

  • How feasible does the most ambitious version of this sound (e.g. 80% or higher taxes on AI-generated profits, or significant restrictions on commercial usage)?
  • How feasible does a limited version seem (say, a 20% tax on profits generated through AI products)?
  • What components of the proposal seem most difficult to find support for?
  1. ^

    I'm uncertain whether this is reasonable. Salaries in ML seem to start at around $180k and reach into the millions.
