In the face of increasing competition, it seems unlikely that AI companies will ever take their foot off the gas. One avenue for slowing AI development is to make investment in AI less attractive, which could be done by increasing the legal risk of incorporating AI into products.
My understanding of the law is limited, but the EU seems particularly friendly to this approach. The European Commission recently proposed the AI Liability Directive, which aims to make it easier to sue over harm caused by AI products. In the US, companies are at the very least directly responsible for what their chatbots say, and it seems like only a matter of time until a chatbot genuinely harms a user, whether through gaslighting or abusive behavior.
A charity could provide legal assistance to victims of AI harms in seminal cases, much as EFF provides legal assistance in cases related to Internet freedom.
Besides helping the affected person, this would hopefully:
- Signal to organizations that giving users access to AI is risky business
- Scare away new players in the market
- Scare away investors
- Give the AI company in question a bad reputation, and sway public opinion against AI companies in general
- Limit the ventures large organizations would be willing to jump into
- Spark policy discussions (e.g. about limiting minors' access to chatbots, which would also limit profits)
All of these things would make AI a worse investment, AI companies a less attractive place to work, and so on. I'm not sure it'll make a big difference, but I don't think it's any less likely to move the needle than academic work on AI safety.
Maybe not the most cost-effective thing in the whole world, but possibly still a great project for EAs who already happen to be lawyers and want to contribute their expertise (see organizations like Legal Priorities Project or Legal Impact for Chickens).
This also feels like the kind of thing where EA wouldn't necessarily have to foot the entire bill for an eventual mega-showdown with Microsoft or the like. We could fund some seminal early cases and figure out what a general "playbook" should look like for building possibly-winnable lawsuits that push companies to pay more attention to alignment, safety, and assessment of their AI systems. Then other people, motivated by the prospect of a big payout from a giant tech company, would surely be happy to launch their own lawsuits once we'd established enough of a playbook for how such cases work.
One important aspect of this project, perhaps, should be crafting legal arguments that encourage companies to take useful, potentially x-risk-mitigating actions in response to lawsuit risk, rather than just coming up with whatever arguments are most likely to result in a payout. This could set the tone for the field in an especially helpful direction.