In the face of increasing competition, it seems unlikely that AI companies will ever take their foot off the gas. One avenue to slow down AI development is to make investment in AI less attractive, for example by increasing the legal risk associated with incorporating AI into products.
My understanding of the law is limited, but the EU seems particularly friendly to this approach. The European Commission recently proposed the AI Liability Directive, which aims to make it easier to sue over AI products. In the US, companies are, at the very least, directly responsible for what their chatbots say, and it seems like only a matter of time until a chatbot genuinely harms a user, whether through gaslighting or abusive behavior.
A charity could provide legal assistance to victims of AI in precedent-setting cases, much as the EFF provides legal assistance in cases related to Internet freedom.
Besides helping the affected person, this would hopefully:
- Signal to organizations that giving users access to AI is risky business
- Scare away new players in the market
- Scare away investors
- Give the AI company in question a bad reputation, and sway public opinion against AI companies in general
- Limit the ventures large organizations would be willing to jump into
- Spark policy discussions (e.g. about limiting minor access to chatbots, which would also limit profits)
All of these things would make AI a worse investment and AI companies less attractive places to work. I'm not sure it would make a big difference, but it seems no less likely to move the needle than academic work on AI safety.
There are definitely a lot of legal angles that AI will implicate, although some of the examples you provided suggest the situation is more mixed.
More fundamentally, I don't think it will be OpenAI et al. who provide most of these services. They will license their technology to other companies, who will actually provide the services, and those companies will not necessarily have deep pockets. Generally, we don't hold tool manufacturers liable when someone uses their tools to break the law (e.g., Microsoft Windows, Amazon Web Services, a gun). So you'd need to find a legal theory that allows imputing liability to the AI company that supplied the tool to the actual service provider. That may be possible, but it is not obvious in many cases.