In the face of increasing competition, it seems unlikely that AI companies will ever take their foot off the gas. One avenue for slowing AI development is to make investment in AI less attractive, for example by increasing the legal risk associated with incorporating AI into products.
My understanding of the law is limited, but the EU seems particularly friendly to this approach. The European Commission recently proposed the AI Liability Directive, which aims to make it easier to sue over AI products. In the US, companies are at the very least directly responsible for what their chatbots say, and it seems like it's only a matter of time until a chatbot genuinely harms a user, either by gaslighting or by abusive behavior.
A charity could provide legal assistance to victims of AI in seminal cases, similar to how EFF provides legal assistance for cases related to Internet freedom.
Besides helping the affected person, this would hopefully:
- Signal to organizations that giving users access to AI is risky business
- Scare away new players in the market
- Scare away investors
- Give the AI company in question a bad reputation, and sway public opinion against AI companies in general
- Limit the ventures large organizations would be willing to jump into
- Spark policy discussions (e.g. about limiting minors' access to chatbots, which would also limit profits)
All of these things would make AI a worse investment, AI companies a less attractive place to work, and so on. I'm not sure it would make a big difference, but I don't think it's any less likely to move the needle than academic work on AI safety.
bob - I think this is a brilliant idea, and it could be quite effective in slowing down reckless AI development.
For this to be effective, it would require working with experienced lawyers who know the relevant national and international laws and regulations (e.g. in the US, UK, or EU) very well, who understand AI to some degree, and who are creative in spotting ways that new AI systems might inadvertently (or deliberately) violate those laws and regulations. They'd also need to be willing to sue powerful tech companies -- but those companies have very deep pockets, so litigation could be quite lucrative for law firms with the guts to go after them.
For example, in the US there are HIPAA privacy rules governing how companies handle private medical information. Any AI system that allows or encourages users to share private medical information (such as asking a chatbot questions about their symptoms, diseases, medications, or psychiatric issues) is probably not designed to comply with those HIPAA regulations -- and violating HIPAA is a serious legal matter.
More generally, any AI system that offers advice to users regarding medical, psychiatric, clinical psychology, legal, or financial matters might be in violation of laws that give various professional guilds a government-regulated monopoly on these services. For example, if a chatbot is basically practicing law without a license, practicing medicine without a license, practicing clinical psychology without a license, or giving financial advice without a license, then the company that created that chatbot might be violating some pretty serious laws. Moreover, the professional guilds have every incentive to protect their turf against AI intrusions that could result in mass unemployment among their guild members. And those guilds have plenty of legal experience suing interlopers who challenge their monopoly. The average small law firm might not be able to effectively challenge Microsoft's corporate legal team that would help defend OpenAI. But the American Medical Association might be ready and willing to challenge Microsoft.
AI companies would also have to be very careful not to violate laws and regulations regarding the production of terrorist propaganda, adult pornography (illegal in many countries, such as China and India), child pornography (illegal in most countries), heresy (e.g. content violating Sharia law in fundamentalist Muslim countries), and so on. I doubt that most devs or managers at OpenAI or DeepMind are thinking very clearly or proactively about how not to fall afoul of state security laws in China, Sharia law in Pakistan, or even EU privacy laws. But lawyers in each of those countries might realize that American tech companies are rich enough to be worth suing in their own national courts. How long would Microsoft or Google have the stomach for defending their AI subsidiaries in the courts of Beijing, Islamabad, or Brussels?
There are probably dozens of other legal angles for slowing down AI. As AI systems become more general-purpose and more globally deployed, the number of ways they might violate laws and regulations across different nations grows, and with it the legal 'attack surface' that makes AI companies vulnerable to litigation.
Long story short: rather than focusing on passing new global regulations to limit AI, we could exploit the fact that new AI systems will probably violate existing laws and regulations in thousands of ways across different countries. Identifying those violations, and using them as leverage to slow down dangerous AI development, might be a very fast, clever, and effective use of EA resources to reduce X risk.
Jason - thanks for these helpful corrections, clarifications, and extensions.
My comment was rather half-baked, and you've added a lot to think about!