In the face of increasing competition, it seems unlikely that AI companies will ever take their foot off the gas. One avenue to slow down AI development is to make investment in AI less attractive. This could be done by increasing the legal risk associated with incorporating AI into products.
My understanding of the law is limited, but the EU seems particularly friendly to this approach. The European Commission recently proposed the AI Liability Directive, which aims to make it easier to sue over AI products. In the US, companies are at the very least directly responsible for what their chatbots say, and it seems like only a matter of time until a chatbot genuinely harms a user, whether through gaslighting or abusive behavior.
A charity could provide legal assistance to victims of AI in seminal cases, similar to how EFF provides legal assistance for cases related to Internet freedom.
Besides helping the affected person, this would hopefully:
- Signal to organizations that giving users access to AI is risky business
- Scare away new players in the market
- Scare away investors
- Give the AI company in question a bad rep, and sway public opinion against AI companies in general
- Limit the ventures large organizations would be willing to jump into
- Spark policy discussions (e.g. about limiting minors' access to chatbots, which would also limit profits)
All of these things would make AI a worse investment, AI companies less attractive places to work, and so on. I'm not sure it'll make a big difference, but I don't think it's any less likely to move the needle than academic work on AI safety.
I'm skeptical that this would be cost-effective. Section 230 aside, it is incredibly expensive to litigate in the US. Even if you found a somewhat viable claim (which I'm not sure you would), you would be litigating against a company like Microsoft. It would most likely cost millions of dollars to find a good case and pursue it, and then it would be settled quietly. Legally speaking, you probably couldn't be forced to settle (though in some cases you could); practically speaking, it would be very hard if not impossible to pursue a case through trial, and you'd need a willing plaintiff. Settlement agreements often contain confidentiality clauses that would limit the signaling value of your suit. Judgments would almost certainly be for money damages, not any form of injunctive relief.
All the big tech players have weathered high-profile, billion-dollar lawsuits. You might scare some small AI startups with this strategy, but I'm not sure the juice is worth the squeeze. In the best-case scenario, some companies might pivot away from the mass market and toward a B2B model. I don't know whether this would be good or bad for AI safety.
If you want to keep working on this, you might look to Legal Impact for Chickens as a model for EA impact litigation. Their situation is a bit different, though, for reasons I can expand on later if I have time.
Yes. The definition of "unauthorized practice of law" is murkier and depends more on context than one might think. For instance, I personally used -- and recommend for most people without complex needs -- the Nolo/Quicken WillMaker will-writing software.
On a more serious note, if there were 25 types of small legal harm commonly caused by AI chatbots, writing 25 books titled "How to Sue a Chatbot Company for Harm X, Including Sample Pleadings" would probably not constitute unauthorized practice.