Joshua Krook

Law Researcher

I definitely think you're right that vague prompts will be the main scenario. It's possible that the same prompt could yield either a legal or an illegal answer if it's written too vaguely, or without specifying certain boundaries or methods. I've seen some research on misalignment that suggests this too.

The other situation is users who do in fact intend to commit a crime. Increasingly, they will have to jailbreak the AI, because AI companies will put in more safeguards over time.

So dealing with both vague prompting and jailbreaking could help mitigate some of this risk.

In terms of examples, I gave insider trading because I think it's the easiest to imagine. If you get an agent to trade on the market and it does so by obtaining non-public data from a company server, that alone might be enough to constitute the offence. Many white-collar crimes require surprisingly little conduct to commit.

Fraud is another example we're seeing already. Because AI hallucinates and makes things up, it's already somewhat vulnerable to fraud scenarios, in particular misrepresenting the user it works for. So an AI could say its user is a professional (X) when they're actually a student, or something similar. In many jurisdictions, claiming a licence you don't hold is already a crime.