ArthurF

Student (MSci - Physics) @ Durham University

I'm not sure of this at all, but I'm under the impression that commercial users tend to find loopholes for malicious use of AI systems that regulators/safety teams are unable to predict.

Could we draw a parallel with misaligned behaviour, where misalignment only starts to show up at high volumes of use, in ways that aren't rigorously testable or that don't appear under regulator testing? Would it be useful to consider a commercial incubation period with certain compute limits, for example, or could we confidently trust regulators without one?