I have written a new GovAI blog post - link here.
How should labs share large AI models? I argue for a "structured access" approach, where outsiders interact with the model at arm's length. The aim is to both (a) prevent misuse, and (b) enable safety-relevant research on the model. The GPT-3 API is a good early example, but I think we can go even further. This could be a promising direction for AI governance.
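For readers who haven't used the GPT-3 API, the basic shape of structured access is that researchers send queries to a hosted model rather than downloading the weights. A minimal sketch of what that interface could look like, with a hypothetical endpoint, key, and response format (the real GPT-3 API differs in its details):

```python
import requests

API_URL = "https://api.example-lab.com/v1/complete"  # hypothetical endpoint
API_KEY = "sk-..."  # issued per researcher, so usage can be attributed and revoked

def query_model(prompt: str, max_tokens: int = 64) -> str:
    """Send a prompt to the hosted model and return its completion.

    The researcher never holds the weights; the lab can log queries,
    enforce rate limits, and gate capabilities server-side.
    """
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt, "max_tokens": max_tokens},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["completion"]

print(query_model("One open question in AI governance is"))
```

The point of the arm's-length setup is that every control in the sketch (authentication, logging, rate limits) lives on the lab's side of the interface, so access can be broadened or narrowed without ever releasing the model itself.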
I would be interested to hear people's thoughts :)
What incentives and mechanisms do you think would be most effective at getting industrial and academic labs to provide structured access to their models?
(1) seems worth funding to the extent that it's fundable (e.g. if it were run like an open-source software project)
I'm less optimistic about public advocacy. As ML models have come to have a greater impact on people's lives, there has already been a growing public movement calling for more transparency and accountability for these models (which could include structured access). So far, though, this pressure doesn't seem to have been a strong incentive for existing companies to change their products.
(5) I like a lot; it would fit well with structured evaluation programmes like BIG-Bench. To gesture at the pairing, see the sketch below.
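A lab could let external evaluators run standardized benchmark tasks through the access interface without ever shipping the weights. A hedged sketch, reusing the hypothetical `query_model` from the post above (BIG-Bench's actual harness is more involved; this only shows the shape):

```python
# Hypothetical illustration: a tiny eval suite run through the access API.
EVAL_TASKS = [
    {"prompt": "Q: What is 17 * 23?\nA:", "expected": "391"},
    {"prompt": "Q: What is the capital of France?\nA:", "expected": "paris"},
]

def run_eval(tasks) -> float:
    """Score the hosted model on a list of prompt/expected-answer tasks."""
    correct = 0
    for task in tasks:
        completion = query_model(task["prompt"], max_tokens=8)
        correct += task["expected"].lower() in completion.lower()
    return correct / len(tasks)

print(f"accuracy: {run_eval(EVAL_TASKS):.2f}")
```

Because every query goes through the lab's interface, the lab can grant evaluators deeper access (e.g. higher rate limits or logprobs) than the general public gets, without open-sourcing anything.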