Do I need a technical background to work on AI governance? I think no, not really. (This is a quick take, so I don't justify many of my claims.)
Context: I have been a technical ML engineer and (briefly) a researcher, and I'm now trying to work on AI governance (and spending a lot of time speaking to people who do work on AI governance).
Examples of things that are useful to understand to do AI governance:
1. Knowing about the train, test, deploy cycle at industrial AI companies.
2. Knowing the psyche of ML engineers at those orgs.
3. Knowing which media channels machine learning engineers & researchers use to stay on top of news, including Twitter & ML companies.
You don't get any of those insights by doing an ML Coursera course. It might be fun or gratifying to do that course for other reasons, but I think it won't make you better at governance. It's better to have a few friends who are ML engineers and to get them to sketch out what it's like at a lab some day (or, more costly but more thorough, to take a role at a lab yourself, technical or nontechnical).
What I do think you need technically is the willingness not to be afraid to read below the surface of technical memes (though I think not much below the surface).
Concrete example: watermarking.
It's enough for policymakers to be able to read a few watermarking papers and understand:
a) Watermarking is a way of tagging your model's outputs to prove they were produced by AI.
b) There are no tried & tested, reliable watermarking methods at the moment.
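(To make (a) and (b) concrete for the technically curious, here's a minimal toy sketch in the spirit of one published family of methods, the "green list" scheme from Kirchenbauer et al.'s "A Watermark for Large Language Models". This is my illustration, not their implementation; the constant GAMMA and the hashing choice are assumptions for the sake of the toy.)

```python
# Toy sketch of a "green list" watermark, in the spirit of Kirchenbauer
# et al. Illustrative only: a real scheme biases the model's logits
# toward green tokens during generation; this just shows what the
# detection statistic looks like.
import hashlib
import math

GAMMA = 0.5  # assumed fraction of the vocabulary treated as "green"

def is_green(prev_token: str, token: str) -> bool:
    # Pseudorandomly (but reproducibly) mark ~GAMMA of tokens green,
    # keyed on the preceding token so the partition shifts each step.
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] / 256.0 < GAMMA

def detect(text: str) -> float:
    # z-score for "this text has more green tokens than chance".
    # Unwatermarked text should hover near 0; text generated with a
    # green-token bias should score far above it.
    tokens = text.split()
    n = max(len(tokens) - 1, 1)
    hits = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    frac = hits / n
    return (frac - GAMMA) / math.sqrt(GAMMA * (1 - GAMMA) / n)

print(detect("an ordinary sentence that nobody watermarked"))  # ~0
```

Point (b) also falls out of this picture: paraphrasing swaps enough tokens off their green lists that the statistic collapses back toward zero, which is one reason none of these methods is tried & tested yet.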
Where I see nontechnical folk fall down (less so in this community) is when they throw out the term 'watermarking' but couldn't tell you what methods can be used or how reliable those methods are. I think that can be read about, and you don't need direct experience of having tried to watermark something (I certainly haven't).