Next week for The 80,000 Hours Podcast I'll be interviewing Nova DasSarma.
She works to improve computer and information security at Anthropic, a recently founded AI safety and research company.
She's also helping to find ways to provide more compute for AI alignment work in general.
Here are her (outdated) LinkedIn and in-progress personal website, as well as an old EA Forum post by Claire Zabel and Luke Muehlhauser on the potential EA relevance of information security.
What should I ask her?
The obvious way to reduce infosec risk is to beef up security. Another is to disincentivise actors from attacking in the first place. Are there any good ways of doing that (other than, say, criminal justice)?