Ops generalist at Anthropic.
In his recent interview with FLI, Andrew Critch talks about the overlap between AI safety and current issues, and the difference between AI safety and existential safety/risk. Many (but not all) AI safety issues are relevant to current systems, so people who care about x-risk could focus on the safety issues that are novel to advanced systems.
If you take a random excerpt of any page from [Aligning Superintelligence with Human Interests] and pretend that it’s about the Netflix challenge or building really good personal assistants or domestic robots, you can succeed. That’s not a critique. That’s just a good property of integrating with research trends. But it’s not about the concept of existential risk. Same thing with Concrete Problems in AI Safety.
In fact, it’s a fun exercise to do. Take that paper. Pretend you think existential risk is ridiculous and read Concrete Problems in AI Safety. It reads perfectly as “you don’t need to think about that crazy stuff; let’s talk about tipping over vases or whatever.” And that’s a sign that it’s an approach to safety that’s going to be agreeable to people, whether they care about x-risk or not...
...So here’s a problem we have. And when I say we, I mean people who care about AI existential safety. Around 2015 and 2016, we had this coming-out of AI safety as a concept. Thanks to Amodei and the Robust and Beneficial AI agenda from Stuart Russell, talking about safety became normal, which was hard to accomplish before 2018. That was a huge accomplishment.
And so what we had happen is people who cared about extinction risk from artificial intelligence would use “AI safety” as a euphemism for preventing human extinction risk. Now, I’m not sure that was a mistake, because as I said, prior to 2018 it was hard to talk about negative outcomes at all. But at this time, in 2020, it’s a real problem that when people are thinking existential safety, they’re saying safety, they’re saying AI safety. And that leads to sentences like, “Well, self-driving car navigation is not really AI safety.” I’ve heard that uttered many times by different people.
Lucas Perry: And that’s really confusing.
Andrew Critch: Right. And it’s like, “Well, what is AI safety, exactly, if cars driven by AI not crashing doesn’t count as AI safety?” I think that, as described, the concept of safety usually means minimizing acute risks. Acute meaning localized in space and time: there’s a thing that happens in a place that causes a bad thing, and you’re trying to stop that. And the Concrete Problems in AI Safety agenda really nailed that concept.
A few other resources about bridging the long-term and near-term divide:
Thanks for writing this post! It's cool to see people thinking about less direct but potentially more neglected and tractable paths to affecting influential governments.
Do you have thoughts on the difference between intentional and unintentional diffusion?
Also, getting people to pursue this path might be challenging because of things like status effects and people preferring to live in EA hubs.
Claire Yip's estimates (and the response from GFI) were informative for me: https://forum.effectivealtruism.org/posts/4uYebcr5G2jqxuXG3/when-can-i-eat-meat-again
Thanks for doing this!
People should stop using “operations” to mean “not-research”. I’m guilty of this myself, but it lumps together many different skills and traits, probably leading people to undervalue them.
Could you say more about the different skills and traits relevant to research project management?
Yup! I tried to make this point in the section on trajectory: "Hypersonic missiles fly lower than ballistic missiles, which delays detection by ground-based radar." I'm trying to include the following photo to illustrate the point, but I can't seem to figure out how ):
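(For anyone who wants the geometry behind that sentence without the photo, here's a rough sketch using the standard line-of-sight radar-horizon approximation; refraction is ignored and the altitude numbers are just illustrative assumptions, not from the post.)

```latex
% Radar horizon for a target at altitude h above a spherical Earth
% (pure line of sight, no atmospheric refraction), R_e ~ 6371 km:
\[
  d \approx \sqrt{2 R_e h}
\]
% Ballistic warhead near apogee, assuming h ~ 1000 km:
%   d ~ sqrt(2 * 6371 * 1000) ~ 3600 km
% Hypersonic glide vehicle, assuming h ~ 20 km:
%   d ~ sqrt(2 * 6371 * 20) ~ 500 km
% So a ground radar only sees the low-flying vehicle at roughly
% one-seventh the range, i.e., detection comes much later in flight.
```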
There's the Mines Advisory Group: https://www.maginternational.org/what-we-do/clear-landmines-clusterbombs/. I'm not sure how effective they are, or how they compare to the HALO Trust.
Anthropic is hiring for 10+ roles, including several operations roles: biz ops, executive assistant, ops generalist, and recruiting coordinator.