Imagine GPT-8 arrives in 2026, triggers a fast takeoff, and AGI kills us all.
Now imagine the AI Safety work we did in the meantime (technical and non-technical) progressed linearly (i.e. in resources committed), but it obviously wasn't enough to prevent us all from being killed.
What were the biggest, most obvious mistakes we made (1) as individuals and (2) as a community?
For example, I spend some of my time working on AI Safety, but only around 10% of it. In this world I probably should have committed my life to it. Maybe I should have considered more public displays of protest?
I like this take: if AI is dangerous enough to kill us in three years, no feasible amount of additional interpretability research would save us.
Our efforts should instead go toward limiting the damage that initial AIs could do. That might involve work on securing dangerous human-controlled technologies. It might involve creating clever honeypots to catch unsophisticated-but-dangerous AIs before they can fully get their act together. It might involve lobbying for processes or infrastructure to quickly shut down Azure or AWS.