I thought I was giving the strong version. I have never heard an account of a warning shot theory of change that wasn’t “AI will cause a small-scale disaster and then the political will to do something will materialize”. I think the strong version would be my version: educating people first, so that if small-scale disasters do occur, they can recognize them for what they are. I have never seen or heard this advocated in AI Safety circles before.
And I described how impactful ChatGPT was for me, which imo was a warning shot gone right in my case.
Agree with your read of the situation, and I wish that the solution could be for EA to actually be cause neutral… but if that’s not on offer then I agree the intro material should be more upfront about that.