It's not "perilously close", because it's very different from incitement to violence. I have explained that incitement to violence requires a call for violence which is time-scoped to the near future; Sherman's statement did not include a call for violence at all. You are correct that he bears no moral responsibility for the actions of people who heard his statements.
No, actually; the legal standard applied in courts turns on what was actually said, which must include a clear call for violence directed at producing imminent lawless action. It's extremely frustrating that you keep misusing legal terms to lend your arguments weight they don't hold; please stop. https://en.wikipedia.org/wiki/Brandenburg_v._Ohio has plenty of helpful information if you'd like to learn more about what "incitement to violence" means in America.
Holly herself believes the standards for criticism should be higher than what she seems to have employed here (judging by the comments, without being familiar with the overall situation); see Criticism is sanctified in EA, but, like any intervention, criticism needs to pay rent.
I was extremely disappointed to see this tweet from Liron Shapira revealing that the Center for AI Safety fired a recent hire, John Sherman, for stating that members of the public would attempt to destroy AI labs if they understood the magnitude of AI risk. Capitulating to this sort of pressure campaign is not the right path for EA, which should focus on seeking the truth rather than playing social-status games, and it is not even good PR (it makes you look like you think the campaigners have valid points, which in this case they do not). This makes me think less of CAIS' decision-makers.
I mean, presumably they meant he hadn't actually harmed someone in real life yet.