I’m going to go against the grain here, and explain how I truly feel about this sort of AI safety messaging.
As others have pointed out, fearmongering on this scale seems absolutely insane to those who don't assign a high probability to doom. Worse, Eliezer is calling for literal nuclear strikes and great-power war to stop a threat that isn't even provably real! Most AI researchers do not share his views, and neither do I.
I want to publicly state that pushing this maximalist narrative about AI x-risk will lead to terrorist actions against GPU clusters or against individuals working on AI. Acts like these follow naturally from the intense, doomsday-cult style of belief held by those who agree with Eliezer.
Not only will that sort of behavior discredit AI safety, and potentially EA entirely, it could also hand the future to other actors or push governments to lock down AI for themselves, making outcomes far worse.
Thanks for the thoughtful response! I suppose the main difference is that we have very divergent ideas of what the EA community is and what it will/should become.
I've been on the fringe of EA for years, just learning about the concepts and donating, but I've never been part of the tighter group, so to speak. I see EA as a question: how do we do the most good with the resources available?
Poly is definitely tied to the early movement historically, but I just disagree that the reputational damage and the attacks over sexual harassment issues, etc., are a worthwhile trade-off for vague notions of fun.
Also - if the EA community creates massive burnout, maybe we should change the way we approach our communications and epistemics instead of accepting that and saying we'll cope by having casual sex. That doesn't seem like a good road to go down, especially long term.
Then again I don’t have short AI timelines.
Yeah these criticisms are fair, my comment was made hastily and in poor taste. I've deleted it.