Technoprogressive, biocosmist, rationalist, defensive accelerationist, longtermist
There's a decent amount of French-speaking ~AI safety content on YouTube:
One of the killings was, as far as we know, purely mimetic and (allegedly) committed by someone (@Maximilian Snyder) who never even interacted online with Ziz, so I don't actually think it's an invalid example to bring up.
Thiel describing a 2024 conversation with Elon Musk and Demis Hassabis, where Elon says "I'm working on going to Mars, it's the most important project in the world" and Demis argues "actually my project is the most important in the world; my superintelligence will change everything, and it will follow you to Mars". (This is in the context of Thiel's long pivot from libertarianism to a darker strain of conservatism / neoreaction, having realized that "there's nowhere else to go" to escape mainstream culture/civilization, that you can't escape to outer space, cyberspace, or the oceans as he once hoped, but can only stay and fight to seize control of the one future; hence all these musings about Carl Schmitt etc. that make me wary he is going to be egging on J.D. Vance to try and auto-coup the government.)
FTR: while Thiel has told this version before, the more common version (e.g. here, here, here from Hassabis' mouth, and more obliquely here in Musk's lawsuit against Altman) is that Hassabis was warning Musk about existential risk from unaligned AGI, not threatening him with his own personally aligned AGI. However, Thiel's interpretation does resonate interestingly with Elon Musk's creation of OpenAI being motivated by fear of Hassabis becoming an AGI dictator (a fear his co-founders apparently shared). It is certainly an interesting hypothesis that Thiel and Musk spent a decade together engineering both the AGI race and global democratic backsliding, wholly motivated by a single possible one-sentence slight by Hassabis in 2012.
While this is ostensibly called "strong longtermism", the precision of saying "near-best" instead of "best" makes (i) hard to deny (the opposite statement would be "one ought to choose an option that is significantly far from the best for the far future"). The best cruxes against (ii) would be epistemic ones, i.e. whether benefits rapidly diminish or wash out over time.