I read this post, where a tentative implication of recent AI advancements was:
"AI risk is no longer a future thing, it’s a ‘maybe I and everyone I love will die pretty damn soon’ thing. Working to prevent existential catastrophe from AI is no longer a philosophical discussion and requires not an ounce of goodwill toward humanity, it requires only a sense of self-preservation."
Do you believe that or something similar? Are you living as if you believe that? What does living that life look like?
What would it even look like to truly live in accordance with this? I try to make altruistic decisions through this lens (of a close-to-AGI world). Outside of altruism, I need to preserve my own sanity and just exist day by day!