Principally, I am a social philosopher engaged in conceptual engineering and macrostrategy within the wider Effective Altruism, AI alignment, Existential Risk (studies) and Progress (studies) space. Within this constellation, I represent a heterodox, interdisciplinary perspective emphasising the holistic alignment of civilisation. To further this endeavour, I am currently working to guide the formation of a new Coordination (studies) movement: an updated conception of effective altruism for the age of AGI, oriented towards addressing multi-polar dynamics and solving coordination problems between human, AI and institutional agents.
Hey there @Yulia. I've been developing a theory-building project (wiki), SourceCodeX, for the emerging space intersecting the metacrisis, critical political economy, AI alignment and effective altruism - and I've added an entry for Civilisational Sanity. Check it out, let me know what you think and whether there's anything you'd like to see added - Max :)
P.S. View on a laptop, as I've yet to format the mobile version.
P.P.S. Hope the animation loads and you think it looks cool! haha