In catastrophic AI risk, extinction is only half the equation. The other half is ensuring a future that’s actually worth saving. I want to help ensure that a post-AGI world is a good one, not marred by moral disaster.
I will be doing my master's thesis on policy and governance addressing catastrophic AI risk. I'm currently hoping to focus on preventing AI from exacerbating or locking in totalitarianism, perhaps particularly fascism.
I've also been running my university's Buddhism and Meditation society for three years.
I am looking for paid part-time work in policy or AI safety, ideally something at the intersection!
My interdisciplinary degree spanned modules in policy, sociology, politics, sustainability, philosophy and theology. I can therefore offer undergrad-level skills in these subjects (especially sociology, which I majored in).
I can also create multimedia content or visual explainers.
Thought-provoking read, thanks for sharing it with me on my post!
I particularly appreciated your ideas about the tension in grounding AI morality in human-like experience despite AI's lack of it, while recognising that both continuous change and susceptibility to bias complicate moral reasoning. That raises the question of whether brief, strategically timed, or partial experiences could instil robust moral understanding without distorting judgment in the way emotionally charged human experience often does.
I also found your reflections on engineering AI with predominantly positive qualia, the challenge to valence as the default moral reward signal, the idea that AI sentience might influence human ethical behaviour in return, and the call for a deeper moral foundation than utility maximisation all to be novel and helpful ways of seeing things.
Your reference to Agarwal and Edelman aligns well with David Pearce's idea of 'a motivational system based on heritable gradients of bliss', which is worth checking out if you're not familiar with it. I think it's a promising model for designing a sentient ASI.
I don't know why you chose not to distinguish between consciousness and sentience, but I do find the distinction 80,000 Hours (and no doubt many other sources) makes between them useful.