Lloy2

Policy Research Masters Student @ University of Nottingham
22 karma · Pursuing a graduate degree (Master's) · Seeking work · Nottingham, UK

Bio


In catastrophic AI risk, extinction is only half the equation. The other half is ensuring a future that’s actually worth saving. I want to help ensure that a post-AGI world is a good one, not marred by moral disaster.

I will be doing my master's thesis on policy/governance addressing catastrophic AI risk. I'm currently hoping to focus on preventing AI from exacerbating or locking in totalitarianism, particularly fascism.

I've also been running my university's Buddhism and Meditation society for three years.

How others can help me

I am looking for paid part-time work in policy or AI safety, ideally something at the intersection!

How I can help others

My interdisciplinary degree spanned modules in policy, sociology, politics, sustainability, philosophy and theology. I can therefore offer undergrad-level skills in these subjects (especially sociology, which I majored in).

I can also create multimedia content or visual explainers.

Comments

Thought-provoking read, thanks for sharing it with me on my post!

I particularly appreciated your ideas on the tension in grounding AI morality in human-like experience, which AI lacks, while recognising that both continuous change and susceptibility to bias complicate moral reasoning. This raises the question of whether brief, strategically timed, or partial experiences could instil robust moral understanding without distorting judgment the way emotionally charged human experience often does.

I also found your reflections on engineering AI with predominantly positive qualia, your challenge to valence as the default moral reward signal, the idea that AI sentience might in turn influence human ethical behaviour, and the call for a deeper moral foundation than utility maximisation to be novel and helpful ways of seeing things.

Your reference to Agarwal and Edelman aligns well with David Pearce's idea of 'a motivational system based on heritable gradients of bliss', which is worth checking out if you're not familiar with it; I think it's a promising model for designing a sentient ASI.

I'm not sure why you chose not to distinguish between consciousness and sentience, but I do find the distinction 80,000 Hours (and no doubt many other sources) draws between them useful.

This is fascinating! I really want to see more research like this.

70% disagree

The most objective thing about morality (especially utilitarianism) is that some experiential states are objectively 'better' than others by virtue of their valence, and that moral projects, however valid in themselves, therefore take root in something real.

I appreciate this! I don't feel, though, that the article addresses the possibility of democratising alignment (or, as Toner says, 'steerability').