Alongside my role at CRS, I am co-organising Sentient Futures Summit London 2026, which will be Friday 22nd to Sunday 24th May (the weekend before EA Global London).
My career goal is to prevent, reduce and alleviate the most intense forms of suffering, with a focus on the intersection of powerful AI and sentient nonhumans (biological and artificial).
Interested in:
* Sentience- & suffering-focused ethics; sentientism; painism; s-risks
* Animal ethics & abolitionism
* AI safety & governance
* Activism, direct action & social change
Bio:
* From London
* BA in linguistics at the University of Cambridge, 2014-17
* Almost five years in the British Army as an officer, 2018-22
* MSc in global governance and ethics at University College London, 2022-23
* One year working full-time in environmental campaigning and animal rights activism at Plant-Based Universities / Animal Rising, 2023-24
* Lead organiser of AI, Animals, & Digital Minds 2025 in London
* Now working part-time on fundraising and external relationships at CRS, and part-time co-organising Sentient Futures Summit London 2026
If you can help fill CRS's funding gap for 2026 (between $25k and $125k) – by donating or by putting me in touch with donors – please get in touch.
I think that to some extent you're proposing smashing the "defect" button in a prisoner's dilemma and hoping the other side doesn't do the same.
I've been pondering this. I think your button-smashing characterisation is basically accurate, and it is a leap of faith that those who engage in civil disobedience make: an appeal to the conscience of society, the jury, etc.
You're right to say that one way to think about universalisability is to ask "if it's okay for me to break the law to achieve what I consider to be a moral goal here, why can't everyone break the law to achieve their own moral goals?". But another way to think about universalisability is to ask "if I were the one in Ridglan / Unit 731 / Willowbrook, what actions would I support to end my suffering?"
I don't know whether it would be illegal for parents to break their children out of Willowbrook, but for the purposes of this question assume it was.
What if "protecting innocent sentient beings from torture" is a higher moral priority than "living together in a society with people of greatly differing moral views"?
I'm sceptical that the distinction between flawed democracy and dictatorship is clean enough to justify civil disobedience on behalf of others only in the latter (if this is what you're saying). Would you support rescuing American children from deliberate infection with hepatitis at Willowbrook in the 1960s?
The clearest such historical cases are ones where a disenfranchised group of people broke laws that directly enforced their own exclusion from political participation or basic legal personhood. These cases are self-limiting (and thus pass reasonable tests of universalizability) since the principles justifying such lawbreaking achieve their own obsolescence once participation is granted.
I worry this approach excludes the most vulnerable (those who cannot meaningfully participate in political life, like human babies and animals), and focuses on less fundamental rights: I think protection from torture is more urgent than legal personhood.
Why would women be justified in engaging in civil disobedience to get the vote for themselves, but not be justified in engaging in civil disobedience to rescue babies from Josef Mengele?
I guess the causal mechanism I'm thinking of here is:
Maybe this is foolish and naive on my part! And maybe I'm wrong to think our moral preferences/intuitions will be so robust to the disruption of AGI, even if AGI goes well for us.
Some really cool points here Lee, and I mostly agree with you I think.
Crux: how many actors have terminal preferences for suffering? Agency may be amplified for animal advocates, but it could also be amplified for malevolent actors.
This could be very important. I'm not sure what it means for AGI to go well for humans if some of those humans have terminal preferences for suffering / are sadistic. If the AGI protects the rest of us from the sadists, is AGI going well for the sadists?
EDIT: as well as sadists, we can consider humans who think animal agriculture, testing etc. has enough aesthetic/historical/cultural value that it's worth continuing to do it in a post-AGI world of abundance.
I need to think about b) more. I see arguments in both directions.
I don't think I can properly imagine what it's like to be tortured or eaten alive, and yet the thought of each happening to me or someone else makes me feel some combination of horror, fear, upset and compassion. And the idea of suffering more intense than torture or being eaten alive (if future artificially sentient beings have wider welfare ranges than we do) is terrifying to me.
But if I could never suffer worse than a pinprick, maybe I would stop caring about the most intense forms of suffering. Concerning stuff.
What kinds of values will humans have post-AGI, if AGI goes well for us? We don't need to be scope-sensitive utilitarians to want to adopt even radical preferences like ending animal exploitation and solving WAS, no? (Most humans don't like factory farming or the idea of cute animals being eaten alive.)
Really love this Lizka!