Bio

Trying to make transformative AI go less badly for sentient beings, regardless of species and substrate

Interested in:

  • Sentience- & suffering-focused ethics; sentientism; painism; s-risks
  • Animal ethics & abolitionism
  • AI safety & governance
  • Activism, direct action & social change

Bio:

  • From London
  • BA in linguistics at the University of Cambridge
  • Almost five years in the British Army as an officer
  • MSc in global governance and ethics at University College London
  • One year working full time in environmental campaigning and animal rights activism at Plant-Based Universities / Animal Rising
  • Now pivoting to the (future) impact of AI on biologically and artificially sentient beings
  • Was lead organiser of the AI, Animals, & Digital Minds conference in London in May/June 2025

How others can help me

I'm now looking for opportunities in AI governance – specifically in generalist / programme manager / operations roles.

How I can help others

I can help with:

1. Connections with the animal advocacy/activism community in London, and with the AI safety advocacy community (especially, if not exclusively, PauseAI)

2. Ideas on moral philosophy (sentience- and suffering-focused ethics, painism), social change (especially transformative social change) and leadership (partly from my education and experiences in the British Army)

Comments

Indeed. I'm personally sympathetic to this kind of view (my ethics are heavily suffering-focused), but we wanted to make this piece pluralistic, and specifically able to accommodate the intuitions of those who think extinction of (one or more species of) wild animals would be very bad.

Thank you! And thanks for all your contributions over the weekend 🤝 

Yes, my example and the paperclip one both seem like classic cases of outer misalignment / reward misspecification.

I'm very pleased more thinking is being done on this – thank you.

I'm not sure I follow this:

Pushing for “animal-friendly” values may be harmful if it skews trajectories that are good for animals

  • As an intuition pump, imagine that animal farming (or other functional-animal mistreatment by humans) will be eradicated by default (e.g. because it will stop being economically valuable). If we manage to instill strong animal-related concerns that are not perfectly “wise” (e.g. specific ~beliefs on what is good or bad for farmed animals), then the AI(s) may perpetuate farming in some form even if that choice is unnecessary and harmful.

Would this be an example: we instill a goal in a powerful AI system along the lines of "reduce the suffering of animals who are being farmed", and the AI system then prevents the abolition of animal farming on the grounds that it can't achieve that goal if animal farming ends?

A moving and disturbing book. The "fragments of corpses" excerpt continues with Elizabeth saying to her (non-vegetarian/vegan) son:

"It is as if I were to visit friends, and to make some polite remark about the lamp in their living room, and they were to say, 'Yes, it's nice, isn't it? Polish-Jewish skin it's made of, we find that's best, the skins of young Polish-Jewish virgins.' And then I go to the bathroom and the soap-wrapper says, 'Treblinka––100% human stearate.' Am I dreaming, I say to myself? What kind of house is this?

"Yet I'm not dreaming. I look into your eyes, into Norma's, into the children's, and I see only kindness, human-kindness. Calm down, I tell myself, you are making a mountain out of a molehill. This is life. Everyone else comes to terms with it, why can't you? Why can't you?"

She turns on him a tearful face. What does she want, he thinks? Does she want me to answer her question for her?

They are not yet on the expressway. He pulls the car over, switches off the engine, takes his mother in his arms. He inhales the smell of cold cream, of old flesh. "There, there," he whispers in her ear. "There, there. It will soon be over."

We should probably be more painist:

[painism is…] the theory that moral value is based upon the individual’s experience of pain (defined broadly to cover all types of suffering whether cognitive, emotional, or sensory), that pain is the only evil, and that the main moral objective is to reduce the pain of others, particularly that of the most affected victim, the maximum sufferer. (Ryder 2010, p. 402)

  • I support PauseAI much more because I want to reduce the future probability and prevalence of intense suffering (including but not exclusively s-risk) caused by powerful AI, and much less because I want to reduce the risk of human extinction from powerful AI
  • However, couching demands for an AGI moratorium in terms of "reducing x-risk" rather than "reducing suffering" seems
    • More robust to the kind of backfire risk that suffering-focused people at e.g. CLR are worried about
    • More effective in communicating catastrophic AI risk to the public

Making people happy is valuable; making happy people is probably not valuable. There is an asymmetry between suffering and happiness because it is more morally important to mitigate suffering than to create happiness.
