Trying to make transformative AI go less badly for sentient beings, regardless of species and substrate
Bio:
I'm now looking for opportunities in AI governance – specifically in generalist / programme manager / operations roles.
I can help with:
1. Connections with the animal advocacy/activism community in London, and with the AI safety advocacy community (especially/exclusively PauseAI)
2. Ideas on moral philosophy (sentience- and suffering-focused ethics, painism), social change (especially transformative social change) and leadership (partly from my education and experiences in the British Army)
I'm very pleased more thinking is being done on this – thank you.
I'm not sure I follow this:
Pushing for “animal-friendly” values may be harmful if it skews trajectories that are good for animals
- As an intuition pump, imagine that animal farming (or other functional-animal mistreatment by humans) will be eradicated by default (e.g. because it will stop being economically valuable). If we manage to instill strong animal-related concerns that are not perfectly “wise” (e.g. specific ~beliefs on what is good or bad for farmed animals), then the AI(s) may perpetuate farming in some form even if that choice is unnecessary and harmful.
Would this be an example: we instill a goal in a powerful AI system along the lines of "reduce the suffering of animals who are being farmed". Then the AI system prevents the abolition of animal farming on the grounds that it can't achieve that goal if animal farming ends?
A moving and disturbing book. The "fragments of corpses" excerpt continues with Elizabeth saying to her (non-vegetarian/vegan) son:
"It is as if I were to visit friends, and to make some polite remark about the lamp in their living room, and they were to say, 'Yes, it's nice, isn't it? Polish-Jewish skin it's made of, we find that's best, the skins of young Polish-Jewish virgins.' And then I go to the bathroom and the soap-wrapper says, 'Treblinka––100% human stearate.' Am I dreaming, I say to myself? What kind of house is this?
"Yet I'm not dreaming. I look into your eyes, into Norma's, into the children's, and I see only kindness, human-kindness. Calm down, I tell myself, you are making a mountain out of a molehill. This is life. Everyone else comes to terms with it, why can't you? Why can't you?"
She turns on him a tearful face. What does she want, he thinks? Does she want me to answer her question for her?
They are not yet on the expressway. He pulls the car over, switches off the engine, takes his mother in his arms. He inhales the smell of cold cream, of old flesh. "There, there," he whispers in her ear. "There, there. It will soon be over."
We should probably be more painist:
[painism is…] the theory that moral value is based upon the individual’s experience of pain (defined broadly to cover all types of suffering whether cognitive, emotional, or sensory), that pain is the only evil, and that the main moral objective is to reduce the pain of others, particularly that of the most affected victim, the maximum sufferer. (Ryder 2010, p. 402)
Indeed. I'm personally sympathetic to this kind of view (my ethics are heavily suffering-focused), but we wanted to make this piece pluralistic, and specifically able to accommodate the intuitions of those who think extinction of (one or more species of) wild animals would be very bad.