Bio

Trying to make transformative AI go less badly for sentient beings, regardless of species and substrate

Interested in:

  • Sentience- & suffering-focused ethics; sentientism; painism; s-risks
  • Animal ethics & abolitionism
  • AI safety & governance
  • Activism, direct action & social change

Bio:

  • From London
  • BA in linguistics at the University of Cambridge
  • Almost five years in the British Army as an officer
  • MSc in global governance and ethics at University College London
  • One year working full time in environmental campaigning and animal rights activism at Plant-Based Universities / Animal Rising
  • Now pivoting to the (future) impact of AI on biologically and artificially sentient beings
  • Currently lead organiser of the AI, Animals, & Digital Minds conference in London in June 2025

How others can help me

I'm looking to:

1. Learn more about AI safety, alignment, governance, risk, ethics; and specifically how they relate to sentient non-humans

2. Connect with others working at the intersection of AI and animals, especially those who share similar sentience/suffering-focused values

3. Learn more about opportunities to have an impact at the intersection of AI and animals, including meeting with people who work at other organisations, and consider changing my career based on what I learn

How I can help others

I can help with:

1. Connections with the animal advocacy/activism community in London, and with the AI safety advocacy community (especially/exclusively PauseAI)

2. Ideas on moral philosophy (sentience- and suffering-focused ethics, painism), social change (especially transformative social change) and leadership (partly from my education and experiences in the British Army)

Posts

Comments

  • I support PauseAI much more because I want to reduce the future probability and prevalence of intense suffering (including but not exclusively s-risk) caused by powerful AI, and much less because I want to reduce the risk of human extinction from powerful AI
  • However, couching demands for an AGI moratorium in terms of "reducing x-risk" rather than "reducing suffering" seems
    • More robust to the kind of backfire risk that suffering-focused people at e.g. CLR are worried about
    • More effective in communicating catastrophic AI risk to the public
93% disagree

Making people happy is valuable; making happy people is probably not valuable. There is an asymmetry between suffering and happiness: it is more morally important to mitigate suffering than to create happiness.

To shrimps and other sentient non-humans, we are a misaligned superintelligence

Durrell added - I wish all those protesting to animals living in zoos and claiming animals lead far happier lives in the wild - I wish they all saw this!

I agree we shouldn't assume that animals lead far happier lives in the wild; but I don't think that means we should support zoos (which, unlike sanctuaries, exist for the benefit of humans rather than the animals, and typically rely on breeding animals).

To ensure future AIs can satisfy their own preferences, and thereby have a high level of well-being

This implies that preferences matter when satisfying them causes well-being (positively-valenced sentience).

I subscribe to an eliminativist theory of consciousness, under which there is no "real" boundary distinguishing entities with sentience vs. entities without sentience. Instead, there are simply functional and behavioral cognitive traits, like reflectivity, language proficiency, self-awareness, reasoning ability, and so on.

I am closer to a pure preference utilitarian than a hedonistic utilitarian. As a consequence, I care more about AI preferences than AI sentience per se. In a behavioral sense, AI agents could have strong revealed preferences even if they lack phenomenal consciousness.

This implies that what matters is revealed preferences (irrespective of well-being/sentience/phenomenal consciousness).

In other words, corporations don’t really possess intrinsic preferences; their actions are ultimately determined by the preferences of the people who own and operate them.

...

These intrinsic preferences are significant in a moral sense because they belong to a being whose experiences and desires warrant ethical consideration.

This implies that what matters is intrinsic preferences as opposed to revealed preferences.

These intrinsic preferences are significant in a moral sense because they belong to a being whose experiences and desires warrant ethical consideration.

This (I think) is a circular argument.

I don't have a hard rule for which preferences are ethically important, but I think a key idea is whether the preference arises from a complex mind with the ability to evaluate the state of the world.

This implies that what matters is cognitive complexity and intelligence. But one could probably describe a corporation (or a military intelligence battalion) in these terms, and one probably couldn't describe newborn humans in these terms.

If it's coherent to talk about a particular mind "wanting" something, then I think it matters from an ethical point of view.

I think we're back to square 1, because what does "wanting something" mean? If you mean "having preferences for something", which preferences (revealed, intrinsic, meaningful)?

My view is that sentience (the capacity to have negatively- and positively-valenced experiences) is necessary and sufficient for having morally relevant/meaningful preferences, and maybe that's all that matters morally in the world.

To be clear, which preferences do you think are morally relevant/meaningful? I'm not seeing a consistent thread through these statements.

To ensure future AIs can satisfy their own preferences, and thereby have a high level of well-being

...

I subscribe to an eliminativist theory of consciousness, under which there is no "real" boundary distinguishing entities with sentience vs. entities without sentience. Instead, there are simply functional and behavioral cognitive traits, like reflectivity, language proficiency, self-awareness, reasoning ability, and so on.

I am closer to a pure preference utilitarian than a hedonistic utilitarian. As a consequence, I care more about AI preferences than AI sentience per se. In a behavioral sense, AI agents could have strong revealed preferences even if they lack phenomenal consciousness.

...

In other words, corporations don’t really possess intrinsic preferences; their actions are ultimately determined by the preferences of the people who own and operate them.

...

These intrinsic preferences are significant in a moral sense because they belong to a being whose experiences and desires warrant ethical consideration.

In other words, from a moral standpoint, what matters are the preferences of the individual humans involved in the corporation, not the revealed preferences of the corporation itself as a separate entity.

It's not obvious to me how this perspective (which assigns weight to the intrinsic preferences of individuals) is compatible with what you wrote in an earlier comment, downplaying the separateness of individuals and emphasising revealed preferences over phenomenal consciousness (which sounds similar to having intrinsic preferences?):

  1. I subscribe to an eliminativist theory of consciousness, under which there is no "real" boundary distinguishing entities with sentience vs. entities without sentience. Instead, there are simply functional and behavioral cognitive traits, like reflectivity, language proficiency, self-awareness, reasoning ability, and so on.
  2. I am closer to a pure preference utilitarian than a hedonistic utilitarian. As a consequence, I care more about AI preferences than AI sentience per se. In a behavioral sense, AI agents could have strong revealed preferences even if they lack phenomenal consciousness.

In other words, corporations don’t really possess intrinsic preferences; their actions are ultimately determined by the preferences of the people who own and operate them.

...

From my preference utilitarian point of view, what matters is something more like meaningful preferences. Animals can have meaningful preferences, as can small children, even if they do not exhibit the type of complex agency that human adults do.

What's the difference between "revealed", "intrinsic" and "meaningful" preferences? The latter two seem substantially different from the first.

Animals are unable to communicate with us in a way that allows for negotiation, trade, legal agreements, or meaningful participation in social institutions. Because of this, they cannot make credible commitments, integrate into our legal system, or assert their own interests. This lack of communication largely explains why humans collectively treat animals the way we do—exploiting them without serious consideration for their preferences.

I'm sceptical that animal exploitation is largely explained by a lack of communication. Humans have enslaved other humans with whom they could communicate and enter into agreements (North American slavery); humans have afforded rights/protection/care to humans with whom they can't communicate and enter into agreements (newborn infants, cognitively impaired adults); and I'd be surprised if solving interspecies communication gets us most of the way to the abolition of animal exploitation, though it's highly likely to help.

I think animal exploitation is better explained by a) our perception of a benefit ("it helps us") and b) our collective superior intelligence/power ("we can"), and it's underpinned by c) our post-hoc speciesist rationalisation of the relationship ("animals matter less because they're not our species"). It's not clear to me that us being able to speak to advanced AIs will mean that any of a), b) and c) won't apply in their dealings with us (or, indeed, in our dealings with them).

By clearly defining ownership and legal autonomy, property rights reduce conflict by ensuring that individuals and groups have recognized control over their own resources, rather than relying on brute force to assert their control. As this system has largely worked to keep the peace between humans—who can mutually communicate and coordinate with each other—I am relatively optimistic that it can also work for AIs. This helps explain why I favor integrating AIs into the same legal and economic systems that protect human property rights.

I remain deeply unpersuaded, I'm afraid. Given where we're at on interpretability and alignment vs capabilities, this just feels more like a gorilla or an ant imagining how their relationship with an approaching human is going to go. These are alien minds the AI companies are creating. But I've already said this, so I'm not sure how helpful it is – just my intuition.


I subscribe to an eliminativist theory of consciousness

V interesting!

I am closer to a pure preference utilitarian than a hedonistic utilitarian. As a consequence, I care more about AI preferences than AI sentience per se. In a behavioral sense, AI agents could have strong revealed preferences even if they lack phenomenal consciousness.

Does this mean you consider e.g. corporations to have moral worth, because they demonstrate consistent revealed preferences (like a preference to maximise profit)?

I think there are strong pragmatic reasons to give AIs certain legal rights, even if they don't have moral worth. Specifically, I think granting AIs economic rights would reduce the incentives for AIs to deceive us, plot a violent takeover, or otherwise undermine human interests, while opening the door to positive-sum trade between humans and AIs.

New to me – thanks for sharing. I think I'm (much) more pessimistic than you on cooperation between us and advanced AI systems, mostly because of a) the ways in which many humans use and treat less powerful or less collectively intelligent humans and other species, and b) it seeming very unclear to me that AGI/ASI would necessarily be kinder.

At least insofar as we're talking about individual liberties, I think I'm willing to bite the bullet on this question. We already recognize various categories of humans as lacking the maturity or proper judgement to make certain choices for themselves. The most obvious category is children, who (in most jurisdictions) are legally barred from entering into valid legal contracts, owning property without restrictions, dropping out of school, or associate with others freely. In many jurisdictions, adult humans can also be deemed incapable of consenting to legal contracts, often through a court order.

These are good points, and I now realise they refer to negative (rather than positive) rights. I agree with you that we should restrict certain rights of less agentic/intelligent sentient individuals – like the negative rights you list above, plus some positive rights like the right to vote and drive. This doesn't feel like much of a bullet to bite, to me.

I continue to believe strongly that some negative rights like the right not to be exploited or hurt ought to be grounded solely in sentience, and not at all in intelligence or agency.

Traditionally, economic rights like the freedom to own property are seen as negative rights, not positive rights. The reason is because, in many contexts, economic rights are viewed as defenses against arbitrary interference from criminal or state actors (e.g., protection from crime, unjust expropriation, or unreasonable regulations).

Appreciate this – I didn't know this, makes sense!

Since these categories are often difficult to distinguish in practice, I preferred to sidestep this discussion in my post, and focused instead on a dichotomy which felt more relevant to the topic at hand.

I tend to think that negative vs positive rights remains a better framing than welfare vs rights, partly because I'm not aware of there being a historical precedent for using welfare vs rights in this way. At least in the animal movement this isn't what that dichotomy means – though perhaps one would see this dichotomy across movements rather than within a single movement. If you have reading on this please do share.
