
Chris Percy

Researcher
29 karma · Joined · Working (15+ years) · London, UK
cf-debate.com

Bio

Chris Percy PhD is the lead researcher on a project team grant to research artificial consciousness (April 2025 – April 2026). The grant covers: mapping and improving resources to support decision-making around artificial consciousness; investigating (even more) neglected topics such as “analogue” system consciousness; and exploring links with x-risk game theory and alignment. Chris's career background is in applied ML/XAI and policy, primarily in education and public health. He is a voting member at the ASSC, a patent holder in applied ML, and holds visiting research affiliations at the Universities of Warwick and Derby (UK) and QRI (US). As a consulting researcher, Chris has worked with a wide range of organisations, including international agencies (World Bank, ILO, OECD, ...), charities (Cancer Research UK, Royal Society, Wellcome Trust, ...), governments in eight nations, and a fun range of start-ups and corporates.

Comments (4)

Such important work - really pleased to see this initiative moving forward 

Great retrospective and a great event! It was a joy to speak there and to learn so much from fellow attendees. I hope to get involved again (although I can't get to NY for October, so it would be a later one).

Agreed - lots to aspire towards, even if speculative/challenging

This is tricky stuff, thank you for getting into the conversation. The part that resonates most with me would run something like this: I can imagine that a conscious AI which personally experiences valence is more likely to adopt the ethical position that suffering is bad (irrespective of moral relativism), although it might also be more likely to rank its own / its kin's suffering above others' (as pretty much all animals do), which could make things worse for us than a neutral optimiser would. Assuming the AI is powerful enough that it can manage its own suffering down to minimal levels and still have considerable resources remaining*, there is a chance it would then devote some of those resources to others' valence, albeit perhaps only as part of a portfolio of objectives.
* Itself a major assumption even under ASI, e.g. if suffering is relative or fairly primitive in the consciousness architecture, so that almost all functions end up involving mixed valence states.

I can see a risk in the other direction, where the AI does suffer and that suffering is somehow intrinsic to its operations (or deemed unbearable/unavoidable in some fashion, e.g. perhaps negative utilitarianism or certain Buddhist positions on 'all life is suffering' turn out to be correct for it). If so, a suffering AI might not only want to extinguish itself but also those that gave rise to it, whether as punishment, to ensure it is never re-created, or as a gift to reduce what it sees as our own 'inevitable suffering'.

Overall, this feels very hard to assess, but it is worth thinking about even if it is not the highest-priority alignment topic. Both aspects tie issues around AI consciousness/sentience into AI safety more directly than most of the safety movement acknowledges...