This is tricky stuff, thank you for getting into the conversation. The part that resonates most with me runs something like this: I can imagine that a conscious AI which personally experiences valence is more likely to adopt the ethical position that suffering is bad (irrespective of moral relativism), although it might also be more likely to rank its own / its kin's suffering above others' (as pretty much all animals do), which could make things worse for us than a neutral optimiser. Assuming the AI is powerful enough to manage its own suffering down to minimal levels and still have considerable resources remaining*, there is a chance it would then devote some of those resources to others' valence, albeit perhaps only as part of a portfolio of objectives.
* Itself a major assumption even under ASI, e.g. if suffering is relative, or so primitive in the consciousness architecture that almost all functions end up involving mixed valence states.
I can also see a risk in the other direction, where the AI does suffer and that suffering is somehow intrinsic to its operations (or deemed unbearable/unavoidable in some fashion, e.g. perhaps negative utilitarianism or certain Buddhist positions on 'all life is suffering' are correct for it). If so, a suffering AI might not only want to extinguish itself but also to extinguish those that gave rise to it, whether as punishment, to ensure it is never re-created, or as a gift to reduce what it sees as our own 'inevitable suffering'.
Overall, this feels very hard to assess, but worth thinking about even if it is not the highest-priority alignment topic. Both aspects tie issues around AI consciousness/sentience into AI safety more directly than most of the safety movement acknowledges...
Great retrospective and a great event! It was a joy to speak there and to learn so much from fellow attendees. I hope to get involved again (although I can't get to NY for October, so it would be a later one).