As far as I know, there isn't that much funding or research in EA on AI sentience (though there is some? e.g. this)
I can imagine some answers:
- Very intractable
- Alignment is more immediately the core challenge, and widening the focus isn't useful
- Funders already hold a working view that additional research is unlikely to change (e.g. that AIs will eventually be sentient?)
- The longtermist focus is on AI as an x-risk, and the main framing there is on avoiding human extinction
But it also seems important and action-relevant:
- The current framing of AI safety is about aligning AI with humanity, but making AI go well for AIs could be comparably or even more important
- Naively, if we knew AIs would be sentient, it might make 'prioritising AI welfare in AI development' a much higher-impact focus area
- It's an example of an area that won't necessarily attract resources / attention from commercial sources
(I'm not at all familiar with the area of AI sentience and posted without much googling, so please excuse any naivety in the question!)
Thanks for drawing more attention to this.
I wouldn't be surprised if this is part of the explanation, actually. Shifting the Overton window is a delicate art - imagine Leonardo DiCaprio shouting, "EVERYONE WILL DIE!! Also, THE METEOR COULD BE SENTIENT SO WE NEED TO LOOK AFTER IT." Not a chance. We might get somewhere with just the first part though, at least for now.
Unfortunately I think another piece of the puzzle is that the LessWrong crowd are the ones leading the conversation and they seem to care a lot less about nonhumans than EAs tend to. (Not totally sure on this - would be interested to hear others' impressions.)