As far as I know, there isn't that much funding or research in EA on AI sentience (though there is some? e.g. this)
I can imagine some answers:
- Very intractable
- Alignment is more immediately the core challenge, and widening the focus isn't useful
- Funders already have a working view that additional research is unlikely to change (e.g. that AIs will eventually be sentient?)
- Longtermist focus is on AI as an X-risk, and the main framing there is avoiding humans being wiped out
But it also seems important and action-relevant:
- The current framing of AI safety is about aligning AI with humanity, but making AI go well for AIs could be comparably or even more important
- Naively, if we knew AIs would be sentient, it might make 'prioritising AI welfare in AI development' a much higher-impact focus area
- It's an example of an area that won't necessarily attract resources / attention from commercial sources
(I'm not at all familiar with the area of AI sentience and posted without much googling, so please excuse any naivety in the question!)
My attitude, and the attitude of many of the alignment researchers I know, is that this problem seems really important and neglected, but we overall don't want to stop working on alignment in order to work on this. If I spotted an opportunity for research on this that looked surprisingly good (e.g. if I thought I'd be 10x as productive as usual when working on it, for some reason), I'd probably take it.
It's plausible that I should spend a weekend sometime trying to really seriously consider what research opportunities are available in this space.
My guess is that a lot of the skills involved in doing a good job of this research are the same as the skills involved in doing good alignment research.