As far as I know, there isn't that much funding or research in EA on AI sentience (though there is some? e.g. this)
I can imagine some answers:
- Very intractable
- Alignment is more immediately the core challenge, and widening the focus isn't useful
- Funders have a working view that additional research is unlikely to change (e.g. that AIs will eventually be sentient?)
- Longtermist focus is on AI as an X-risk, and the main framing there is on avoiding humans being wiped out
But it also seems important and action-relevant:
- The current framing of AI safety is about aligning AI with humanity, but making AI development go well for AIs themselves could be comparably or more important
- Naively, if we knew AIs would be sentient, it might make 'prioritising AI welfare in AI development' a much higher-impact focus area
- It's an example of an area that won't necessarily attract resources or attention from commercial sources
(I'm not at all familiar with the area of AI sentience and posted without much googling, so please excuse any naivety in the question!)
Compared to whatever!
The basic case rhymes quite nicely with the case for work on AI safety: (1) existing investigation of what scientific theories of consciousness imply for AI sentience plausibly suggests that we should expect AI sentience to arrive (via human intention or accidental emergence) in the not-distant future, (2) this seems like a crazy big deal, for ~reasons we can discuss~, and (3) almost no one (inside EA or otherwise) is working on it.
Feels to me like it would be easy to overemphasize tractability concerns about this case. Again by analogy to AIS:
But I'm guessing that gesturing at my intuitions here might not be convincing to you. Is there anything you disagree with in the above? If so, what? If not, what am I missing? (Is it just a quantitative disagreement about magnitude of importance or tractability?)