As far as I know, there isn't that much funding or research in EA on AI sentience (though there is some? e.g. this)
I can imagine some answers:
- Very intractable
- Alignment is more immediately the core challenge, and widening the focus isn't useful
- Funders have a working view that additional research is unlikely to affect (e.g. that AIs will eventually be sentient?)
- Longtermist focus is on AI as an X-risk, and the main framing there is on avoiding humans being wiped out
But it also seems important and action-relevant:
- Current framing of AI safety is about aligning AI with humanity, but making AI go well for AIs could be comparably or more important
- Naively, if we knew AIs would be sentient, it might make 'prioritising AI welfare in AI development' a much higher-impact focus area
- It's an example of an area that won't necessarily attract resources / attention from commercial sources
(I'm not at all familiar with the area of AI sentience and posted without much googling, so please excuse any naivety in the question!)
The impression I get is that lots of people are like “yeah, I’d like to see more work on this & this could be very important” but there aren’t that many people who want to work on this & have ideas.
Is there evidence that funding isn’t available for this work? My loose impression is that mainstream funders would be interested in this. I suppose it’s an area where it’s especially hard to evaluate how promising a proposal is, though.
Reasons people might not be interested in doing this work:
- Tractability
- Poor feedback loops
- Not many others in the community to get feedback from
- Has to deal with thorny and hard-to-concretize theoretical questions
Reasons people might want to work on this:
- Importance and neglectedness
- Seems plausible that one could become one of the most knowledgeable EAs on this topic in not much time
- Interdisciplinary; might involve interacting a lot with the non-EA world, academia, etc.
- Intellectually stimulating
See also: https://80000hours.org/podcast/episodes/robert-long-artificial-sentience/
https://arxiv.org/abs/2303.07103