This is a linkpost for https://digitalminds.report/forecasting-2025/
We surveyed experts about the future of digital minds — computer systems capable of subjective experiences.
The survey explored a wide range of difficult and speculative questions:

- When will digital minds first be created?
- How quickly will their collective welfare capacity expand?
- What types of digital minds will emerge?
- Where, and by whom, will digital minds be built?
- Will their net welfare be positive or negative?
- What claims will they make about consciousness and rights?
- How will society respond (through protections, recognition, or resistance)?
- What are the implications for AI safety?
The report begins with a concise summary of the main findings.
This seems really important and under-discussed:
The brand-new episode with Kyle Fish from Anthropic (released since you wrote this comment) discusses some reasons why AI safety and AI welfare efforts might conflict or be mutually beneficial, if you're interested!
Thanks, I’ll have a listen!