I think it is almost always assumed that superintelligent artificial intelligence (SAI) disempowering humans would be bad, but are we confident about that? Is this an under-discussed crucial consideration?
Most people (including me) would prefer the extinction of a random species to that of humans. I suppose this is mostly due to a desire for self-preservation, but it can also be justified on altruistic grounds if humans have a greater ability to shape the future for the better. However, a priori, would it be reasonable to assume that more intelligent agents would do better than humans, at least under moral realism? If not, can one be confident that humans would do better than other species?
From the point of view of the universe, I believe one should strive to align SAI with impartial value, not human value. It is unclear to me how much these differ, but one should beware of surprising and suspicious convergence.
In any case, I do not think this shift in focus means humanity should accelerate AI progress (as proposed by effective accelerationism?). Intuitively, aligning SAI with impartial value is a harder problem, and therefore needs even more time to be solved.
Thanks for commenting, dr_s!
In the sense of increasing expected total hedonistic utility, where hedonistic utility can be thought of as positive conscious experiences. For example, if universes A and B are identical in every respect except that I am tortured 1 h more in universe A than in universe B, then universe A is worse than universe B (for reasonable interpretations of "I am tortured"). I do not see how one can argue against the badness (morally negative value) of torture when everything else stays the same. If it were not wrong to add torture while keeping everything else the same, then what would be wrong?
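As a minimal sketch of this comparison in total hedonistic terms (the notation here is mine, just for illustration): writing $U(X)$ for the total hedonistic utility of universe $X$, and $u_{\text{torture}} < 0$ for the hedonistic value of the extra hour of torture,
$$U(A) = U(B) + u_{\text{torture}} < U(B),$$
so universe A is worse than universe B whenever everything else is held fixed.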
I would say it comes from the Laws of Physics, like everything else. While I am being tortured, the particles and fields in my body are such that I have a bad conscious experience.
I think this may depend on the timeframe you have in mind. For example, I agree human extinction in 2024 due to advanced AI would be bad (but super unlikely), because it would be better to have more than 1 year to think about how to deploy a super-powerful system which may take control of the universe. However, I think there are scenarios further in the future where human disempowerment may be good. For example, if humans in 2100 determined they wanted to keep the energy utilization of humans and AIs below 2100 levels forever, and never let humans or AIs leave Earth, I would be happy for advanced AIs to cause human extinction (ideally in a painless way) in order to get access to more energy to power positive conscious experiences of digital minds.