I think it is almost always assumed that superintelligent artificial intelligence (SAI) disempowering humans would be bad, but are we confident about that? Is this an under-discussed crucial consideration?
Most people (including me) would prefer the extinction of a random species to the extinction of humans. I suppose this is mostly due to a desire for self-preservation, but it can also be justified on altruistic grounds if humans have a greater ability to shape the future for the better. However, a priori, would it be reasonable to assume that more intelligent agents would do better than humans, at least under moral realism? If not, can one be confident that humans would do better than other species?
From the point of view of the universe, I believe one should strive to align SAI with impartial value, not human value. It is unclear to me how much these differ, but one should beware of surprising and suspicious convergence.
In any case, I do not think this shift in focus means humanity should accelerate AI progress (as proposed by effective accelerationism?). Intuitively, aligning SAI with impartial value is a harder problem than aligning it with human value, and therefore requires even more time to solve.
Hypothetically, yes, if we consider the scenario in a vacuum. I find it unrealistic in any real-world circumstance, though: people's happiness tends to depend on having other people around, and any trajectory from the current situation that ended with only one person, happy or not, seems likely to be bad.
Pretty much the same reasoning applies. For worlds where everyone has it equally good or bad, I don't think sheer numbers make a difference. Things get more complicated once inequality is involved.
I think length of life matters a lot: if I know I'll have just one year of life, my happiness is kind of tainted by the knowledge of imminent death, you know? We experience all of our life ourselves. As an edge case, there's a character in "Permutation City" (a Greg Egan novel) who is a mind upload and puts themselves into a kind of mental-state loop: after a period T, their mental state maps exactly onto itself and repeats identically, forever. In such a fantastical scenario, I'd argue the precise length of the loop doesn't matter much.