"EA has always bee[n] rather demanding,"
I want to push back on this, because it's a common but generally incorrect reading of EA's views. EA leaders have repeatedly said that you don't need to dedicate your life to it; you can simply donate to causes that others have identified as highly effective and otherwise live your life.
If you want to do more than that, great, good for you - but EA isn't utilitarianism, so please don't conflate the demandingness of the two.
First, utilitarianism doesn't traditionally require the kind of extreme species neutrality you propose here. Singer and many EAs have advanced a somewhat narrower view of what 'really counts' as utilitarian, but your argument assumes that narrow view without really justifying it.
Second, you assume future AIs will have rich, valuable inner lives rather than paperclipping the universe. You say "one would need to provide concrete evidence about what kinds of objectives advanced AIs are actually expected to develop" - but Eliezer has done exactly that, quite explicitly.
I very much appreciate that you are thinking about this, and the writing is great. That said, without trying to address the arguments directly, I worry that the piece justifies a conclusion you've already come to and explores analogies you like, rather than exploring the arguments and trying to decide which side to be on; it doesn't embrace scout mindset enough to be helpful.
I think replaceability is very high, so the counterfactual impact is minimal. That said, I see very little chance that even helping with RLHF for compliance with their "safety" guidelines does more for safety than for accelerating the capabilities race, so whatever impact there is, it's negative.
Close enough not to have any cyclic components that would lead to infinite cycles for the nonsatiable component of their utility.
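To spell out what I mean by "no cyclic components" (my gloss, not anything from the original post): the condition is that the agent's strict preference relation $\succ$ over outcomes is acyclic,

$$\neg\,\exists\; x_1, \dots, x_n \;:\; x_1 \succ x_2 \succ \cdots \succ x_n \succ x_1 .$$

If such a cycle existed, no utility representation would be possible (any $u$ with $x \succ y \Rightarrow u(x) > u(y)$ would need $u(x_1) > u(x_1)$), and the agent could be walked around the loop indefinitely - the standard money-pump. Acyclicity is exactly what rules out that infinite-cycling behavior for the nonsatiable part of its goals.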