Effective altruism is based on the core belief that all people count equally. We unequivocally condemn Nick Bostrom’s recklessly flawed and reprehensible words. We reject this unacceptable racist language, and the callous discussion of ideas that can and have harmed Black people. It is fundamentally inconsistent with our mission of building an inclusive and welcoming community.
— The Centre for Effective Altruism
I think I do see "all people count equally" as a foundational EA belief. This might be partly because I understand "count" differently from you, and partly because I hold genuinely different beliefs (which I had assumed were "core" to EA, rather than idiosyncratic to me).
What I understand by "people count equally" is something like "one person's wellbeing is not more important than another's".
E.g. a British nationalist might not think that all people count equally, because they think their compatriots' wellbeing is more important than that of people in other countries. They would take a small improvement in wellbeing for Brits over a large improvement in wellbeing for non-Brits. An EA would be impartial between improvements in wellbeing for British people vs non-British people.
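To make the contrast concrete, here's a minimal sketch of the two evaluators as weighted sums. All the numbers (the wellbeing gains, the nationalist's discount on foreigners) are invented for illustration:

```python
# Toy model: rank two options under impartial vs. nationalist weights.
# All numbers are invented for illustration.

def total_wellbeing(gains, weights):
    """Weighted sum of wellbeing gains across groups."""
    return sum(weights[group] * gain for group, gain in gains.items())

option_a = {"brits": 1.0, "non_brits": 0.0}  # small gain, Brits only
option_b = {"brits": 0.0, "non_brits": 5.0}  # large gain, non-Brits only

impartial = {"brits": 1.0, "non_brits": 1.0}    # everyone counts equally
nationalist = {"brits": 1.0, "non_brits": 0.1}  # compatriots weighted 10x

for name, weights in [("impartial", impartial), ("nationalist", nationalist)]:
    a, b = total_wellbeing(option_a, weights), total_wellbeing(option_b, weights)
    print(f"{name}: A={a}, B={b} -> prefers {'A' if a > b else 'B'}")
# impartial: A=1.0, B=5.0 -> prefers B
# nationalist: A=1.0, B=0.5 -> prefers A
```

The only thing that differs between the two evaluators is the weights table; "counting equally" just means that table is flat.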
"most QALY frameworks value young people more than older people, many discussions have been had about hypothetical utility monsters, and about how some people might have more moral patienthood due to being able to experience more happiness or more suffering, and of course the moral patienthood of artificial systems immediately makes it clear that different minds likely matter differently in moral calculus"
In all of these situations, I think we can still say people "count" equally. QALY frameworks don't say that young people's wellbeing matters more - just that if they die or get sick, they stand to lose more wellbeing than older people, so it might make sense to prioritize them. This seems similar to how I prioritize donating to poor people over rich people - it's not that rich people's wellbeing matters less, it's just that poor people are generally further from optimal wellbeing in the first place. And I think this reasoning can be applied to hypothetical people/beings with greater capacity for suffering. I think greater capacity for happiness is trickier and possibly an object-level disagreement - I wouldn't be inclined to prioritize Happiness Georg's happiness above all else, because that would let his happiness outweigh the suffering of many others, but maybe you would bite that bullet.
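As a concrete (and deliberately oversimplified) illustration of the QALY point, with invented numbers: each life-year is weighted identically for everyone, and the younger person comes out ahead only because more years of wellbeing are at stake:

```python
# Toy QALY arithmetic: every life-year at a given health quality counts
# the same, regardless of whose year it is. Numbers are invented.

def qalys_at_stake(years_remaining, quality_weight=1.0):
    """QALYs lost if the person dies now: remaining years x quality."""
    return years_remaining * quality_weight

# A 20-year-old and a 70-year-old, both with life expectancy 80.
young = qalys_at_stake(years_remaining=60)  # 60.0 QALYs
old = qalys_at_stake(years_remaining=10)    # 10.0 QALYs

# The per-year weight is identical for both people; the framework
# prioritizes the younger person only because more wellbeing is lost,
# not because their wellbeing counts for more per year.
print(young, old)  # 60.0 10.0
```

Nothing in the calculation treats one person's year as worth more than another's; the asymmetry comes entirely from how many years are on the line.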