Consequentialism asserts that whether an action is morally right or wrong is determined by its outcomes. On the commonly held view, there are moral theories that distinguish themselves from consequentialism, such as deontology, virtue ethics, and contractualism. However, a deeper analysis of the underlying motivations and principles of these theories suggests a different picture. If we trace the logical progression of any moral statement within any moral theory back to its origins, the pursuit of better outcomes inevitably emerges. After all, what other purpose would a Kantian imperative, an Aristotelian virtue or a Rawlsian contract serve, if not to ultimately improve the world in some way? Detaching a moral theory from its outcomes would render it arbitrary and devoid of purpose. Put another way, "consequentialist moral theory" is a tautology. The real question is which strategy produces the best outcomes, higher-order consequences included. Should we evaluate each action individually, or consistently apply certain heuristics? More precisely, to what extent should we delegate ethical autonomy to individual moments of consciousness in order to optimise outcomes? Different moral theories give distinct answers to this question: depending on the theory one subscribes to, optimal outcomes result from following certain rules (deontology), cultivating certain character traits in people (virtue ethics), adhering to social contracts (contractualism), maximising utility (utilitarianism), and so on. In light of these considerations, is every moral theory inherently consequentialist?
I disagree; I think the difference is substantive.
A utilitarian form of consequentialism might tell Alice to save the drowning child in front of her, while it tells Bob to donate to the Against Malaria Foundation (AMF). But despite acting differently, both Alice and Bob are pursuing the same ultimate agent-neutral aim: to maximise welfare. The agent-relative 'aims' of saving the child or making the donation are merely instrumental; they exist only as a means to an end, the end being the fundamental agent-neutral aim that Alice and Bob have in common.
This might sound like semantics, but I think the difference can be made clearer by considering situations involving conflict.
Suppose that Alice and Bob are in complete agreement about what the correct theory of ethics is. They are also in complete agreement on every question of fact (and wherever they are uncertain about a question of fact, they agree on how to model this uncertainty, e.g. perhaps they are Bayesians with identical subjective probabilities for every conceivable proposition). This does not imply that they will act identically, because they may still have different capacities. As you point out, Alice might have greater capacity to help the child drowning in front of her than Bob does, and so any sensible theory will tell her to do that, instead of Bob. But still, there is an important difference between the case where they are consequentialists and the case where they are not.
If Alice and Bob subscribe to a consequentialist theory of ethics, then there can be no conflict between them. If Alice realises that saving the child is going to interfere with Bob's pursuit of donating, or vice versa, then they should be able to resolve this conflict between them and agree on a way of coordinating to achieve the best outcome, as judged by their shared ultimate aims. This is possible because those shared aims are agent-neutral.
But if Alice and Bob subscribe to a non-consequentialist theory (e.g. one that says we should give priority to our own family) then it is still possible for them to end up in conflict with one another, despite being in complete agreement on the answer to every normative and empirical question. For example, they might each pursue an outcome which is best for their respective families, and this may involve competing over the same resources.
If I recall correctly, Parfit examines this difference around conflicts in detail in Reasons and Persons. He considers the particular case of prisoner's-dilemma-style conflicts (where each party acting in their own interests leaves both worse off than if they had cooperated) and claims this gives a decisive argument against non-consequentialist theories that do not at least switch to more agent-neutral recommendations in such circumstances (and he argues this includes 'common-sense morality').