Former username Ikaxas
Maybe I'm missing something, but I think the idea in that section is simply: "Some people think it only makes sense to pledge if you are not already donating 10%. But here are some reasons to pledge even if you are already donating 10% that you may not have thought of." It's not claiming those reasons are decisive, or that everyone, or even most people, already donating 10% should still take the pledge. The only misconception they're claiming is the thought that there is zero reason to take the pledge if you are already donating 10%.
So I am a philosophy grad student with a shallow familiarity with this literature. The way I understand the people who object to evolutionary debunking arguments, they argue that the evolution stuff is a red herring: basically any causal story about the origins of our moral intuitions would do the same work in the argument, so the empirical details don't matter. The real work is being done on the philosophical side of the argument, and that, they think, doesn't hold up. Might post again later with some paper recs.
So the terminology here gets used differently by different people, but the view that moral statements can be true or false is usually called "cognitivism", not "realism" (though there definitely are people who use "realism" for that view). My own personal preference is to define realism as cognitivism plus the metaphysical claim that moral properties are mind-independent (i.e. not grounded in facts about anyone's moral beliefs or attitudes).
I agree it may be difficult for a utilitarian to fully deceive themselves into giving up their utilitarianism. But here's an option that might be more feasible: be uncertain about your utilitarianism (you probably already are, and if you aren't you should be), and act according to a theory that (1) utilitarianism recommends you act on, and (2) you find independently at least somewhat plausible. This could be a traditional moral theory, or it might even be the result of the moral uncertainty calculation itself.
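(In case "the moral uncertainty calculation" is unclear: what I have in mind is, roughly, maximizing expected choiceworthiness. The notation below is my own, just for illustration, where $c_i$ is your credence in theory $T_i$ and $CW_i(a)$ is how choiceworthy $T_i$ rates act $a$:

$$a^* = \arg\max_a \sum_i c_i \cdot CW_i(a)$$

This is a rough sketch rather than the settled formula; whether choiceworthiness is even comparable across theories is its own can of worms.)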
For the tasks I use it for (mainly writing help), Claude Opus is often better than GPT-4.