Andreas Mogensen, a Senior Research Fellow at the Global Priorities Institute, has just published a draft of a paper on "Maximal Cluelessness". Abstract:
I argue that many of the priority rankings that have been proposed by effective altruists seem to be in tension with apparently reasonable assumptions about the rational pursuit of our aims in the face of uncertainty. The particular issue on which I focus arises from recognition of the overwhelming importance and inscrutability of the indirect effects of our actions, conjoined with the plausibility of a permissive decision principle governing cases of deep uncertainty, known as the maximality rule. I conclude that we lack a compelling decision theory that is consistent with a long-termist perspective and does not downplay the depth of our uncertainty while supporting orthodox effective altruist conclusions about cause prioritization.
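To make the maximality rule concrete, here is a minimal sketch in Python (my own illustration, not Mogensen's formalism). The standard statement in the imprecise-probability literature is: under deep uncertainty we hold a set of admissible probability functions (a "representor") rather than a single prior, and an act is permissible unless some rival act has strictly higher expected utility under *every* function in that set. The acts, states, and numbers below are invented purely for illustration.

```python
# Minimal sketch of the maximality rule under imprecise probability.
# Assumptions (not from the paper): finite acts and states, an explicit
# finite representor of candidate distributions, and made-up utilities.

def expected_utility(act, distribution, utilities):
    """Expected utility of an act under one probability distribution."""
    return sum(p * utilities[(act, state)] for state, p in distribution.items())

def maximality_permissible(acts, representor, utilities):
    """An act is permissible unless some rival act has strictly higher
    expected utility under *every* distribution in the representor."""
    permissible = []
    for a in acts:
        dominated = any(
            all(expected_utility(b, d, utilities) > expected_utility(a, d, utilities)
                for d in representor)
            for b in acts if b != a
        )
        if not dominated:
            permissible.append(a)
    return permissible

# Toy example: two acts, two states, two candidate distributions.
acts = ["give_to_A", "give_to_B"]
representor = [
    {"s1": 0.9, "s2": 0.1},  # one admissible credence function
    {"s1": 0.2, "s2": 0.8},  # another; deep uncertainty = no unique prior
]
utilities = {
    ("give_to_A", "s1"): 10, ("give_to_A", "s2"): 0,
    ("give_to_B", "s1"): 0,  ("give_to_B", "s2"): 10,
}
# Each act wins under a different credence function, so neither dominates
# the other and both are maximality-permissible.
print(maximality_permissible(acts, representor, utilities))  # both acts
```

Because each toy act comes out best under a different credence function, both are permissible: this is the permissiveness the abstract points to, and it is why the rule, taken on its own, struggles to vindicate a priority ranking.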
Yes, though it seems to me that EAs largely think one shouldn't (cf. the fact that Integrity is listed among "the guiding principles of effective altruism" as understood by a number of organisations). (Not that you would suggest otherwise.)
A tangentially related comment: what symbolic benefits or harms our actions carry depends on our norms, and those norms are at least somewhat malleable. Jason Brennan has argued that we should judge such symbolic norms by their consequences.

So we shouldn't just take symbolic benefits into account when prioritising actions; we should also consider whether to change our symbolic norms, so that the symbolic benefits (which are a consequence of those norms) change as well. Brennan argues that if epistocracy produces greater direct benefits than democracy, we should change our symbolic norms so that democracy no longer yields greater symbolic benefits than epistocracy. Similarly, one could argue that if one effective altruist intervention produces greater direct benefits than another (say, diet change), we should change our symbolic norms so that the latter no longer yields greater symbolic benefits than the former.
[Edit: I realise now that the last paragraph in your above comment touches on these issues.]