I sometimes worry that a focus on effectiveness creates perverse incentives in strategic settings, ultimately making us less effective. Here are a few observations illustrating this concern.
Effectiveness-focused advocacy creates perverse incentives for adversaries
When we conduct cage-free campaigns, the target companies frequently ask why they are being targeted instead of some other company. Trying to answer that question immediately exposes a tension: if we say "because targeting you is the most effective thing we can do", we incentivise them not to budge, since they now know that willingness to compromise invites more aggression.
When dealing with an effectiveness-focused movement, adversaries have a further incentive to prevent concrete results. Other movements have to be worn down through sustained pressure, but an effectiveness-focused movement will simply go away if you convince it that it can be more effective elsewhere.
For that reason, our campaign target selection focuses mainly on the number of animals affected by the brand and on the gap between its perceived quality and the reality, since these criteria do not create perverse incentives.
Sometimes you have to fight back even when it’s not the most effective thing to do
I think this is also one of the reasons behind people's disassociation from EA. People are too quick to abandon EA because EA teaches them to do so.
If an adversary wants to weaken EA, they can damage the brand to make life difficult for movement members. Against EA, they have a significant advantage: EAs want to do the most good, an extremely high bar. As soon as associating with EA no longer aligns with doing the most good, members rationally drop the association. The bully faces minimal resistance, since at the slightest aggression members will simply move somewhere else where they can be more effective.
Religious rules survive because they are stubborn
Many religious rules are stubborn, black and white, and context-independent. Pork and alcohol are haram in Islam, and that's the end of the story. It doesn't matter if drinking alcohol would help you assimilate into the dominant culture and gain influence, or if social convenience suggests compromise. This inflexibility allows religious rules to survive even when practised by minorities facing significant pressure. Flexible rules tend to dissolve when confronting strong opposition.
We might need more commitment devices
It's possible to dismiss these concerns as naive consequentialism and argue that true consequentialism requires commitment devices for such cases. But do we have enough commitment devices in our community?
For me, the biggest reason our cage-free work in Turkey survived was that starting the work itself functioned as a commitment device. I feared that stopping midway would embolden adversaries against future advocates and signal to companies that welfare campaigns were merely temporary fads. I was afraid of making animals worse off by quitting, and this helped me persevere through difficult periods when I caught myself thinking, "maybe this isn't the most effective thing I can do right now."
I don't think this is an easy problem to solve. Flexibility is deeply woven into EA through cause-neutrality, scout mindset, pragmatism and small identities. Other movements preserve their obstinacy through a combination of dogmas, soldier mindset and refusal to consider effectiveness. It's a challenge to find ways to preserve EA's strategic edge while making it more stubborn.
Interesting ideas!
A hypothesis I found relevant to this phenomenon, similar to yours:
The problem "maximize impact per resources spent" is not well-defined a priori.
For instance, it depends on the time frame and scale: there could be very cost-effective smallish interventions that can't scale much, versus very large-scale interventions that require massive coordination, investment, "stubbornness", etc. (see the toy sketch below).
[Of course, you should try to see if such things actually exist in the real world; FWIW, I suspect they do]
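To make the scale-dependence concrete, here is a minimal toy model in Python. All numbers are invented for illustration, and `impact_small`/`impact_large` are hypothetical interventions, not real programs: one is highly cost-effective but hits a hard funding ceiling, the other is less cost-effective per dollar but scales much further.

```python
# Toy model: which intervention "maximizes impact per resources spent"
# depends on the budget (a stand-in for time frame and scale).
# All numbers are made up for illustration.

def impact_small(budget: float) -> float:
    # Very cost-effective (10 impact units per $), but the opportunity
    # caps out at $100k of useful spending.
    return 10 * min(budget, 100_000)

def impact_large(budget: float) -> float:
    # Only 2 impact units per $, but absorbs up to $100M.
    return 2 * min(budget, 100_000_000)

for budget in (50_000, 1_000_000, 50_000_000):
    name, impact = max(
        ("small", impact_small(budget)),
        ("large", impact_large(budget)),
        key=lambda t: t[1],
    )
    print(f"budget ${budget:,}: best = {name} ({impact:,.0f} impact units)")
```

At a $50k budget the small intervention wins; at $1M and beyond the scalable one dominates. So "the most effective thing to do" can flip purely as a function of the resources and horizon you optimize over.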
It also depends on the entity you consider: is it you as an individual? The small group of people who are willing to listen and do a project with you? The whole EA community? Humanity?
You might, though, be able to build a coherent system that takes these various levels into account.
Another remark, which has more to do with execution than with general principles, and which you also touch upon: sharing all the information you have is not always a good idea. Unfortunately, the possible fixes (restricting information access to trusted people/groups) seem to go against the [EA/rationalist/...] culture of truth-seeking, open communication, etc.