I think this is an important question. My honest answer is that I'm very unsure: it seems plausible to me that implementing or advocating for such a policy would be wise, and also plausible that it would be counterproductive.
The following thoughts and links will hopefully be more useful than that answer:
1. This seems very reminiscent of the arguments Bostrom makes in the paper "The Vulnerable World Hypothesis" (and especially his "easy nukes" thought experiment). From that paper's abstract:
Scientific and technological progress might change people’s capabilities or incentives in ways that would destabilize civilization. For example, advances in DIY biohacking tools might make it easy for anybody with basic training in biology to kill millions; novel military technologies could trigger arms races in which whoever strikes first has a decisive advantage; or some economically advantageous process may be invented that produces disastrous negative global externalities that are hard to regulate. This paper introduces the concept of a vulnerable world: roughly, one in which there is some level of technological development at which civilization almost certainly gets devastated by default, i.e. unless it has exited the ‘semi-anarchic default condition’. Several counterfactual historical and speculative future vulnerabilities are analyzed and arranged into a typology. A general ability to stabilize a vulnerable world would require greatly amplified capacities for preventive policing and global governance. The vulnerable world hypothesis thus offers a new perspective from which to evaluate the risk-benefit balance of developments towards ubiquitous surveillance or a unipolar world order.
(Perhaps half-remembering this paper is what leads you to say "It's likely that I have seen this term mentioned somewhere else in the past by someone else".)
2. Some other relevant sources include:
3. I think this is an important topic. In my draft series on "Crucial questions for longtermists", one question I list is "Would further development or deployment of surveillance technology increase risks from totalitarianism and dystopia? By how much?"
I'm also considering including an additional "topic" that would contain a more thorough set of "crucial questions" on the matter.
4. I don't think the unilateralist's curse is quite the right term for what your argument describes. The potential for huge harms from unilateral action is indeed key, but the unilateralist's curse is something more specific.
Essentially, the curse is about a specific way in which randomly distributed misjudgement can lead to the "most optimistic" person acting, and thereby to harm occurring, even though the actor genuinely aimed to do good. (I think it's also meant to apply when this happens despite people's average estimates being accurate, rather than systematically overly optimistic, though I don't remember that for sure.) From the paper on the curse:
The unilateralist’s curse is closely related to a problem in auction theory known as the winner’s curse. The winner’s curse is the phenomenon that the winning bid in an auction has a high likelihood of being higher than the actual value of the good sold. Each bidder makes an independent estimate and the bidder with the highest estimate outbids the others. But if the average estimate is likely to be an accurate estimate of the value, then the winner overpays. The larger the number of bidders, the more likely it is that at least one of them has overestimated the value.
The unilateralist’s curse and the winner’s curse have the same basic structure. The difference between them lies in the goals of the agents and the nature of the decision. In the winner’s curse, each agent aims to make a purchase if and only if doing so will be valuable for her. In the unilateralist’s curse, the decision-maker chooses whether to undertake an initiative with an eye to the common good, that is, seeking to undertake the initiative if and only if the initiative contributes positively to the common good.
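To make that statistical mechanism concrete, here's a minimal simulation (not from the paper; the true value, noise level, and agent counts are just illustrative assumptions I've picked): each of N well-intentioned agents forms an unbiased but noisy estimate of an initiative's true value, which is actually negative, and the initiative goes ahead if any single agent's estimate comes out positive.

```python
import random

def simulate(num_agents, true_value=-1.0, noise_sd=2.0, trials=100_000):
    """Fraction of trials in which at least one agent acts on an
    initiative whose true value is negative, when each agent's
    estimate of that value is unbiased but noisy."""
    acted = 0
    for _ in range(trials):
        # Each well-intentioned agent independently estimates the value.
        estimates = [random.gauss(true_value, noise_sd) for _ in range(num_agents)]
        # The initiative is undertaken if even ONE agent judges it net-positive.
        if max(estimates) > 0:
            acted += 1
    return acted / trials

for n in (1, 5, 20):
    print(f"{n:2d} agents -> initiative undertaken in {simulate(n):.0%} of trials")
```

Even though every individual estimate is unbiased (centered on the true, negative value), the chance that at least one agent overestimates enough to act grows quickly with the number of agents; that is the curse.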
This could apply in cases like well-intentioned but harmful dual-use research, or well-intentioned release of hazardous information. Interestingly, it could also apply to widely promoting this sort of "vulnerable world" argument. It's possible that:
- the people who would do so are those who overestimate the expected value of surveillance, preventive policing, etc.
- the "real" expected value is negative
- it takes only a few people widely promoting this sort of argument for major harm to occur, because the idea can then be picked up by others, acquire a life of its own, etc.
In any case, the possibility of well-intentioned yet extremely harmful actions, and the way the unilateralist's curse increases their likelihood, does provide an additional reason for surveillance. But the case for surveillance doesn't necessarily have to rest on that, and you seem most focused on malicious use (e.g., terrorism).
5. I've collected a bunch of sources related to the topics of the unilateralist's curse, downside risks/accidental harm, and information hazards, which might be interesting to you or some other readers.
Hope that's helpful!