Introduction
When a system is made safer, its users may be willing to offset at least some of the safety improvement by using it more dangerously. A seminal example is that, according to Peltzman (1975), drivers largely compensated for improvements in car safety at the time by driving more dangerously. The phenomenon in general is therefore sometimes known as the “Peltzman Effect”, though it is more often known as “risk compensation”.[1] One domain in which risk compensation has been studied relatively carefully is NASCAR (Sobel and Nesbit, 2007; Pope and Tollison, 2010), where, apparently, the evidence for a large compensation effect is especially strong.[2]
In principle, more dangerous usage can partially, fully, or more than fully offset the extent to which the system has been made safer holding usage fixed. Making a system safer thus has an ambiguous effect on the probability of an accident, after its users change their behavior.
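The three cases above can be made concrete with a deliberately simple toy model (all numbers and the multiplicative form are illustrative assumptions, not drawn from the literature): suppose accident probability is usage intensity times per-use risk, a safety improvement halves per-use risk, and users respond by raising usage.

```python
# Toy model of risk compensation (illustrative assumptions only):
# accident probability = usage intensity * per-use risk.
def accident_prob(usage: float, per_use_risk: float) -> float:
    return usage * per_use_risk

baseline = accident_prob(usage=1.0, per_use_risk=0.10)  # 0.100

# Safety improvement halves per-use risk; users compensate by using
# the system more. How much they compensate determines the net effect.
partial = accident_prob(usage=1.5, per_use_risk=0.05)  # 0.075 < baseline
full    = accident_prob(usage=2.0, per_use_risk=0.05)  # 0.100 = baseline
over    = accident_prob(usage=2.5, per_use_risk=0.05)  # 0.125 > baseline
```

Whether the behavioral response lands in the "partial", "full", or "over" regime is exactly the empirical question the compensation literature tries to answer; the model only shows that all three are arithmetically possible.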
There is no reason why risk compensation should not apply in the existential risk domain, and we arguably already have examples of it doing so. For example, reinforcement learning from human feedback (RLHF) makes AI more reliable, all else equal; it may therefore be making some AI labs comfortable releasing more capable, and so perhaps more dangerous, models than they would otherwise release.[3]
Yet risk compensation per se appears to have gotten relatively little formal, public attention in the existential risk community so far. There has been informal discussion of the issue: e.g. risk compensation in the AI risk domain is discussed by Guest et al. (2023), who call it “the dangerous valley problem”. There is also a cluster of papers and works in progress by Robert Trager, Allan Dafoe, Nick Emery-Xu, Mckay Jensen, and others, including these two and some not yet public but largely summarized here, exploring the issue formally in models with multiple competing firms. In a sense what they do goes well beyond this post, but as far as I’m aware none of t