This is a subject I've thought about a lot, so I'm pretty happy to have seen this post :).
I'm not convinced by counterfactual robustness either. For one, I don't think humans are particularly robust, since we rely on a fairly specific environment to stay alive. And where to draw the line between robust and non-robust seems arbitrary.
Plus, whether a person is counterfactually robust can be changed without modifying them at all, only by modifying their surroundings. For example, if you could perfectly predict a person's actions, you could "trap" their environment, adding some hidden cameras that check whether the person deviates from your predictions and trigger a bomb if they do. Then that person is no longer counterfactually robust, since any slight change will trigger the bomb and destroy them. But we never touched them, only some hidden surroundings!
---
I also suspect that we can't just bite the bullet about consciousness and Turing machines appearing everywhere, since I think it would have anthropic implications that don't match reality. Anthropic arguments are not on very solid footing, so I'm not totally confident about that, but nonetheless I think there's probably just something we don't understand yet.
I also think this absurdity you've noticed is an instance of a more general problem, since it applies to pretty much any emergent pattern. The same way you can find consciousness everywhere, you can find all sorts of Turing machines everywhere. So I view this as the problem of trying to characterize emergent phenomena.
---
Investigating causality was the lead I followed for a while as well, but every criterion I've tried has ended up too permissive, seeing imaginary Turing machines everywhere. So lately I've been investigating the possibility that emergence might be about *information* in addition to causality.
One intuition I have for this is that the problem might arise because we add information in the process of pointing to the emergent phenomenon. Given a bunch of particles randomly interacting with each other, you can probably point to a path of causality and make a correspondence to a person. But pointing out that path takes a lot of information, information which might exist only inside the pointer itself, so I think it's possible that we're effectively "sneaking in" the person via our pointer.
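Here's a toy sketch of that intuition (my own illustration, not anything from the post): a one-time-pad-style "pointer" can decode any target you like out of pure noise, but the pointer itself has to carry essentially all of the target's information.

```python
import os

# Toy illustration: the "noise" carries no information about the target;
# the "pointer" (decoder) carries all of it.

target = b"a complete description of a person"  # stand-in for the emergent pattern
noise = os.urandom(len(target))                 # the randomly interacting particles

# The pointer: a one-time pad relating this particular noise to the target.
pointer = bytes(n ^ t for n, t in zip(noise, target))

# "Finding" the person inside the noise, using the pointer:
decoded = bytes(n ^ p for n, p in zip(noise, pointer))
assert decoded == target

# Any noise would have worked; all of the target's structure lives in the
# pointer, so the correspondence smuggles the person in rather than finding it.
```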
I also often use Conway's Game of Life when I think about this issue. In the Game of Life, bits are often encoded as the presence or absence of a glider. This means that causality has to be able to travel through the void of dead cells, so that the absence of a glider can be causal. This gives a pretty good argument that every cell has some causal effect on its neighbours, even dead ones.
But if we allow that, we can suddenly draw effectively arbitrary causal arrows inside a completely dead board! So I don't think that can be right, either. My current lead for solving this is that the dead board has effectively no information; it's trivial to write a proof that every future cell is also dead. On the other hand, for a complex board, proving its future state can be very difficult and might require simulating every step. This seems to point to a difference in *informational* content, even in two places where we have similar causal arrows.
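Here's a minimal sketch of that asymmetry (my own toy code, assuming the standard B3/S23 rules): for the all-dead board the future is provable in one line, whereas for a nontrivial pattern like a glider you essentially have to run the simulation to learn its future state.

```python
from collections import Counter

def step(live: set) -> set:
    """One Game of Life step over a set of (x, y) live-cell coordinates (B3/S23)."""
    neigh = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {c for c, n in neigh.items() if n == 3 or (n == 2 and c in live)}

# Dead board: the empty set is a fixed point, and the "proof" is one line.
assert step(set()) == set()

# Complex board: to know where this glider is after 100 steps, we pretty much
# have to simulate all 100 steps.
state = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(100):
    state = step(state)
print(sorted(state))  # the glider has translated across the board
```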
So my suspicion is that random interactions inside walls might not contain the right information to encode a person. Unfortunately I don't know much information theory yet, so my progress in figuring this out is slow.