(Warning: a thought experiment I'm referencing here is a spoiler for a novel called Permutation City by Greg Egan. I've added several lines of dust below to give you a chance to bail out if you don't want to be spoiled.)
.
..
....
..
......
...
..
.
...
.
....
.
"The problem of the dust" is, I think, called "dust theory" in the novel. The idea is that, if you buy that simulations of people run on computers can be conscious, then presumably you think that consciousness is substrate-independent. Also, presumably you identify consciousness with a series of discrete states, and are using some mapping from the physical world to those states (e.g. the mapping from voltages to the states {0, 1} that we use in computers). Presumably also the specific mapping doesn't matter to you — you don't care at what voltage we've decided to call something 0 and at what voltage we've decided to call it 1, for instance.
But if the substrate doesn't matter to you, and neither does the mapping, then what stops me from looking at a cloud of dust floating in space and concocting some extremely contrived mapping which says that the position of the dust particles at time t just so happens to represent the state of your brain as you open your mouth to eat an ice cream sandwich, the position at time t+1 represents your state as you bite down, etc.? Have I now "simulated" you eating an ice cream sandwich?
(From what I remember, this is just dust-theory-lite, without the additional idea of messing with the temporal ordering of the states. But I think it's all I need to make the point.)
Bostrom's Simulation Argument asks us to consider a posthuman civilization with an enormous amount of computing power, and whether it would devote some of that power to simulating its ancestors. If such a civilization is likely, and likely to run such simulations, the argument goes, then we're probably in one of them. But the problem of the dust is that it seems like a very large (or infinite?) number of simulations are happening anyway, and hence that we're probably in one of those. I wouldn't say that "dust theory" refutes the Simulation Argument, but to me it seems to indicate that there's something confused about my concept of "being simulated," and hence I feel inclined to back off arguments that involve it.
I'm curious about solutions to 'the problem of the dust' and/or how people square it with their beliefs about the simulation hypothesis.
(Greg Egan says in his FAQ on the novel that he takes dust theory "[n]ot very seriously, although I have yet to hear a convincing refutation of it on purely logical grounds." He goes on: "I think the universe we live in provides strong empirical evidence against the “pure” Dust Theory, because it is far too orderly and obeys far simpler and more homogeneous physical laws than it would need to, merely in order to contain observers with an enduring sense of their own existence. If every arrangement of the dust that contained such observers was realised, then there would be billions of times more arrangements in which the observers were surrounded by chaotic events, than arrangements in which there were uniform physical laws.")
Hmm. Thanks for the example of the "pure time" mapping of t --> mental states. It's an interesting one. It reminds me of Max Tegmark's mathematical universe hypothesis at "level 4," where, as far as I understand, all possible mathematical structures are taken to "exist" equally. This isn't my current view, in part because I'm not sure what it would mean to believe this.
I think the physical dust mapping is meaningfully different from the "pure time" mapping. The dust mapping could be defined by the relationships between dust specks. E.g. at each time t, I identify each possible pairing of dust specks with a different neuron in George Soros's brain, then say "at time t+1, if a pair of dust specks is farther apart than it was at time t, the associated neuron fires; if a pair is closer together, the associated neuron does not fire."
This could conceivably fail if there aren't enough pairs of dust specks in the universe to make the numbers work out. The "pure time" mapping could never fail to work; it would work (I think) even in an empty universe containing no dust specks. So it feels less grounded, and like an extra leap.
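For concreteness, here's a minimal Python sketch of the kind of pair-based mapping I mean. Everything in it is made up for illustration (the number of specks, the arbitrary pair_for_neuron table, the random positions); the point is just that once the pairing table is fixed, reading a "brain state" off the dust is mechanical.

```python
import itertools
import random

# Toy sketch of the contrived mapping described above. All numbers and names
# here are invented for illustration; nothing is meant as a serious model.

NUM_SPECKS = 10   # dust specks we track (45 possible pairs)
NUM_NEURONS = 40  # stand-in for the neurons we want to "simulate"

def random_positions(n):
    """Random 3D positions for n dust specks at one instant."""
    return [(random.random(), random.random(), random.random()) for _ in range(n)]

def distance(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

# The "definition" of the mapping: an arbitrary assignment of speck-pairs to
# neuron indices. This lookup table is where all the contrivance lives.
all_pairs = list(itertools.combinations(range(NUM_SPECKS), 2))
assert len(all_pairs) >= NUM_NEURONS, "not enough pairs of specks"
pair_for_neuron = {n: all_pairs[n] for n in range(NUM_NEURONS)}

def brain_state(positions_t, positions_t1):
    """Read off which 'neurons' fired between two snapshots of the dust."""
    state = {}
    for neuron, (i, j) in pair_for_neuron.items():
        d_then = distance(positions_t[i], positions_t[j])
        d_now = distance(positions_t1[i], positions_t1[j])
        state[neuron] = d_now > d_then  # pair moved apart = neuron "fires"
    return state

print(brain_state(random_positions(NUM_SPECKS), random_positions(NUM_SPECKS)))
```

Computing brain_state is cheap once pair_for_neuron is fixed; all of the work is hidden in choosing that table, which is the worry about where the mapping's complexity lives that comes up below.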
...
I agree that it seems like there's something around "how complex is the mapping." I think what we care about is the complexity of the description of the mapping, though, rather than the computational complexity. I think the George Soros mapping is pretty quick to compute once defined? All the work seems hidden in the definition — how do I know which pairs of dust specks should correspond to which neurons?