First, basics: I'm a first-year Informatics student. At The University of Edinburgh, where I study, Informatics broadly encompasses Computer Science, AI, and Cognitive Science. I initially started this programme to go into AI safety research later, and because it seemed like a good personal fit and all that. I know it's a long time in the future and my plan will likely change, but it's good to have plans, right?
I subscribe to the belief that we should maximise the "positive conscious experience" of all beings. Additionally, over the past few months I've grown more and more intrigued by the riddle consciousness poses. My intention has subtly shifted from becoming an AI safety researcher to becoming a consciousness researcher by way of AI/Cognitive Science.
Here's my conundrum: Researching consciousness does make sense as a way to verify the very basis of my EA beliefs. However, it has practically no real altruistic impact. I also have only a very narrow view of its pressingness/tractability/replaceability etc., as it is not widely discussed; for example, it has no career profile on 80,000 Hours. All my information basically comes from the people at the Qualia Research Institute, who are really excited about the issue (which admittedly is quite infectious).
So what I'm saying is I need more views on this! What do you think? How important is solidifying the concept of consciousness for EA? If I don't do it, would someone else do it instead? What are your thoughts on a career in this field?
Thanks if anyone actually read this :)))) And even more thanks for any replies!
In Principia Qualia (pp. 65-66), Mike Johnson posits:
What is happening when we talk about our qualia?
If ‘downward causation’ isn’t real, then how are our qualia causing us to act? I suggest that we should look for solutions which describe why we have the sensory illusion of qualia having causal power, without actually adding another causal entity to the universe.
I believe this is much more feasible than it seems if we carefully examine the exact sense in which language is ‘about’ qualia. Instead of a direct representational interpretation, I offer we should instead think of language’s ‘aboutness’ as a function of systematic correlations between two things related to qualia: the brain’s logical state (i.e., connectome-level neural activity), particularly those logical states relevant to its self-model, and the brain’s microphysical state (i.e., what the quarks which constitute the brain are doing).
In short, our brain has evolved to be able to fairly accurately report its internal computational states (since it was adaptive to be able to coordinate such states with others), and these computational states are highly correlated with the microphysical states of the substrate the brain’s computations run on (the actual source of qualia). However, these computational states and microphysical states are not identical. Thus, we would need to be open to the possibility that certain interventions could cause a change in a system’s physical substrate (which generates its qualia) without causing a change in its computational level (which generates its qualia reports). We’ve evolved toward having our qualia, and our reports about our qualia, being synchronized – but in contexts where there hasn’t been an adaptive pressure to accurately report our qualia, we shouldn’t expect these to be synchronized ‘for free’.
The details of precisely how our reports of qualia, and our ground-truth qualia, might diverge will greatly depend on what the actual physical substrate of consciousness is. What is clear from this, however, is that transplanting the brain to a new substrate – e.g., emulating a human brain as software, on a traditional Von Neumann architecture computer – would likely produce qualia very different from the original, even if the high-level behavioral dynamics which generate its qualia reports were faithfully replicated. Copying qualia reports will likely not copy qualia.
I realize this notion that we could (at least in theory) be mistaken about what qualia we report & remember having is difficult to swallow. I would just say that although it may seem far-fetched, I think it’s a necessary implication of all theories of qualia that don’t resort to anti-scientific mysticism or significantly contradict what we know of physical laws.
Back to the question: why do we have the illusion that qualia have causal power?
In short, I’d argue that the brain is a complex, chaotic, coalition-based dynamic system with well-defined attractors and a high level of criticality (low activation energy needed to switch between attractors) that has an internal model of self-as-agent, yet can’t predict itself. And I think any conscious system with these dynamics will have the quale of free will, and have the phenomenological illusion that its qualia have causal power.
And although it would be perfectly feasible for there to exist conscious systems which don’t have the quale of free will, it’s plausible that this quale will be relatively common across most evolved organisms. Brembs (2011) argues that the sort of dynamical unpredictability which leads to the illusion of free will tends to be adaptive, both as a search strategy for hidden resources and as a game-theoretic advantage against predators, prey, and conspecifics: “[p]redictability can never be an evolutionarily stable strategy.”
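To make the "can't predict itself" point a bit more concrete, here's a toy sketch of my own (not from Principia Qualia, and only an analogy): a chaotic logistic map whose coarse-grained "self-model" quickly loses track of its true state. Even a fully deterministic system can fail to predict itself if its self-model is any less precise than its actual state.

```python
# Toy analogy (my own, not Johnson's): a deterministic but chaotic system
# whose internal "self-model" cannot predict its own future states.
# The logistic map with r = 4.0 is chaotic, so any coarse-graining of the
# current state (here: rounding to 3 decimal places) makes the self-model's
# forecast diverge from the true trajectory within a few iterations.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

true_state = 0.123456789            # the system's actual (micro) state
model_state = round(true_state, 3)  # its imprecise model of that state

for step in range(1, 16):
    true_state = logistic(true_state)
    model_state = logistic(model_state)
    if step % 5 == 0:
        print(f"step {step:2d}: true={true_state:.4f}  "
              f"predicted={model_state:.4f}  "
              f"error={abs(true_state - model_state):.4f}")
```

This obviously says nothing about qualia themselves; it just illustrates why a system with chaotic dynamics and an approximate self-model would experience its own behaviour as unpredictable "from the inside".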