First, basics: I'm a first-year Informatics student. At The University of Edinburgh, where I study, Informatics broadly encompasses Computer Science, AI, and Cognitive Science. I initially started this programme intending to go into AI safety research later, because of good personal fit and all that. I know it's a long time in the future and my plan will likely change, but it's good to have plans, right?
I subscribe to the belief that we should maximise the "positive conscious experience" of all beings. Additionally, over the past few months I've grown more and more intrigued by the riddle consciousness poses. My intention has subtly shifted from becoming an AI safety researcher to becoming a consciousness researcher by way of AI/Cognitive Science.
Here's my conundrum: Researching consciousness makes sense as a way to verify the very basis of my EA beliefs. However, it has practically no direct altruistic impact. I also have only a very narrow view of its importance/tractability/replaceability and so on, as it is not widely discussed — for example, it has no career profile on 80,000 Hours. All my information basically comes from the people at the Qualia Research Institute, who are really excited about the issue (which, admittedly, is quite infectious).
So what I'm saying is I need more views on this! What do you think? How important is solidifying the concept of consciousness for EA? If I don't do it, would someone else do it instead? What are your thoughts on a career in this field?
Thanks if anyone actually read this :) And even more thanks for any replies!
I am a 3rd-year PhD student in consciousness neuroscience. After three years of studying this field, I tend to think that better understanding consciousness looks less important than standard EA cause areas.
Understanding consciousness is probably not very neglected. Although the field of consciousness science is relatively young and probably still small relative to other academic fields, it is a growing field with established lab teams such as the Sackler Centre for Consciousness Science, the tlab, Stanislas Dehaene's lab, Giulio Tononi's lab, and more. Consciousness is a fascinating problem that attracts many intellectuals. There is an annual conference on the science of consciousness that probably gathers hundreds of academics: https://assc24.forms-wizard.co.il/ (I'm unsure about the number of participants).
Although I appreciate the enthusiasm of QRI and the original ideas they discuss, I am personally concerned about a potential general lack of scientific rigor that might be induced by the structure of QRI, though I would need to engage more with QRI's content. Consciousness (denoted C below) is a difficult problem that quite likely requires collaboration among a large number of academics with solid norms of scientific rigor (i.e., doing better than the current replication crisis).
In terms of the importance of the cause, it is plausible that there is a lot of variation in the architecture and phenomenology of conscious processing, so it is unclear how easily results from current, mostly human-centric consciousness science would transfer to other species or to AIs. On the other hand, this suggests that understanding consciousness in specific species might be more neglected (though having reliable behavioral markers of C might already go a long way toward understanding moral patienthood). In any case, I have a hard time making the case that understanding consciousness is a particularly important problem relative to other standard EA causes.
Some points of potential interest that, if specified further, could strengthen the case for studying consciousness:
Overall, I am quite skeptical that, on the margin, consciousness science is the best field for an undergrad in informatics compared with AI safety or other priority cause areas.
This is why I'm pursuing Cognitive Science.