I'm currently working as an independent researcher, collaborating primarily with the Qualia Research Institute. I previously worked as Chief of Staff at the Institute for Law & AI (formerly "Legal Priorities Project") and as COO at the Center on Long-Term Risk (formerly "Effective Altruism Foundation"). I also co-founded EA Munich in 2015. I have a master's and a PhD in Computational Science from TU Munich and a bachelor's in Engineering Physics from Tec de Monterrey.
🔶 10% Pledger
I'd love to talk to people broadly interested in ways to reduce the burden of extreme suffering in humans, which I think is weirdly neglected in EA. The vast majority of global health & wellbeing work is based on DALY/QALY calculations (and, to a lesser extent, WELLBY), which I believe fail to capture the most severe forms of suffering. There's so much low-hanging fruit in this space, starting with just cataloguing the largest sources of extreme human suffering globally.
I'm eager to talk to potential collaborators, donors, and really anyone interested in the topic. :)
Here's my Swapcard.
Thanks for reading and for your comment, Derek!
> there doesn't seem to be any obvious mechanism for general quantum level truths to exert the kinds of very targeted influences that would be necessary for them to explain our beliefs about consciousness
I think it will turn out that the mechanism is not obvious, mainly because quantum mechanics and fundamental physics more broadly are extraordinarily complex (and I expect that understanding consciousness will be just as difficult as understanding, say, quantum field theory). That said, I do think there exist candidate quantum mechanisms, such as entanglement, that might explain the macro-level phenomenon of binding.
Another assumption behind my position (which I also outlined in Indirect realism illustrated (and why it matters so much for consciousness debates)) is that, since I believe consciousness/qualia are real (and a thing, not a process), the only sense in which they can be really real is for their 3rd-person physical correlates to be found at the deepest level of reality/physics. Any correlates that are not at the deepest level—however elaborate—are just useful fictions, and thus (IMO) no different from what e.g. computational functionalists claim.
Hope that makes my views a bit clearer.
Thanks so much for your thoughtful and detailed comment, Mitchell! It seems like we're roughly on the same page regarding the various constraints that a successful theory of consciousness should meet, as well as the class of approaches that seem most promising. Let me just share some immediate reactions I had while reading your comment. :)
> The problem with entanglement is that it potentially gives you too much unity
Potentially, yes (though my understanding of entanglement is limited). On the other hand, as Atai has pointed out, "most binding-appreciators strongly, strongly underestimate just how 'insane' it is that we can have any candidate solution to the binding problem *at all* [entanglement] in a universe that remotely resembles the universe described by classical physics." (Here's his full writeup, which I find very compelling.) This makes me think that maybe we will find that entanglement gives us just the right amount of unity (though the specific mechanism might turn out to be pretty elaborate). Do you have any resources on the point about "too much unity"? I'd love to learn more.
> First of all, the nature of the structures that hypothetically bridge fundamental physics and conscious states is still wide open, because the mathematics of fundamental physics is still wide open.
Agree, and this is part of what motivates the argument outlined in the last paragraph of the section "Sufficiently detailed replicas/simulations" above.
> For me, the core arguments against substrate-indifferent information-based theory of consciousness, revolve around vagueness.
Same for me. The paper "Are algorithms always arbitrary?" makes this case nicely.
> But there are a number of challenges to this argument - aren't states of mind vague too?
Maybe, yeah, depending on how we define a state of mind. But as you pointed out, "there is a kind of ontological exactness that states of consciousness must possess," which I also agree with—namely, that at least some moments of experience seem to exhibit some amount of fundamentally integrated information / binding. So if an ontology can't accommodate that, it's doomed. I believe that's the case for information-based theories, since any unity is interpreted by us arbitrarily, i.e. it's epiphenomenal.
> They only require that your simulation is a little more fine-grained than we used to believe necessary.
I think "a little more" is doing a lot of work here. If consciousness is a thing/substrate, then any emulation that abstracts away finer levels of granularity will, by definition, not be that substrate, and therefore not be conscious (unless maybe one commits to the claim that the deepest layer of reality is binary/bits, as pointed out above).
> In any case, at least for quantum theories of mind to become widely convincing, there needs to be some evidence that quantum biology is playing a role in conscious cognition, evidence which I believe is still quite lacking. Hameroff's microtubules are still by far the best candidate I have, for a biological locus of persistent coherent quantum states, but it seems difficult to get decisive evidence of coherence. The length of the debate about whether quantum coherence occurs in photosynthesis shows how difficult it can be.
I confess I still don't fully understand why we would need to definitively prove that coherence is sustained. QM plays a causal role in the brain because it plays a causal role in everything, as I was hoping to convey with my xenon example. But I'll keep thinking!
I'll add another candidate for quantum biology into the mix: the Posner molecule (also mentioned by Atai here).
Thanks again! :)
Really cool report! Thank you and congrats on publishing it. :)
I'd love to see an analogous report tackling preventable severe suffering (i.e., beyond preventable deaths). There's a wonderful case report from Uganda's efforts to provide universal palliative care (see this short documentary). A good starting point could be the Lancet Commission report "Alleviating the access abyss in palliative care and pain relief".
> In addition if they are sentient, then I would estimate their experience of pain might be between 1x and 1,000x less important than that of an individual human.
This is why I agree that there's a non-negligible chance that insect suffering is "only" moderately important, though I think the chance is higher than "small-ish" (despite the fact that I think insects are 100% conscious/sentient). I come at it from a non-materialist physicalist stance on consciousness, assuming that suffering is (super roughly) proportional to the energy of the electromagnetic field in each nervous system times the degree of dissonance/asymmetry in the field (quantified using some metric yet to be determined). Given the size of many insects, the EM field they generate is very weak, so maybe the worst suffering an insect endures is just not too bad (perhaps comparable to lightly stubbing one's toe). But I'm not sure (especially about how suffering scales with size), so I still think there's some chance the suffering is quite bad.
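To make that rough proportionality explicit (the symbols below are my own placeholders for this sketch, not an established formula):

```latex
S \;\propto\; E_{\mathrm{EM}} \cdot D
```

where \(S\) is the intensity of suffering, \(E_{\mathrm{EM}}\) is the energy of the nervous system's electromagnetic field, and \(D\) is the degree of dissonance/asymmetry in that field (under some yet-to-be-specified metric). On this sketch, a much smaller \(E_{\mathrm{EM}}\) caps the worst attainable \(S\), which is the intuition behind the stubbed-toe comparison above—though, again, how \(S\) actually scales with size is the part I'm least sure about.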
(Thanks for posting this, @Bentham's Bulldog! I enjoyed reading it. 🙂)
(Very much agree with the "Featured in" point!)