If (phenomenally) conscious AI systems are possible, then it is more plausible that AI systems will be welfare subjects. But how plausible is it that conscious AI systems are possible? The answer depends partly on whether there is a close link between consciousness and biology. The modest aim of this draft is to clarify what kind of link between consciousness and biology is crucial in this context.  (I wrote the draft for an academic audience.)

The bottom line: what’s crucial for the possibility of AI consciousness is simply the biological requirement, i.e. the thesis that to be conscious, a system needs to have biological states.

The biological requirement leaves open whether consciousness is itself biological, whether consciousness supervenes on anything biological, many questions about the biological correlates of consciousness, and whether consciousness has a functional basis. In my view, evidence for and against a close link between consciousness and biology that bears on the possibility of AI consciousness will tend to do so via the biological requirement. If that’s right, then the biological requirement is poised to serve as a crucial thesis, and we would do well to address it when attempting to bring biology to bear on the possibility of AI consciousness.

Comments

A priori, what is the motivation for elevating the very specific "biological requirement" hypothesis to the level of particular consideration? Why is it more plausible than similarly prosaic claims like "consciousness requires systems operating between 30 and 50 degrees Celsius" or "consciousness requires information to propagate through a system over timescales between 1 millisecond and 1000 milliseconds" or "consciousness requires a substrate located less than 10,000 km away from the center of the earth"?

(I like the question and examples!)

I take the motivations for the biological requirement, and for considering it, to be empirical rather than a priori.

One motivation for the biological requirement is that, in the cases we know about, fine-grained differences in consciousness seem to be systematically and directly underpinned by biological differences. This makes the biological requirement more plausible than many other claims at the same level of specificity.

While there isn’t a corresponding motivation for the temperature and timescale claims, there are related motivations: at least in humans, operating in those ranges is presumably required for the states that are known to systematically and directly vary with fine-grained differences in consciousness; moving toward either end of the 30-50 °C range also seems to render us unconscious, which suggests that going outside the range would do so as well.

Looking beyond the human case, I take it that the fact that certain animals operate outside the 30-50 °C range makes the temperature claim less plausible than the biological requirement. Admittedly, if we widen the temperature range enough, the resulting temperature claim will be as plausible as the biological requirement. But the resulting claim’s plausibility will presumably be inherited from claims (such as the biological requirement) that are more informative (hence more worthy of consideration) with respect to which systems are conscious.

As for the distance claim, perhaps it would be plausible if one had Aristotelian cosmological beliefs! But I take it we now have good reason to think that the physical conditions that can exist on Earth can also exist far beyond it and that fundamental laws don’t single out Earth or other particulars for special treatment. Even before considering correlational evidence regarding consciousness, this suggests that we should find it implausible that consciousness depends on having a substrate within a certain distance from Earth’s center. Correlational evidence reinforces that implausibility: local physical conditions are strongly predictive of known conscious differences independently of appeal to distance from Earth’s center, and we don’t know of any predictive gains to be had by appealing to distance from Earth’s center. Another reason to doubt the distance claim is that it suggests a remarkable coincidence: the one planet around which that candidate requirement can be met just so happens to be a planet around which various other requirements for consciousness happen to be met, even though the latter requirements are met around only a small percentage of planets.

Setting aside plausibility differences, one reason to consider the biological requirement in particular is that it rules out AI consciousness, whereas the temperature, timescale, and distance claims are compatible with AI consciousness (though they do have important-if-true implications concerning which AI systems could be conscious).

All that said, I’m sympathetic to the thought that there are other candidate barriers to AI consciousness that are as well-motivated as the biological requirement but neglected. My motivation in writing the draft was that, given that biology has been and will continue to be brought to bear on the possibility of AI consciousness, it should be brought to bear via the biological requirement rather than via the even more specific and less crucial theses about biology and consciousness that are often discussed.
