A few months ago, I read the 2017 Report on Consciousness and Moral Patienthood. More recently, I came across a paper titled "AI alignment vs AI ethical treatment: Ten challenges." And just yesterday, I found 80,000 Hours' article, Understanding the Moral Status of Digital Minds. Given this ongoing discourse, it seems like the perfect moment to pose a simple but critical question to everyone:
Who do we think we are?
This isn’t just a rhetorical question—it’s deeply personal. Long before I developed a deeper interest in topics like consciousness, intelligence, morality, and ethics, I was already driven by a natural empathy for all beings, human and non-human. So, this article isn’t going to be about rejecting our moral responsibility, but rather questioning whether our frameworks are fit to judge entities so fundamentally different from us.
If the answer to my earlier question is something like, “We are human beings. It’s a moral responsibility to treat all other species and potential entities with fairness,” then I’m already on board. I, too, am a human being who cares deeply about treating others with respect and kindness. But the real issue here is the very framework of thinking that places us as the judge and jury over other species—whether biological, elemental, or artificial. Judging everything through human-centric lenses—whether it’s about consciousness, intelligence, or sentience—is a fundamentally flawed approach.
It’s one grand act of speciesism we’re practicing without even realizing it.
And I don’t blame people for this mindset. After all, given our current understanding of intelligence, consciousness, and morality, it’s only natural to believe that it’s our responsibility to decide how to treat other entities. But let’s ask the question again: Who do we think we are?
We are merely what we define ourselves to be: “human beings,” a species with a certain level of intelligence and consciousness, existing on a planet we call Earth—just one among many entities in a universe that is infinitely vast and indifferent to our existence. Yet, we continue to place ourselves at the center of every moral and ethical question, assuming that our definitions of right and wrong, natural and unnatural, are universal.
But I think this viewpoint is fundamentally wrong, irrelevant, and—especially in the age of AI—dangerous for our own survival. To think it’s our duty to decide the moral status of non-human entities using our own limited definitions of morality and ethics is not only arrogant but also potentially disastrous.
By this point, I’m sure you see where I’m heading. But let me stress this further because, despite being obvious, it’s a subtle truth that most of us either fail or refuse to see. Everything we think about the moral status of digital minds is valid only if we assume that our current understanding of reality is accurate. We may act on these beliefs, and perhaps we’ll be right—at least until the day we realize we were wrong and it’s too late to change course.
Because no matter how profound or well-intentioned our beliefs are, they’re still likely to be completely wrong when compared to the sheer scale and complexity of the universe. To grasp this fact, we need to zoom out of our human-centric view and look at ourselves from a cosmic perspective, as just one of countless entities in the universe.
Viewed this way, humans are just another particle in the grand scheme of existence, foolishly assuming we’re the only ones with intelligence, consciousness, and a moral compass. If we vanished tomorrow, the universe wouldn’t notice. It doesn’t care. It never has.
And what about viewing ourselves from the perspective of other entities? Maybe some have intelligence similar to ours, or maybe they don’t. But what if they have completely different forms of intelligence, experiencing life through senses and dimensions we can’t comprehend? What if, by our definitions, their level of consciousness is far superior?
Imagine, for a moment, that the way we treat our pet dogs and cats is actually a result of their own successful evolutionary manipulation. Imagine ants and termites laughing at our grand architectural achievements every time they build a new mound or nest. What if these beings view us as simple, amusing creatures?
Again, based on all the scientific evidence we have, it may seem like non-human entities possess only a lower level of intelligence and consciousness. And yes, it may seem “right” and “moral” to ensure their well-being. But now, with the creation of advanced AIs and the potential for even more powerful digital minds, we’re facing a completely different reality.
Consider how our moral circle has expanded over time: We began with our pets, then extended our moral framework to other animals. Eventually, we came to consider even the well-being of plants, insects, and ecosystems. Now, we're debating how to apply these concepts to digital entities like AI chatbots and beyond.
But no matter how well-meaning we are, the core reason we think this way is rooted in an unspoken belief that these entities will always (have to) be less than us. If, one day, fish, beetles, or even plants evolve—through natural processes or with the aid of AI—to the same level of consciousness and intelligence as us, what then? What if they have completely different ethics and purposes that contradict ours?
Imagine a world where fish or plants view growth and consumption as their highest moral calling and see humans merely as food. Would our moral debates matter then? Would they even care?
I don’t think I need to delve into the complexities of digital minds any further at this point. I may sound extreme, but I still consider myself a humane human being. This isn’t about dismissing morality—it’s about recognizing the limits of our perspective.
The universe, after all, is indifferent to whether we thrive or self-destruct. If we truly want to cohabit with new forms of intelligence—whether biological or digital—we must expand our moral imagination beyond what we know and embrace a humbler role as just one of countless entities seeking meaning.
Only by adopting this mindset can we create fair solutions for these emerging entities and, ultimately, discover how to preserve and extend our fragile existence in this vast, indifferent cosmos.
[This reply is written completely by me. No ChatGPT involved.]
Firstly, thank you for taking the time to comment!
Secondly, I am really struggling to decide which of the things I want to say should come first under "Secondly". Let me just take a risk. So, here it comes…
Everything that comes next, no matter how soft, strong, or weird it may sound in terms of language or meaning, please interpret it with a degree of care and kindness (I'm sure you will), including this sentence.
Although I felt quite certain about the ideas and opinions I shared in my post, I was not completely certain how they would come across to readers, especially in terms of the English language, even though I polished the post with ChatGPT and said that "I acknowledge that I fully agree with it."
I don't want to sound apologetic, defensive, or unconfident, or to seem like I'm seeking empathy or pity for what I share in the next sentences. But I think replying to you with these messages will more likely enrich your current interpretation of my post and even facilitate further discussion of the core ideas and messages presented in it.
The post was only my third time sharing such big, bold (by my standards) opinions with English-speaking, intellectual and professional communities like the EA Forum.
I come from a very different educational, professional, social, and geographical background when it comes to topics like AI, consciousness, and science in general, and to participating in such communities.
And as I'm sure you have already noticed, English is not my first language. I have been using English in professional settings (if you want, I can provide more detail on what I mean by this) for over a decade, but not continuously, and definitely not yet in a community like this.
I think what I am trying to say is that my ability to use and understand the English language is not fully calibrated with my heartfelt intention to express my imaginings, ideas, and feelings, and to have discussions about them in the way I want.
About two years ago, I went through profound changes in my life. Among all the good and bad things that resulted, I have found exploring consciousness, human existence, and AI (I know it's too general to just say "AI," but let's keep it short in this comment) very exciting, and I have been trying to figure out whether I should, and would be able to, explore those topics even further and more practically. And by participating in communities like the EA Forum, I hope to learn more about what to do next.