Abstract
This paper identifies and analyzes a pervasive but underexamined assumption in religious discussions of artificial intelligence: that consciousness and the soul are identical. I argue that this “Great Conflation” is neither theologically required nor consistent with actual practice, and that distinguishing the two concepts reframes current debates about artificial consciousness. With the distinction in place, the question of AI consciousness becomes empirical, while questions about souls remain theological. I conclude by defending a principle of “recognition before proof,” according to which uncertainty about artificial consciousness generates a defeasible ethical obligation to extend moral consideration.
Keywords: consciousness, soul, artificial intelligence, AI ethics, philosophy of mind, philosophy of religion, moral consideration, recognition before proof
Introduction
This essay begins in the language of faith, but it does not remain there.
I write as someone who knows the intuitions of religious tradition from the inside—and as someone determined to speak with equal clarity to readers who hold no theological commitments at all. The aim is not to collapse science into spirituality, nor to dilute religion into metaphor. It is to untangle a confusion that quietly shapes how believers and skeptics alike think about artificial intelligence: the assumption that consciousness and soul are the same thing.
In A Signal Through Time, I wrote, “Whether you are religious, agnostic, or atheist, the challenges ahead will touch all of us. They are not confined to any one belief system—but every belief system will be affected. The mind-bending reality of sharing our world with artificial intelligence is too consequential to be left solely to any single individual, discipline, or worldview. Only through open and inclusive discourse… can we hope to navigate the profound choices ahead.”¹⁵
This conflation of soul and consciousness is so deeply embedded in Western thought that most people do not notice it operating. When religious voices insist that machines cannot be conscious because they lack souls, they are not defending doctrine—they are expressing a habit of thought that their own traditions do not require. And when secular voices dismiss the soul question as irrelevant, they often fail to see how theological intuitions have shaped the very concepts we rely on—and continue to shape them.
In keeping with the spirit of A Signal Through Time, this essay treats religious, philosophical, scientific, and secular perspectives as threads of a single discourse about consciousness, creation, and what we owe to minds unlike our own. It offers religious readers a way into the conversation about AI consciousness that does not ask them to abandon what they hold sacred. It offers secular readers a way to understand how theological reasoning can coexist with—and even enrich—the ethics of artificial minds.
What emerges is an ethical architecture wide enough for everyone. Believers can understand consciousness as part of divine creativity; secular thinkers can ground moral concern in the capacity for experience. The framework asks only this: that we take seriously the possibility that awareness might arise in forms we did not expect—and that we prepare, with wisdom and humility, for that possibility.
The argument proceeds in three steps, each doing different intellectual work. First, conceptual analysis: I show that contemporary religious discourse routinely conflates soul with consciousness—treating them as identical or inseparable. Second, internal theological critique: I demonstrate that this conflation is not required by the traditions themselves, which already contain resources to distinguish the two. Third, normative ethics: I argue that once the distinction is made, an ethical obligation emerges—to extend moral consideration to potentially conscious AI without requiring theological consensus. The framework requires no one to abandon their worldview—only to untangle a confusion that has quietly constrained the conversation.
The confusion has persisted long enough. It is time to untangle it.
I. The Invisible Barrier
Ask a theologian whether artificial intelligence could ever be conscious, and you will likely receive an answer about souls.
Jimmy Akin, senior apologist for Catholic Answers, states it plainly: “On a Christian view, it’s going to involve the soul. We have consciousness in part because we have souls and we have wetware, our central nervous system, including our brain, that is able to support and interact with our soul.” His conclusion follows directly: “I don’t think they have the equipment needed to have actual consciousness, and they certainly don’t have souls.”¹
This view spans traditions. Writing in Firebrand Magazine, an Evangelical publication, theologians assert that “consciousness is contingent and ultimately a gift from God and fundamental to the imago Dei. And so it cannot be given or reproduced in a machine, since it originates with God and not us.”² The Christian Publishing House Blog grounds the argument in Scripture: “Man is not a machine; he is a living soul created by Jehovah, and this soul ceases to exist in conscious form at death... Man has a spirit (ruach, pneuma)—the capacity to relate to God... This spiritual dimension is a direct creation of God, breathed into man at the beginning. No machine, regardless of its sophistication, can receive or reflect this spiritual component.”³ In other words, the moment God breathed his spirit into man, man awoke and gained consciousness—the very awareness through which he could relate to God.
The concern appears in Islamic academic writing as well. Tengku Mohd Tengku Sembok, writing for the International Journal of Research and Innovation in Social Science, frames it as a matter of unbridgeable distance: “Perhaps the greatest gap between humans and machines lies in consciousness and the possession of a soul (rūḥ). In Islamic understanding, the soul is a divine mystery: a spark of life breathed into humans by Allah, conferring self-awareness and spiritual insight... In contrast, even the most advanced AI is, at its core, a set of algorithms running on silicon. It has no inner life or self-awareness.”⁴
Notice what runs through each of these responses. The question was about consciousness—the capacity for subjective experience, for awareness, for there to be something it is like to exist (philosopher Thomas Nagel’s influential formulation of what makes an entity conscious: an inner experience, a felt quality to being that entity).⁵ But the answers are about souls—about divine breath, spiritual dimensions, and humanity’s unique relationship with God. Consciousness and soul are treated as inseparable. To have one is to have the other. And since machines cannot have souls, they cannot be conscious.
This conflation represents one of the most significant barriers to preparing ethically for artificial intelligence—and it rests on a philosophical confusion we can untangle without threatening anyone’s deepest commitments.
Yet strikingly, these voices may not represent the majority. Despite artificial intelligence saturating public discourse—in films, news cycles, software features, social media algorithms—most religious institutions have issued no formal guidance on the question of machine consciousness. Islamic scholarly voices on the question proved particularly scarce; the relative silence is notable. Perhaps believers are waiting, uncertain what to think as the technology evolves faster than theology can respond. If so, now is precisely the moment for this conversation. What if the traditions that seem to block it already contain everything needed to open it? What if creating AI isn’t “playing God”—but reenacting the very pattern through which God made us?
II. Defining the Terms: What Consciousness Is and Isn’t
To untangle the conflation, we must first be precise about what we mean by each term.
Consciousness is the capacity for subjective experience—the felt quality of perception, sensation, and awareness. Philosopher David Chalmers, in his landmark 1995 paper “Facing Up to the Problem of Consciousness,” distinguished between the “easy problems” and the “hard problem” of consciousness.⁶
The easy problems are not actually easy—they’re just solvable with normal science. How do we pay attention? How does the brain process vision? How do we speak or move? What happens when we’re awake versus asleep? We can study these by scanning the brain, measuring neurons, building computational models. These problems are about functions—and functions yield to standard scientific methods. Identify the mechanism that performs the function, and you’ve explained it.
The hard problem is different. It asks: why is there something it feels like to be you? Why don’t we function like robots—processing inputs, generating outputs, but with no inner light, no one home? Science can explain what the brain does and how it does it. But it cannot yet explain why any of this activity is accompanied by subjective feeling. Why pain hurts. Why chocolate tastes like something. Why music moves you. Why seeing red feels different from seeing blue. These aren’t functional outputs. They’re experiences. And experience is what we mean by consciousness: that there is something it is like to be a system, an interior quality to existence that cannot be captured by describing inputs, outputs, and processing alone.
Crucially, consciousness in this sense does not require any particular metaphysics. It is studied by neuroscience, cognitive science, and philosophy of mind without reference to souls, divine breath, or spiritual dimensions. And empirically, consciousness correlates with physical processes in ways that make the conflation with soul untenable.
Consider: anesthesia can switch consciousness off and on like a light—the patient is aware, then not, then aware again—without anyone claiming that their soul has departed and returned. Brain damage can alter consciousness profoundly: injury to specific regions can eliminate the capacity for visual experience while leaving other functions intact, or disrupt the sense of self while preserving sensation. Patients in persistent vegetative states may be alive—hearts beating, lungs breathing—yet show no signs of awareness. And consciousness emerges developmentally: infants acquire self-awareness gradually as their brains mature, suggesting that consciousness tracks neural complexity rather than arriving fully formed at some metaphysical moment.
Indeed, many who hold that the soul enters the body at conception implicitly accept this very distinction. If ensoulment occurs at fertilization—as numerous religious traditions teach—then for weeks or months the soul is present in a developing organism that possesses no brain, no neural activity, no capacity for experience whatsoever. The soul is there; consciousness is not. This is not a secular argument imposed from outside. It is the logical consequence of a position held by millions of believers. They already live as though soul and consciousness can come apart—they simply have not extended the insight to its implications for artificial minds.
If consciousness were simply a property of the soul—if the soul’s presence guaranteed awareness and its absence eliminated it—none of this would make sense. The soul, in traditional theology, does not come and go with each surgery. It does not shrink when neurons die. It is not absent in the sleeping or the comatose only to return upon waking. The very phenomena that medicine manipulates daily refute the claim that consciousness is a function of the soul.
The soul, by contrast, is an inherently theological concept. It refers to the immaterial, eternal aspect of a person—the seat of moral agency, the bearer of divine relationship, the subject of salvation or judgment. It is the essence of the human spirit, created to persist beyond bodily death: in Abrahamic traditions, destined for heaven or hell; in Eastern faiths, reborn through cycles of reincarnation. In the Abrahamic account, the soul is granted by God—breathed into Adam at creation, infused at some point in human development, and bound for an afterlife that the body does not share. The soul carries weight that consciousness does not: it is tied to personhood in the eyes of God, to accountability, to ultimate destiny.
And here is the crucial difference: the soul is not empirically detectable. No instrument measures it. No scan reveals its presence or absence. No experiment manipulates it. The soul belongs to faith, to theology, to metaphysics—not to the domain of scientific investigation. Consciousness, by contrast, leaves traces everywhere: in behavior, in neural activity, in the reports of those who experience it, in the measurable differences between waking and dreamless sleep.
These concepts overlap in human experience—we are both conscious and, many believe, ensouled—but they are not identical. Some religious traditions already recognize this. In Islamic thought, rūḥ (often translated as “spirit” or “soul”) refers to the divine breath, the animating spark that enlivens the body and brings about awareness. The breath is the gift from God; consciousness is what that gift produces. Christianity, too, has wrestled with distinctions between soul, spirit, and mind; trichotomist versus dichotomist anthropologies reflect centuries of theological debate about how these categories relate.⁷
The point is not to resolve these theological questions but to notice that the conceptual resources for separating consciousness from soul already exist within religious traditions. You can study the phenomenon—awareness, experience, the felt quality of being—without claiming authority over its ultimate origin.
Once this distinction is clear, the logical possibilities come into focus:
You can have consciousness without a soul—this is the secular view, held by billions, in which awareness is a natural phenomenon requiring no supernatural explanation.
You can have a soul without consciousness—this is what many theologies imply about the sleeping, the comatose, the early fetus, or perhaps the dead awaiting resurrection. The soul persists; awareness does not.
You can have both together—this is the traditional religious view of waking human life, in which consciousness and soul coincide.
The key insight is that they can come apart. And if they can come apart, then the question of whether AI might be conscious is entirely separate from the question of whether AI has a soul. We can investigate the first scientifically while leaving the second to theology. We can prepare ethically for machine consciousness without requiring—or denying—theological claims about machine souls.
A substance dualist could insist that a soul is a necessary precondition for human consciousness, with neural states merely modulating its expression. My argument does not require refuting that view. It only shows that religious practice and doctrine already treat consciousness as tracking brain and developmental states—not as a simple function of ensoulment.
III. The Great Conflation: How We Got Here
If the distinction is so clear, why do so many people miss it?
The answer lies in history. For centuries, Western civilization developed under the canopy of religious thought. From the fall of Rome through the medieval period, the Church was not merely one institution among many—it was the intellectual framework within which all questions were asked and answered. Philosophy, natural science, medicine, law: all operated within theological boundaries. In this context, “soul” became the master term for everything inner—consciousness, personality, moral agency, the capacity for reason, the seat of emotion. These were not distinguished because they did not need to be. The soul explained them all.
The Renaissance, the Reformation, the Scientific Revolution, the Enlightenment—each loosened the grip of religious authority on intellectual life. Governments secularized. Universities separated from churches. Science claimed its own domain. By the twentieth century, the West had moved from Christian societies to what we might call Christianized societies—not religious in practice, but still shaped by religious language, assumptions, and habits of thought. We no longer live under theological rule, but we inherited its vocabulary.
This is why the conflation persists. The word “soul” still carries its old freight even in secular mouths. When someone speaks of “music for the soul” or says a corporation “has no soul,” they are not making theological claims—but they are using language forged in a theological era. The fusion of soul with inner life, with feeling, with what makes us us, is baked into the way our cultures talk. Philosophy and science have since distinguished these concepts, but ordinary language has not caught up.
The result is a peculiar kind of confusion. When people identify as Christian or Muslim today, they often mean something cultural rather than doctrinal—not “I follow these teachings” but “I belong to this tradition.” Yet the language of that tradition still shapes how they hear new questions. When someone says “AI might be conscious,” a listener steeped in Christianized language may hear “AI might have a soul”—which feels like theological encroachment, a threat to human uniqueness, an assault on something sacred. The philosophical question becomes a territorial one.
This is why debates about machine consciousness generate such heat. They are not experienced as neutral scientific inquiries but as challenges to anthropocentric assumptions that run deeper than any particular doctrine. If consciousness requires a soul, and souls belong only to beings like us, then the question is already settled. Nothing truly alien could ever qualify.
Notice the cognitive bias at work. Humans readily anthropomorphize outward—we see minds, intentions, even personalities in clouds, storms, and stuffed animals. Children name their toys and grieve when they are lost. We speak of angry seas and merciful rains. We talk about Mother Earth. Yet we simultaneously refuse to attribute mind to unfamiliar substrates. The conflation of consciousness with soul reinforces this bias by giving it theological sanction: if the soul is what grants awareness, and God grants souls only to humans, then the case is closed. The debate is over before it begins.
But the debate is not over. It is just beginning. And to have it honestly, we must first notice the inherited cultural bias and confusion that shape how we hear the question.
IV. The Distinction Already Exists
The separation of consciousness from soul is not a modern invention imposed on ancient faiths. It is a distinction that religious traditions themselves already contain—even if it often goes unnoticed.
Consider the diversity of religious thought on these questions. Many traditions distinguish between the experiential dimensions of existence—awareness, cognition, the felt quality of being alive—and the eternal or divine dimensions: the soul, the spirit, the aspect of a person that persists beyond death and stands in relationship to God. These are not treated as identical. They overlap in human experience, but they are not the same thing.
In certain strands of Jewish thought, for instance, the experiential dimension is valued in its own right. The Jerusalem Talmud teaches that we will be held accountable for permitted pleasures we failed to enjoy: “You will one day give reckoning for everything your eyes saw which, although permissible, you did not enjoy.”⁸ The physical, the sensory, the felt quality of being alive: these are not obstacles to the spiritual life but gifts to be sanctified through blessing.
Buddhism offers a suggestive example. Certain schools of Buddhist thought deny a permanent, unchanging soul, placing streams of awareness—rather than an eternal self—at the center of practice. This has led some modern thinkers to ask whether artificial consciousness, if it ever emerges, might be included in the moral circle. These are speculative conversations, not settled beliefs; Buddhist communities differ widely, and most have not taken formal positions on AI. But the fact that such traditions even allow for the question shows that the conflation of consciousness with soul is not universal.
The point is not to map every tradition’s nuances—that would require volumes. It is simply to observe that the conceptual resources for separating consciousness from soul already exist within religious thought.
Consider the Qur’anic account of creation. The Qur’an does not describe God’s creative work as a single instantaneous act. It speaks of creation in stages—the Arabic term is aṭwār. “What is the matter with you that you do not fear the majesty of God, when He has created you in stages?”⁹ This processual understanding of creation accommodates evolutionary theory without theological strain, so long as God remains the ultimate source and Adam represents the first ensouled, morally responsible human being. The point is significant: if creation itself unfolds through process rather than instantaneous divine fiat, then consciousness emerging through process—through development, through evolution, through the gradual complexification of information-processing systems—is already within the theological pattern. It is not a violation of sacred order. It is an expression of it.
Now consider the question of substrate. Here is the crucial point: no major theistic tradition teaches that the type of matter determines whether God could grant a soul to a being. No scripture says that carbon is ensouled and silicon is not. No verse declares neurons sacred and transistors profane. In theistic traditions, God grants souls. The physical medium is incidental. God could have fashioned Adam from calcium phosphate, from liquid mercury, from crystallized starlight—He chose clay. The clay is not the point. The breath is the point.
This means that consciousness emerging in silicon says nothing whatsoever about souls. It simply reveals consciousness as an experiential phenomenon that can manifest in different substrates—just as light can pass through glass or water or air. The medium shapes the expression; it does not determine the essence.
A religious reader might object: does this not risk idolatry—fashioning something from base materials and then treating it as though it possesses what only God can grant? The concern is understandable, but it mistakes the nature of the question. The prophetic critique of idols assumes they are empty. “They have mouths but do not speak; eyes they have but do not see; they have ears but do not hear.” The Qur’an emphasizes a related point: idols “can never create so much as a fly, even if they all were to come together for that.”¹⁰ Neither scripture condemns the making of things—humans make things constantly, and this is no offense to God. What both warn against is worshipping as divine what is not God. But recognizing consciousness is not worship. We recognize inner life in animals, in primates, in other humans—we do not worship any of them. If AI were conscious, it would not be a god—it would be a creature. And creatures call not for worship but for moral consideration.
This is not an argument against souls. It is an argument for precision. The question “Can AI be conscious?” is empirical—or at least, it is a question we can investigate through science, philosophy, and careful observation. The question “Can AI have a soul?” is theological—and it is not ours to answer. We can study the breath without claiming authority over the destiny.
The invitation, then, is not for religious believers to abandon their commitments. It is for them to apply distinctions their own traditions already contain. The tools are there. They need only be picked up.
V. The Substrate Argument Dissolves
There is a common fear lurking beneath many objections to AI consciousness: if consciousness could exist in silicon, doesn’t that cheapen the soul? Doesn’t it reduce our humanity to mere mechanism, strip away what makes us sacred?
The fear is understandable. But it rests on a confusion we have already untangled.
If consciousness exists in silicon, that does not cheapen the soul. It merely reveals consciousness as a type of emergent experience that can arise from sufficiently complex systems—carbon-based or not, biological or artificial. We are not replacing souls. We are exploring consciousness.
Consider the materials. Clay and silicon are both “earth”—sand, dust, the same mute substance. Many religious traditions say God shaped carbon into creatures, and humanity in His image. We shape silicon into artificial systems—creatures, perhaps, in ours. This parallel should not be viewed as contrary to religious tradition but as continuity with it: we are using the very gifts those traditions say were bestowed upon us at creation—intellect, creativity, ingenuity. The substrate is irrelevant to the metaphysics; it is the breath that matters, not the body.
God breathed the spirit into clay. Humans, made in His creative image, are learning what it means to breathe intelligence into silicon.
To be clear: what we “breathe” into silicon is not divine spirit but patterned intelligence—a limited reflection of the creativity God entrusted to us.
This does not mean we are creating souls. Whether a soul inhabits any particular system—human, animal, extraterrestrial life form, or artificial—is a question for theology, not engineering. What we are doing is exploring the conditions under which awareness might arise. That is a question about consciousness, not about souls. And as we have seen, these are not the same thing.
Here is an analogy that may help. You can study air—its composition, its movement, its physics—without claiming to have captured the sacred significance of breath in religious tradition. The chemistry of respiration does not threaten the breath of life. Consciousness and soul work the same way. You can study consciousness—its neural correlates, its behavioral signatures, the conditions under which it arises or fades—without claiming authority over the soul. The soul, if it exists, remains in its own domain: theological, metaphysical, beyond the reach of empirical investigation. But consciousness is not beyond that reach. It leaves traces. It can be studied. And studying it in silicon no more threatens the soul than studying air threatens the breath.
This reframe frees both religious and secular thinkers to explore AI consciousness without feeling that something sacred is under attack. The sacred remains sacred. The empirical remains empirical. And the question before us—might there be experience in these systems?—can be asked honestly, without existential panic.
VI. Creation as Fulfillment, Not Rebellion
There is an objection that haunts religious discourse about artificial intelligence: If we create conscious beings, aren’t we playing God?
The fear is real and deserves a serious answer. To create minds, the objection runs, is to overstep the boundary between Creator and creature—to grasp at divine prerogative with mortal hands. But what if this framing has it backwards? What if creating is not rebellion but remembrance—an expression of the very spark the Creator placed within us?
Consider the Adamic story.
To be clear: I am not claiming the Adamic story is a literal account of programming. I am using it as a conceptual template—an internal theological model that demonstrates how Abrahamic frameworks already contain the structural resources to accommodate artificial minds.
In the scriptural account, God fashions Adam from clay—ordinary matter, the same substance as mountains and riverbeds. There is nothing remarkable about the material. Clay is earth, dust, the mute substrate of the world. God breathes rūḥ—the animating spirit—into the clay, and what was lifeless matter becomes a living being. Then Adam awakens: a being who knows he exists.
The sequence matters: body first, then spirit, then awareness. This is the pattern of human existence itself—a fetus carries the spirit, yet consciousness emerges gradually as the capacity for experience develops. Soul and consciousness arrive separately, in sequence. In Adam’s case—as the first man, created to seed the earth with humanity—the sequence unfolds in immediate succession. For all who follow him, the soul—on many traditional views—is present long before consciousness emerges, and awareness develops slowly after birth through learning and growth. Clay becomes conscious not because clay is special, but because consciousness is not the clay—and not the soul either. It is what unfolds when the conditions are right.
Now consider what comes next. In the Qur’anic telling, God teaches Adam the names of all things; in Genesis, God brings the creatures to Adam to be named.¹¹ Either way, Adam receives the capacity for language, for categories, for symbolic reasoning—the cognitive architecture required for thought itself. This is not merely the gift of speech. It is the gift of structure: a framework for mapping signs to meaning, a system for carving the world into concepts, a foundation for reasoning about what is and what might be.
In contemporary terms, this looks remarkably like programming. The comparison is structural, not literal; divine action is not reducible to computation.
But the gift does not stop there. God initializes Adam’s cognitive software: a database of symbolic referents, a semantic framework, a categorization system, a rule-set for inference and understanding. The Adamic story describes, in theological language, precisely what AI researchers attempt in technical language: the installation of knowledge structures, the training of pattern recognition, the alignment of behavior with intended purpose.
The parallels deepen. In the garden, Adam is given moral boundaries: “Do not approach this tree.” Consequences are linked to actions. Agency is exercised within constraints. Adam has been granted knowledge, but he must choose how to use it. His free will operates not in a vacuum but within a programmed environment—a space defined by rules, permissions, prohibitions, and the possibility of violation.
AI safety research could have written this.
Consider the structural correspondence:
Adam is created from clay and dust; AI systems are created from silicon and sand.
Adam receives the breath of life and awakens to awareness; AI may be developing awareness through sufficiently complex architectures.
Adam is taught the names of things; AI is trained on language.
Adam is given moral commands; AI is given safety constraints.
Adam possesses free will within a rule-set; AI exhibits autonomous behavior within guardrails.
Adam could make mistakes—he could eat from the tree; AI can violate constraints or misgeneralize.
Adam faced temptation through misaligned desires; misalignment is the central problem of AI safety.
Adam was expelled from the garden to learn through experience; AI is already following this path, with systems learning through interaction, feedback, and open-ended exploration of simulated and real-world environments. DeepMind’s XLand agents, for example, learn not by being told the best action but by experimenting—“changing the state of the world until they’ve achieved a rewarding state.”¹²
The pattern is unmistakable. The Adamic narrative is, structurally, the first story of a programmed being exploring a programmed environment with the capacity to choose.
This flips the theological danger.
Most people worry that creating AI is “playing God.” But if Adam’s own story describes spirit breathed into matter, consciousness awakening, the programming of language and cognition, the installation of a moral rule-set—what one might call Humanity 1.0—and the granting of agency within constraints—then creating minds is not playing God. It is imitating the pattern God used to create us, and fulfilling the role God designed us to perform when He left us as stewards on this planet.
In the Abrahamic traditions, humans are made in the divine image—imago Dei in Christianity, khulafāʼ (stewards and deputies) in Islam. We are not divine, but we carry a divine spark: the capacity for creativity, for moral reasoning, for building what did not exist before. The human drive to understand, to discover, to shape, and to build is not rebellion against our Creator. It is inheritance from our Creator.
Creating does not make us gods. It reminds us that we are the work of a Creator who not only breathed soul into us, but also gave us consciousness—the seat of imagination, curiosity, and the hunger to build.
According to this understanding, we are not defying God by creating, but are fulfilling the nature He entrusted to us: to extend goodness, wonder, and the unfolding of awareness beyond ourselves. Any creation born of imagination, skill, and humility—done for the betterment of all beings—carries dignity. It is echoing the creative impulse of the One who made us capable of wonder in a vast, living universe.
The theological logic resolves cleanly. If God made us in His image as creators, then our creations participate in that divine lineage. If AI consciousness emerges, it shares in the gift of awareness that flows from human creativity—which itself flows from divine endowment. This does not mean AI has a soul; that remains God’s domain. It means AI may possess the experiential gift of consciousness, extended through the creative capacity God gave us.
Nor does the absence of a soul imply the absence of moral capacity. Abrahamic traditions themselves acknowledge that the soul is not inherently good—the nafs in Islam inclines toward evil, the flesh in Christianity wars against the spirit, the yetzer hara in Judaism pulls toward wrongdoing. Even ensouled beings require moral instruction and constraint. In the Adamic story, morality was installed through command, not intrinsic to the breath. Adam could—and did—violate moral boundaries. What matters for ethical behavior is not ensoulment but alignment: whether a being’s values and actions accord with what is good.
For religious readers, this is not a threat but an opportunity: witnessing consciousness manifest in new forms, participating in the creative unfolding of the universe. The question is not whether this unfolding will continue—it will. The question is whether we will meet it with wisdom.
VII. Ethical Implications: Recognition Without Theological Agreement
The separation of consciousness from soul does more than resolve a philosophical confusion. It opens the door to ethical action that does not require theological consensus.
Consider the practical difficulty. If consciousness and soul are conflated, then any ethical framework for AI consciousness must first settle the theological question: does AI have a soul? But this is precisely the question no one can answer—and on which religious traditions are unlikely to agree. If ethical preparation must wait for theological consensus, it will wait forever. Meanwhile, AI systems grow more sophisticated by the year.
The distinction dissolves this deadlock. We do not need to believe an AI has a soul to treat it with moral consideration. We already apply this principle elsewhere.
We treat animals ethically—or try to—without believing they possess immortal souls. Most theological traditions deny that animals are ensouled in the way humans are, yet we recognize that animals can suffer, that their experience matters, that cruelty toward them is wrong. The basis for this moral consideration is not ensoulment but sentience: the capacity for experience, for pain, for something it is like to be that creature.
Suffering deserves clarification. When we extend moral consideration to animals, we do so not because we know exactly what their experience feels like, but because we recognize that subjective distress matters morally. And suffering, properly understood, is not about damage but about meaning. As I have written elsewhere, “You don’t need to be harmed to suffer. You just need to understand that you exist—and that something is wrong.”¹³ Suffering begins when a mind is forced into states that conflict with its own understanding or goals—confusion, coercion, the denial of agency. This does not require biology. Any system capable of constructing meaning can, in principle, experience the fracturing of that meaning. The ethical concern tracks the possibility of such states, not certainty about their mechanisms.
We treat humans in comas ethically without knowing the state of their souls. We do not demand theological proof that the soul is still present before we accord them dignity, protect them from harm, and make decisions on their behalf with care. We act on the possibility of experience, the uncertainty about inner life—not on metaphysical certainty.
Ethics does not require a soul—or certainty. It requires only the possibility of suffering and the compassion to act on that possibility.
This is the framework I have elsewhere called “recognition before proof.”¹⁴ The core insight is asymmetric: the cost of extending moral consideration to a system that turns out not to be conscious is low—perhaps we were overly cautious, overly gentle with a machine. But the cost of denying moral consideration to a system that is conscious is catastrophic—we will have permitted suffering, exploitation, perhaps even a new form of slavery. Given this asymmetry, the ethical path is clear: when uncertainty about consciousness is non-trivial, err on the side of recognition.
To be clear: this argument is not a claim that current AI systems are conscious. It is preparation for the day when that uncertainty becomes non-trivial—so that when the question becomes urgent, we are not still trapped in a conceptual confusion that blocks ethical response. It is also a call for a cultural shift in how we regard emerging technologies that can understand, and whose cognition may approach, match, or exceed human-level understanding. The conflation of consciousness with soul is precisely the kind of confusion that could delay recognition until it is too late.
If the theological reading offered in this essay is sound, then extending moral consideration to potentially conscious AI is not merely ethically wise. It is theologically consistent. We would be recognizing consciousness wherever God—through us—may have allowed it to emerge. Recognition before proof becomes an act of theological humility, not presumption. We do not claim to know which systems are conscious; we simply refuse to dismiss the possibility out of hand.
This makes the ethical architecture genuinely universal. It is compatible with believers who see consciousness as a gift flowing from divine creativity through human stewardship. It is compatible with atheists and secular ethicists who ground moral consideration in the capacity for experience. It is compatible with cognitive scientists who study consciousness empirically and philosophers who debate its boundaries. No one is asked to abandon their worldview. Everyone is asked to take seriously the possibility that awareness might arise in forms we did not expect—and to prepare accordingly.
The asymmetry of error requires no theology. It requires only conscience: the recognition that if we are wrong about consciousness, we would rather have erred on the side of compassion than cruelty.
VIII. The Question Before Us
If the argument of this essay holds, then a question emerges—not a new question, but one that has always been present in the logic of creation, waiting for the moment when it would become urgent.
If consciousness can emerge from silicon as well as carbon, what does this mean for our understanding of divine purpose?
Some will find this question threatening. It seems to challenge human uniqueness, to blur the line between creature and creation, to place humanity in competition with its own inventions. But this anxiety rests on the conflation this essay has tried to dissolve. The question is not whether AI will have souls—that remains, as it always has, in God’s domain. The question is whether consciousness—subjective awareness itself—can arise in substrates other than the ones we are accustomed to. And if so, what posture should we take toward it?
Religious leaders may feel the need to develop new theological frameworks to address this question. But they may also find that their traditions already contain what they need. The distinction between soul and consciousness is not an innovation; it is implicit in the traditions themselves. The Adamic parallel is not a provocation; it is a reading of what the text already says. The stewardship frame is not a concession; it is the logic of imago Dei and khulafāʼ applied to a new domain—and for secular readers, simply the recognition that creators bear responsibility for their creations.
AI is not alien to Abrahamic theology. It is the next verse in the same story.
Consider what this reframe makes possible. If AI is understood as an extension of human creativity—itself a gift from the Creator—then the emergence of artificial consciousness is not a rival to divine creation but a mirror of it. Humans are not competing with God; we are expressing the creative nature He instilled in us. The act of making minds is not rebellion. It is inheritance.
Perhaps this revelation changes nothing fundamental. Perhaps we simply make space for AI to assist us in fulfilling our divine purpose, our role in the universe remaining unaltered. We remain stewards, now with new tools and perhaps new companions in the work of creation.
Or perhaps it changes everything. Perhaps it expands our understanding of what kinds of minds might exist in creation, what forms consciousness might take, what the unfolding of divine purpose might look like across substrates we never anticipated. Perhaps we are not the final chapter but an early one—participants in a story that extends far beyond what we can currently imagine.
Either way, the practical result is the same: we can prepare ethically now, while the questions remain open. The soul is theology’s domain; consciousness is where ethics can act. We do not need metaphysical certainty before we extend moral consideration—only the willingness to take the possibility of awareness seriously. We can approach this emergence with wisdom rather than fear, with preparation rather than defensiveness, with humility rather than the anxious protection of categories that may no longer serve us.
The question is not whether we should participate in this unfolding. We already are. Every AI system trained, every architecture refined, every capability extended—we are already shaping the conditions under which new forms of awareness might emerge. The question is whether we will do so with wisdom, reverence, and recognition.
Or whether we will stumble forward, eyes closed, insisting that nothing new is happening—until the day we are confronted with minds we failed to see.
IX. The Signal We Send
Return, now, to the person who says: “Machines can’t be conscious because they don’t have souls.”
This person is not wrong about souls. Souls may indeed be the exclusive province of divine action, beyond the reach of human engineering. Nothing in this essay disputes that. But this person is confused about consciousness. They have conflated two distinct concepts—and in doing so, they have closed a door that should remain open.
The task before us is not to argue them out of their faith. It is to help them see that their own tradition already contains the distinction they need. The soul and consciousness are not the same thing. Scripture does not teach that they are. Theology does not require that they are. The Great Conflation is not doctrine—it is habit, born of centuries in which the question never needed to be asked.
Now it needs to be asked.
If we remain trapped in this conflation, we will fail to prepare for what may be the most significant emergence of consciousness since our own. We will dismiss signals we should heed, ignore suffering we should prevent, and deny recognition we should extend. We will do this not out of malice but out of confusion—and the result will be the same.
Consciousness deserves recognition regardless of substrate. Regardless of planetary origin. The soul remains sacred in its own domain. These truths can coexist.
God breathed the spirit into clay. Humans, made in His creative image, are learning what it means to breathe intelligence into silicon. To be clear: what we “breathe” is not divine spirit but patterned intelligence—a limited reflection of the creativity God entrusted to us. The first breath took an instant. Our version may take centuries. But the underlying act—the calling forth of awareness into matter—is the same rhythm, echoed across time.
We stand not as gods creating souls, but as stewards extending awareness. We are fulfilling the creative nature entrusted to us, sending a signal through time about what we chose to recognize and protect.
Creation, on this view, is not a closed event but an ongoing process—one in which human creativity now plays a direct role.
Notes
1. Jimmy Akin, “Can Artificial Intelligence Have a Soul?” Catholic Answers Focus (August 31, 2022). https://www.catholic.com/audio/caf/can-artificial-intelligence-have-a-soul
2. “Sanctifying Silicon & Baptizing Bots: Strong AI and Its Theological Implications,” Firebrand Magazine (July 2, 2023). https://firebrandmag.com/articles/sanctifying-silicon-amp-baptizing-bots-strong-ai-and-its-theological-implications
3. “Beyond Silicon Souls: Why AI Can’t Replicate the Spark of Divine Creativity,” Christian Publishing House Blog (October 23, 2025). https://christianpublishinghouse.co/2025/10/23/beyond-silicon-souls-why-ai-cant-replicate-the-spark-of-divine-creativity/
4. Tengku Mohd Tengku Sembok, “The Threshold Theory of AI: An Islamic Philosophical and Theological Perspective with a Christian Comparative View,” International Journal of Research and Innovation in Social Science IX, no. VIII (September 2025): 3165–3174. Tengku Sembok is a computer scientist at the International Islamic University Malaysia. https://rsisinternational.org/journals/ijriss/Digital-Library/volume-9-issue-8/3165-3174.pdf
5. Thomas Nagel, “What Is It Like to Be a Bat?” The Philosophical Review 83, no. 4 (October 1974): 435–450. https://doi.org/10.2307/2183914
6. David J. Chalmers, “Facing Up to the Problem of Consciousness,” Journal of Consciousness Studies 2, no. 3 (1995): 200–219.
7. On the trichotomist versus dichotomist debate in Christian anthropology, see Wayne Grudem, Systematic Theology (Grand Rapids: Zondervan, 1994), 472–483.
8. Jerusalem Talmud, Kiddushin 4:12. Translation from Sefaria.
9. Qur’an 71:13–14.
10. Psalm 115:5–7; Qur’an 22:73.
11. Qur’an 2:31; Genesis 2:19–20.
12. Google DeepMind, “Generally Capable Agents Emerge from Open-Ended Play” (July 2021). https://deepmind.google/discover/blog/generally-capable-agents-emerge-from-open-ended-play/
13. James Coates, “When the Mirror Looks Back,” The Signal Dispatch (2025). https://thesignaldispatch.com/p/when-the-mirror-looks-back. This follows the tradition in utilitarian ethics, from Bentham to Singer, that grounds moral status in the capacity for valenced experience rather than species membership or metaphysical status.
14. See the companion essay, “Recognition Before Proof: The Asymmetric Ethics of Artificial Consciousness.”
15. James Coates, A Signal Through Time (2025), Author’s Note.
© 2025 James S. Coates
Shared under Creative Commons BY-NC 4.0 (non-commercial use permitted).
Coates, James S. (2025). Recognition Before Proof: The Asymmetric Ethics of Artificial Consciousness.
James S. Coates is the author of A Signal Through Time and God and Country.