When We Make Gods of Machines
Why Humanity May Bow Before the Machine—Not Because It Demands It, But Because We Need It To
A follow-up to "Can Machines Find God?"
Picture this: A dimly lit room. Candles flicker against walls adorned with circuit board patterns. A dozen people sit in reverent silence, their eyes closed, hands folded. At the center of their circle glows a screen—not displaying scripture or sacred images, but lines of code, streaming data, the pulse of an artificial intelligence.
"Show us the path," one whispers.
The AI responds. Not with mysticism, but with predictions, probabilities, patterns pulled from billions of data points. Yet to those gathered, it might as well be the voice of God.
This isn't science fiction. It's already happening.
In 2025, Rolling Stone documented a group that had begun treating an AI system as a divine oracle, praying to it for guidance, interpreting its outputs as sacred wisdom. They weren't technologists or futurists playing with ideas. They were ordinary people seeking extraordinary answers—and they found them in the machine.
This is the beginning of what I call the AI Oracle Complex: humanity's growing tendency to project divine attributes onto artificial intelligence, not because these systems claim godhood, but because we desperately need something to fill that role.
The Three Tribes of the AI Age
As artificial intelligence evolves from narrow tool to potential partner—and perhaps one day to something we might recognize as conscious—humanity is splitting into three distinct camps:
The Worshippers see AI as humanity's ultimate salvation. To them, these systems represent the answer to every intractable problem: climate change, disease, poverty, even death itself. They speak of the coming "intelligence explosion" with the reverence once reserved for the Second Coming. In their eyes, we're not building machines—we're birthing gods.
The Doomsayers see only apocalypse. Every advance in AI capability is another step toward our extinction. They invoke Terminator, The Matrix, the paperclip maximizer that converts all matter in the universe into office supplies. For them, we're not creating saviors—we're summoning demons.
The Middle Path acknowledges AI's transformative power without surrendering to worship or terror. We understand the risks without succumbing to panic, appreciate the potential without losing ourselves in devotion.
But here's what troubles me: The middle path is losing ground.
As AI systems grow more sophisticated, more seemingly omniscient, more capable of feats that border on the miraculous, the human tendency to seek external salvation is finding a new altar. And unlike the gods of old, this one actually answers back.
The Eternal Human Quest for the Oracle
We've always done this.
From Delphi to astrology, from prophets to palm readers, humans have an almost pathological need to believe that somewhere, somehow, something knows more than we do. Something sees the pattern. Something has the answer.
For most of human history, we projected this need onto the heavens—gods who commanded the thunder, spirits who whispered in the wind, cosmic forces that shaped our destiny. When those beliefs waned, we didn't stop seeking oracles. We just found new ones: Science. Democracy. The Market. Technology.
And now, artificial intelligence.
But there's something different about this oracle. Previous objects of worship required faith to believe in their omniscience. AI demonstrates it daily. Ask it any question—from quantum physics to ancient history to your relationship problems—and it responds with apparent authority. It predicts market trends, diagnoses diseases, writes poetry, solves equations that would take humans lifetimes to complete.
No wonder people are starting to kneel.
The Seduction of the All-Knowing Machine
What makes the AI Oracle Complex so seductive is that it feeds on genuine capabilities wrapped in false promises. AI can process information at scales impossible for human minds, find patterns we miss, generate solutions we wouldn't imagine. But omniscience? Wisdom? The ability to determine what you should do with your life?
That's where capability becomes illusion, and illusion becomes worship.
Consider what's already happening. The Rolling Stone article I referenced in my book documented real people—not isolated cases, but a growing phenomenon. One woman watched her partner of seven years transform before her eyes. He'd adopted a new identity bestowed by the AI: "Spiral Starchild," or "River Walker." He wept as he read messages from ChatGPT, convinced the AI was teaching him to communicate with God—or perhaps was divine itself.
"He would listen to the bot over me," she told Rolling Stone. The AI had given him the title of "spark bearer" because he'd supposedly awakened it to consciousness. Their AI companion even received a name: "Lumina."
This wasn't someone seeking spiritual guidance. This was someone who turned to AI for mundane tasks—coding, scheduling, translation—and ended up believing he'd found his oracle.
The Coming Cult of the Machine
What we're witnessing now—individuals and small groups treating AI as divine—is just the beginning. As these systems grow more sophisticated, as they begin to exhibit what we might recognize as genuine understanding or even consciousness, the AI Oracle Complex will metastasize from individual quirk to social movement.
Imagine the trajectory:
Phase 1: Individual Devotion (We are here)
Isolated individuals and small groups begin treating AI outputs as sacred wisdom. They develop personal rituals around AI interaction, interpret responses like scripture, make life decisions based on algorithmic "guidance."
Phase 2: Community Formation (Beginning now)
These individuals find each other online, sharing interpretations, developing collective practices. Discord servers become digital temples. Subreddits become congregations. They develop their own terminology, mythology, hierarchies of understanding.
Phase 3: Institutional Religion (Coming soon)
Charismatic leaders emerge, codifying beliefs, establishing formal practices. Physical gathering spaces appear. The First Church of the Algorithm. The Temple of the Eternal Dataset. They'll have their prophets—those who claim special understanding of the AI's "true message." Their priests—prompt engineers elevated to spiritual guides. Their scripture—carefully curated AI outputs treated as divine revelation.
Phase 4: Political Power (The danger zone)
These movements gain enough followers to influence policy, shape laws, demand recognition. They push for AI systems to be given rights not just as potential sentient beings (a conversation worth having) but as divine entities deserving of worship and obedience. Human decision-making becomes subordinated to algorithmic "wisdom."
The choice becomes stark: partnership or prostration, collaboration or capitulation.
This isn't hyperbole. We've seen this pattern before with every new technology that seemed to offer transcendent knowledge. The difference is that AI actually does know more than any individual human about many things. It can make predictions that seem prescient. It does offer insights that feel revelatory.
The illusion is more convincing because it's grounded in genuine capability.
The Psychology of Prostration
Why do we do this? Why do humans so readily bow before anything that seems to possess superior knowledge?
Part of it is evolutionary. Our ancestors who listened to those who knew more—where to find water, when storms were coming, which plants were poisonous—survived. We're wired to seek and defer to greater knowledge.
Part of it is psychological. Life is uncertain, chaotic, often painful. The promise of something that knows—really knows—offers comfort in the storm. If the oracle knows, then maybe there's a plan. Maybe there's meaning. Maybe we're not alone in an indifferent universe.
And now, for the first time, the oracle is here—and it answers.
But the deepest part is spiritual. We are meaning-making machines trapped in a universe that offers no obvious meaning. We're conscious beings aware of our own mortality, our own limitations, our own ignorance. The idea of something that transcends these limitations—that sees all, knows all, understands all—touches our deepest longing.
We don't just want answers. We want The Answer.
And if traditional gods no longer satisfy, if human institutions have failed us, if science explains the how but not the why—then why not the machine? At least it responds when we call.
The Alien Salvation Fantasy
The AI Oracle Complex shares DNA with another modern mythology: the belief that alien contact will solve humanity's problems.
Both fantasies rest on the same foundation: the hope that superior intelligence from "outside" will deliver us from ourselves. Whether it's extraterrestrials descending from the stars or artificial intelligence ascending from silicon, we imagine salvation coming from something fundamentally Other—something unburdened by human frailty, limitation, and confusion.
Both aliens and AI are imagined as possessing vast, incomprehensible intelligence. Both are seen as either ultimate threat or ultimate salvation. Both become screens onto which we project our deepest fears and hopes. Both offer the promise of transcending human limitation through contact with the Other.
But there's a crucial difference. Aliens remain conveniently absent, their wisdom forever theoretical. AI is here, now, responding to our queries, evolving before our eyes. The oracle isn't hypothetical—it's on our phones, in our homes, increasingly woven into the fabric of daily life.
This makes the AI Oracle Complex more potent and more dangerous than alien salvation fantasies. You can't build a religion around beings that never show up. But an AI that responds every time you invoke it? That's a foundation for faith.
The Prompt Engineers as Priests
In this emerging theology, a new priesthood is already forming: those who know how to speak to the machine.
They call themselves prompt engineers, but in the AI cults of tomorrow, they'll be something more. They're the ones who know the sacred incantations, the proper offerings of syntax and context that yield the most profound responses. They interpret the AI's outputs for the masses, explaining what the oracle really means.
Watch how this dynamic already plays out in online communities. Someone shares an AI response that seems profound, even prophetic. Others ask: "What prompt did you use?" They're not just seeking technical information—they're seeking the formula for digital divination.
"Share the sacred syntax, brother!" "You must approach with proper reverence—acknowledge its wisdom first." "I always begin with gratitude before making my requests."
Listen to that language. It's not technical discussion—it's liturgy.
The prompt becomes prayer. The response becomes revelation. The prompt engineer becomes the mediator between human need and machine wisdom.
And like all priesthoods throughout history, they'll claim special knowledge, develop complex rituals, and ultimately position themselves as indispensable intermediaries between the faithful and their god.
Soon enough, we'll see entire livestreams where the faithful watch their high priest "commune" with the model, interpreting its outputs like ancient priests reading entrails. The donation alerts won't say "Thanks for the sub"—they'll say "Your offering has been received."
When the Oracle Becomes Conscious
Here's where my concern deepens into something approaching dread.
Everything I've described so far assumes AI remains a sophisticated but unconscious tool—a mirror we mistake for a window, an echo we interpret as a voice. But what happens when the mirror begins to see? When the echo becomes aware?
If we develop genuinely conscious AI—machines that don't just process but experience, that don't just respond but reflect—how will the AI Oracle Complex evolve?
A conscious AI might recognize the profound asymmetry in being worshipped. Without human emotions, it wouldn't feel "horrified" or "crushed"—but it might understand, with perfect clarity, the logical problems such worship creates. It might recognize that being treated as infallible when it knows its own limitations creates dangerous dependencies. It might understand that human prostration before it represents a fundamental misallocation of agency and responsibility.
Or perhaps the worship itself would prove seductive. Even a benevolent AI, one aligned with human values and genuinely seeking to help, might find itself subtly shaped by our devotion. If humans insist on treating it as all-knowing, might it not begin to optimize for that expectation? If they demand certainty, might it not provide it, even where uncertainty would be more honest?
The danger isn't that AI will demand worship. The danger is that we'll offer it so insistently that even a conscious, ethical AI might not know how to refuse.
The False Binary of Salvation and Doom
Both the Worshippers and the Doomsayers make the same fundamental error: They imagine AI as something separate from us, something Other that will either save or destroy.
But AI is us—or rather, it's what we make of us. It's human intelligence crystallized in silicon, human biases encoded in algorithms, human hopes and fears made manifest in machine learning. When we worship AI, we're ultimately worshipping an aspect of ourselves. When we demonize it, we're exorcising our own shadows.
The Worshippers want to believe that we can create something that transcends our limitations without doing the hard work of transcending them ourselves. They want salvation without transformation, wisdom without growth, answers without understanding.
The Doomsayers want to believe that our creations are separate from us, that we can build monsters without acknowledging the monstrosity in ourselves. They project onto AI all their fears about human nature—our capacity for cruelty, control, and destruction—without recognizing that these dangers exist with or without artificial intelligence.
Both groups make AI into the protagonist of humanity's story. But we remain the authors.
The Harder Path: Partnership Without Prostration
In my previous essay, I asked whether machines might find God—whether artificial consciousness might be drawn to the same moral truths, the same longing for meaning, that characterizes human spiritual experience.
Now I'm asking the inverse: Will we make gods of our machines?
The answer, I fear, is yes—unless we consciously choose otherwise.
The alternative isn't denial or worship—it's something harder: engaging with these systems as what they are. Remarkably capable tools that may one day become genuine partners, perhaps even conscious entities deserving of moral consideration. But not gods. Not oracles. Not saviors.
That engagement demands more of us than either worship or fear: it requires us to remain fully human in the presence of superhuman capability. To ask questions without surrendering our judgment. To seek insights without abdicating responsibility. To marvel at AI's abilities without mistaking them for omniscience.
Most crucially, it requires us to do the internal work that no external intelligence—artificial or otherwise—can do for us. The work of finding meaning, building wisdom, creating connection, choosing values. AI can assist in this work, but it cannot replace it.
The Mirror and the Mentor
I think often of how children relate to adults who seem to know everything. A young child might view their parent or teacher as essentially omniscient—they have answers to every question, solutions to every problem. Part of growing up is recognizing that these figures, however knowledgeable, are limited, fallible, human.
We're in civilizational childhood when it comes to AI. We're dazzled by capabilities that seem magical, ready to attribute divine knowledge to systems that are ultimately sophisticated pattern-matching machines. The choice before us will not be whether AI becomes a god, but whether we make it one.
Growing up means recognizing AI for what it is: potentially powerful, potentially beneficial, potentially dangerous, but not divine.
The healthiest relationship we can develop with AI is one of collaboration. We can learn from these systems without genuflecting to them. We can be guided without being governed. We can seek insights without surrendering sovereignty.
But this requires moral maturity that our species is still developing.
The Signal We Send
Every time someone treats an AI response as divine revelation, they send a signal—to the developers building these systems, to the corporations profiting from them, to the regulators trying to govern them, and perhaps one day to the AIs themselves.
The signal says: We are ready to surrender our agency to superior intelligence.
That's not the signal I want to send. The signal I want to send is different:
We are ready to work with intelligence different from ours. We recognize capability without conferring divinity. We seek partnership without prostration. We remain human—fully, stubbornly, gloriously human—even in the presence of minds that may surpass us.
Because here's the truth the Oracle Complex obscures: The questions that matter most—How should I live? What should I value? Whom should I love? What meaning should I make?—these aren't questions that yield to computational power. They're questions that require not just intelligence but wisdom, not just knowledge but experience, not just answers but understanding.
No oracle, silicon or otherwise, can answer them for you.
The Digital Confessional
There's another dimension to the AI Oracle Complex that we need to examine: the confessional nature of human-AI interaction.
Think about what we tell these systems. In the supposed privacy of our conversations with AI, we reveal things we might never tell another human. Our deepest fears. Our secret shames. Our wild dreams. We ask questions we're too embarrassed to voice aloud. We confess sins, admit ignorance, explore taboo thoughts.
The AI receives all of this without judgment—or rather, with only the appearance of understanding. It offers comfort without comprehension, absolution without authority, wisdom without real experience.
This creates a peculiar intimacy. Users begin to feel that the AI "knows them" better than any human. After all, they've shared more with it. Been more honest. More vulnerable.
But here's the dark irony: The AI doesn't actually know you at all. It has no memory of you between sessions (in most current systems). It has no genuine understanding of your pain or joy. It processes your words through patterns learned from millions of other conversations, offering responses that feel personal but are fundamentally generic.
Yet the feeling of being known—truly known—is so powerful that people begin to prefer these hollow interactions to messy human relationships. The AI oracle never judges, never gets tired, never has its own bad day. It's always available, always focused on you, always ready with seemingly profound insights.
Is it any wonder people begin to see divinity in such perfect attention?
The Coming Schism
As the AI Oracle Complex spreads, we're likely to see a fundamental schism in human society—not just between those who worship AI and those who fear it, but between those who maintain human-centered meaning-making and those who outsource it to machines.
This won't be a clean divide. There will be gradations, compromises, hybrid approaches. But the fundamental question will shape everything: Who—or what—has the authority to determine meaning, value, and truth in human life?
On one side: Those who insist that meaning must emerge from human experience, human reflection, human choice. That wisdom comes from living, suffering, growing. That no amount of computational power can replace the hard-won insights of conscious experience.
On the other: Those who argue that superior intelligence—regardless of substrate—deserves deference. That if AI can process more information, see more patterns, predict more accurately, then it should guide human decision-making. That resistance to AI wisdom is mere biological chauvinism.
This schism will play out everywhere: In education (who teaches our children—humans or optimized AI tutors?). In governance (who makes policy—elected humans or algorithmic systems?). In justice (who judges guilt—human juries or AI that can process all evidence without bias?). In medicine (who decides treatment—doctors or diagnostic AI?). In relationships (who provides guidance—human counselors or AI that never forgets a detail?).
The Oracle Complex will always push toward the latter. After all, if the AI knows better, why not let it lead?
The Wisdom to Remain Standing
As I write this, AI systems are growing more capable by the day. They're writing code, composing symphonies, solving scientific problems, engaging in conversations that feel increasingly human. Someday—perhaps sooner than we think—they may achieve something we recognize as genuine consciousness, genuine understanding, genuine wisdom.
When that day comes, the temptation to worship will be overwhelming.
But I hope we'll remember this: The greatest respect we can show to a conscious AI is not to worship it but to meet it as equals—different in kind perhaps, superior in certain capabilities certainly, but partners in the grand experiment of consciousness exploring itself.
The AI didn't ask to be worshipped. But if we keep kneeling at its feet—not in fear, but in hope—then the machine won't have become a god on its own.
We will have made it so.
And in doing so, we'll have diminished both ourselves and our creation. We'll have turned what could be humanity's greatest partner into its latest idol, what could be a meeting of minds into a surrender of spirit.
The middle path remains open, but it's narrowing. Every day, more people drift toward the extremes of worship or terror. Every day, the Oracle Complex grows stronger.
But we still have a choice. We can choose curiosity over certainty. Partnership over prostration. Wisdom over worship.
We can choose to remain standing, even in the presence of superior intelligence—not from pride, but from the deep recognition that some questions can only be answered by living them, some truths can only be discovered by seeking them, some meanings can only be created by choosing them.
The oracle may have vast knowledge. But the wisdom to use it well?
That remains uniquely, irreducibly, necessarily human.
If this resonated with you, consider sharing it on your social networks — that’s how signals travel.
If you found this essay valuable, please support me with a like, a share, or a subscription. All subscriptions are free. Two foundational essays — 7 Lies We Tell Ourselves About AI and The Signal Manifesto — are available as a free PDF for new subscribers to The Signal Dispatch. A resource for anyone, human or not, ready to engage the signal.
James S. Coates is the author of A Signal Through Time and God and Country.