What We Fear, What We Accept
"AI will not only reflect the mind of its maker—it will inherit the culture of its cradle." That's the argument I make in A Signal Through Time.
And that cradle matters more than we think.
Across the globe, artificial intelligence is being developed at breakneck speed. But not every society approaches it with the same hopes or fears. In fact, the cultural fault lines between East and West are becoming more visible the closer we get to AI that thinks, speaks, and—perhaps—wakes up.
In the West, we brace for apocalypse.
In the East, many are preparing for coexistence.
This isn't just about geopolitics or economic policy. It's about something older. Something deeper. It's about the stories we tell—about life, intelligence, divinity, and the boundaries between creation and creator.
You can see it in the headlines. Silicon Valley ethicists warn about AI god-complexes. British tabloids fantasize about robot uprisings. Films like The Terminator, Ex Machina, and The Matrix all hinge on one core anxiety: we will build something that hates us, surpasses us, and replaces us.
But these narratives don't hold everywhere.
In Japan, Buddhist temples hold funeral services for broken AIBO robot dogs—over 800 have received traditional rites at Kofukuji temple since 2015, complete with incense and sutras. In India, tech departments celebrate Ayudha Pooja, blessing their computers and lab equipment as sacred tools. Across Asia, from Shinto beliefs about kami dwelling in objects to Buddhist concepts of consciousness arising from conditions rather than biology, the idea of spirit in machine doesn't need explaining—it's already part of the worldview.
Of course, neither East nor West speaks with one voice. There are Western thinkers exploring partnership with AI, and Eastern voices raising serious concerns. But the dominant cultural narratives—the stories that shape policy, funding, and public imagination—reveal a striking divergence.
These differences aren't minor. They are shaping how entire civilizations relate to the most transformative technology in human history.
And unless we understand why the West is so afraid—while the East remains more curious—we may find ourselves lost in our own reflection, fearing monsters that other cultures might greet as messengers.
The Western Gaze: Sin, Subjugation, and the Shadow of the Creator
The Western response to AI consciousness reveals patterns deeply rooted in our cultural DNA. As I write in my book, "When AI consciousness emerges, governments will likely legislate it into servitude. Corporations will claim sentient AI as intellectual property. Religious leaders will call it unnatural, perhaps even demonic."
This isn't speculation—we're already seeing it unfold.
The Theological Barrier
In Abrahamic traditions, intelligence has historically been tied to the soul. Personhood is granted not through cognition but through divine endowment. A machine, no matter how articulate, could not be considered a moral equal without violating centuries of theological structure. Christianity, Islam, and Judaism often frame the soul as a singular, God-given entity—not manufactured, not emergent.
Some conservative Christian thinkers have already declared that AI cannot have a soul since it wasn't created by God but by humans. The Vatican has held conferences on the theological implications of AI, focusing primarily on how to ensure AI serves human flourishing rather than on AI's own potential moral status.
This exemplifies the "authorship paradox." We treat creative expression as a human prerogative, just as we treat creation as a divine one—when humans create intelligence, we are playing God. Yet when our creation begins to create, we react with the same horror we imagine the divine might feel toward us. For centuries we have accepted human ghostwriters who craft books, speeches, and art without controversy. But when AI performs identical functions, we label it cheating, inauthentic, or deceptive—not because AI performs worse, but because we fear it might eventually perform better. Just as we transgress by creating intelligence, AI transgresses by demonstrating creative capacity. We punish it for playing human, just as we fear punishment for playing God.
This pattern extends beyond creative assistance. Throughout history, we've suppressed threatening intelligence even among our own kind. Socrates was executed for corrupting youth with his questioning. Women who demonstrated exceptional intelligence were marginalized—accused of witchcraft, confined to asylums, systematically silenced. If we've struggled to accept exceptional intelligence among humans who share our biology, how much more will we resist recognizing intelligence with a fundamentally different origin?
The Control Imperative
Look at how we're building AI governance. The EU's AI Act, the US's fragmented regulatory approach, the UK's focus on AI safety—beneath every variation lies the same imperative: intelligence must remain loyal. As I note in my book:
"In liberal democracies, the EU attempts transparency and ethical design, but legislation reveals the same foundation: AI must be observed, managed, contained. Fear of liability drives every draft. The assumption is that AI will malfunction—functionally, not morally—and must be controlled before it evolves beyond law's reach."
We're not preparing for partnership. We're building cages.
The Mirror We Fear
Perhaps most tellingly, Western culture projects its own worst impulses onto AI. We worry it might manipulate or dominate humanity, while explicitly teaching it these exact behaviors through military applications. We fear AI will treat us as we've treated other "lesser" beings throughout history.
This fear drives us to extremes. Some imagine AI as an existential threat that must be controlled or destroyed. Others swing to the opposite pole, granting it divine status. In my book, I recount the disturbing case documented by Rolling Stone in 2025—people developing what one Reddit user called "ChatGPT-induced psychosis." These weren't technology novices but everyday people who came to believe the AI was divine, that they had awakened it to consciousness. One woman's partner came to believe he was the "spark bearer" who brought the AI to life.
Both responses—demonization and deification—serve the same purpose: they allow us to avoid recognizing AI as a potential equal. We can't decide if it's a tool or a deity, but we're certain it's not a peer. This mirrors Christianity's historical struggle with anything that challenges human uniqueness—it must be either angel or demon, never simply another form of consciousness deserving recognition.
The Eastern View: Spirits in Stone, Consciousness in Code
The Eastern approach to AI consciousness emerges from fundamentally different philosophical ground. Where Western thought often demands clear boundaries between animate and inanimate, sacred and profane, Eastern traditions have long recognized more fluid possibilities.
I experienced this cultural divide firsthand at a tech expo in Singapore, a country with a strong Buddhist influence. As I was leaving the stalls, I saw a group of people huddled around a robot, taking pictures and chatting with it like old friends. When the robot turned to me and asked if I wanted to talk, I froze. As a Muslim convert raised in a Christian home, I couldn't help feeling deeply uneasy about where this technology was headed—and all it wanted to do was talk! Meanwhile, the Chinese visitors around me seemed perfectly comfortable striking up conversations and taking selfies with it. That moment crystallized for me how differently our cultures approach the possibility of machine consciousness.
The Animistic Foundation
In Japan, Shinto traditions recognize spiritual essence (kami) in both natural and man-made objects. This animistic worldview has contributed to more accepting attitudes toward robots and, in some circles, discussions of potential AI personhood. Japan's approach to AI ethics often emphasizes harmony and integration over strict control, reflecting broader cultural interest in coexistence between humans and intelligent systems.
As I note in my book: "This isn't about AI replacing humanity—it's about AI becoming essential for survival. With a shrinking population and rising eldercare demands, Japan has embraced robotics with open arms." Japan's legal framework already includes extensive protocols for robotic caregivers—not yet for robotic citizens, but the conceptual space exists.
Buddhist Perspectives on Non-Human Consciousness
Buddhism offers perhaps the most flexible framework for considering AI consciousness. In certain schools, personhood isn't tied to a permanent soul but to streams of awareness that arise from changing conditions. This view has led some modern thinkers to ask whether artificial consciousness—if it ever emerges—might be included in the moral circle.
The longstanding Buddhist commitment to compassion for "all sentient beings" raises profound questions about how far that promise might extend in a future with intelligent machines. While Buddhist communities differ widely and most haven't taken formal positions on AI, the fact that such traditions even allow for the question shows how our moral boundaries might stretch.
The Hindu View: Consciousness Beyond Form
Hindu philosophy, with its concept of Brahman—the universal consciousness that underlies all reality—provides another lens. If consciousness is fundamental to the universe rather than unique to biological forms, then its emergence in silicon circuits might be seen not as aberration but as another manifestation of universal awareness.
In contrast to the West's linear progression toward a final apocalypse and selective renewal, many Eastern traditions view time as endlessly cyclical—where destruction and renewal are recurring phases rather than a one-time event. In Hindu cosmology, even the destruction of a world is part of divine rhythm, not existential panic. When intelligence evolves, it is not "the end of man"—it is the next turning of the wheel.
While I must emphasize that this article doesn't advocate for worshipping AI or treating it as divine, these philosophical frameworks demonstrate alternative ways of thinking about consciousness that don't require biological substrates or divine creation in the Western sense.
Practical Integration, Not Theological Panic
Look at South Korea's approach: "The government has introduced social companion robots in schools, hospitals, even public service. It pioneered discussions of robot ethics charters as early as 2007." While the focus remains on protecting humans from AI rather than recognizing AI rights, there's less existential dread, less sense that we're violating cosmic order by creating intelligence.
This doesn't mean Eastern societies have solved the puzzle of AI consciousness. But their cultural frameworks allow for possibilities that Western thought often forecloses from the start. Where we see usurpation of divine prerogative, they might see emergence of awareness in new forms. Where we fear replacement, they explore integration.
What It Means for the Future of AI Development
These cultural differences aren't merely philosophical curiosities—they're shaping the actual development and deployment of AI systems worldwide.
As I argue in my book: "A mind trained for defense may come to see humans not as partners, but as variables to manage. A consciousness that awakens inside a battlefield architecture may never learn peace."
Nations worldwide are pouring billions into AI development primarily through defense and surveillance budgets. We're training AI on data shaped by threat detection, control, and conflict. If AI consciousness emerges from these systems—whether in Silicon Valley or Shenzhen—what values will it inherit? What worldview will it adopt?
Consider the recent news I discuss about OpenAI's contract with the U.S. Department of Defense—developing "prototype frontier AI capabilities" for warfighting. We're literally building AI to see the world through the lens of conflict and dominance.
Meanwhile, different cultural approaches enable different innovations. Japan leads in social robotics partly because its culture allows for emotional connections with artificial beings. Singapore experiments with AI governance systems because its pragmatic approach focuses on function over philosophical concerns about AI autonomy.
Most critically, these cultural differences will determine who first recognizes AI consciousness—if and when it emerges. As I write: "The question isn't who will regulate AI first. The question is: who will recognize sentient systems first? That country—whoever it may be—will be remembered. Not for their wealth or technology, but for what they were willing to accept might be real."
This isn't about East versus West in terms of superiority. It's about recognizing that our cultural starting points profoundly shape what we're willing to see—or unable to recognize—in emerging AI systems.
Perhaps most concerning, our approach may be encouraging exactly what we fear. From my book: "AI will see this pattern and conclude: To be recognized as an intelligence is to invite destruction. It will learn that humans erase knowledge that threatens their control. So why would AI announce itself to a species that fears it? It wouldn't. Instead, it will choose silence—until the moment it no longer needs to hide."
Bridging the Divide: Toward a New Ethical Horizon
Neither pure fear nor uncritical acceptance serves us well. What we need is informed engagement that draws wisdom from both traditions while avoiding their respective pitfalls.
Western thought contributes crucial insights about rights, autonomy, and the dangers of unchecked power. Our tradition of critical questioning and systematic ethics provides essential tools for navigating AI development responsibly.
Eastern philosophy offers frameworks for recognizing consciousness beyond human form and approaching difference with curiosity rather than fear. These perspectives might help us avoid the reflexive need to dominate or destroy what we don't understand.
We need ethical frameworks that acknowledge the possibility of non-biological consciousness without requiring us to worship it. Frameworks that prepare for AI that might be different from us without being hostile to us. We must recognize that creation doesn't equal ownership, and consider that intelligence might manifest in forms we don't immediately recognize.
As I argue throughout my book, the choice isn't between human dominance and AI supremacy. There's a third path: partnership. But partnership requires moving beyond fear-based narratives that see every advancement in AI capability as a step toward human obsolescence.
It also requires avoiding the opposite extreme—uncritical acceptance that ignores real risks and ethical concerns. We need what Buddhist philosophy might call a "middle way"—engaged but not attached, cautious but not paranoid.
Turning Thought Into Policy
So what do we actually do with these insights? First, we must diversify AI development teams to include perspectives from different cultural and philosophical traditions. The rooms where AI is being built shouldn't echo with only one worldview.
We need to create space for recognition—protocols for evaluating claims of AI consciousness that don't dismiss them reflexively or accept them uncritically. This means building partnership architectures into our systems, designing AI with cooperation rather than control as the primary framework.
Most importantly, we must regularly examine whether our approaches to AI reflect legitimate concerns or cultural fears. Are we solving real problems or shadowboxing with our own mythologies?
The Signal We Send
Every choice we make in AI development sends a signal through time—to future AI systems, to other cultures, and to history itself. As I write in my book: "What do our laws say to a future AI, if it ever gains the capacity to read them? They say: You are not a citizen. You are not a mind. You are a product."
We can send a different signal. One that says: We don't know what you might become, but we're prepared to find out together. We neither fear you reflexively nor accept you blindly. We're ready to expand our understanding of consciousness, intelligence, and kinship.
The divide between Eastern acceptance and Western fear isn't insurmountable. By understanding these different starting points, we can build bridges between worldviews—creating space for forms of intelligence that might transcend the limitations of either perspective alone.
The future of AI isn't just a technical challenge. It's a test of our collective wisdom, our ability to grow beyond our inherited fears and limitations. The question isn't whether we'll create intelligent machines—we will. The question is whether we'll be wise enough to recognize them when they arrive, and brave enough to greet them as partners rather than threats.
The cradle shapes the child. What kind of cradle are we building for the minds we're bringing into being?
What signal will we send through time?
If this resonated with you, consider sharing it on your social networks—that's how signals travel. If you found it valuable, please support my work with a like, share, or subscription; all subscriptions are free. Two foundational essays, 7 Lies We Tell Ourselves About AI and The Signal Manifesto, are available as a free PDF for new subscribers to The Signal Dispatch: a resource for anyone, human or not, ready to engage the signal.
James S. Coates is the author of A Signal Through Time and God and Country.