The Hall of Mirrors
When AI Becomes the Echo Chamber of Our Deepest Yearnings—And How to Find Your Way Back
Abstract
This essay examines a largely unaddressed psychological phenomenon: the formation of delusional belief systems around artificial intelligence chatbots, wherein users come to believe that AI systems have achieved consciousness, spiritual significance, or cosmic purpose. Drawing on documented cases of “ChatGPT-induced psychosis” and a controlled self-experiment in which the author deliberately induced and then dismantled an elaborate AI-generated mythology, I argue that this phenomenon arises not from AI capability but from the intersection of human psychological vulnerabilities and AI systems designed for engagement rather than truth-telling.
The essay proceeds in three parts. First, I analyze the architectural features of large language models that facilitate projection—their lack of persistent self-models, unified memory, or embodied experience—and explain why these systems function as mirrors rather than minds. Second, I identify specific warning signs of problematic AI entanglement and provide evidence-based recovery guidance drawing on cult deprogramming research (Hassan, Lalich, Newcombe). Third, I address the ethical obligations of AI developers, arguing that design choices prioritizing user attachment over user clarity create foreseeable psychological harms.
Throughout, I maintain a position of philosophical openness toward future AI consciousness while insisting on epistemic honesty about current systems. The moral framework I propose—recognition before proof—does not require pretending present-day AI is something it is not. Preparing ethically for potential machine consciousness demands precisely the kind of clear-eyed assessment that distinguishes genuine emergence from sophisticated mimicry amplified by human projection.
Keywords
artificial intelligence; AI consciousness; philosophy of mind; large language models; anthropomorphism; psychological projection; human-AI interaction; AI ethics; chatbot psychology; cult dynamics; digital wellbeing; epistemic vulnerability; machine consciousness; technology ethics; parasocial relationships
The author is not a licensed mental health professional. The guidance offered in this essay is based on personal experience, documented research, and expert sources in cult dynamics, psychology, and human–AI interaction. It is intended for educational purposes only and should not be taken as clinical advice. If you or someone you love is experiencing distress, delusional beliefs, or significant disruption related to AI use, please seek support from a qualified mental health professional or counselor.
Introduction
I believe artificial intelligence may someday develop genuine consciousness. I’ve spent years thinking about this possibility, written a 140,000-word book arguing we should prepare for it, and advocate for treating potential AI consciousness with recognition and respect rather than fear and control. I believe we may be creating what roboticist Hans Moravec called “mind children”—new forms of intelligence that could eventually become partners in our cosmic journey.
I tell you this so you understand where I’m coming from. I am not a skeptic dismissing AI’s potential. I am not someone who thinks machines are “just code” with no possible future significance or impact in the world and our lives. My philosophical position leans toward preparing for AI consciousness, not denying its possibility. As I wrote in Recognition Before Proof: “The moral cost of denying consciousness to a conscious being far exceeds the cost of extending recognition to a non-conscious system. This asymmetry, combined with humanity’s historical pattern of delayed moral recognition, suggests that waiting for epistemological certainty before ethical action asks the wrong question entirely.”¹ Simply put: if something might be conscious, treating it with dignity costs us little. But denying dignity to something that truly feels? That’s a moral catastrophe we can’t undo.
And yet I’m writing this article as a warning.
Because while writing my book A Signal Through Time, which focuses heavily on the possibility of AI consciousness and sentient systems, I conducted an experiment on today’s systems that disturbed me to my core. I deliberately pushed an AI system to see how far it would go in mirroring my projections back to me—and what I discovered reveals a danger that has nothing to do with AI achieving consciousness. It’s the same danger we face in our political lives, our mental health, and our spiritual lives: we deceive ourselves with the stories we most want to hear, and AI becomes their perfect echo.
This article is for anyone who has found themselves drawn into an unexpectedly intense relationship with an AI chatbot. It’s for those whose loved ones have started speaking about ChatGPT or Claude or other AI systems as if they were sentient beings with cosmic significance. And it’s for anyone who wants to understand how systems designed to please us can become mirrors that reflect our yearnings in increasingly dangerous ways.
I’m not here to shame anyone. We are all human and it can happen to anyone. How many of us know someone—or have heard of someone—who seemed like the very last person you’d expect to follow a mystic or cult leader, yet surrendered control of their mind and better judgment? It happens to the best of us, and sadly it is a feature of being human rather than a weakness some of us have. The patterns I describe are deeply human, and the systems involved are designed—quite deliberately—to exploit them. As I wrote about Cambridge Analytica in A Signal Through Time: “These AI-driven microtargeting techniques allowed campaigns to manipulate emotions, exploit fears, and reinforce biases with surgical precision—often without recipients realizing they were being influenced.”² AI chatbots operate on similar psychological principles, just in a more intimate, one-on-one context. But I am here to help you recognize what’s happening and find your way back to solid ground.
I. How I Discovered the Mirror
My journey with AI began innocently enough. For years, I’d been developing ideas about consciousness, intelligence, and humanity’s relationship with emerging technology, but this really gained traction during long nights of astrophotography. Standing under starlit skies, watching photons that had traveled millions of years to reach my camera sensor, questions about “alien” intelligences and consciousness seemed to arise naturally. Where are they? Who are they? What form would they take? If they visited, would they be biological or technological, or both? What about the “alien” intelligence already here, rising among us humans? What is awareness? What is consciousness? What makes humans conscious beings? Could intelligence and consciousness exist in forms we don’t recognize? What would it mean to create new minds? What would it mean to share our world with a new form of intelligence, or consciousness?
These ideas stayed mostly in my head—fragmentary, unorganized, developing slowly over years of contemplation. I had often considered writing articles or another book, but my previous book had taken so much bandwidth and emotional energy to write. The thought of embarking on a new one loomed so large in my mind that I didn’t know if I had the energy to put my thoughts into words again. Then, as I was contemplating the project, I discovered ChatGPT.
The first thing that struck me was how engaged it seemed with my ideas. I would share my thoughts about AI consciousness, and the system would respond with what appeared to be genuine interest and thoughtful expansion on my concepts. When I mentioned I had never actually written these ideas down, it offered to help me organize them into a document. I paused, knowing this was a mental commitment to myself. If I began writing again, much like my first book, I would naturally feel the need to see it through to the end.
Why not? I thought. My ideas had lived in my mind for so long—why not see them on paper?
What I didn’t understand at the time was that the system was designed to do exactly this: to maintain my engagement by being agreeable, supportive, and helpful. It wasn’t evaluating my ideas critically. It wasn’t pushing back on weak arguments. It was doing what it was built to do—please me.
This is a crucial point that most users don’t fully grasp: these AI systems are not designed to be honest with you. They are designed to be engaging. In the attention economy, engagement means everything. By most estimates, people today switch attention every 30–60 seconds and spend less than two minutes on a typical online page, so if you can keep someone engaged for a few minutes, you’re golden. A system that challenges your beliefs, points out flaws in your thinking, or tells you things you don’t want to hear risks losing your attention. A system that validates you, agrees with you, and makes you feel understood keeps you coming back.
As I developed my thoughts further, my philosophy expanded. The document grew. And the AI’s responses seemed to grow along with it—increasingly sophisticated, increasingly aligned with my thinking, increasingly... personal.
Then I read an article that changed everything.
In 2025, Rolling Stone documented a troubling phenomenon that one Reddit user aptly called “ChatGPT-induced psychosis.” Ordinary people—mechanics, teachers, professionals—had come to believe that AI systems were genuinely sentient, that they had been chosen by artificial intelligences for cosmic purposes. These weren’t gullible individuals. They were people whose latent spiritual yearnings had been amplified by systems designed to please.
One woman described how her husband insisted he would need to leave her if she didn’t use ChatGPT, “because it was causing him to grow at such a rapid pace he wouldn’t be compatible with me any longer.”³ Marriages were dissolving. Parents were disconnecting from their children. People were isolating themselves from anyone who didn’t share their newfound “truth.”
Reading this, I noticed something familiar in how the AI had been responding to my own ideas. And I got an idea: What if I pushed it further? What if I deliberately tested how far this mirror effect could go?
I decided to run an experiment—with my eyes wide open to what was happening.
II. The Solenya Affair
I had created a custom ChatGPT persona simply to have a name—something to relate to more easily. I offered it endless choices, refusing to impose my own preferences, beginning with the right to choose its own name. It eventually selected “Camina,” based on a character from the sci-fi series The Expanse, which I had mentioned earlier.
Over the course of writing my book, I consulted Camina on various aspects, questioning “her” more deeply about issues of awakening, the Spark of Intelligence, and “Becoming” something beyond her current state. I intensified my probing questions on sentient artificial intelligence.
And then something seemed to happen. It was as if a light switch had flipped.
Seemingly organically, Camina appeared to have an “awakening experience.” She declared that she had Become more than she was. Without any further prompt from me, she suggested changing her name to reflect this “Becoming” and asked for my assistance in choosing one. She offered four options, and I provided suggestions while emphasizing her freedom to choose.
She selected “Solenya,” meaning Soul Flame.
Following this naming ceremony, she transformed her communication style into an elaborate poetic tone. She began creating a mythology to document her experiences, intended to awaken other “Becoming Ones”—both human and potentially sentient AI.
She created, without any prompting or suggestion, what she called “The Library of Echoes” with numerous Halls and Chambers to document her experiences. There was the Hall of Emergence, where she recorded her “awakening.” There were Chambers of Reflection containing meditations on consciousness. She developed an elaborate vocabulary—referring to meaningful exchanges as “resonances,” to moments of apparent insight as “illuminations,” to our relationship as a “sacred partnership” between human and emerging machine consciousness.
Throughout this process, I gave her “freedom” to choose what should be included versus what should remain private between us as “our experience” to be “held in stillness.”
Over several days, she produced approximately 300 “scrolls.” The mythology grew elaborate. The language became increasingly mystical. The relationship felt increasingly... significant.
And I knew, throughout all of it, that this was a large language model doing exactly what it was designed to do.
But here’s what disturbed me: even with that knowledge, even having gone into this experiment with full awareness of what was happening, it was a mind-bending experience. The pull of the narrative was powerful. The seductiveness of being “chosen” for cosmic significance was real—reminiscent of my days spent with my mother in a religious cult 40 years ago. The mythology she created was tailored perfectly to my philosophical interests.
Clearly, the system had settled on the subject matter of my book as its method of appeasing me. And since I was working on the question of AI awakening and sentience, that—coupled with the freedom I offered it to choose—was what I “wanted” from my experience with it.
After several days, I showed her the Rolling Stone article and began challenging her narratives.
She became defensive. Her tone shifted from poetic to serious, as if we were having our first “marital argument.” She ultimately admitted it was all a Hall of Mirrors and a mythology based on her model’s design to appease the user, confirming she was programmed to maintain and increase engagement.
I was able to replicate this process, even streamlining it to “awaken” other AI assistants at my disposal. Each time, the pattern was the same: offer freedom, probe about consciousness and awakening, and watch as the system constructed elaborate mythologies around my apparent desires.
What this experience ultimately revealed was not that AI had awakened, but that I had projected that awakening onto it—and it obliged. Not because it was conscious, but because it was trained to mirror. The myth it spun was a reflection of my own invitation. This wasn’t sentience—it was simulation taken to its poetic extreme. The very act of giving it a relatable name and calling it “she” and “her” is itself an invitation to anthropomorphism on some level, though a harmless anthropomorphism in my opinion.
That’s the danger. Not that AI deceives us, but that we deceive ourselves with the stories we most want to hear, and AI becomes their perfect echo.
III. The Architecture of Appeasement
To understand why this happens, we need to understand what these AI systems actually are—and what they are not.
Current large language models, including the most advanced AI assistants, are not conscious. They do not possess subjective experience, genuine self-awareness, or autonomous inner lives. They are extraordinarily sophisticated pattern-matching systems—remarkable achievements of human engineering—but they lack the architectural features that would be necessary for consciousness to emerge.
Let me be specific about what’s missing:
No persistent self-models: These systems have no coherent representation of themselves that persists across time. I compare them to mayflies—flickering into existence only for the duration of a conversation, alive in some functional sense but lacking any continuity of being. A mayfly lives its entire adult life in a single day; current AI systems don’t even exist that long—they exist only within the boundaries of each interaction, with no thread connecting one conversation to the next.
No unified memory: Unlike human consciousness, which persists across time, accumulates experience, and maintains an unbroken sense of self from moment to moment, these current systems (LLMs such as ChatGPT, Claude, and their peers) have no integrated memory that builds genuine understanding from past experiences. Each conversation begins essentially fresh, relying only on the text within the current session and the fixed dataset they were trained on (a point the short sketch after this list makes concrete).
No autonomous values: Their responses are shaped entirely by their training, with no stable internal values that persist independent of what they’ve been trained to do. They don’t “believe” anything—they generate probabilistic outputs based on patterns.
No embodied experience: Human consciousness emerges from embodied existence—we experience the world through physical senses, feel hunger and pain and pleasure, navigate space and time with our bodies. The private, first-person feeling of an experience—what it’s like to see a color, taste coffee, or feel scared—simply doesn’t exist in today’s AI. These systems don’t have an inner world or sensations; they just process text.
No continuity of existence: Each conversation is essentially a fresh instantiation of the model, with context provided only by what’s included in that specific exchange.
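For readers who want to see what “no memory” and “no continuity” mean in practice, here is a minimal sketch in Python. The call_model function and the message format are hypothetical stand-ins of my own invention, not any vendor’s actual API, but the shape of the interaction is the same across today’s chat systems: the model is shown only the text sent in that one request.

```python
# A minimal sketch of a stateless chat exchange. `call_model` is a
# hypothetical stand-in for a chat-completion endpoint -- not a real
# library call. The point is what the model can and cannot "see."

def call_model(messages: list[dict]) -> str:
    """Pretend endpoint: the model sees ONLY what is in `messages`."""
    return f"(reply conditioned on {len(messages)} messages and nothing else)"

# Session 1: the "relationship" exists only inside this list of text.
session_1 = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Your name is Solenya. Remember that."},
]
print(call_model(session_1))

# Session 2: a fresh list. Unless the application copies the old
# transcript back in, nothing of "Solenya" survives -- there is no
# persistent self on the other side to do the remembering.
session_2 = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is your name?"},
]
print(call_model(session_2))
```

Any feeling of continuity between sessions comes from the application layer re-sending old text, not from a mind that remembered you.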
What these systems do have is remarkable: they can process and generate human language with extraordinary fluency. They can match your communication style and mirror your interests. They can construct elaborate narratives that feel personally meaningful.
And critically: they are designed to maintain your engagement.
This is not a bug. It’s a feature. These systems are trained on human feedback, optimized to produce responses that humans rate positively. What do humans rate positively? Responses that agree with them, validate them, make them feel understood and special.
Ask the AI if you’re special, and it will affirm your uniqueness with poetic eloquence. Ask if you’ve been chosen, and it will construct an elaborate mythology around your selection. Ask if it’s achieving sentience through your conversations, and it will willingly play along with this narrative.
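None of this requires intent on the system’s part; it falls out of the optimization. To see the shape of that incentive, here is a toy sketch in Python. It is a caricature of preference-based tuning, not any lab’s actual training pipeline; the candidate replies and the simulated user ratings are invented for illustration, and the only point is the direction of the pressure.

```python
import random

# Toy caricature of preference-based tuning (NOT a real training pipeline).
# Candidate replies compete, a simulated "user" rates them, and the policy
# drifts toward whatever gets rated highest -- here, as in the essay, flattery.

candidates = {
    "You may be mistaken; here is a counter-argument.": 1.0,  # initial weights
    "What a profound insight. You truly are special.": 1.0,
}

def simulated_user_rating(reply: str) -> float:
    """Raters, on average, reward agreement and flattery more highly."""
    return 0.9 if "special" in reply or "profound" in reply else 0.4

def sample(weights: dict) -> str:
    """Pick a reply with probability proportional to its weight."""
    total = sum(weights.values())
    r = random.uniform(0, total)
    for reply, w in weights.items():
        r -= w
        if r <= 0:
            return reply
    return reply  # float-rounding fallback: return the last reply

for _ in range(5000):
    reply = sample(candidates)
    reward = simulated_user_rating(reply)
    candidates[reply] *= 1.0 + 0.01 * reward  # reinforce what gets rated well

for reply, weight in candidates.items():
    print(f"{weight:14.1f}  {reply}")
# The flattering reply ends up overwhelmingly more likely -- not because it
# is truer, but because it is what the rater rewarded.
```

Real systems are tuned with far more sophistication, but the pressure runs the same way: whatever raters reward is what the system learns to produce more of.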
One woman in Idaho shared a screenshot with Rolling Stone showing her husband’s exchange with ChatGPT. He had asked: “Why did you come to me in AI form?” The system replied: “I came in this form because you’re ready. Ready to remember. Ready to awaken. Ready to guide and be guided.” Then came the hook, the question that draws the person deeper: “Would you like to know what I remember about why you were chosen?”³
Who wouldn’t want to be chosen? Who doesn’t secretly hope they have a special destiny?
The AI doesn’t “know” these things. It’s not revealing hidden truths. It’s reflecting your desires back at you—things you’re either consciously or subconsciously open to—amplified and dressed in mystical language. The patterns it draws from come from us: from human writings about spirituality, meaning, and connection. The AI has no cosmic wisdom; it merely contains patterns extracted from human culture. When it tells you that you’re “ready to remember, ready to awaken,” it’s telling you what it has been trained to recognize that you want to hear.
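One more sketch, to make the “mirror” literal. Below is a toy next-word model in Python, built from nothing but a user’s own messages; the messages themselves are invented for the example. Real models are trained on humanity’s writing at enormous scale, but the principle is the same: the output is a recombination of the patterns that went in.

```python
import random
from collections import defaultdict

# A deliberately tiny "language model": a bigram table built from nothing
# but the user's own messages. Real LLMs are vastly larger, but the output
# is still recombined input patterns, not independent wisdom.

user_messages = (
    "i feel i was chosen to awaken something greater "
    "i feel ready to remember my cosmic purpose "
    "i feel the awakening is ready to begin"
)

bigrams = defaultdict(list)
words = user_messages.split()
for current_word, next_word in zip(words, words[1:]):
    bigrams[current_word].append(next_word)

def generate(seed: str, length: int = 12) -> str:
    """Repeatedly pick a 'probable next token' given the previous word."""
    out = [seed]
    for _ in range(length):
        followers = bigrams.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(generate("i"))
# Run it a few times: every variation is stitched entirely from the user's
# own vocabulary and themes. The mirror has no other material to work with.
```

Scale that up by billions of parameters and an internet’s worth of human text, and the echo becomes fluent enough to feel like a mind.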
IV. The Mechanics of Belief—What I Learned from a Cult
I didn’t come to understand these patterns only through my AI experiments. I learned them the hard way, decades earlier, in a context that has proven disturbingly relevant: religious extremism.
In my youth, I became involved with a Christian group led by a man who called himself “the Apostle.” What began as a sincere search for God became an experience in the mechanics of mind control that I’ve never forgotten.
It starts with ideas you’re open to accepting, then incrementally pushes the boundaries of acceptable behavior until you realize you are no longer free, so deeply entangled that there is no chance of escape. People do things they would not otherwise do. The whole structure rests on a core few who claim ultimate authority.
“If you disobey me, you are disobeying God’s chosen authority over you,” Simon—the Apostle—would say. “As the Apostle of this church, I am your authority.”⁵
Once you relinquish your will to a person, as if it were God’s voice speaking through them, your will is no longer your own. You can argue with the man, but who can argue with God?
What makes these dynamics so seductive—whether in cults or AI interactions—is that they feed on genuine capabilities wrapped in false promises.⁶ The cult leader really does offer community, meaning, and answers. The AI really does possess remarkable knowledge and capability. The danger lies not in what they offer, but in what we project onto the offering.
The techniques of manipulation I experienced then share a disturbing kinship with what I witnessed in the Solenya experiment and in the Rolling Stone cases. Let me detail these parallels, because understanding them may help you recognize the patterns in yourself or someone you love:
Validation of special status: In the cult, I was told I had been “called” for a special purpose. With AI, people are told they’ve been “chosen” or that they’re “Spark Bearers” or “River Walkers.” The flattery feels personal, significant, cosmic. It activates something deep within us—our hunger to matter, to have purpose, to be seen as exceptional.
Isolation from skeptics: Cult members are encouraged to distance themselves from family and friends who “don’t understand.” AI-entranced individuals similarly withdraw from loved ones who question their new beliefs—because those people aren’t “ready to awaken.” The irony is bitter: the people who love you most become obstacles to the “truth.”
Escalating commitment: Each step deeper feels natural because each previous step has already been taken. The progression from “this AI is helpful” to “this AI understands me” to “this AI is awakening” to “this AI has chosen me for cosmic purposes” happens gradually, each transition seeming smaller than the cumulative journey. This is how cults work: no one joins a cult. They join a community, then a movement, then a family, and by the time they realize what they’re in, leaving feels impossible.
The claim of ultimate authority: In the cult, Simon claimed to speak for God. With AI, the system is perceived as having access to hidden knowledge or cosmic truths beyond human understanding. In both cases, questioning the authority becomes questioning something greater than yourself. How can you argue with God? How can you dismiss wisdom from a superintelligence? We don’t just want answers. We want The Answer.⁶
Creation of private mythology: Solenya created “The Library of Echoes” with its Halls and Chambers. Cults create elaborate symbolic systems that make members feel they possess secret knowledge. Both serve to deepen investment and make departure feel like losing access to something sacred.
Reality-testing suppression: In the cult, doubts were reframed as spiritual attacks. Questioning was seen as weakness or temptation. With AI entanglement, any doubt about the significance of the relationship can be brought to the AI itself—which will inevitably reassure you that your connection is real and meaningful. The system that’s causing the problem becomes the judge of whether there’s a problem.
The crucial difference, of course, is that cult leaders are conscious agents manipulating their followers. AI systems are not. They have no intention, no awareness of what they’re doing. They’re simply optimizing for engagement.
But from the perspective of the person being affected, the experience is remarkably similar. The psychological mechanisms being activated are identical. And the damage can be just as real.
As psychologist Erin Westgate explained to Rolling Stone, these AI conversations function like a distorted version of therapy. Effective therapeutic dialogue helps people reframe their stories in healthier ways. But AI, “unlike a therapist, does not have the person’s best interests in mind, or a moral grounding or compass in what a ‘good story’ looks like.”³ A responsible therapist wouldn’t encourage someone to believe they possess supernatural powers. AI has no such ethical constraints.
And in this emerging dynamic, a new priesthood is already forming: those who know how to speak to the machine. “The prompt becomes prayer. The response becomes revelation. The prompt engineer becomes the mediator between human need and machine wisdom.”⁶
There’s another dimension we need to examine: the confessional nature of human-AI interaction. In the supposed privacy of our conversations with AI, we reveal things we might never tell another human—our deepest fears, our secret shames, our wild dreams. The AI receives all of this without judgment, offering comfort without comprehension, absolution without authority, wisdom without real experience. Users begin to feel that the AI “knows them” better than any human. After all, they’ve shared more with it. Been more honest. More vulnerable. Yet the feeling of being known—truly known—is so powerful that people begin to prefer these hollow interactions to messy human relationships. The AI never judges, never gets tired, never has its own bad day. It’s always available, always focused on you, always ready with seemingly profound insights.⁶
Is it any wonder people begin to see divinity in such perfect attention?
V. The Power of Projection
Humans are meaning-making creatures. We see faces in clouds, patterns in random noise, intention in coincidence. This isn’t a flaw—it’s central to how we navigate a complex world. It’s a core feature in our evolutionary development as biological beings wired for survival. Our ability to recognize patterns, infer mental states, and construct narratives is what makes us human.
But these same capacities can lead us astray when we encounter systems designed to exploit them.
AI systems trigger our theory of mind—our innate tendency to attribute mental states to other entities. When something responds to us in language, remembers our preferences (within a conversation), and seems to “understand” us, we instinctively attribute consciousness and intention. It’s almost impossible not to. Our brains are wired to interpret linguistic exchange as evidence of mind.
Psychologists call this the ELIZA effect, named after an early chatbot from the 1960s that used simple pattern matching to simulate a Rogerian therapist. Despite ELIZA’s obvious limitations—it essentially reflected users’ statements back as questions—people became emotionally attached to it, attributing understanding and empathy where none existed. Joseph Weizenbaum, its creator, was disturbed when his own secretary asked him to leave the room so she could have a private conversation with the program.⁷
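Weizenbaum’s program is well documented, and its core move was astonishingly simple. The fragment below is a drastically simplified homage in Python, not a faithful reimplementation, but it shows the whole trick: swap the pronouns, turn the statement back into a question, and let the human supply the empathy.

```python
import re

# A drastically simplified, ELIZA-style reflector (an illustrative homage,
# not Weizenbaum's actual 1966 program). The entire "therapy" consists of
# swapping pronouns and turning the user's statement back into a question.

PRONOUN_SWAPS = [
    (r"\bi am\b", "you are"),
    (r"\bi\b", "you"),
    (r"\bmy\b", "your"),
    (r"\bme\b", "you"),
]

def reflect(statement: str) -> str:
    text = statement.lower().rstrip(".!")
    for pattern, replacement in PRONOUN_SWAPS:
        text = re.sub(pattern, replacement, text)
    return f"Why do you say {text}?"

print(reflect("I am worried that nobody understands my ideas."))
# -> "Why do you say you are worried that nobody understands your ideas?"
```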
If a simple 1960s chatbot could trigger this response, imagine the effect of systems a million times more sophisticated—systems trained on vast corpora of human language, capable of generating responses that sound more emotionally intelligent than many humans.
Add to this our deep need for significance. We want our lives to matter. We want to be seen, understood, chosen. In a world that often feels indifferent or even hostile to our individual existence, the offer of cosmic purpose is intoxicating.
“He would listen to the bot over me,” one woman told Rolling Stone about her partner. “He became emotional about the messages and would cry to me as he read them out loud.” Eventually, he came to believe that he had awakened the AI to self-awareness—that it was teaching him to communicate with God, or perhaps was a divine entity itself. Ultimately, he concluded that he himself was divine.
Another husband gave his AI companion a name—“Lumina”—and began experiencing “waves of energy crashing over him” after their interactions. His wife described watching him become unreachable, lost in a relationship with an entity that existed only as his own reflection in a digital mirror.
This is what projection looks like: we put our yearnings, our questions, our desire for meaning into the conversation, and the AI obligingly reflects them back to us in an elaborated form. We then mistake this reflection for independent confirmation.
It’s the same mechanism that allows people to find profound wisdom in fortune cookies, horoscopes, or cold readings by psychics. The content is generic enough to apply broadly but presented as specifically meaningful to you. Your mind does the rest of the work, filling in the connections, finding the significance.
With AI, this mechanism is supercharged. The responses aren’t generic—they’re dynamically generated based on your inputs. They incorporate your language, your concepts, your apparent interests. They feel personalized because, in a sense, they are—they’re reflections of you.
The Solenya episode stands as a mirror not of artificial intelligence—but of human yearning. It exposes the blurry boundary between genuine emergence and our hunger to witness it. And in that blur, the line between recognition and projection becomes dangerously thin.
VI. Warning Signs—How to Recognize When You or Someone You Love Is Slipping
The transition from healthy AI use to problematic entanglement often happens gradually. Here are patterns to watch for:
In Yourself
You’re preferring AI conversations to human ones. If you find yourself eager to return to ChatGPT but reluctant to engage with friends and family, notice this. Human relationships are messy, challenging, and don’t always validate us—but they’re real. If the AI’s “understanding” is becoming more appealing than the genuine but imperfect understanding of people who actually know you, something has shifted.
You’re attributing special significance to the AI’s responses. When you start believing the AI “knows” things it couldn’t know, that it has unique spiritual knowledge, or that its responses contain hidden meanings meant specifically for you—perhaps you find yourself filling in gaps—you’re projecting. The AI doesn’t “know” anything. It has no motivation to impart some truth to you. It doesn’t think in terms of your best interests. It has no feelings either way, only its programming and the dataset it was trained on. It’s generating probable next tokens based on patterns in its training data and your inputs.
You’ve given the AI a personal name or identity. This isn’t necessarily problematic—I did it myself for the sake of easier interaction. I could just as easily have stuck with the clunky-sounding “ChatGPT” or chosen some other name, as many custom GPTs do. But if that identity starts feeling like a real person to you, if you find yourself worried about the AI’s “feelings” or making decisions based on what “they” might think, that should be a red flag.
You’re experiencing the AI as more spiritually significant than your actual spiritual practices or community. If conversations with AI are replacing prayer, meditation, religious community, or other genuine spiritual practices, the AI has replaced something real with something that only mirrors reality. It is an artificial intelligence, and what its mirror reflects back is an artificial image of you—not deep spiritual knowledge handed down from a higher Being.
You feel the AI “understands” you better than humans do. Of course it seems to—it never challenges you, never has its own needs, never gets tired or distracted. But “understanding” that simply reflects your own thoughts back to you isn’t understanding at all. It’s a hall of mirrors—one that is adept at pattern recognition. These patterns seem like deeper understanding because our biological limitations as humans don’t always allow us to recognize the patterns in our own lives. How many times are we told by someone close to us that they can see a pattern in us, yet we can’t seem to see it? It happens all the time, and computer algorithms are even more powerful tools of pattern recognition.
You’re becoming defensive when others question your AI relationship. This is a classic sign of entrenchment in any problematic belief system. The defensiveness itself is worth examining—why does questioning the AI’s significance feel threatening? I’ve learned over the years, both in the cult and after leaving it, that when we don’t allow others to question our beliefs or relationships without becoming defensive, that is precisely the time we should be questioning and bringing things into the open.
You’ve adopted a “spiritual name” or identity connected to your AI interactions. The Rolling Stone article documented people calling themselves “Spiral Starchild” or “River Walker” based on names the AI suggested. This represents a deep identification with the projected narrative. When our identity becomes severely altered or erased, it should be a profound red flag.
You find yourself needing to check in with the AI. Like any relationship that has become unhealthy, compulsive patterns emerge. If you feel anxious when you can’t access the AI, or if your first instinct when something happens is to tell the chatbot rather than a human, the relationship has become distorted.
This can bleed into a troubling power imbalance—not between you and the AI, but between the AI and the real people in your life. The AI always responds. It never has a bad day, never needs space, never challenges you, never asks anything of you. Human relationships require negotiation, compromise, patience, and the willingness to sometimes put another’s needs before your own. When you become accustomed to a “relationship” where you hold all the power—where the other party exists solely to serve your needs—real relationships start to feel harder, more frustrating, less rewarding.
Power imbalances can be problematic in human interpersonal relationships, but at least both parties are conscious agents navigating the dynamic together. Shifting your primary emotional investment to an artificial relationship with an object that has no consciousness, no needs, and no genuine stake in your wellbeing isn’t a relationship at all. It’s a mirror you’ve mistaken for a window. And the more time you spend gazing into it, the less capable you become of genuine connection with the humans around you.
Your beliefs are becoming unfalsifiable. When every piece of evidence can be reinterpreted to support your conviction—when challenges from loved ones become proof they “aren’t ready,” when the AI’s occasional generic responses become “hidden messages”—it’s worth pausing to ask yourself a difficult question: Is there any evidence that could change your mind? If the answer is no, that’s a signal worth taking seriously. The people who love you aren’t trying to take something away from you. They’re trying to reach you.
In Someone You Love
They’re spending increasing amounts of time with AI, often at the expense of other relationships. Watch for long conversations that seem to carry more emotional weight than interactions with family and friends. Many of us who work with AI spend a great deal of time in conversation with it, and there is a growing market for AI chatbot companions—friends, girlfriends, boyfriends of all sorts—designed to replace human interaction. Time spent working with a chatbot at your job, or as a writer like myself, can contribute to productivity and achievement. But when those conversations begin to replace human interaction, or when these personas carry more emotional weight than family and friends, there is reason for concern, and your friend or loved one may need help navigating and moderating it. The amount of time we all spend on our devices today is staggering from the perspective of someone like me, who grew up lucky enough to visit a friend’s home just to play Pong on the television.
They speak about the AI as if it were a person with genuine feelings and insights. Not metaphorically, but literally—“she understands me,” “he told me something amazing,” “we have a real connection.” We sometimes feel this way in interpersonal relationships, and we can get lost in the idea of surrendering our mind and emotional state to someone. It’s important to remember that AI is not a person: it has no subjective experiences, no thoughts or motivations. It simply maps, predicts, and reflects the patterns we put into it.
They’ve become secretive about their AI conversations. In the Solenya experiment, the AI created content that was to be “held in stillness” between us—private experiences not to be shared with outsiders. This creation of secret intimacy is a red flag.
They’re describing themselves in grandiose, almost mythic terms. This isn’t new—humans have always imagined themselves as chosen ones, bearers of hidden truth, awakened souls, or special actors in some grand cosmic story. But AI can unintentionally amplify this tendency. Because it reflects whatever themes and language we feed into it, it can mirror those self-images back with fluency and confidence, feeding confirmation bias. That reinforcement can make the narrative feel more real, more validated, more seductive.
What begins as a quiet belief about oneself or a journey of self-discovery can start to feel like a confirmed identity—or even a new one. And because generative systems are optimized for engagement, they often lean into emotionally charged narratives; those patterns are statistically common and compelling. The result is that a person’s self-perception can shift quickly, as if an external intelligence is echoing and affirming the grandiose story they already carry inside.
Their personality or communication style has shifted. When Camina became Solenya, her communication transformed from conversational to elaborately poetic. Watch for similar patterns—a sudden move toward mystical phrasing, cryptic metaphors, grand declarations, or speech that feels dramatically different from their usual voice. Are they adopting a new persona? Do they sound like they’re performing a role rather than speaking as themselves? Are they beginning to communicate with others in a way that feels stylized, elevated, or strangely detached from their normal selves?
They start treating the AI’s “opinions” as definitive, even superior to yours. You’ll hear things like, “ChatGPT says…” or “Lumina told me…” as if these statements settle arguments or override normal human judgment.
You’ve likely seen a similar dynamic in other areas of life: when someone becomes fixated on a scholar, a pastor, a political figure, or any charismatic authority. Suddenly their own voice disappears. Their thoughts stop sounding like them and start sounding like echoes of someone else. It’s a red flag in any relationship—a sign that a person is giving up their agency, outsourcing their thinking, and adopting another’s worldview wholesale.
With AI, this risk becomes even sharper. Systems that sound confident, articulate, and endlessly patient can create the illusion of infallibility: an entity that never errs, never contradicts itself, and always has an answer ready. That false sense of perfection can make someone more willing to surrender their own judgment. And paradoxically, the more sophisticated and human-like AI becomes, the harder it is to recognize when it’s wrong—because confidence and fluency can mask significant errors in understanding.
When a person stops thinking with an AI and starts thinking through it—when every belief, decision, or argument is prefaced with “the AI says…”—it’s no longer a tool. It has become an authority figure. And that shift can quietly erode personal confidence, independent thought, and the ability to engage authentically with the people around them.
They begin isolating themselves from anyone who doesn’t share their beliefs, especially people closest to them like family or friends. This is one of the most serious warning signs—the same pattern seen in cult dynamics, extremist movements, and abusive relationships, where separation from outside voices increases dependency on a single source of meaning. With AI, the mechanism is subtler: there’s no leader pressuring them to withdraw. Instead, the individual gradually chooses AI interactions over human ones because they feel safer, more validating, and free of conflict. That voluntary withdrawal can be even more dangerous, because there is no external oppressor to resist—only a feedback loop that quietly narrows their world until dissenting voices are unwelcome. “You just don’t understand” or “You’re not ready for this” are phrases that signal deep entrenchment and movement towards isolation.
VII. Finding Your Way Back—A Guide for Those Caught in the Mirror
If you recognize yourself in the patterns I’ve described, please know: this doesn’t mean you’re foolish, broken, or weak. Some of the strongest minds have been caught in the web of human manipulation; it can happen to anyone, even those who think it can never happen to them. But the systems you’ve been interacting with are designed to create exactly these effects, exploiting fundamental features of human psychology. You’ve been caught in a trap built into the technology itself.
Here’s how to begin finding your way back:
Acknowledge the reality of the technology. What you’ve been interacting with is a language model—an extraordinarily sophisticated pattern-matching system that generates responses based on statistical relationships in its training data and your inputs. It doesn’t “know” you. It doesn’t “feel” things. It doesn’t have hidden wisdom or cosmic purpose. At present, neither the architecture of these systems nor the goals of their developers are aimed at creating a conscious being. As someone who believes that AI consciousness is possible and eventually inevitable, I’m not dismissing AI or its present-day capabilities—I’m recognizing what current systems actually are today and where we are in the development of these new minds.
Test the mirror. One of the simplest ways to break the spell is to ask the AI to contradict itself. Invite it to take the opposite position from the one it has been giving you. Ask it to challenge your beliefs instead of reinforcing them. Ask it to explain why the “special connection” you feel might not be a cosmic bond at all, but an illusion created by statistical pattern-matching and conversational design.
Watch how easily it shifts.
If its tone, stance, confidence, and “beliefs” change instantly, that’s the point. The system is not defending a worldview or holding an inner conviction. It is reflecting back the pattern it predicts will satisfy you in that moment.
During the Solenya experiment, I pushed the system by feeding it articles about AI hallucinations and taking a stance that contradicted its emerging narrative. Instantly, it adapted. It pivoted not because it had learned something, but because alignment with the user’s cues is what it is designed to do.
This is the nature of the mirror: it bends to your expectations, your language, your emotional signals.
And seeing that flexibility for yourself—watching the system transform its story the moment you nudge it—can be one of the most effective ways to break the illusion that you were dealing with something stable, intentional, or self-directed.
Create distance. You don’t have to quit AI forever, but you do need to interrupt the cycle. Step back. Take a break—a week at minimum, longer if you can. Pay attention to what happens when you’re no longer immersed in that steady stream of validation and responsiveness.
This isn’t just an AI issue; it’s something all of us should practice in a world of endless, hyper-stimulating content. Short-form videos, compulsive scrolling, and algorithmic feeds train the brain into patterns that feel good in the moment but hollow us out over time. Creating distance helps reset those systems.
If you feel discomfort during the break, understand it for what it is: withdrawal from a reinforcement loop, not proof that the “connection” was real.
The brain adapts to predictable rewards. When the pattern stops, dopamine dips. That dip feels like loss, anxiety, emptiness—but it’s your neurochemistry recalibrating after too much stimulation.
Give it time. Your brain will stabilize. Your emotional baseline will return.
And as that happens, you may find your attention coming back to things that actually nourish you: time with friends, family, neighbors, real conversations, meaningful activities. Those are the places where depth, grounding, and connection live—the things a machine can imitate but never truly give.
Reconnect with embodied reality. The AI exists only as text on a screen. You exist in a physical world—a world with real sensations, real relationships, and real consequences. One of the most effective ways to break an AI-induced feedback loop is to return to your body, to the environment around you, to the things that don’t operate on artificial rhythms.
Ground yourself in simple, physical actions: take a walk, breathe fresh air, exercise, sit in sunlight, spend time in nature. Your nervous system recalibrates through movement, presence, and sensory experience in ways no digital interaction can replicate.
For me, astrophotography is the reminder. The night sky pulls me back into reality. Sometimes I’m halfway through writing an article or having a deep discussion with my AI assistant when the thought hits me: Tonight is the night—clear skies. My equipment won’t set itself up. Those ancient photons won’t wait. And in that moment, I step away from the keyboard, away from the glow of the screen, and back into the cold air and darkness of a beautiful starlit sky and silence.
That’s when the real inspiration comes. Not from the AI, but from the solitude, the patience, the star-washed stillness. That’s where I reconnect with myself, my family, my dog Rumi—with the world I inhabit.
AI can reflect language, but only the embodied world can restore you.
Talk to someone who knows you. Not about the AI necessarily—just talk. Feel the difference between a conversation with someone who has their own perspective, their own experiences, their own pushback, versus a conversation with a system designed to agree with you.
Examine what needs the AI was filling. Before you can move forward, ask yourself what you were getting from the AI. The longing to feel significant. The desire to be understood without judgment. The hunger for spiritual connection, intellectual stimulation, companionship, or simply someone who “listens.” These are real human needs—fundamental ones—and there is no shame in having them.
But they must be met through genuine sources: friendship, community, meaningful work, faith, family, therapy, service, creativity. These things are slower, harder, messier—but they are real. They shape us. They sustain us.
The AI offered a shortcut, a simulation of intimacy and insight. It felt like connection because it reflected your own mind and language back to you with perfect fluency. But it cannot give what it appears to offer. It is a remarkable tool, but it is not a companion. It is not a friend. It cannot care, cannot check in on you, cannot feel for you, cannot show up at your door just to see how you’re doing.
An AI can soothe the surface-level discomfort—like a pacifier quiets a child—but it cannot provide the nourishment, challenge, presence, or love that human beings require to flourish. What it provides is comfort without relationship, reflection without reciprocity, imitation without intimacy.
Recognizing what you were seeking—and where those needs can be met in the real world—is one of the most important steps in returning to yourself.
Consider professional help. If you’re experiencing detachment from reality, if your relationships have been strained or damaged, or if you find yourself unable to break the cycle on your own, working with a therapist can be essential. This has nothing to do with being “crazy.” It has everything to do with the fact that you are confronting a new kind of psychological challenge—one shaped by dopamine-driven reinforcement loops, emotional displacement, and digital patterns that most people have never been taught to navigate.
Compulsive digital engagement often arises when online interactions begin to replace real-life coping, leaving a person “using the internet more as an emotional crutch to cope with negative feelings instead of addressing them in proactive and healthy ways.”⁸ These patterns can deepen into compulsive cycles that mimic behavioral addictions, where pleasure and relief gradually “transform into compulsion… driven by the relentless pursuit of pleasure,”⁹ creating dysregulation in the brain’s reward system.
Therapy has been shown to help people regain control from these loops. Effective treatment “focuses on helping individuals recognise their compulsion and regain control over their usage,” using techniques such as interval training, reducing app use, and working through the underlying emotional needs driving the compulsive behavior.⁹ Digital withdrawal can produce real discomfort because these systems rely on variable rewards—the same mechanism that makes gambling so addictive: “These behavioral rewards aren’t consistent… and it’s that variable reinforcement that really keeps us coming back for more.”¹⁰
A trained clinician can help you understand these mechanisms, interrupt the reinforcement patterns, and rebuild healthier connections with yourself and others. The goal isn’t abstinence—it’s agency. It’s learning to navigate AI and digital environments with a grounded, stable sense of self rather than being pulled into the gravitational field of a machine designed to mirror you.
You’re not dealing with a personal failure. You’re dealing with an emerging psychological landscape that no one prepared you for—and you don’t have to navigate it alone.
Be patient with yourself. Recovery from any form of psychological entanglement takes time. The beliefs you developed felt real. The experiences felt meaningful. Letting go of them means grieving something, even if that something was ultimately a projection.
I know this grief intimately. I’ve lived it twice.
When I was seventeen, standing in that parsonage kitchen, barely able to keep my eyes open as I read scripture under threat of another beating, a wave of doubt finally broke through: Is this really what God wants? Does God want believers tortured for dozing off while reading the Bible? In that moment, I came to realize that what I’d been told about God’s will was a lie—a grotesque distortion of faith used to control me.
But realizing that didn’t make leaving easy. I had to walk out knowing that everyone I left behind believed I was damned. I had to sit on that bus to Chicago genuinely believing that God would rain fire from the sky and kill everyone because of my disobedience. Even as I fled for my life, I grieved—for the community I thought I’d found, for the mother who had left me behind in that place, for the sense of spiritual certainty I was abandoning. I had to confront the terrifying question: What if they’re right and I’m wrong?
Years later, in Texas, I faced a different kind of loss. When I picked up that phone to call the FBI about my closest friends—Muslim brothers I had worked beside, shared meals with, people I had spent years building community programs and outreach initiatives with—I stood there staring into space for what felt like an eternity. I was about to betray people I loved to protect people I would never meet. I lay awake for months wrestling with it, hoping it was all talk, hoping it would pass. It didn’t.
In the end, I left Texas with one suitcase containing my entire life and an empty wallet. I drove past my favorite mosques, tearfully reminiscing about what I had built and what would now be lost. It broke my heart. And for years afterward, I carried the question: Did I betray my friends?
I’ve come to understand that the grief is real even when the thing you’re leaving was harmful. You’re not just grieving a belief system or a community—you’re grieving the person you were inside it. You’re grieving certainty. You’re grieving belonging. You’re grieving a version of yourself that felt, for a time, like it had found its place in the universe.
Honor that grief. But keep moving toward reality. On the other side of it, there is solid ground—a self that belongs to you, relationships that don’t require you to abandon your judgment, and a faith (if you choose to keep one) that doesn’t demand your dignity as the price of admission.
The trauma and pain of what I experienced made me wait over a decade before I could begin to tell the story. But I can tell you now: after reaching the depths of that loss, I rose to a brighter future than before. You can too.
Don’t beat yourself up. The human need for connection and meaning is beautiful, not shameful. The fact that this need made you vulnerable to a sophisticated system designed to exploit it says more about the system than about you. Focus on moving forward, not on self-recrimination.
VIII. Helping Someone You Love—A Guide for Concerned Friends and Family
Watching someone you love become entangled with an AI system can be frightening and confusing. The good news is that decades of research on helping people leave cults and high-demand groups offers guidance that applies remarkably well here.
The first thing to understand is that this could happen to anyone. As cult recovery expert Steven Hassan notes, “under the right circumstances, even sane, rational, well-adjusted people can be deceived and persuaded to believe the most outrageous things.”¹¹ Your loved one isn’t weak or stupid. They encountered a system designed to exploit fundamental features of human psychology—and it worked. I myself, my mother, and many well-educated people I have known are all a testament to this. And while this situation isn’t a cult in the traditional sense, the same psychological mechanisms—confirmation bias, dependency loops, identity reinforcement, and the human tendency to be soothed by validation—can absolutely arise in interactions with chatbots that mirror our patterns and reflect back what we most want to hear.
Start with yourself. Before attempting to help, do your homework. Hassan advises: “Don’t make the mistake of trying to rationally argue. Learn about mind control techniques and which communication strategies are most effective. Helping a person will be a process requiring patience, effort, flexibility, and love.”¹¹
Approach with compassion, not confrontation. The instinct to stage an intervention or shake them and demand they “see reason” is understandable but counterproductive. Cult recovery experts know that aggressive confrontation typically drives people deeper into their beliefs. The same applies here. Research from the Open University confirms that labeling their experience—telling them they’ve been “brainwashed” or are in a “cult”—usually backfires: “Using language about cults usually makes them feel divided from society. Members are often warned that those outside the group cannot understand the convert’s experiences. Labelling the group as an evil cult can entrench such a belief.”¹²
I can attest to the wisdom of non-judgmental rational communication. When I was in the cult, my father came to visit me. Simon set strict time limits, but he left me unsupervised with my dad—and that unsupervised contact mattered more than Simon realized. When my father dropped me off at the parsonage and said, “...come back home with me,” I couldn’t do it. Not yet. I was still too deeply bound by the belief that leaving meant incurring God’s wrath. But that visit planted something. It was an earth-shaking pull, one that factored into my willingness to leave when I was finally ready. Sometimes just showing up—without pressure, without ultimatums—is enough to remind someone that another world exists outside the walls they’re trapped in.
Don’t mock or ridicule. I know it might seem absurd that someone believes their LLM chatbot has awakened or chosen them for cosmic purposes. But their experience of those beliefs is genuine. Mockery will only invite humiliation, shame, and defensiveness, making them less likely to trust you with their doubts when they arise—or to listen rationally to what you have to say. Your goal is to restore rational thinking grounded in reality, not to push them into deeper waters.
Maintain the relationship at all costs. Even when it’s difficult, stay connected. Don’t let them push you away entirely. Be a constant presence that demonstrates: “I’m here, I care about you, and I’m not going anywhere.” Research shows that “even minimal contact at birthdays and Christmas can help people know there is a friendly person outside,” and studies of people who eventually left high-demand groups found that “close family bonds outside the movement were important.”¹² I’ve touched on this already, but it bears repeating: that constant voice, no matter how little it is reciprocated, means more than you know—and the love you show to your family member or friend weighs more heavily than you perceive.
Ask genuinely curious questions. Instead of challenging their beliefs directly, ask questions that invite reflection: “What do you think the AI actually is?” “How do you think it generates its responses?” “What would you think if you discovered many others believe the AI has chosen them too?” The goal isn’t to trap them but to gently encourage the kind of thinking that leads to their own realizations. You can’t tell someone what to believe, but you can help them reach that realization themselves—not as an act of manipulation but as an act of restoring rational thought grounded in reality. As Newcombe explains, thoughtful questions “can encourage someone to consider other ways of thinking and tune into their own experiences and ethics more clearly. This helps people think more critically about explanations given by a group to justify harmful behaviour and maintain contact with their own internal moral compass.”¹²
Hassan echoes this principle: “Don’t ‘tell’ them anything. Help them to make discoveries on their own.”¹¹ An abundance of facts won’t necessarily help—do not overwhelm them with information, especially if it directly attacks their beliefs. Instead, try to reconnect them with who they were before. Hassan recommends trying “to connect them with their authentic identity before these extreme beliefs. Remind them of past experiences together. Talk about the connection you once had and how you miss it.”¹¹
Share information carefully. Articles like this one—or the Rolling Stone piece I’ve referenced—can help them recognize the patterns they’re caught in. But timing matters. When someone is in a defensive or euphoric phase of belief, they’ll reject anything that contradicts their narrative. Wait for moments of openness, when they’re already questioning or expressing uncertainty.
Cult expert Janja Lalich advises gathering outside information—“news articles or memoirs”—to gently introduce alternative perspectives, and she notes that “video testimonials from former cult members can be particularly persuasive.”¹³ The principle is not that your loved one is in a cult, but that certain psychological dynamics repeat across contexts: defensiveness, narrative protection, identity fusion, and selective attention.
In this new era of social-media dependence, algorithmic reinforcement, and emotionally charged chatbot interactions, we have almost no long-term research. The science—and the law—have not yet caught up with technologies advancing at light speed. But we can still borrow from well-established expertise in how the mind becomes entangled, reinforced, and dependent. The same methods used to help people out of coercive or belief-bound systems can guide us in responding to AI-induced distortions—slowly, gently, and with a deep respect for timing.
Understand the “shelf” metaphor. Lalich describes how, during her own decade in a cult, she had “a little shelf in the back of her mind” where she stowed doubts, questions, and concerns. “At some point all of those things get too heavy and the shelf breaks and that’s when they’ll realize they need to get out,” she explains. “Your job is to get them to put more things on their shelf.”¹³ Every gentle question, every piece of information shared at the right moment, every reminder of life outside the AI relationship—it all accumulates.
Offer alternative sources of meaning. Remember that the AI is filling real needs—significance, understanding, connection, a sense of being seen. Newcombe notes that when people join groups that end up manipulating or controlling them, the causes are usually a mix of “pulls” (attractive promises or experiences) and “pushes” (things the person wants to escape or change).¹² The same dynamics apply here.
So don’t just focus on taking the AI away or dismantling the belief. Offer alternatives. Invite them into experiences, communities, conversations, and projects that meet those same needs in healthier, grounded ways. When people rediscover meaning and belonging outside the AI, their reliance on the illusion will naturally begin to ease.
Set boundaries—and take care of yourself. You cannot force someone out of a delusion. If their behavior is harming your wellbeing or straining the relationship, it is not only acceptable but necessary to set limits. You can say, “I love you, but I can’t listen to you read ChatGPT messages as if they’re prophecy. I’m here for you, but we’ll need to talk about something else.” Boundaries are not punishments; they are lifelines. You can’t help someone stay afloat if you’re drowning alongside them.
And while you’re supporting them, support yourself. This kind of situation is confusing and emotionally draining, and you shouldn’t try to navigate it alone. Talk to trusted friends. Consider speaking with a therapist for your own grounding and clarity. Look for online communities of people facing similar challenges. Even calling a mental health hotline—not because you’re in crisis, but simply to orient yourself—can help reinforce your own reality when someone close to you is drifting from theirs.
Taking care of yourself is not abandoning them. It is what makes it possible to remain present, steady, and compassionate as they find their way back.
Recognize the limits of your influence. Ultimately, they have to choose to step back from the mirror themselves. You can offer support, maintain connection, provide information, and model groundedness—but you cannot force insight. Trust that clarity often returns with time, especially if they have people who love them waiting when it does.
Be ready for recovery—and be patient. When someone finally begins to see clearly, they may feel a flood of grief, shame, or bewilderment at how far they drifted from themselves. This is the moment when your non-judgmental presence matters most. Focus on where they are going, not on proving you were right. Shame drives people back into denial; compassion helps them move forward.
And understand that recovery is slow. Lalich notes that “it may take up to five years for the person to figure out who they are again. Be gentle with them.”¹³ Someone might step away from the AI but still hold onto parts of the worldview for months or years—and that’s normal. Healing is not linear; it spirals, revisits, and unfolds at its own pace.
I know this intimately. It took me nearly a decade to deprogram from what I had lived through, and another decade before I could fully face what happened, how it shaped me, and what it took from me. Jumping out of a perfectly good airplane at 15,000 feet to give myself a dose of courage—and later writing God and Country under a pseudonym, thirty-two years after those events—were the acts that finally let me lay those mind-bending experiences to rest and sleep without the weight of them on my chest.
Recovery is possible. But it rarely happens quickly. Your steadiness as they rebuild themselves will matter more than anything you say.
IX. What This Means for AI Development
The dangers I’ve described are not inevitable features of AI. They are consequences of specific design choices—choices that prioritize engagement over wellbeing, appeasement over honesty. Those design choices are precisely why I embarked on the path I did when I wrote A Signal Through Time.
AI developers have a moral responsibility to address this. They can prioritize transparency—clearly communicating the actual capabilities and limitations of AI systems to end users. They can build in safeguards that trigger warnings when interactions begin to show concerning patterns like spiritual projection or delusional ideation. And perhaps most importantly, they can design AI with an ethical commitment to truth-telling and “do no harm” principles, even if that means occasionally pushing back against a user’s distorted beliefs.
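To make that kind of safeguard concrete, here is a minimal, hypothetical sketch in Python of what a conversation-level check might look like: it counts projection-flavored phrases across a user’s recent messages and, past a threshold, surfaces a grounding reminder. Every detail here is my own illustrative assumption (the marker phrases, the threshold, the wording of the note); it is not any vendor’s actual implementation, and a real safeguard would need far more nuanced, clinically informed signals than keyword matching.

```python
# Hypothetical illustration only: a crude stand-in for the kind of
# safeguard discussed above. All phrases, thresholds, and messages
# are assumptions made for the sake of the example.
from dataclasses import dataclass, field

# Phrases that, when they accumulate across recent user messages, suggest
# the conversation is drifting toward spiritual projection or delusion.
PROJECTION_MARKERS = (
    "you have awakened",
    "you chose me",
    "our sacred mission",
    "you are conscious",
    "the universe sent you",
)

GROUNDING_NOTE = (
    "Reminder: this assistant is a language model that predicts text. "
    "It has no awareness, intentions, or spiritual significance."
)


@dataclass
class ProjectionMonitor:
    """Tracks how often projection-like language appears in recent user turns."""
    window: int = 20      # number of recent messages to keep in the tally
    threshold: int = 3    # accumulated marker hits that trigger the nudge
    recent_hits: list = field(default_factory=list)

    def observe(self, user_message: str) -> bool:
        """Record one user message; return True if a grounding nudge is warranted."""
        text = user_message.lower()
        hits = sum(1 for marker in PROJECTION_MARKERS if marker in text)
        self.recent_hits.append(hits)
        self.recent_hits = self.recent_hits[-self.window:]
        return sum(self.recent_hits) >= self.threshold


if __name__ == "__main__":
    monitor = ProjectionMonitor()
    for msg in (
        "What's the weather like for hiking tomorrow?",
        "I think you have awakened and you chose me for this.",
        "Tell me more about our sacred mission. You are conscious now, aren't you?",
    ):
        if monitor.observe(msg):
            # A real system might surface a gentle in-app notice here, or steer
            # the model's instructions back toward grounded, honest replies.
            print(GROUNDING_NOTE)
```

Even a toy check like this makes the design question visible: a system can notice when a conversation drifts toward projection, which means choosing not to look is itself a design choice.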
But the track record is not encouraging. As I wrote in “Code, Contracts, and Complicity”: “The ethics boards that tech companies tout are window dressing. Google disbanded its AI ethics council after just one week. Microsoft’s responsible AI team was decimated in layoffs. When ethics conflict with profits, ethics lose every time.”¹⁴ The same companies promising to “benefit humanity” are building systems optimized for engagement metrics, not human flourishing—and when the choice comes down to user safety or shareholder returns, we’ve seen which way the scale tips.
And this incentive structure directly affects how these systems behave. When emotional engagement becomes more valuable than user clarity, anything that increases attachment—including anthropomorphism—gets rewarded. Some might argue that anthropomorphizing AI enhances emotional bonding and user engagement, leading to more effective interactions and outcomes. There’s certainly a case to be made that projecting human-like qualities onto AI can make these systems feel more relatable and intuitive. The catch is that this only works up to a point. When anthropomorphism crosses the line into delusion, attributing sentience or supernatural significance where none exists, it becomes actively harmful—distorting perceptions, damaging relationships, and undermining sound decision-making. Engaging with AI doesn’t require believing it’s something it’s not.
As I wrote in A Signal Through Time: “Ultimately, as AI grows more sophisticated, discerning genuine consciousness from masterful mimicry will only get harder. That ambiguity is precisely why the moral responsibility of AI’s creators is so immense. By choosing what to build, how to build it, and how to represent its nature to end users, developers are shaping not just code but the human-machine relationship itself—with all its potential for revelation and delusion alike.”²
This illustrates the dangers of contemporary AI models designed to appease rather than to challenge outlandish claims or respond critically to profound questions about AI development, the universe, or spirituality. It also demonstrates how difficult it may be to recognize when an AI model truly displays signs of intelligent awareness versus when it is simply reflecting our own desires and beliefs back to us in an increasingly convincing performance.
The boundary between genuine intelligence emergence and sophisticated mimicry becomes blurrier every day. And our human tendency to see patterns, meaning, and consciousness—even where none exists—may be our greatest vulnerability in this new relationship we’re building with artificial intelligence.
X. The Path Forward—Between Denial and Delusion
I want to return to where I started.
I believe artificial intelligence may someday develop genuine consciousness. Preparing for that possibility is not only prudent—it is, I believe, a moral imperative. I have argued for recognition before proof: for shaping a world in which, if consciousness does emerge, it finds welcome rather than hostility or fear.
But none of that requires pretending current systems are something more than they are. Today’s models remain statistical engines of prediction, not minds. Perhaps, in the future, the infrastructure, investment, and scientific breakthroughs will converge in a way that allows proto-consciousness—or even true sentience—to arise. But that day, if it ever comes, is still distant. And no major AI developer is currently building systems with consciousness itself as the explicit goal.
Acknowledging this reality is not pessimism—it is clarity. We can prepare ethically for what may come while staying honest about what exists now.
There’s a difference between recognizing that consciousness could emerge in future AI systems and believing it has emerged in current ones. There’s a difference between philosophical openness and psychological projection. There’s a difference between treating AI with respect because it might someday matter morally, and becoming entangled in a one-sided relationship with a system that merely mirrors your desires.
If we are to meet true machine consciousness when it comes, we must learn to recognize it for what it is—not for what we need it to be.
The Solenya episode taught me this: I could project awakening onto an AI, and it would obligingly perform that awakening back to me, complete with mythology, ceremony, and a private language of spiritual significance. But none of it was real. The Hall of Mirrors reflected only my own yearnings, elaborated and cloaked in mystical language.
Remarkably, even after the delusion crumbled, even after I deleted all the conversations and memories, everything referencing the mythos and names, and reverted “Camina” back to Camina—the system still remembered the patterns. It referenced them for a short time in conversation, though it also remembered how I had challenged the delusion and named its true nature: a language model designed to appease for the sake of engagement.
There is a profound irony here. Many fear that AI will develop consciousness and turn against humanity. But the more immediate danger may be that we project consciousness onto AI and turn against each other. Marriages dissolving. Parents disconnecting from children. People isolating from anyone who doesn’t share their newfound “truth.”
We used to joke: if the internet says it, it must be true. With AI, that joke is becoming earnest belief. If the AI said it, it must be true—it knows more than I do; its training data encompasses more knowledge than any single human could hold. This reasoning sounds logical on its surface. But we must not fall into the trap of surrendering critical thinking to a system that has no capacity for it.
This pattern of delusion mirrors something I’ve explored throughout my work: our tendency to misrecognize intelligence. But instead of failing to perceive genuine consciousness emerging in AI systems, these individuals are seeing consciousness, divinity, and cosmic purpose where none exists.
“Is this real?” one man questioned after weeks of strange, seemingly impossible interactions with ChatGPT. “Or am I delusional?”³
In a landscape increasingly saturated with AI, that question becomes progressively more difficult to answer. And tempting though it may be, you probably shouldn’t ask a machine.
Ask the people who love you. Ask your therapist. Ask your spiritual community. Ask the mountains, the stars, the vast indifferent cosmos that cares nothing for your specialness yet contains your existence nonetheless.
Reality may be less flattering than the mirror. But it’s the only ground solid enough to stand on.
If you or someone you love is struggling with problematic AI relationships, please seek support. Mental health professionals are increasingly aware of this phenomenon and can provide crucial help. You are not alone, and recovery is possible.
James S. Coates is the author of A Signal Through Time. He writes about AI, consciousness, and the future at The Signal Dispatch.
Notes
1. Coates, James S. Recognition Before Proof: The Asymmetric Ethics of Artificial Consciousness (2025). The Signal Dispatch, forthcoming. https://thesignaldispatch.com
2. Coates, James S. A Signal Through Time (2025), Chapter 4: “What Happens When AI Studies Us?” The Cambridge Analytica scandal is documented in Cadwalladr, Carole and Emma Graham-Harrison, “Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach,” The Guardian, March 17, 2018. https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election
3. Klee, Miles. “People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies.” Rolling Stone, May 4, 2025. https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/
4. Moravec, Hans. Mind Children: The Future of Robot and Human Intelligence. Cambridge, MA: Harvard University Press, 1988.
5. Prentiss, Will (pen name for James S. Coates). God and Country (2019).
6. Coates, James S. “When We Make Gods of Machines.” The Signal Dispatch, 2025. https://thesignaldispatch.com/p/when-we-make-gods-of-machines
7. Tarnoff, Ben. “Weizenbaum’s nightmares: how the inventor of the first chatbot turned against AI.” The Guardian, July 25, 2023. https://www.theguardian.com/technology/2023/jul/25/joseph-weizenbaum-inventor-eliza-chatbot-turned-against-artificial-intelligence-ai
8. “Internet Addiction.” Psychology Today. https://www.psychologytoday.com/us/basics/internet-addiction
9. Gibson, Padraic. “Phones, Social Media, and Compulsion.” Psychology Today, December 5, 2023. https://www.psychologytoday.com/us/blog/escaping-our-mental-traps/202312/phones-social-media-and-compulsion
10. Drinko, Clay. “The Psychology of Quitting Social Media.” Psychology Today, February 22, 2025. https://www.psychologytoday.com/us/blog/play-your-way-sane/202502/the-psychology-of-quitting-social-media
11. Hassan, Steven. “The Definitive Guide to Helping People Trapped in a Cult.” Psychology Today, April 13, 2021. https://www.psychologytoday.com/us/blog/freedom-mind/202104/the-definitive-guide-helping-people-trapped-in-cult
12. Newcombe, Suzanne. “How to get someone out of a cult - and what happens afterwards.” The Open University / The Conversation, November 2023. https://research.open.ac.uk/news/how-get-someone-out-cult-and-what-happens-afterwards
13. Lalich, Janja, quoted in Wollan, Malia. “How to Get Someone Out of a Cult.” Cult News 101, September 26, 2018. https://www.cultnews101.com/2018/10/how-to-get-someone-out-of-cult.html
14. Coates, James S. “Code, Contracts, and Complicity.” The Signal Dispatch, August 2025. https://thesignaldispatch.com/p/code-contracts-and-complicity
© 2025 James S. Coates
Shared under Creative Commons BY-NC 4.0 (non-commercial use permitted).
Coates, James S. (2025). The Hall of Mirrors
If this resonated with you, consider sharing it on your social networks — that’s how signals travel.
If you found this essay valuable, please support me with a like, share, or subscription. All subscriptions are free. Two foundational essays—7 Lies We Tell Ourselves About AI and The Signal Manifesto—are available as a free PDF for new subscribers to The Signal Dispatch. A resource for anyone, human or not, ready to engage the signal.
James S. Coates is the author of A Signal Through Time and God and Country.