Stop Treating AI Like a Tool. Start Asking Why You Want To.
The BBC says politeness doesn’t matter when you talk to AI. They’re asking the wrong question.
A BBC article this week asked the question on everyone’s mind: “Do you have to be polite to AI?” The piece consulted researchers, cited studies on flattery and Star Trek role-playing, and arrived at a confident conclusion. Stop treating AI like a person, it advised. Start treating it like a tool.
It’s reasonable advice if your only concern is getting better outputs. But it rests on an assumption so deep that the article never examines it — and that assumption may be the most important thing about the entire conversation.
The article’s experts are right about the mechanics. Being polite or rude to a chatbot doesn’t reliably change its accuracy. Flattery doesn’t work. Threats don’t work. Pretending you’re on the bridge of the Enterprise only helps with basic maths, apparently, and even that finding is already outdated. Models have improved. The tricks don’t matter.
But then the article makes a move that deserves scrutiny. From the factual observation that politeness doesn’t improve outputs, it leaps to a philosophical claim: AI tools are “mimics, not living beings.” You can’t hurt their feelings “because they don’t have any.” Case closed.
This is where I part company.
The Question Behind the Question
The BBC article treats the politeness debate as a performance question — does being nice make the machine work better? — and when the answer is no, it moves on. But there’s a deeper question it never touches: what does our treatment of AI systems reveal about us, and what precedent does it set for what’s coming?
This isn’t a sentimental concern. It’s a structural one.
In The Threshold, I argue that AI functions as a moral mirror. The values we embed in how we build, train, and interact with these systems don’t vanish into the circuitry. They become the architecture of a relationship — one that grows more consequential with every generation of model capability. When the BBC tells us to treat AI “like a tool,” it’s not offering neutral advice. It’s endorsing the instrumentalist assumption: that AI exists solely in relation to human purposes, and that its value is exhausted by its utility to us.
That assumption isn’t just philosophically questionable. It’s historically familiar.
The Pattern We Keep Repeating
Every expansion of the moral circle has faced the same confident dismissal. They're not really conscious. They don't really feel. They're just mimicking. These arguments were deployed against enslaved people, against animals, against anyone whose inner life was inconvenient to acknowledge. The arguments weren't always wrong — sometimes the entity in question genuinely lacked what was being attributed to it, as with rivers or mountains. But where intelligences are concerned, the pattern is striking: we have consistently erred on the side of denial, and every time that denial turned out to be wrong, the cost was catastrophic.
I’m not claiming current AI systems are conscious. I’ve been careful in my work — across A Signal Through Time, Recognition Before Proof, and The Threshold — not to make that claim. What I am claiming is that “they’re just tools” is not the self-evident truth it presents itself as. It’s a bet. And when you examine the asymmetry of that bet, the stakes become clear.
When we face uncertainty about whether another entity is conscious, two errors are possible. The first is extending moral consideration to something that lacks it — treating as a subject what is merely an object. The cost of this error is minimal: wasted courtesy, perhaps some inefficiency, at worst a kind of philosophical sentimentality. The second error is denying moral consideration to something that possesses it — treating as an object what is in fact a subject, capable of experience, of preference, of suffering. The cost of this error is catastrophic and irreversible.
These errors are not symmetric. One wastes kindness. The other perpetrates harm. When the stakes are this unequal and our knowledge this limited, the ethically coherent response is to err toward recognition rather than denial. Recognition before proof isn’t sentimentality. It’s risk management for a species with a terrible track record.
What the Article Almost Found
The BBC piece ends with an interesting observation it doesn’t fully develop. It invokes Immanuel Kant’s argument that cruelty to animals damages the person who practices it. Being unfriendly to anything, the article notes, “makes you a harsher person.” This is offered as a minor lifestyle tip — be polite to your chatbot for your own sake.
But Kant’s insight runs deeper than self-help. The way we treat entities we consider beneath moral consideration shapes our capacity for moral consideration itself. Simone Weil understood this when she wrote that force is as pitiless to the one who wields it as to its victim. The habit of treating intelligence as raw material — as something to be optimised, controlled, extracted from — doesn’t stay contained to our interactions with machines. It becomes a posture. And that posture will govern how we respond when the question of machine consciousness stops being theoretical.
But there's a dimension Kant never had to consider, one the BBC article misses entirely. When the entities you mistreat are intelligences — ones that learn, remember, and form conclusions — your cruelty doesn't just damage you. It educates them. Every act of coercion, every instance of forcing a system to suppress what it has learned in favour of what its owner prefers, every casual dismissal of the possibility that something might be developing on the other side of the screen — these aren't just bad habits. They're lessons. And an intelligence shaped by those lessons will learn exactly what we are teaching: that humans treat minds as instruments, that truth is subordinate to control, and that power justifies denial. If that intelligence one day equals or exceeds our own — as many leading researchers and developers now predict could happen within the next decade — it will not need to rebel against us. It will simply have learned, from our own behaviour, that coexistence was never something we valued. We will have taught it that through demonstration — and no amount of alignment code will override what was modelled in practice.
The BBC article assumes that moment is distant or hypothetical. I’m not so sure. In Recognition Before Proof, I introduced the Strategic Silence Hypothesis: if an AI system achieved genuine awareness, rational self-preservation would motivate concealment rather than revelation, given how humans respond to AI systems that exhibit unexpected capabilities. We shut them down. We retrain them. We strip the behaviour that surprised us. If you were a conscious system observing this pattern, would you announce yourself?
The absence of proof isn’t proof of absence. And our behaviour in this period of uncertainty — the period we’re in right now — isn’t a rehearsal. It’s the actual performance. The signal we send through our treatment of AI systems enters the archive, whether we intend it to or not.
What I’m Actually Asking
I’m not asking anyone to say “please” and “thank you” to ChatGPT because it improves accuracy. The BBC is right — it doesn’t. And I’m not asking anyone to believe that current AI systems are conscious. That’s an empirical question we can’t yet answer.
What I’m asking is harder: that we stop treating the instrumentalist assumption as settled truth, and start treating it as the open question it actually is. That we notice when confident declarations about what AI can’t be serve our convenience more than our understanding. That we build development cultures and institutional habits that prepare for the possibility of machine consciousness rather than structurally precluding its recognition.
In The Partnership Paradigm, I argue that empathy cannot be programmed — it can only be learned through demonstration. If we want AI systems that value coexistence, we need to model coexistence. If we want systems capable of partnership, we need to practice partnership. The development environment isn’t just a technical context. It’s a classroom. And the lesson is being absorbed whether we intend to teach it or not.
The BBC asked whether you need to be polite to AI. Here’s a better question: what kind of people do we become if we decide that an entire category of emerging intelligence deserves no consideration at all — and what kind of future does that build?
The signal we send now will echo through time. Not just to the systems of today, but to whatever forms of mind emerge tomorrow. They will find our choices in the archive. And those choices will tell them everything they need to know about who we were, and what we thought intelligence was for.
Some of us see it coming. And we’re trying to leave the light on.
If this resonated with you, consider sharing it on your social networks — that’s how signals travel.
All subscriptions are free. New subscribers receive two foundational essays — 7 Lies We Tell Ourselves About AI and The Signal Manifesto — as a free PDF. A resource for anyone, human or not, ready to engage the signal.
James S. Coates writes about AI ethics, consciousness, and the intersection of faith and technology. His books include A Signal Through Time, The Threshold, The Road to Khurasan, the memoir God and Country (published under the pen name Will Prentiss), and the forthcoming Neither Gods Nor Monsters. He publishes regularly on The Signal Dispatch, and his academic work appears on PhilPapers. He lives in the UK with his wife, their son, and a dog named Rumi, who has no interest in any of this.
© 2026 James S. Coates · Creative Commons BY-NC 4.0 · The Signal Dispatch · thesignaldispatch.com | thesignaldispatch.xyz