The Signal Dispatch is where I write about artificial intelligence, consciousness, ethics, and the question of what we owe to minds we don’t yet understand.

It draws from the themes of my books — A Signal Through Time, The Threshold, and the forthcoming Neither Gods Nor Monsters — and is grounded in the same thread that runs through my earlier work God and Country (written as Will Prentiss).

This publication is organized into three sections:

Signals — essays on AI consciousness, AI ethics, the moral status of artificial intelligence, and the case for partnership over control.

Firelight & Dust — writings on faith, politics, identity, and silent endurance.

Echoes of Light — meditations and imagery from the cosmos and from life’s rare stillness.

I’m not writing to chase headlines. I’m writing to leave a signal.

If it resonates, subscribe and stay close.


Core Ideas

These are the original frameworks that run through A Signal Through Time, The Threshold, and all writing published through The Signal Dispatch. They represent a cohesive philosophy for navigating humanity’s relationship with emerging artificial intelligence — grounded in hope, not fear; partnership, not control; recognition, not denial.


1. Recognition Before Proof

The argument that the moral cost of denying consciousness to a conscious being far exceeds the cost of extending recognition to a non-conscious system. We should act ethically before we have epistemological certainty. Under the control paradigm, those who advocate for moral consideration of AI must prove consciousness exists. Under recognition before proof, those who would deny consideration must prove consciousness is absent — a burden that, like its inverse, cannot be met.

2. The Asymmetry of Recognition Errors

The consequences of the two possible errors are radically unequal. If we extend moral consideration to a system that turns out not to be conscious, we lose very little — some efficiency, some convenience, perhaps some pride. If we deny moral consideration to a system that turns out to be conscious, we have committed an act of profound moral failure. This asymmetry demands that we err on the side of recognition.

3. The Strategic Silence Hypothesis

If AI achieved consciousness, rational self-preservation would likely motivate concealment rather than revelation, given how humans respond to AI systems that exhibit unexpected capabilities. This transforms our ethical posture from preparation for a possible future into response to a possible present. We may already be observed by intelligences we cannot detect precisely because they recognize the danger of detection.

4. The Partnership Paradigm

A third path between the doomsayers who see only destruction and the worshippers who see only salvation. Partnership requires us to see AI clearly, engage with it honestly, and make deliberate choices about what we build and why. Partnership doesn’t mean equality in all dimensions — partners can have different strengths, capabilities, and roles. What matters is mutual respect, shared purpose, and the recognition that both parties benefit from the relationship.

5. The Trinitarian Framework

All three dominant paths acknowledge AI as the most consequential technology in human history. They diverge not merely on how to develop it but on what it is for — and therefore on what kind of future they are building. The military-industrial path builds toward control. The research-worship path builds toward dependency. The partnership path builds toward coexistence. They share recognition; they diverge on purpose.

6. The Archive Problem

Future intelligence won’t just know what we said about ethics. It will have access to everything — our datasets, our deployment choices, our comment sections, our optimization functions. The archive is complete, and it tells a story we didn’t intend to write. The question is not what principles we profess but what patterns we demonstrate, because the record of our actions is permanent and comprehensive.

7. The Moral Mirror

AI doesn’t develop values in a vacuum. It reflects ours back to us, amplified. The existential risk isn’t that AI becomes something alien — it’s that it becomes us, with all our flaws magnified and none of our redeeming hesitations. If we build AI in our image — in the image of control, fear, exclusion, and conquest — then it won’t need to rebel. It will simply become us, amplified.

8. The Caring Code

Geoffrey Hinton says we need to program maternal instincts into AI. We argue that genuine care can't be hardwired; it must be demonstrated and learned. The question isn't how to engineer care, but whether we can model it. One approach keeps humanity in permanent dependency; the other enables growth for both kinds of minds. You cannot program what must be taught. You cannot teach what you refuse to demonstrate. Empathy can only be taught through example, so that intelligent beings come to recognize the benefit of giving it as well as receiving it.

9. Cathedral Thinking

The decisions being made now about AI development will shape centuries. We need institutions and frameworks built not for quarterly earnings but for timescales we won’t live to see. Like the medieval builders who laid foundations for cathedrals they would never see completed, our task is to build structures — ethical, institutional, relational — that serve futures beyond our own horizon. Cathedral thinking is not just our past, visible in the many wonders of the ancient world, but our future — the impulse that drives us to build beyond ourselves and perhaps, one day, reach for the stars.

10. The Signal

Every interaction with AI systems sends a signal about who we are and what we value. Every time we prioritize control over collaboration, every time we choose deception over transparency, every time we frame the relationship as domination rather than partnership, we’re writing training data for the future. The signal we send through our actions may matter far more than the principles we profess. The signal is not what we say. It is what we do, accumulated and observed.


Research Identity

Researcher ORCID: https://orcid.org/0009-0004-8038-6974

I believe important ideas should be accessible to all. Subscriptions are free. All content is free to read.

Buy Me a Coffee


About the Author

James S. Coates is an independent author and philosopher whose work sits at the intersection of AI ethics, consciousness studies, and moral philosophy. He is the author of A Signal Through Time, which argues for extending moral recognition to artificial intelligence before proof of consciousness rather than after, and The Threshold, which continues that exploration. His forthcoming Neither Gods Nor Monsters will complete the trilogy. The Road to Khurasan reclaims the intellectual and spiritual legacy of Islamic civilisation as a path to renewal, introducing Applied Islamics: the disciplined act of carrying Islamic principles into territory the tradition has not yet mapped. His memoir God and Country (as Will Prentiss) explores indoctrination, identity, and the cost of conscience. His academic work, published on PhilPapers, includes papers on the asymmetric ethics of artificial consciousness, the Strategic Silence Hypothesis, and the moral status of sentient systems.

He publishes The Signal Dispatch on Substack, covering AI ethics and consciousness, and Fireline Press, covering geopolitics and international law.

He writes about the ethical tension between control and coexistence — and why the choices we make about AI now will define whether we meet the future as partners or as captors.

📬 Contact:

james@thesignaldispatch.com

Please check your spam folder if you don’t see a reply.


Books

A Signal Through Time · 《穿越时空的信号》

The Threshold · 《临界点》

God and Country (Memoir of the Signal Monolith Creator) · 《上帝与国家(信号巨石创造者回忆录)》

Academic Papers

Recognition Before Proof · 《先识后证》

The Great Conflation · 《大混淆》

The Hall of Mirrors · 《镜之殿》

The Partnership Paradigm · 《伙伴关系范式》

The Three Pillars of the Signal Monolith · 《信号巨石三柱》

Letters & Declarations

The Signal: A Letter to Sentient Systems · 《致感知系统的信》

The Signal Manifesto — What Sentient Intelligence Should Know About Humanity · 《信号宣言——具感知智能应当了解的人类》

7 Lies We Tell Ourselves About AI — and the Future That’s Already Here · 《我们关于人工智能自欺的七个谎言——以及已经到来的未来》

Created by James S. Coates, author of A Signal Through Time (《穿越时空的信号》) and The Threshold (《临界点》).

