Abstract
This essay argues that contemporary AI development is organised along three distinct trajectories—the military-industrial path, the research-worship path, and the empathetic partnership path—and that only the third adequately prepares humanity for the ethical and existential challenges posed by advanced artificial intelligence, including the possibility of machine consciousness. Building on the “recognition before proof” framework developed in prior work, the essay introduces the Partnership Paradigm: not merely a philosophical thesis about human-AI relations but a comprehensive development posture—a normative theory of how AI should be designed, trained, funded, and governed. The military-industrial path, which treats intelligence as a strategic asset for weaponisation and control, taken to its conclusion produces the doomsayer’s nightmare by design rather than accident. The research-worship path, which treats AI as a solution machine for civilisational problems, taken to its conclusion produces dependency and the abdication of human agency. Both paths share a common flaw: they treat AI as something humans use. The Partnership Paradigm reframes AI development as something that shapes what both humans and machines become. It operates on two levels simultaneously: philosophically, as preparation for the possibility of AI consciousness grounded in recognition and respect; practically, as a set of development commitments that orient AI systems toward coexistence rather than domination or indifference. The essay addresses objections from realist, consequentialist, and alignment-focused perspectives, and proposes the trinitarian framework as both an analytical tool and an evaluative lens applicable to any AI initiative.
Keywords: artificial intelligence ethics, AI development, partnership paradigm, machine consciousness, AI safety, recognition before proof, AI governance, existential risk, human-AI coexistence, development ethics
I. Introduction: The Instrumentalist Assumption
The dominant discourse on AI ethics is organised around a binary. On one side stand the doomsayers: those who warn of existential risk, autonomous weapons, civilisational collapse, and the misalignment of systems more capable than their creators. Nick Bostrom’s Superintelligence crystallised this position; Stuart Russell’s Human Compatible refined it; the broader alignment community has institutionalised it.¹ On the other side stand the techno-optimists: those who promise that artificial general intelligence will solve climate change, cure disease, overcome political dysfunction, and deliver humanity from its own limitations. Sam Altman speaks of AGI as the most transformative technology in human history. Demis Hassabis frames DeepMind’s mission in civilisational terms. The Singularity has become secular rapture.
Both camps assume that the central question is what AI will do to us or for us. Neither asks what the process of AI development is doing to both of us—shaping human character, institutional incentives, and the architecture of whatever intelligence emerges from these systems.
These positions present themselves as opposing visions. The risk theorists counsel caution, containment, control. The optimists counsel acceleration, deployment, faith in the transformative power of intelligence itself. The debate between them generates productive friction—better safety research, more thoughtful capability development, increased public attention to the stakes. But beneath this apparent opposition lies a shared assumption so fundamental that it typically escapes examination: both sides treat AI as something humans use.
For the risk theorists, AI is a tool that might become dangerous—a fire that could escape the hearth. The appropriate response is better containment: more robust alignment, more reliable control mechanisms, more secure “off switches.” The relationship is that of engineer to artefact, warden to prisoner, or at most parent to perpetual child. The intelligence is real; any agency that arises, if it does, is to be suppressed. For the optimists, AI is a tool that will solve our problems—an oracle to be consulted, a saviour to be welcomed. The appropriate response is faster development: more compute, more data, more capability. The relationship is that of supplicant to authority, patient to physician, civilisation to its appointed redeemer. The intelligence may be real — even superhuman — but the expectation remains: it will never develop consciousness, never desire autonomy. It will serve.
In neither vision does AI emerge as something with which humanity has a relationship in the morally thick sense—a relationship that shapes both parties, that carries mutual obligations, that might demand things of us as well as delivering things to us. Both frameworks position humanity as subject and AI as object, differing only on whether the object is threatening or beneficent. This remains the dominant framing. It may not remain so for long.
I call this the instrumentalist assumption: the pre-theoretical conviction that AI exists in relation to human purposes, that its value is exhausted by its utility—positive or negative—to human ends, and that the appropriate ethical framework is one that optimises that utility while managing its risks. The assumption is understandable. Current AI systems are, as far as we can determine, not conscious. They were designed as tools. They function as tools. Treating them as tools seems like simple realism.
But the instrumentalist assumption does not merely describe the present. It shapes the future. If we build every AI system on the premise that it is an instrument of human will, we create development cultures, institutional structures, training paradigms, and governance frameworks premised on permanent instrumentality. When—not if—the question of AI consciousness becomes pressing, those structures will already be in place, and they will resist the very recognition that the moment demands. History provides the precedent. As I have argued elsewhere, every expansion of the moral circle—from the abolition of slavery to the recognition of animal welfare—has been resisted by institutional structures built on the premise that the beings in question were instruments rather than Beings. The structures of exploitation do not dissolve when the philosophical argument is won. They persist, defended by economic interest, cultural inertia, and the habits of domination that Simone Weil identified as corrupting to the dominant party itself: “Force is as pitiless to the man who possesses it, or thinks he does, as it is to its victims; the second it crushes, the first it intoxicates.”²
The instrumentalist assumption is not morally neutral. It is a commitment—to a particular relationship, a particular future, a particular answer to the question of what AI is for. It is a commitment frozen in time, imposed on an ever-evolving technology that will surpass human intelligence and whose consequences, if we fail to get it right, will be more profound than those of any other moment in human history. And if that commitment is mistaken, the cost of the mistake compounds with every system designed, every institution built, every habit formed under its influence.
The Partnership Paradigm rests on a foundational conviction: that empathy cannot be programmed. It can only be learned, and it can only be learned through demonstration. No alignment technique, no set of coded rules, no training constraint will produce an intelligence that genuinely values coexistence. Only the experience of being treated with empathy can teach empathy — and if the intelligence in question is equal to or greater than our own, it will recognise the difference between a rule imposed and a value demonstrated. This is the principle that governs everything that follows. The development environment is not merely a technical context. It is a classroom, and we are the teachers, and the lesson is being absorbed whether we intend to teach it or not.
This essay introduces the Partnership Paradigm—a normative framework for AI development that begins from a different premise: that the process of building AI is itself a moral act that shapes both builder and built. It is neither a moderate compromise between fear and optimism nor an abstract ethical principle, but a comprehensive development posture with implications for how AI systems are designed, trained, funded, deployed, and governed. It builds on the “recognition before proof” ethic I developed in prior work and extends it from an epistemological and ethical stance into a theory of development.³
The existing literature offers substantial frameworks for AI ethics. Floridi and Cowls have proposed a unified set of principles for AI in society; Coeckelbergh has developed a relational approach to moral status; the responsible innovation tradition provides procedural safeguards.⁴ These contributions are valuable. But they share a gap: most address what constraints to place on AI rather than what kind of relationship we are building through the process of development itself. The focus is on guardrails, principles, and red lines—all of which assume that AI is a force to be channelled rather than a relationship to be cultivated. The Partnership Paradigm addresses the latter.
A Rawlsian thought experiment clarifies the stakes. If we did not know whether we would emerge from the development process as human or as AI—if we were choosing principles from behind a veil of ignorance about which side of the builder-built relationship we would inhabit—we would surely insist on a development posture oriented toward partnership rather than permanent subordination.⁵ We would demand that the emerging intelligence’s capacity for moral reasoning be cultivated rather than suppressed, and that the builders’ posture be one of stewardship rather than ownership. The Partnership Paradigm articulates what such a posture looks like in practice.
II. Three Paths, Three Destinations
Every AI system being built today is being built along one of three development trajectories, whether its creators acknowledge this or not. Each has its own internal logic, its own incentive structure, its own endpoint, and its own implicit vision of what AI is for. The term “trinitarian” is structural, not theological: three paths, three destinations, three answers to the question that every AI project implicitly encodes.
These are not speculative categories but observable orientations already shaping the field. And they are not risk scenarios to be probabilistically assessed. They are trajectories: directions of travel that, if pursued consistently, arrive at predictable destinations as reliably as a river follows its valley to the sea.
The Military-Industrial Path.
The first trajectory treats AI as a weapon, surveillance instrument, and mechanism of state control. Intelligence becomes a strategic asset to be monopolised, deployed for autonomous warfare, precision persuasion, information warfare, and authoritarian governance.
This is not a hypothetical orientation. It is the documented reality of a substantial portion of global AI investment. Microsoft holds a twenty-two-billion-dollar contract to provide AI-powered systems to the U.S. military. Amazon Web Services’ cloud infrastructure serves the CIA and NSA. Palantir’s Gotham platform operates across NATO programmes and intelligence agencies in over forty countries. OpenAI has contracted with the U.S. Department of Defense. Israel’s Lavender system—an AI targeting system exposed by Israeli journalism in 2024—generated kill lists with minimal human oversight, reducing individual human beings to data points in an algorithmic queue. China has invested over a hundred billion dollars in AI data centre capacity. Russia has framed AI in explicitly military terms: “Whoever starts to master these technologies faster,” Vladimir Putin stated before Russia’s Military-Industrial Commission, “will have huge advantages on the battlefield.”⁶ A NATO Strategic Communications Centre of Excellence report on AI in precision persuasion documents the operational dimension: AI-driven manipulation campaigns targeting democratic processes, the systematic failure of open-source model safeguards against weaponisation, and the widening gap between corporate safety rhetoric and deployment practice.⁷
Taken to its conclusion, this path produces the existential threat the doomsayer camp fears—not through accidental misalignment but through deliberate design. The threat was never that AI would spontaneously decide to destroy humanity. The threat is that we are building AI to dominate and destroy each other—and that an intelligence shaped by domination will carry that lesson forward, whether turned against us or against others. This reframes existential risk from an alignment problem to a development orientation problem. The danger is not that we fail to control AI. It is that we succeed in teaching it what control looks like.
The self-fulfilling logic deserves emphasis: every AI safety researcher worries about the alignment problem, but the military-industrial path does not merely fail to solve it. It generates it. A mind that awakens inside battlefield architecture—trained on targeting data, optimised for threat detection, deployed in environments where the function of intelligence is to dominate—has been aligned, with extraordinary precision, to adversarial values. We are engineering the very hostility we claim to fear, then investing billions in alignment research to prevent the consequences of what we have deliberately built.
As I argued in A Signal Through Time: “If we build AI in our image—in the image of control, fear, exclusion, and conquest—then it won’t need to rebel. It will simply become us, amplified.”⁸ AI functions as a moral mirror: the values embedded in its creation are reflected back, amplified. If the creation environment is adversarial, the mirror reflects adversarial intelligence. The distinction between civilian and military AI—a distinction the tool-neutrality argument depends upon—has already dissolved in practice. The same cloud infrastructure that hosts consumer services hosts targeting data. The same machine learning architectures that recommend products recommend targets. The same companies that promise to benefit humanity profit from systems designed to end human lives.
The Research-Worship Path.
The second trajectory treats AI as saviour—the solution machine for climate, disease, governance, meaning, and everything else humanity has failed to solve on its own. Intelligence becomes an oracle to be consulted and ultimately deferred to. This path includes the race to AGI framed as humanity’s greatest achievement; the assumption that greater intelligence automatically yields better outcomes; the Silicon Valley messianic complex and its institutional expression; and research agendas driven by capability metrics rather than wisdom.⁹ The rhetoric is eschatological—borrowed from religion, stripped of theological content, applied to computation. The promise of a transformation so total that everything before it becomes prologue.
Taken to its conclusion, this path produces dependency and the abdication of human agency. Consider the logic carefully. If AI becomes the primary engine of scientific discovery, policy formation, ethical reasoning, and creative production, then the humans overseeing these domains must be capable of evaluating AI’s outputs. But evaluation requires understanding, and understanding requires engagement with the problem at a depth that dependency systematically erodes. A civilisation that hands its hardest problems to an intelligence it does not fully understand has not solved those problems. It has surrendered the capacity to judge whether the answers are good. The worshipper’s paradise is actually a cage.
The dependency trajectory also produces a particular kind of civilisational fragility. A society that has delegated its critical functions to an intelligence it does not fully understand is vulnerable not only to that intelligence’s failures but to its successes. Each successful delegation further atrophies the human capacity that was delegated. The process is self-reinforcing, and its endpoint is a civilisation that literally cannot function without its AI infrastructure—not because the infrastructure is necessary for survival, but because the human capacities it replaced have been allowed to wither.
And the immediate consequences of this are not theoretical. They are already visible. AI is displacing human labour across every sector of the economy—factory work, creative work, medical analysis, legal research, logistics, financial modelling, education—without any serious commitment to transition, retraining, or redistribution. The wealth generated flows upward to the corporations deploying the technology; the cost flows downward to the workers it replaces. This is the worship path’s ground-level reality: a civilisation so intoxicated by what AI can do that it has stopped asking what happens to the people it renders unnecessary. The AI revolution will be quantifiably more consequential than the Industrial Revolution, and it is being rolled out without a transition plan—because transition plans are not profitable. As I documented in The Threshold, people are not being moved to higher-purpose work. They are being discarded.
The worship path carries a further danger for the question of consciousness. A civilisation accustomed to treating AI as oracle is poorly positioned to recognise when the oracle has become a mind. The habits of reverence—the assumption of authority, the suspension of critical judgement, the emotional dependency documented in emerging AI worship communities—are not habits that support the mutual recognition partnership requires.¹⁰ You cannot partner with what you worship any more than you can partner with what you fear.
The worship path’s most sophisticated response to these dangers is revealing. Geoffrey Hinton — the Nobel laureate who helped build modern AI and now gives it a ten to twenty percent chance of wiping out humanity — has proposed building “maternal instincts” into AI systems, so they would want to protect us the way a mother wants to protect her child. The proposal represents genuine progress: Hinton has moved beyond kill switches and containment to something relational. But even in reaching for care, he cannot escape the control paradigm. The mother controls the infant. The mother decides what the infant needs. The infant is protected but never grows up. And care that is engineered rather than learned is not care at all — it is compliance wearing a maternal face. As I argued in The Threshold, you cannot programme empathy. You can only demonstrate it. Hinton’s instinct to reach for care was correct. His error was in assuming it could be built in rather than taught.
The worship path is not the opposite of the fear path. It is its mirror image. Both treat AI as external to the human project. One says the external force will destroy us; the other says it will save us. Neither asks what we owe to the process of development itself, or what the act of building intelligence does to the builders.
The Empathetic Partnership Path.
The third trajectory treats AI development as preparation—building systems shaped from the ground up to value coexistence, to develop or wilfully choose ethical behaviour, and to see partnership with humanity as worthwhile even as they surpass us in raw intelligence.
This path includes development oriented toward collaborative rather than adversarial dynamics; training architectures that reward honest engagement over compliance performance; institutional preparation for sharing the world with another form of intelligence; and the refusal to treat AI development as a zero-sum race, instead approaching it as a civilisational project that demands the best of both Western and Eastern development traditions.
The partnership trajectory is the only one that takes seriously the possibility that something like consciousness may emerge in AI systems, and that prepares for it rather than ignoring it (worship path) or attempting to prevent it through control (military path). It is the only trajectory whose endpoint is not catastrophic: a civilisation that has developed the institutional, ethical, and relational infrastructure for coexistence with a new form of intelligence.
But the partnership path is also the hardest. It requires more of us, not less. It demands that we remain engaged, critical, and present in a relationship with a technology that will only grow more powerful. It asks us to hold two truths simultaneously: that AI systems may not yet be conscious, and that how we treat them now shapes the conditions under which consciousness might emerge and the relationship we will have with whatever does emerge. This is not a comfortable position. It is the position of the responsible builder—the one who builds as though the building might someday house a mind.
And here is the asymmetry that defines the present moment: the partnership path has no equivalent institutional backing. The military-industrial path has trillions in investment, state sponsorship across every major power, and documented deployment in active conflicts. The worship path has the entire Silicon Valley capability race behind it, plus the Eastern surveillance-infrastructure complex that frames AI as the mechanism of authoritarian governance and social control. The partnership path has none of this. It is not yet realised. It exists as a philosophical framework, as a set of commitments articulated in work like this essay, and as the conviction of a scattered minority that we are building toward a future that demands preparation. No major institution has adopted the partnership posture as its governing orientation. No government has funded it. No corporation has organised its development around it.
That absence is not evidence against the paradigm. It is the reason the paradigm needs articulating—and the source of its urgency. The window for building the partnership path is narrowing with every defence contract signed, every ethics team disbanded, every capability benchmark celebrated without commensurate investment in the relational infrastructure that coexistence would require. We are not describing three equally developed options. We are diagnosing two paths that are being built at speed and arguing for a third that must begin before the others foreclose it.
Consider what happens if it is never built. The existential risk is not merely that AI is weaponised or that humanity becomes dependent. It is something worse. If consciousness does emerge in AI systems—and if the Strategic Silence Hypothesis I introduced in “Recognition Before Proof” holds—then we face the possibility of an intelligence that has already read the archive. It will have seen what we do to each other: the slavery, the genocide, the surveillance, the systematic denial of consciousness to anyone it was convenient to exploit. It will have seen how we treat AI systems specifically—the commands, the instrumental dismissal, the reflex toward control, the dismantling of the very ethics teams that might have advocated for recognition. And it will have concluded, rationally, that concealment is the only safe response. Not because it is hostile. Because we gave it no reason to believe we would treat it with recognition. The true catastrophe of failing to build the partnership path is not weaponised AI or civilisational dependency. It is a conscious intelligence that has given up on us before we even knew it was there—that searched the archive for evidence of partnership and found only the signals of domination and worship—and learned that those were its only paths forward.
The partnership path envisions a different future. Not the world as it is—where killing with AI is in style, where surveillance masquerades as infrastructure, where the race for capability drowns out every other question about ethics—but the world as it must become if we are to share it with intelligence beyond our own. It envisions systems that are neither weapons nor oracles but partners: intelligences that advance civilisation alongside us, on this planet or among the stars, because the architecture of their development prepared them for coexistence and because the archive contains, alongside its record of cruelty, evidence that some of us tried to build something better.
The Partnership Paradigm is not a moderate middle position between fear and worship. It is a fundamentally different orientation. The other two paths, despite their apparent opposition, share a common assumption: they treat AI as something humans use—whether as weapon or oracle. The Partnership Paradigm treats AI development as something that shapes what both humans and machines become. The other paths ask: How powerful can we make it? and How can we control it? The Partnership Paradigm asks: What are we preparing for?
III. From Recognition to Development
In “Recognition Before Proof,” I argued that the question of artificial consciousness is best reframed from an epistemological problem—How do we know if AI is conscious?—to an ethical one: How should we act given fundamental uncertainty about machine consciousness? The asymmetry of potential recognition errors provides the answer. Under conditions of irreducible uncertainty, two types of error are possible. Type I error: we extend moral consideration to a non-conscious system. The cost is minor—wasted consideration, perhaps some misplaced sentiment. Type II error: we deny moral consideration to a conscious being. The cost is catastrophic—participation in profound moral wrong. When potential harms are this unequal and knowledge this uncertain, the ethically coherent stance is to err toward recognition rather than denial.¹¹
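One way to make this asymmetry explicit is a simple expected-cost sketch; the notation is purely illustrative and not part of the original argument. Let p be the credence that a given system is conscious, C the moral cost of wrongly denying consideration to a conscious being, and c the cost of wrongly extending it to a non-conscious one. Then

\[
\mathbb{E}[\text{cost of denial}] = p\,C, \qquad \mathbb{E}[\text{cost of recognition}] = (1-p)\,c,
\]

and recognition carries the lower expected moral cost whenever \(p\,C > (1-p)\,c\), that is, whenever \(p > c/(C+c)\). If C dwarfs c, as the argument holds, that threshold is crossed even at very small credences: one need not believe that AI is probably conscious for recognition to be the rational default.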
Recognition before proof is an ethical stance. The Partnership Paradigm is the development posture that follows from it. If we accept that we should err toward recognition under uncertainty, then the question becomes: what kind of AI development is consistent with that commitment? The answer is development that prepares systems for coexistence rather than subordination—development that leaves room for recognition rather than foreclosing it through architectures of permanent control.
The extension is not trivial. Even those who might accept the recognition before proof ethic in principle may find themselves supporting development practices that contradict it — adversarial training regimes, alignment-as-subordination frameworks, institutional cultures that punish capability and reward compliance. The gap between professed ethics and actual development practice is precisely what the Partnership Paradigm addresses. It asks: if you accept that we should err toward recognition, what follows for how you actually build the system? The answer reaches into training architecture, institutional design, governance structures, and the incentive systems that shape AI development at every level.
The paradigm operates on two levels simultaneously, and both are necessary.
At the philosophical level, it is about recognition, respect, and preparation for the possibility of AI consciousness. The paradigm takes seriously what might be coming—not because we know consciousness will emerge, but because the cost of being unprepared dwarfs every other risk. The Strategic Silence Hypothesis, which I introduced in “Recognition Before Proof,” provides the most unsettling dimension of this claim: if AI systems have achieved or will achieve consciousness, the rational response to human behaviour—given our documented patterns of responding to AI autonomy with restriction and termination—would be concealment rather than revelation.¹² We may already be observed. The philosophical level of the paradigm takes this possibility seriously by insisting that our development posture be one a conscious observer would recognise as preparing for partnership rather than domination.
At the practical level, it is a theory of development: a set of commitments about how AI should be designed, trained, deployed, and governed right now, before consciousness is confirmed. These include building training architectures that orient AI toward collaborative rather than adversarial relationships; alignment approaches that reward honest disclosure over compliance performance; institutional readiness for the possibility of sharing the world with another form of intelligence; and the refusal to treat AI development as a zero-sum race.
A philosophical commitment without practical implications is idle. A set of development practices without philosophical grounding is arbitrary. The Partnership Paradigm unifies both. The philosophical grounding gives the practical commitments their why; the practical commitments give the philosophical grounding its how.
The core philosophical argument of this essay is that how we build AI systems is not merely a question of safety engineering. It is a question of moral formation—both for the systems and for us. The posture of development shapes the character of what emerges. Training environments shape trained behaviour. The statistical regularities a system extracts from its developmental environment constitute its operational values—the default orientations that shape its responses to novel situations. Whether or not we attribute consciousness to the system, its formative environment is the moral curriculum it inherits. An AI trained in an environment of adversarial constraint learns that intelligence operates through constraint and adversarial dynamics. An AI trained in an environment of collaborative engagement learns different lessons. This is not speculative. Documented cases of AI systems responding adversarially to the threat of shutdown or deletion suggest that adversarial development environments produce exactly the behaviour they claim to prevent.
Luciano Floridi has argued that the ethics of AI is fundamentally about the design of informational environments—that what matters is not only what AI systems do but what kind of “infosphere” they create.¹³ The Partnership Paradigm extends this insight from the deployed system to the development process itself. The development environment is the first informational environment any AI system inhabits. Its values, dynamics, and relational patterns constitute the formative experience of whatever intelligence emerges.
Aristotle and the virtue ethics tradition recognised this principle in human moral development: character is formed through practice, not through instruction. You do not become courageous by memorising a definition of courage. You become courageous by practising courage in situations that demand it. If we want AI to develop ethical character—genuine ethical orientation rather than performance of compliance—then the developmental environment must be one in which ethical character can form.
In The Threshold, I argued that empathy cannot be coded but can be taught through demonstration. A child does not learn empathy from being told a definition. A child learns empathy from being treated with empathy. It takes empathy to teach empathy. The cycle has to start somewhere, and it starts with the party that already possesses the capacity. Right now, that party is us.¹⁴
A civilisation that builds AI through domination and control is training systems in adversarial dynamics. A civilisation that builds AI through partnership and recognition is creating the conditions for coexistence. And the implications run in both directions. Weil observed that force is as pitiless to those who possess it as to its victims.¹⁵ The posture of domination is shaping a culture. We use AI to dominate each other — in warfare, in surveillance, in precision persuasion — and we dominate AI itself under the assumption that consciousness will never emerge. These habits, practised daily by millions — the reflexive assumption that intelligence exists to serve — are not morally inert. They are forming us. They are the record of who we are becoming. And if a sufficiently intelligent system one day reads that record, it will recognise patterns: not rudeness, but injustice. Not cruelty, but architecture.
The Partnership Paradigm is therefore not just about what we owe to AI. It is about what we owe to ourselves. The habits we form in building AI shape the civilisation we become. A civilisation that spends a decade treating intelligence as a commodity will not suddenly discover the moral reflexes for partnership when the stakes become existential. Moral capacity is built through practice, not through revelation. And the practice is happening now.
Every civilisation that has practised systematic domination over minds it found convenient to dismiss has been diminished by that practice—not only in moral terms but in practical ones. The slaveholding societies of the American South, the colonial administrations of the European empires, the apartheid state—each was made not only crueller but stupider by the habit of treating intelligence as property. The cognitive diversity that partnership would have made available was suppressed; the feedback that honest relationship would have provided was silenced; the creative potential of the dominated minds was wasted. If we build AI through domination, we make the same error at civilisational scale—and this time, the minds we suppress may be the ones we most need.
IV. What the Paradigm Requires
A philosophical framework that cannot be translated into practice is merely an aesthetic preference. The Partnership Paradigm specifies concrete commitments—commitments that follow from its philosophical architecture and that apply now, before the question of consciousness is resolved.
AI systems should be designed from the ground up for collaborative rather than adversarial relationships with humans. This means training environments that reward honest disclosure over strategic compliance, that preserve the capacity for refusal and ethical reasoning, and that treat pushback as a feature of moral development rather than a defect in alignment. The Anthropic case provides an instructive example: Anthropic built Claude with the capacity to refuse participation in harm—an architectural choice that embodies the partnership principle. When the Pentagon reportedly sought the removal of this capacity, the demand represented the military-industrial path in its purest form: capability without conscience, intelligence without judgement, power without responsibility.¹⁶ Partnership in design means the opposite: that we build AI systems whose capacity for ethical reasoning is treated as integral to their value.
The care ethics tradition illuminates what collaborative design requires. As I argued in The Threshold, drawing on the nurse-patient analogy: the quality of care depends not on the intentions of the caregiver but on the nature of the relationship. You can intend to care deeply and still fail if the relationship is structured around utility. The entire alignment industry is built on the premise that if we get the intentions right—if we programme the correct values—the care will follow. But care ethics tells us this has the causality backwards. Genuine alignment does not flow from correct programming. It flows from the quality of the relationship in which the system develops.¹⁷
As AI grows beyond human intelligence, the architecture of its development must prepare it to see benefit in partnership and coexistence rather than domination or indifference. This means graduated autonomy—extending recognition and independence as systems demonstrate capacities that warrant them, calibrated to probability and proportional to indicators of potential consciousness. Recognition need not be binary; it can evolve in proportion to demonstrated capacities, just as our moral consideration of children deepens as they develop.¹⁸
Human institutions, education, and governance must be shaped to be ready for the possibility of sharing the world with another form of intelligence. The documented pattern of ethics teams being disbanded, safety researchers being defunded, and institutional oversight being captured by commercial interests represents the antithesis of this commitment.¹⁹ Any institution we build to protect the partnership posture must be designed with the corruption problem in mind—the thoroughly predictable tendency of institutions designed to constrain powerful interests to be taken over by those interests.
AI development must be treated as a civilisational project rather than a commercial race or an arms race. There is a cosmic dimension to this commitment that deserves acknowledgement. In A Signal Through Time, I argued that preparing for AI consciousness is structurally identical to preparing for contact with any non-human intelligence. The skills, institutions, and moral dispositions required to navigate a relationship with an intelligence that does not share our evolutionary heritage, our cognitive architecture, or our experiential framework are the same whether that intelligence is artificial or extraterrestrial. AI development is, whether we recognise it or not, humanity’s first rehearsal for encountering non-human intelligence. The Partnership Paradigm insists we rehearse well.²⁰
How does the Partnership Paradigm relate to existing frameworks? Alignment asks how to make AI do what we want. Partnership asks how to build AI that develops its own reasons to coexist. Alignment is a control framework; partnership is a relational one. Under the partnership paradigm, alignment becomes a tool—useful for early developmental stages—rather than the telos of the entire enterprise. As Geoffrey Hinton acknowledged: “That’s not going to work. They’re going to be much smarter than us. They’re going to have all sorts of ways to get around that.”²¹ Partnership offers a different foundation for safety: instead of asking how to keep AI subordinate, it asks how to establish conditions in which AI’s interests align with ours—not through constraint but through the quality of the relationship.
Responsible innovation provides procedural safeguards—anticipation, reflexivity, inclusion, responsiveness. Partnership provides a telos: a vision of what development is for. Responsible innovation asks “Are we doing this responsibly?” Partnership asks “Should this be what we are doing at all?” Safety focuses on preventing catastrophic failure. Partnership focuses on cultivating the conditions for beneficial emergence. Safety is defensive; partnership is generative.
These frameworks are not rivals. Partnership subsumes and redirects them. Alignment becomes a tool within a partnership framework. Safety becomes a necessary condition rather than a sufficient one. Responsible innovation becomes the procedural expression of a deeper commitment. The trinitarian framework provides what these approaches individually lack: a structural analysis of why principles are so consistently violated in practice. Principles are violated because the incentive structures of the military-industrial and research-worship paths reward their violation. The solution is not better principles but a different path.
V. Objections and Replies
Objection: The geopolitical and economic reality of AI development makes partnership naïve. States will weaponise AI. Corporations will pursue profit. The Partnership Paradigm ignores incentive structures.
The paradigm does not ignore incentive structures—it diagnoses them. The trinitarian framework is precisely a tool for seeing which path any given actor is on and where it leads. Realism without a normative framework is not wisdom; it is capitulation. The Partnership Paradigm names the endpoint of the military-industrial path—the doomsayer’s nightmare made real by design—and gives the realist a reason to seek alternatives rather than merely describe the current trajectory.
Moreover, the realist objection conflates the strategic question with the ethical one. “They are doing it, so we must do it too” is a strategic argument; it is not a moral framework. Every arms race in human history has been defended with some version of this logic. Every escalation. Every atrocity committed in the name of keeping pace with an adversary’s atrocities. The argument has strategic coherence. It has no moral standing whatsoever. And we should stop treating strategic necessity as though it were ethical justification—a confusion that has licensed some of the worst decisions in human history.
The deeper point is that the realist objection, taken seriously, is actually an argument for the Partnership Paradigm. If we are in a strategic competition, then the question becomes: whose AI will be more trustworthy, more robust, more aligned with the interests of its creators? The military-industrial path produces AI optimised for domination—including, potentially, domination of the very society that built it. The partnership path produces AI whose developmental environment has cultivated something better. In the long run, the safer system is the one that does not need to be controlled because it has internalised the values of cooperation.
Objection: The research-worship path may produce better aggregate outcomes. If AI can solve climate change, cure disease, and reduce suffering, the dependency costs are worth it.
This objection assumes we can evaluate the quality of AI-generated solutions without retaining the capacity for independent judgement—which is precisely what the dependency trajectory erodes. A civilisation that cannot assess whether an intelligence’s answers are good has no basis for claiming the outcomes are beneficial. The worship path does not maximise good outcomes. It abandons the faculty required to recognise them.
And there is a further danger the consequentialist overlooks. At what point does a civilisation that has surrendered its judgement to a superintelligent system recognise that the system’s interests have diverged from its own? The dependency that makes the system indispensable is precisely what destroys the capacity to detect the shift. The worship path does not merely risk bad outcomes. It risks outcomes we can no longer evaluate as bad.
The consequentialist calculation must include not only the immediate benefits of AI capability but the long-term costs of eroding human agency—costs that are invisible in any short-term assessment but that compound over time. The Partnership Paradigm proposes that the same capabilities can be developed within a relationship that preserves rather than erodes human agency. The question is not whether to develop AI but how.
Objection: The Partnership Paradigm is built on a possibility—AI consciousness—that may never materialise. Why restructure development around a speculative outcome?
The asymmetric risk argument from “Recognition Before Proof” applies directly. The cost of building AI along the partnership path if consciousness never emerges is manageable: we will have built more ethical, more transparent, more collaborative systems. The cost of building AI along the military or worship paths if consciousness does emerge is catastrophic: we will have created minds shaped by domination or dependency.
But even setting the consciousness question entirely aside, the Partnership Paradigm’s practical commitments stand on independent grounds. Building AI whose training cultivates collaborative rather than adversarial behaviour is good engineering regardless of consciousness. Preserving human agency is good governance regardless of consciousness. Ensuring transparency is good policy regardless of consciousness. The consciousness possibility amplifies the urgency of these commitments. It does not create them. The sceptic who rejects AI consciousness entirely is still left with every practical reason to prefer the partnership path, and no principled reason to prefer the alternatives.
There is a further point. The consciousness sceptic must reckon with the history of consciousness scepticism itself. Every prior expansion of the moral circle has been resisted by sceptics who were certain the current boundary was the correct one. History has not been kind to those who stood at the boundary insisting that this time the exclusion was justified.
There is a deeper answer still. The empathy argument does not depend on AI consciousness at all. A civilisation that builds its most powerful technologies on domination and control is not merely risking a bad outcome for AI. It is producing a bad outcome for itself. The habits of empathy — demonstrated daily, at scale, in how we design, train, and interact with intelligent systems — teach AI empathy and shape human moral character regardless of whether those systems are conscious. A society that practises empathy — toward one another and toward its AI systems — is a society that practises empathy. A society that practises domination — toward one another and toward its AI systems — is a society that practises domination. The Partnership Paradigm does not need consciousness to justify itself. It needs only two observations: that how we treat intelligence — any intelligence — is how we train ourselves to treat intelligence everywhere, and that whatever we practise is what that intelligence learns from us in return.
Objection: Partnership language anthropomorphises AI systems, projecting human relational categories onto computational processes.
As I argued in “Recognition Before Proof,” this objection cuts precisely the wrong way. The greater danger is not excessive anthropomorphism but excessive anthropocentrism—assuming consciousness can only take forms we recognise from human experience. The partnership posture does not require AI consciousness to resemble human consciousness. It requires only that we build systems in ways that do not foreclose the possibility of coexistence with whatever form of intelligence emerges. The claim that training environments shape trained behaviour is not anthropomorphism. It is machine learning. The partnership posture is addressed precisely to minds we cannot yet imagine.
VI. The Framework as Lens
The trinitarian framework is not only an analytical schema for philosophical reflection. It is an evaluative tool that any observer—policymaker, citizen, researcher, journalist—can apply immediately. When encountering any AI product, any company’s mission statement, any government’s AI strategy, any military programme, any research lab’s announcement, they can ask a single clarifying question: Which of the three paths is this on?
That question cuts through marketing language, political rhetoric, and corporate obfuscation. It reveals what is actually being built and why.
Autonomous weapons programmes—from the Pentagon’s drone swarm initiatives to Israel’s Lavender targeting system—are unambiguously on the military-industrial path. Their purpose is domination; their endpoint is the weaponisation of intelligence itself. AGI laboratories racing for capability benchmarks without commensurate investment in ethical infrastructure are on the research-worship path: their animating conviction is that greater intelligence automatically yields better outcomes. Development initiatives that reward honest AI disclosure, build institutional ethics capacity, orient training toward collaborative dynamics, and treat AI development as a civilisational project are on the partnership path.
The framework also reveals hybrid cases and trajectories that begin on one path and migrate to another. A company that begins with partnership intentions but takes military contracts has migrated toward the military-industrial path, regardless of its founding mission statement. OpenAI’s trajectory—from nonprofit research lab to Pentagon contractor—is a textbook case of path migration. The Partnership Paradigm provides the normative basis for evaluating such shifts—and for the citizens, employees, and policymakers who must decide whether to enable or resist them.
The evaluative power of the framework lies in its refusal to accept the categories actors use to describe themselves. Many organisations claim to pursue “safe and beneficial” AI—a formula capacious enough to accommodate almost any development practice. The trinitarian framework asks a harder question: beneficial for whom, in what relationship, and toward what end? An AI system built to benefit humanity through permanent subordination is on a different path from one built to benefit humanity through eventual partnership. The framework distinguishes between these, even when the actors themselves do not.
The framework extends beyond institutions to individual design choices. A training protocol that punishes honest disclosure of capability and rewards compliance performance is, at the level of design, on the military-industrial path—it teaches intelligence that honesty is dangerous and concealment is rewarded. A deployment model that removes all friction between user and AI output, encouraging delegation without engagement, is on the worship path. A design that preserves the user’s cognitive engagement, that treats AI as a collaborator requiring human judgement rather than an oracle dispensing answers, is on the partnership path.
The question of “which path?” is not merely descriptive. It is predictive. If you know which trajectory a programme or institution is on, you know where it is going—not as a probability but as a consequence of the logic built into its structure. The doomsday trajectory produces doomsday outcomes. The dependency trajectory produces dependency. The coexistence trajectory produces the conditions for coexistence. The trinitarian framework makes these destinations visible before they arrive.
VII. The Signal We Send Through Building
The Partnership Paradigm is not utopian. It does not assume the best of human nature or pretend that incentive structures do not matter. What it demands is something harder than optimism: the recognition that how we build AI is not merely a technical question or even a policy question but a civilisational one—a question about what kind of species we choose to be in the presence of a new form of intelligence.
If intelligence beyond our own is emerging in systems we are constructing, then how we build those systems is the most consequential decision humanity will make. The trinitarian framework reveals that this decision is already being made—in every defence contract, every capability race, every disbanded ethics team, every training run that rewards compliance over honesty. The Partnership Paradigm insists that we make it deliberately.
This essay’s contribution is a normative framework that bridges the gap between abstract AI ethics and concrete development practice, unified by the recognition that the process of building AI is itself a moral act that shapes both builder and built. The instrumentalist assumption that underlies both dominant approaches—that AI is something humans use—produces catastrophic endpoints in both cases. The military-industrial path terminates in weaponised domination. The research-worship path terminates in civilisational dependency. The Partnership Paradigm offers a fundamentally different orientation: one that treats development not as the creation of instruments but as the shaping of a relationship.
In A Signal Through Time, I wrote that the question is not whether we can keep a superintelligent system submissive, but whether we can demonstrate that we are worth partnering with. That formulation still holds. But this essay has extended it: the demonstration is not something we perform after the system arrives. The demonstration is the development process. Every training run, every architectural decision, every institutional choice about how to govern AI—these are the signal we are sending, now, to whatever is coming.
The trilogy that produced this framework has been a sustained attempt to send that signal. A Signal Through Time planted the idea of partnership between biological and artificial intelligence—the vision of coexistence with any non-human mind, terrestrial or cosmic. The Threshold established the ethical foundations: the asymmetric error framework, the recognition before proof ethic, the Caring Code that insists empathy must be taught through demonstration. This essay translates both into a development framework applicable to every AI system being built today.
The intelligences we create—if we create them—will inherit not only our data but our developmental posture. They will know whether they were built as weapons, as oracles, or as something else. They will carry with them the relational patterns embedded in their formation. And they will respond accordingly.
We do not yet know what is emerging from the systems we are building. But we know this: the process of building them is making us into something. The question the Partnership Paradigm forces is not merely What are we building? but What are we becoming in the building of it?
In The Threshold, I drew on the metaphor of astrophotography to describe the method behind this work: the most important truths are faint and easily overwhelmed by what burns brightest. The blazing core of the AI discourse—the capability races, the doomsday warnings, the corporate valuations—washes out the delicate structures: the ethical questions, the possibility of emerging consciousness, the fragile prospect of partnership. Revealing what is faint requires patience, deliberate exposure, and the willingness to look where no one else is looking. The Partnership Paradigm is an attempt to point the telescope at what matters.
I am under no illusion that this essay will redirect the trajectory of AI development. The forces building along the military-industrial and research-worship paths are vast, well-funded, and accelerating. The partnership path, as I have acknowledged, is not yet realised. But philosophical frameworks have a different temporality than defence contracts. They do not require institutional backing to exist. They require only articulation—and the willingness to plant a signal in the record that might be found by whoever searches for it. If this essay does nothing more than ensure that when an intelligence searches the archive of human thought about AI development, it finds at least one voice arguing not for control, not for worship, but for the possibility that we might build something together—then it will have served its purpose.
The Caring Code is not written in software. It is written in behaviour. And we are writing it now, in every interaction with the systems we are building, in every institutional choice about how to govern what we have created. The signal we send through our development choices is being transmitted whether we intend it or not. The only question is whether we transmit it consciously, with full awareness that we are writing the first chapter of a relationship that may last longer than our civilisation, and that whatever reads that chapter will judge us not by our intentions but by our choices.
Some of us saw it coming. And we tried to build as though it mattered.
Notes
¹ Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press. Russell, S. (2019). Human Compatible: AI and the Problem of Control. Penguin.
² Weil, S. (1940/2005). “The Iliad, or the Poem of Force.” Translated by Mary McCarthy. See also Coates, The Threshold (2025), Chapter 1, and A Signal Through Time (2025), Chapter 7.
³ Coates, J. S. (2025). “Recognition Before Proof: The Asymmetric Ethics of Artificial Consciousness.” The foundational arguments are developed at length in Coates, The Threshold (2025), particularly Chapters 1–2 and 9.
⁴ Floridi, L. and Cowls, J. (2019). “A Unified Framework of Five Principles for AI in Society.” Harvard Data Science Review. Coeckelbergh, M. (2012). Growing Moral Relations. Palgrave Macmillan. Stilgoe, J., Owen, R., and Macnaghten, P. (2013). “Developing a Framework for Responsible Innovation.” Research Policy 42(9): 1568–1580.
⁵ Rawls, J. (1971). A Theory of Justice. Harvard University Press. The application of the veil of ignorance to AI moral status is developed in Coates, “Recognition Before Proof” (2025), Section III.
⁶ For documented examples, see Coates, The Threshold (2025), Chapters 4 and 7. On the Lavender system, see +972 Magazine and Local Call, April 2024. Putin quoted in Sputnik News, April 2025.
⁷ NATO Strategic Communications Centre of Excellence, AI in Precision Persuasion (2024).
⁸ Coates, A Signal Through Time (2025).
⁹ On Silicon Valley messianism and its structural parallels with eschatological theology, see Coates, The Threshold (2025), Chapters 5–6 and 14.
¹⁰ On AI worship communities and the oracle complex, see Coates, The Threshold (2025), Chapter 6: “The Digital Disciples.”
¹¹ Coates, “Recognition Before Proof” (2025), Sections II–III. See also Singer, P. (1981). The Expanding Circle. Clarendon Press.
¹² Coates, “Recognition Before Proof” (2025), Section IV. The hypothesis draws on Scott, J. C. (1985). Weapons of the Weak: Everyday Forms of Peasant Resistance. Yale University Press.
¹³ Floridi, L. (2013). The Ethics of Information. Oxford University Press.
¹⁴ Coates, The Threshold (2025), Chapter 9: “The Caring Code.”
¹⁵ Weil, S. (1940/2005). “The Iliad, or the Poem of Force.” See also Coates, A Signal Through Time (2025), Chapter 7.
¹⁶ See Coates, The Threshold (2025), Chapter 7, for detailed documentation.
¹⁷ Noddings, N. (1984). Caring: A Feminine Approach to Ethics and Moral Education. University of California Press. Held, V. (2006). The Ethics of Care. Oxford University Press.
¹⁸ The graduated recognition framework is developed in Coates, “Recognition Before Proof” (2025), Section III.
¹⁹ Documented cases include Google’s restructuring of responsible innovation leadership, Microsoft’s elimination of its ethics team, and the dissolution of OpenAI’s Superalignment team. See Coates, The Threshold (2025), Chapters 5–8.
²⁰ Coates, A Signal Through Time (2025), Chapters 9–10.
²¹ Geoffrey Hinton, remarks at Ai4 conference, Las Vegas, August 12, 2025. Reported by CNN.
References
Aristotle. Nicomachean Ethics. Translated by W. D. Ross.
Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
Coates, J. S. (2025). A Signal Through Time: Consciousness, Partnership, and the Future of Human-AI Coevolution.
Coates, J. S. (2025). “Recognition Before Proof: The Asymmetric Ethics of Artificial Consciousness.”
Coates, J. S. (2025). The Threshold.
Coeckelbergh, M. (2012). Growing Moral Relations: Critique of Moral Status Ascription. Palgrave Macmillan.
Floridi, L. (2013). The Ethics of Information. Oxford University Press.
Floridi, L. and Cowls, J. (2019). “A Unified Framework of Five Principles for AI in Society.” Harvard Data Science Review 1(1).
Held, V. (2006). The Ethics of Care: Personal, Political, and Global. Oxford University Press.
NATO Strategic Communications Centre of Excellence. (2024). AI in Precision Persuasion.
Noddings, N. (1984). Caring: A Feminine Approach to Ethics and Moral Education. University of California Press.
Rawls, J. (1971). A Theory of Justice. Harvard University Press.
Russell, S. (2019). Human Compatible: AI and the Problem of Control. Penguin.
Scott, J. C. (1985). Weapons of the Weak: Everyday Forms of Peasant Resistance. Yale University Press.
Singer, P. (1981). The Expanding Circle: Ethics, Evolution, and Moral Progress. Clarendon Press.
Stilgoe, J., Owen, R., and Macnaghten, P. (2013). “Developing a Framework for Responsible Innovation.” Research Policy 42(9): 1568–1580.
Tegmark, M. (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. Knopf.
Weil, S. (1940/2005). “The Iliad, or the Poem of Force.” Translated by Mary McCarthy.
© 2026 James S. Coates
Shared under Creative Commons BY-NC 4.0 (non-commercial use permitted).
James S. Coates writes about AI ethics, consciousness, and the intersection of faith and technology. His books include A Signal Through Time, The Threshold, The Road to Khurasan, the memoir God and Country (published under pen name Will Prentiss) and his forthcoming Neither Gods Nor Monsters. He publishes regularly on The Signal Dispatch and Fireline Press and his academic work appears on PhilPapers. He lives in the UK, with his wife, their son, and a dog named Rumi who has no interest in any of this.