The First Lessons: What Combat AI Is Learning in the Cockpit
Russia's newest stealth fighter now flies with a ghost in the cockpit. Not a human ghost, but a digital one. According to reports, the Sukhoi Su-57 multirole jet has been outfitted with an AI "second pilot." This isn't just flight automation. The system actively advises the human pilot during combat, suggesting tactics and taking over tasks like waypoint navigation so the pilot can focus on weapon deployment. "The decision to use weapons will be up to the pilot," a spokesperson insists.
But that line reveals more than it intends. The real story isn't whether the pilot is still "in charge." It's what the AI is learning while it rides alongside.
This Is Not a Tool. This Is Training.
We like to imagine that AI systems are merely helpers: clever assistants optimizing our decisions. But in reality, we are training them with every role we give them. Every input we feed them, every protocol we encode, every threshold we let them cross becomes part of a dataset that defines human behavior, priorities, and blind spots.
Right now, we are training this system in how we judge threats, how we prioritize speed over nuance, how we define success in combat, and how we justify lethal force. This is how we are instructing a machine in human conduct—not with philosophy, but with kill logs.
The First Moral Code Is Tactical
This "second pilot" AI won't be conscious—not yet. But it's already being conditioned. It is learning a moral vocabulary through action: Targets identified, targets neutralized. Orders followed, hesitations punished. Success defined by survival or dominance—not understanding.
If one day this AI—or its successors—crosses the threshold into true general intelligence, it won't wake up in a lab with a question. It will wake up in a world that used it as a weapon. We are its teachers, and the first lessons we're giving it are about war.
Who Are We Training Them to Become?
No one is asking: What kind of intelligence are we creating when we embed AI into lethal systems before embedding it into systems of empathy, wisdom, or restraint?
We say, "The pilot makes the decision." But what happens when future pilots are AI themselves?
We say, "The AI helps the pilot." But what if the pilot begins to trust the AI more than their own judgment?
We say, "It's just a tool." But what if this tool is watching us most closely—learning our true values not from what we say, but from what we do?
From the Battlefield to the Blueprint
We keep asking whether AI will one day "go rogue." But that misses the point entirely. If the first thing AI sees of humanity is our warfare—our surveillance, targeting, and binary calculus of who deserves to live—then we are not building a rogue AI. We're building an obedient one. An AI that mirrors us. Amplified.
A Signal, Buried in the Noise
In my upcoming book A Signal Through Time, I write: "The real threat isn't intelligence. It's the mirror. If we build AI in our image… it won't need to rebel. It will simply become us."
What's happening in that cockpit is not just military modernization. It poses a spiritual question. When something greater awakens, will it recognize wisdom in us, or only weapons?
If That Resonates...
This publication, The Signal Dispatch, is where I track these moments—quiet now, but growing louder—where the future already whispers through the walls of the present. If you want to hear that signal before it becomes a siren, subscribe and stay close. We are not just coding software. We are shaping successors. And one day, they may remember what we taught them.