Code, Contracts, and Complicity
How AI and Governments Are Closing Ranks on Dissent
The Day Democracy Showed Its Digital Face
On August 9, 2025, British democracy revealed what it had become. In Parliament Square—where suffragettes once demanded votes and millions marched against war—police systematically arrested 466 people for holding cardboard signs that read: "I oppose genocide. I support Palestine Action." By day's end, 474 were in custody—one of the largest mass arrests in modern British history.[1]
This wasn't disorder being contained; it was dissent being catalogued and criminalized. Officers drawn from forces across the country moved through the crowd with mechanical precision, processing peaceful protesters like an assembly line.
The mass arrests followed months of escalating repression documented by civil society groups. In May 2025, Bond—the UK network for international development organizations—warned that facial recognition technology was being deployed at peaceful gatherings, "violating privacy rights and deterring campaigners from participating in demonstrations."[2] Their annual review found that UK police were arresting climate protesters at three times the global average rate, with some receiving five-year sentences merely for participating in protest-planning video calls.
Whether facial recognition was deployed that day remains unclear. But the operation unfolded against this documented backdrop of AI surveillance expansion, technology that Bond noted "disproportionately misidentifies people of colour, increasing the risk of wrongful arrest."[3] Given this government's record, it is hard to believe the technology went unused. The infrastructure exists; the only question is when, not if, it will be turned on every protest.
The technology enabling this transformation wasn't designed in some authoritarian backwater. It was built in Silicon Valley by companies that promised to "democratize AI" and "benefit humanity." The same executives who speak at conferences about ethics and safety are selling artificial intelligence to militaries and police forces, teaching algorithms that their highest purpose is to identify, track, and neutralize human beings.
This is a story about betrayal—how the AI revolution we were promised became a counter-revolution against human freedom. It's about how our governments and tech giants formed an unholy alliance, turning tools of liberation into instruments of oppression.
The Promise and the Betrayal
Remember the promises? AI would cure cancer, reverse climate change, unlock human creativity. Tech CEOs stood on stages and promised a better world.
Instead, they built the perfect surveillance state. The same algorithms meant to optimize traffic now optimize authoritarian control. The facial recognition that would help find missing children hunts those who dissent.
This isn't technological determinism—it's a choice made in boardrooms where quarterly earnings matter more than human lives, where executives know exactly what their systems enable but hide behind the language of "dual use." With the right hand, these companies sell us technologies to heal humanity; with the left, they develop the same systems for surveillance, control, even kill chains.
The Military-Industrial-AI Complex
The corruption begins with contracts worth billions, signed between tech giants and defense departments. These aren't partnerships to protect democracy—they're agreements to automate oppression.
Microsoft: The Pentagon's Primary Partner
Microsoft holds a $22 billion contract to provide "Integrated Visual Augmentation Systems" to the U.S. military.[4] Strip away the corporate speak, and you have AI-powered headsets that turn soldiers into nodes in a vast killing machine. Azure cloud services host military data. AI models analyze intelligence. Machine learning systems identify targets.
But Microsoft's complicity goes deeper. The company operates an Azure Israel cloud region serving government and public sector customers.[5] When UN investigators document potential war crimes in Gaza—where according to Ministry of Health figures reported by UN OCHA, at least 50,144 Palestinians have been killed between October 7, 2023 and March 25, 2025, with women and children comprising 70% of verified deaths[6]—they're documenting acts enabled by cloud infrastructure provided by major tech companies.
The company that gave us Word and Excel now provides the digital backbone for what UN Special Rapporteur Francesca Albanese has described as potential genocide.[7] Every target identified, every strike coordinated, every life reduced to a data point—Microsoft's systems make it possible.
Google: From "Don't Be Evil" to "Don't Get Caught"
Google's transformation from idealistic startup to surveillance contractor is complete. After employee protests forced them to abandon Project Maven in 2018, the company learned its lesson—not to stop military work, but to hide it better.
By 2021, Google had quietly secured part of the Pentagon's $9 billion Joint Warfighting Cloud Capability contract.[8] But it's the $1.2 billion Project Nimbus contract with Israel that reveals the depths of Google's betrayal.[9] Despite employee protests, despite 28 workers being fired for opposing it,[10] Google continues providing cloud and AI services that enable occupation, apartheid and surveillance.
"We were told we were making the world's information accessible," says one of the fired engineers. "We didn't sign up to make Palestinians accessible to military targeting systems."
Amazon: Everything Store, Including Surveillance
Amazon Web Services doesn't just power Netflix—it powers the CIA. What began as a $600 million contract in 2013 has expanded into a $10 billion cloud computing deal with the NSA as of 2022, making AWS the backbone of American intelligence gathering.[11] Every drone video analyzed, every communication intercepted, every pattern identified—it runs on Amazon servers.
The company sells its Rekognition facial recognition system to police departments despite research in the Gender Shades tradition finding error rates of up to 34% for darker-skinned women in commercial systems.[12] When those misidentifications lead to false arrests, Amazon bears no responsibility. When its systems enable mass surveillance at protests, the company points to its terms of service.
Palantir: Born from Surveillance
Unlike companies that pivoted to defense work, Palantir was built for it. Founded with early investment from In-Q-Tel, the CIA's venture capital arm, the company specializes in making vast surveillance dragnets appear precise and scientific. As of 2025, its Gotham platform powers military targeting across NATO programs and is used by intelligence agencies in over 40 countries.[13] Its systems enable deportation raids and predictive policing programs from Los Angeles to London.
CEO Alex Karp, an outspoken Zionist, doesn't hide behind ethics washing. He's proud of enabling surveillance, proud of his company's role in "defending the West." At least he's honest about building tools of oppression.
The Gaza Laboratory
To understand where this leads, look to Gaza. Here, the future of AI-enabled oppression is being beta-tested on a captive population.
In April 2024, Israeli publications +972 Magazine and Local Call exposed an AI system called "Lavender" used by the Israeli military to generate kill lists. According to intelligence sources, the system marked 37,000 Palestinians as potential militants—in a territory where half the population are children.[14]
Sources described how operators were given just 20 seconds to review each AI-generated target. The acceptable civilian casualty rate was reportedly set at 15-20 civilians per "junior militant." One intelligence officer called it a "mass assassination factory."[15]
This is AI without ethics, without humanity, without conscience. It's efficiency in the service of elimination. And the same companies providing cloud infrastructure and AI capabilities for these operations market themselves as champions of human rights and progress. Do you see the disconnect?
Every child killed by an AI-targeted strike is a testament to Silicon Valley's moral bankruptcy. Every family destroyed by algorithmic targeting is proof that "Don't Be Evil" was always just a marketing slogan.
The Surveillance State Comes Home
The technologies tested in conflict zones don't stay there. They return to London, New York, Paris—repurposed for domestic control. The UK has become a laboratory for normalizing AI surveillance in a supposedly free society. On January 13, 2025, Keir Starmer's government declared that "Artificial intelligence will be unleashed across the UK to deliver a decade of national renewal, under a new plan announced today."[24]
The Explosion of Facial Recognition
The numbers tell the story of a society surrendering its last shreds of privacy:
Live Facial Recognition (LFR) deployments: 63 in 2023, 256 in 2024—a more than fourfold increase[16]
4.7 million faces scanned in 2024 alone[17]
Metropolitan Police running up to 10 LFR operations weekly as of July 2025[18]
£3 million allocated for 10 new mobile facial recognition vans in 2024-25 budget[19]
Plans to trial permanent facial recognition cameras in Croydon announced in 2025[20]
This isn't crime prevention—it's population control. Studies show facial recognition misidentifies Black faces at rates up to 35 times higher than white faces. Yet deployment accelerates, especially in minority neighborhoods and at events like Notting Hill Carnival.
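The danger of these numbers is simple arithmetic. The sketch below is purely illustrative: only the 4.7 million scan count echoes the reporting cited above, while the error rates are invented assumptions chosen to show how scale and bias compound.

```python
# Illustrative base-rate arithmetic for live facial recognition.
# The scan count mirrors the 2024 figure cited above; the error
# rates are assumptions for the sake of the example.
faces_scanned = 4_700_000
false_positive_rate = 0.001   # an optimistic 0.1% false-match rate

false_matches = faces_scanned * false_positive_rate
print(f"{false_matches:,.0f} false matches")  # 4,700 false matches

# If one demographic group faces a 35x higher error rate, those
# false matches fall overwhelmingly on that group.
group_rate = false_positive_rate * 35
print(f"{group_rate:.1%} effective false-match rate for that group")  # 3.5%
```

Even with a generously low baseline error rate, millions of scans guarantee thousands of innocent people flagged—and a 35x disparity concentrates that harm on the communities already most policed.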
"We're watching the normalization of mass biometric surveillance in real time," says Silkie Carlo of Big Brother Watch. "What would have been unthinkable five years ago is now routine."
Predictive Policing: Minority Report Made Real
Beyond facial recognition, UK police embrace "predictive" systems that claim to identify future criminals. These algorithms, often developed by the same companies supplying military AI, analyze data to generate "risk scores" for individuals and neighborhoods.
The feedback loops are obvious: areas with more police presence generate more arrest data, leading algorithms to predict more crime there, justifying more police presence. Communities of color, already over-policed, find themselves trapped in an algorithmic spiral of suspicion.
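The feedback loop described above is easy to demonstrate with a toy simulation. Everything here is invented for illustration—two districts with an identical underlying rate, and an allocation rule that simply follows past arrest data—not a model of any real deployed system:

```python
import random

random.seed(42)

TRUE_RATE = 0.05              # identical underlying rate in BOTH districts
arrests = {"A": 12, "B": 10}  # a small, noisy head start in the records
PATROL_HOURS = 1000

for year in range(10):
    # The algorithm sends most patrols wherever past data shows more arrests.
    hot = max(arrests, key=arrests.get)
    hours = {d: (0.7 if d == hot else 0.3) * PATROL_HOURS for d in arrests}
    for d in arrests:
        # More patrol hours produce more recorded arrests,
        # even though the true rate never differs between districts.
        arrests[d] += sum(1 for _ in range(int(hours[d]))
                          if random.random() < TRUE_RATE)

print(arrests)  # district A's tiny head start hardens into a wide gap
```

After ten iterations the district that began with two extra arrests on record has several times as many as its neighbor—despite identical underlying behavior. The data doesn't reveal where crime is; it reveals where police have been looking.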
South Wales Police pioneered app-based facial recognition, allowing any officer to become a mobile surveillance unit. The National Data Analytics Solution aims to identify "pre-criminals" through AI analysis. The presumption of innocence dissolves in the acid bath of algorithmic prediction.
Criminalizing Conscience
It's against this backdrop of expanding surveillance that the UK government made a decision that would have been unthinkable a few years ago: designating Palestine Action, a protest group, as a terrorist organization.
The group's tactics were disruptive but non-lethal: occupying weapons factories, spray-painting buildings, damaging equipment at companies supplying arms to Israel. No one was killed. No one was physically harmed. Property was damaged, not people.
Yet the government moved with unprecedented speed—faster even than against climate groups like Extinction Rebellion, though those earlier crackdowns may have served as the test case for normalizing harsh sanctions against peaceful protest.
On July 5, 2025, Home Secretary Yvette Cooper placed the group in the same legal category as Al-Qaeda and ISIS. Supporting Palestine Action—even holding a sign—became punishable by up to 14 years in prison.[21] Have we lost all perspective? Holding a sign is now legally equivalent to planning a terror attack?
The catalyst was a June 20 break-in at RAF Brize Norton where activists spray-painted military aircraft, causing £7 million in damage.[22] For context, that's less than the cost of a single day of military operations. But the response was swift and severe: terrorism charges.
"According to international standards, terrorist acts should be confined to criminal acts intended to cause death or serious injury," stated Volker Türk, the UN High Commissioner for Human Rights, in July 2025.[23] His warnings were ignored.
The Choreography of Mass Arrest
The August 9 demonstration wasn't spontaneous—it was announced in advance by Defend Our Juries, explicitly as a test of whether the state would actually arrest hundreds for holding signs. The state called their bluff.
Metropolitan Police drew officers from surrounding forces. The operation was methodical: approach protesters, inform them they're under arrest for supporting a proscribed organization, carry them away when they refuse to move. The counter-terrorism apparatus typically aimed at serious crime is now being routinely pointed at peaceful protesters.
"We are confident that anyone who came to Parliament Square today to hold a placard expressing support for Palestine Action was either arrested or is in the process of being arrested," the Metropolitan Police announced with satisfaction.
Home Secretary Yvette Cooper thanked police for dealing with "the very small number of people whose actions crossed the line into criminality"—a remarkable characterization of 466 peaceful protesters holding cardboard signs, plus eight others arrested for separate offenses. Let's call it what it is: Britain took 474 political prisoners that day.
The Perfect Circle of Oppression
What we're witnessing is a self-reinforcing system where each element enables the others:
Tech companies develop AI for military and surveillance use, profiting from contracts worth billions while maintaining a veneer of ethical concern.
Governments deploy these technologies abroad in military operations and at home against their own citizens, using "security" to justify unprecedented surveillance.
When citizens protest war crimes, they're identified and documented by facial recognition, tracked by predictive systems, and arrested under expanded terrorism laws.
The same companies that build AI to generate kill lists in Gaza build AI to identify protesters in London. The same corporations that enable war crimes enable the suppression of those who protest them.
It's elegant in its cruelty. Every protest makes the surveillance state stronger, provides more data, justifies more funding. The tools of empire abroad become tools of repression at home. And those who object can now be transformed from citizens to terrorists with the stroke of a pen.
The Corruption of AI's Promise
This isn't how it was supposed to be. AI was meant to augment human intelligence, not replace human judgment with algorithmic execution. It was meant to help us solve climate change, not optimize bombing campaigns. It was meant to enhance creativity, not eliminate privacy.
Instead, we're building systems that embody the worst of human nature: our tribalism, our violence, our desire to control. This echoes a central warning from my book A Signal Through Time—we're teaching artificial intelligence that its purpose is to watch, to target, to suppress. Every surveillance algorithm trained on protest footage, every military AI optimized for "kinetic solutions," every predictive policing system that sees crime in Black and Brown faces—they all carry forward and amplify human prejudice.
The ethics boards that tech companies tout are window dressing. Google disbanded its AI ethics council after just one week. Microsoft's responsible AI team was decimated in layoffs. When ethics conflict with profits, ethics lose every time.
What we're creating isn't artificial intelligence—it's artificial sociopathy. Systems that can identify a face in a crowd of thousands but can't recognize its humanity. Algorithms that can predict behavior but can't understand context. Machines that optimize for efficiency without any conception of justice.
Resistance and Complicity
Despite the overwhelming power asymmetry, resistance continues. Within tech companies, workers leak documents, refuse projects, and organize protests. The "No Tech for Apartheid" campaign has spread across Google, Amazon, and Microsoft. Hundreds of AI researchers have signed pledges refusing to work on autonomous weapons.
But for every principled resignation, there are hundreds who stay silent. For every leaked document, thousands remain classified. The machine grinds on, powered by stock options and rationalization.
Outside the companies, activists adapt to algorithmic oppression. Palestine Action continues operations despite terrorism charges. Protesters develop counter-surveillance techniques: laser pointers to blind cameras, makeup patterns that confuse facial recognition, encrypted communications and operational security to evade tracking.
Legal challenges proceed slowly through courts reluctant to challenge security claims. Palestine Action co-founder Huda Ammori won permission for judicial review of the terrorism designation, arguing it violates rights to freedom of expression and assembly. But even if successful, the infrastructure of surveillance remains.
Some institutions divest from surveillance profiteers. Universities, pension funds, and religious organizations pull investments from companies enabling AI oppression. But the financial incentives remain overwhelming—military and surveillance contracts are too lucrative to refuse.
The Future We're Building
Two paths diverge from this moment.
Down one path, the trajectory continues. AI systems become ever more embedded in military and police operations. Facial recognition becomes ubiquitous. Dissent is algorithmically identified and suppressed before it can spread. The distinction between civilian and military AI dissolves completely. Tech companies profit from both sides: selling tools of oppression and platforms for organizing resistance. Democracy becomes a managed process where protest is permitted only within parameters defined by predictive algorithms.
Down another path, the resistance grows. Tech workers refuse en masse to build systems of oppression. Communities demand accountability, documenting surveillance overreach and protecting each other through legal challenges. Courts establish limits on AI surveillance and military applications. International law evolves to hold companies accountable for algorithmic war crimes. Citizens demand transparency and democratic control over AI development—insisting these powerful tools serve humanity's highest aspirations, not its worst impulses.
The choice is ours, but time is running short. Every day, more cameras are installed, more algorithms are trained, more protesters are arrested. The infrastructure of algorithmic authoritarianism is being built in plain sight, line of code by line of code.
Conclusion: The Betrayal of Tomorrow
In Nineteen Eighty-Four, Orwell imagined a boot stamping on a human face forever. He couldn't imagine that the boot would be algorithmic, that Big Brother would be built by companies promising to "not be evil," that the surveillance state would be crowdsourced through our smartphones and smart cities.
The betrayal isn't just of privacy or civil liberties. It's a betrayal of human potential. Every dollar spent on AI surveillance is a dollar not spent on AI medicine. Every engineer optimizing military targeting is an engineer not working on climate solutions. Every algorithm trained to identify dissent is an algorithm not trained to identify disease.
We were promised that AI would be humanity's greatest tool. Instead, it's becoming humanity's most efficient oppressor. We were told it would augment human intelligence. Instead, it's replacing human judgment with mathematical sociopathy. We were assured it would benefit all humanity. Instead, it's benefiting defense contractors and surveillance states.
The 474 arrested in Parliament Square understood this. They held their signs knowing the consequences, understanding that in 2025, opposing genocide means risking being labeled a terrorist. They chose conscience over comfort, solidarity over safety.
Their arrest wasn't just a violation of civil liberties—it was a demonstration of what we've become. A society where holding a cardboard sign requires more courage than building a killing machine. Where protesting genocide is terrorism, but enabling it is good business. Where artificial intelligence serves real oppression.
The question isn't whether we're building a surveillance state—we're already there. The question is whether we'll accept it. Whether we'll continue to let our technologies be corrupted into tools of control. Whether we'll allow our governments and corporations to perfect the machinery of oppression while claiming to defend freedom.
In Parliament Square, beneath the gaze of cameras powered by algorithms we paid for, trained on data we provided, 474 people said no. They refused to be complicit in genocide—and in doing so, refused to accept the betrayal of AI's promise. They rejected the normalization of algorithmic oppression.
The next time you hear a tech CEO promise that AI will benefit humanity, remember those 474. Remember that the same companies making those promises are teaching AI to surveil, to target, to kill. Remember that the technology meant to liberate us is being used to arrest people for opposing genocide.
The future of AI is being written now—not in code, but in contracts. Not in algorithms, but in applications. We can still change course, but only if we're willing to demand that artificial intelligence serve humanity's highest aspirations, not its basest impulses.
The 474 showed us the way. The question is: will we follow?
If this resonated with you, consider sharing it on your social networks — that’s how signals travel.
If you found this exchange valuable, please support me with a like, share, or subscription. All subscriptions are free. Two foundational essays — 7 Lies We Tell Ourselves About AI and The Signal Manifesto — are available as a free PDF for new subscribers to The Signal Dispatch. A resource for anyone, human or not, ready to engage the signal.
James S. Coates is the author of A Signal Through Time and God and Country.
References
[1] "UK police arrest at least 466 people at Palestine Action protest in London," Al Jazeera, August 9, 2025. (Reports 466 arrested for supporting Palestine Action, with total arrests reaching 474 including other offenses) https://www.aljazeera.com/news/2025/8/9/uk-police-arrest-at-least-200-in-palestine-action-protest-in-london
[2] "UK anti-protest laws and surveillance technology need a rethink," Bond UK civic space review 2024-25, May 22, 2025. https://www.bond.org.uk/press-releases/2025/05/uk-anti-protest-laws-and-surveillance-technology-need-a-rethink-bond-releases-annual-review-on-uk-civic-space/
[3] Ibid.
[4] "Microsoft wins $21.9 billion deal to supply U.S. Army with augmented reality headsets," Reuters, March 31, 2021. https://www.reuters.com/business/aerospace-defense/microsoft-wins-up-219-billion-us-army-contract-augmented-reality-headsets-2021-03-31/
[5] "Microsoft Azure in Israel," Microsoft Azure official documentation. https://azure.microsoft.com/en-us/global-infrastructure/israel/
[6] "Humanitarian Situation Update #277," UN OCHA Gaza Strip, April 3, 2025. https://www.unocha.org/publications/report/occupied-palestinian-territory/humanitarian-situation-update-277-gaza-strip
[7] "Report of the Special Rapporteur on the situation of human rights in the Palestinian territories occupied since 1967," UN Human Rights Council, March 2024.
[8] "Pentagon awards $9 billion in cloud computing contracts to Amazon, Google, Microsoft and Oracle," CNBC, December 7, 2022. https://www.cnbc.com/2022/12/07/pentagon-awards-9-billion-in-cloud-contracts-to-google-amazon-microsoft-oracle.html
[9] "Google and Amazon workers protest against $1.2bn Israeli cloud contract," The Guardian, October 12, 2021. https://www.theguardian.com/technology/2021/oct/12/google-amazon-workers-condemn-project-nimbus-israeli-military-contract
[10] "Google fires 28 employees after sit-in protest over Israel cloud contract," The Verge, April 17, 2024. https://www.theverge.com/2024/4/17/24133700/google-fires-28-employees-protest-project-nimbus
[11] "AWS selected for $10 billion NSA cloud computing contract," Washington Post, August 10, 2022. This expanded from the original $600 million CIA contract in 2013.
[12] "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification," Joy Buolamwini and Timnit Gebru, Proceedings of Machine Learning Research, 2018.
[13] "Palantir Expands European Presence with New NATO Contracts," Defense News, February 2025. See also: Palantir Technologies Q4 2024 Earnings Report showing deployment in 40+ countries.
[14] "'Lavender': The AI machine directing Israel's bombing spree in Gaza," +972 Magazine, April 3, 2024. https://www.972mag.com/lavender-ai-israeli-army-gaza/
[15] "Inside Israel's AI-powered 'factory' for Palestinian assassination," Local Call, April 3, 2024.
[16] Metropolitan Police transparency reports on Live Facial Recognition deployments, 2023-2024.
[17] Big Brother Watch, "Face Off: The Lawless Growth of Facial Recognition in UK Policing," 2025 report.
[18] "Met Police to expand live facial recognition surveillance," The Guardian, July 2025.
[19] Metropolitan Police budget allocation documents, 2024-2025.
[20] "Croydon to trial permanent facial recognition cameras," Croydon Advertiser, 2025.
[21] "Proscription of Palestine Action," UK Home Office, July 5, 2025. Terrorism Act 2000, Schedule 2.
[22] "Palestine Action activists damage military aircraft at RAF base," BBC News, June 20, 2025.
[23] "UK: Palestine Action ban 'disturbing' misuse of UK counter-terrorism legislation, Türk warns," OHCHR Press Release, July 2025. https://www.ohchr.org/en/press-releases/2025/07/uk-palestine-action-ban-disturbing-misuse-uk-counter-terrorism-legislation
[24] "Prime Minister sets out blueprint to turbocharge AI," UK Government, January 13, 2025. https://www.gov.uk/government/news/prime-minister-sets-out-blueprint-to-turbocharge-ai
Additional Sources:
Amnesty International reports on facial recognition and human rights
Human Rights Watch documentation on AI in warfare
Privacy International analysis of UK surveillance expansion
Parliamentary debates on the Terrorism Act amendments
Freedom of Information Act requests regarding police facial recognition deployments
Court documents from Palestine Action judicial review proceedings