AI’s Battleground: Power, Autonomy, and Society

The Distraction of Safe Debates
AI gives us plenty to talk about, but are we addressing the most important issues, or only the safe ones? Changes to work and education, plagiarism, creativity, and sentience all matter and need debate. They are also safe, in that we can see a way to participate in the outcomes, whether through organised labour, governing our schools, or consuming or rejecting the outputs. These topics make AI seem like a matter of cultural preference. Deeper questions remain unasked, such as how AI will affect power itself: the power of business to decide how we interact and pay, the power of states to deploy AI in warfare, and the power to watch, classify, and predict.

AI at War: Secrecy and Normalisation
Public debate can be a front for things already happening; AI in warfare is a clear example. The Terminator vision of humanoid robots is distant and dramatic, yet the real change is quieter and already deployed. Autonomous drones are reportedly in use in Ukraine; because they do not depend on external signals after launch, their on-board autonomy makes them harder to track. Today AI helps gather intelligence; tomorrow it can carry out the final stages of a strike. The step from assisted targeting to full autonomy, including selecting a human target without human intervention, is small.

It is comfortable hypocrisy to talk of waging war “nicely,” of keeping to agreed boundaries that rarely survive the first shot. Major Thomas W. Ferebee, the Enola Gay’s bombardier, did not choose any of the 70–80,000 victims of Hiroshima. Did anyone? This was already death at a moral distance. Before he pressed the release button, consequential decisions were made elsewhere, out of sight and beyond scrutiny. Expect the same with AI: deployment under secrecy, normalisation by small procedural changes, and a politics that prefers escalation framed as necessary defence rather than moral reckoning.

So AI in warfare is not some other future; it is now. The technology will not arrive fully formed after some future provocation. Private firms are already designing and supplying these systems under contract, funded and protected by the language and infinite deniability of “national security.” While we debate whether to let our children use OpenAI’s tools in school, a far more consequential development of AI is already under way.

Warfare exposes the true scale of the AI challenge. This technology reaches into every domain: education, culture, war, entertainment. It is tempting to focus on each domain in turn and start drafting rules, but we still lack the frame that matters most: how AI will be used by the powerful, whether politicians, corporations, or criminals. The debate cannot remain technical; it must be moral. The question is not what AI can do, but how far we will allow those seeking profit or power to reshape civilisation itself.

The Power of Security and Innovation
The same mindset that justifies secrecy and automation in war already governs our digital lives in peace. Once a technology can be used to kill without oversight, using that same technology to control or manipulate feels less consequential. It is not. These are different theatres of the same doctrine: one lethal, the other psychological, both built on the assumption that control is too important to be shared, and, above all, that ‘we know best’.

The same doctrine runs through advertising, content feeds, and platform design. Data is harvested without consent or compensation. Our preferences and personalities are mined for sales opportunities, not treated as part of a mutual exchange of choice and value. Online selling does not wait to be asked. AI now feeds us rather than serves us, through systems built to capture attention and shape behaviour. Are we consciously changing the nature of ownership, allowing our phones, laptops, and cars to be managed and monetised after purchase? Or are we simply living through the slow accumulation of small changes imposed on customers who have stopped paying attention? We are not paying attention, but that does not make it right.

Permission, Not Forgiveness: Redefining Accountability
AI moving onto the battlefield and technology companies turning consumers into inventory are both happening in the shadows. The first is no surprise: secrecy has long been a privilege granted to security and defence forces. In the tech world, no such permission was ever given. Under the cover of innovation and slogans like “move fast and break things,” small firms became giants without ever accepting the responsibilities that come with power. It is one thing to discover how to monetise data; it is another to decide not to price it, denying citizens a meaningful exchange of value when their information is harvested. It is not acceptable to roll back consumer protections painfully built after years of abuse in finance and other sectors, and to redeploy those tactics under new names. Undermining the essence of ownership through software capture rewrites capitalism itself—and not in favour of the public.

Military overreach has long been justified as political necessity. Corporate overreach is now excused as innovation. Both rely on citizens accepting opacity as inevitable, or being numbed into passivity by complex terms and conditions and the relentless drift of thousands of small changes, all in one direction. In both, accountability is post-hoc or performative.

We insist on oversight for the military. The same discipline is needed for the transformations being wrought by big tech in the name of convenience or entertainment. Ask for permission, not forgiveness, and stop using vast resources to pay the fines long after the damage is done.

Restoring Democracy’s Role in AI’s Future
Openness is not sentimental; it is constitutional. AI cannot remain a black box into which authority disappears and from which excuses emerge only after something dreadful—or something quietly corrosive of a fair society—has already occurred, and is then found to be impossible to fix.

The trust we place in the military and the freedom we grant to technology companies must not become weaknesses to be exploited. To prevent overreach, the public and its representatives must extend democratic oversight into the balance of power that AI has already shifted. Brilliant individuals chasing their own visions, and shareholders who measure only financial returns, are not enough to control a technology that can choose to end a human life independently, or that can reshape our economies through data harvesting, infotainment, and one-sided, inescapable software subscriptions.

Re-establishing society’s primacy in designing and deploying AI will be difficult. The themes go far beyond cost and convenience; they demand ethics, restraint, long-term vision, and a balance between freedom, innovation, and responsibility. This is what democracy means. And while we may think we have enough to manage with immigration, globalisation, health, and education, AI touches every one of these directly—and adds new challenges of its own.

The obvious danger of AI choosing who to kill is that it might choose to kill you. The subtler danger is realising that you no longer, in any meaningful sense, own your smartphone, your car, or your home; that you are known and steered by corporate algorithms designed not to serve you but to extract your attention, your money, or your vote.

It is time to talk about AI and power—and to act.