Managing AI in Real Time for the Public Good – Justification, Not Just Regulation
There is a growing narrative that we are no longer in control of AI—of what it is now and what it might become. I reject this. “We,” the people, still include the innovators, shareholders, managers, and developers who shape AI’s evolution. AI has not achieved any kind of singularity; it remains a product of deliberate design, financing, and marketing by a coalition of powerful actors within our societies—and as such, we can and must act to prevent the harm that comes from our own inaction or from allowing rogue actors to abuse us.
The real issue is perhaps the predominance of the idea that AI cannot or even should not be regulated—that innovation must remain free of constraints and that efforts to align it with the public good are futile or undesirable. This narrative of unstoppable innovation is not a neutral truth. It is a carefully cultivated position, one that prioritises private interests and sets aside serious engagement with collective responsibility.
We have seen this before. The car industry developed vehicles far faster than any public safety framework could accommodate, leaving expensive, delayed retrofits to address the danger. Pharmaceutical companies continue to market drugs priced well beyond what public health systems can afford, framing the cost of health as something beyond reasonable debate. AI follows a similar pattern. The story is that AI will automate routine tasks while leaving human creativity and sensitivity untouched. This is increasingly misleading. AI already produces text of a quality that can be practically useful with minimal guidance, with the creative credit currently going to the user, who has done little to earn it. Sensitivity and nuance will not remain purely human domains for long, as computer vision systems are already proving capable of interpreting some non-verbal cues with impressive precision.
How can we manage this without stifling the actual and potential benefits of AI? The challenge is that regulation—already a word carrying negative weight in many quarters—seems to be the only lever to balance private interests with the public good. Yet regulation itself is slow, subject to lobbying, and often reduced to legal battles that are expensive, drawn out, and too often resolved in favour of those with the deepest pockets.
We need the process of managing and guiding AI to operate in real time. The legal process is necessarily complex, time-consuming, and delayed in impact. We need something timely, fair, inexpensive, and genuinely accessible and engaging for the public.
I believe that the basis for a more effective approach already exists. For example, the EU AI Act explicitly categorises AI systems by risk, and its preamble sets out guiding principles for their ethical deployment. This spirit is captured in the Act’s risk-based pyramid (a code sketch follows the list):
- At the base, minimal-risk AI systems that pose no direct harm.
- Above that, limited-risk applications that require transparency.
- Then high-risk systems that can significantly impact health, safety, or rights, requiring stringent safeguards.
- At the top, unacceptable-risk systems that are outright prohibited.
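For readers who think in code, here is a minimal, purely illustrative sketch in Python of how the pyramid might be encoded. The tier names and example systems are hypothetical shorthand (the Act defines these categories in legal prose, not code), though the example classifications mirror ones commonly cited in discussions of the Act:

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative encoding of the EU AI Act's risk pyramid (hypothetical names)."""
    MINIMAL = 1       # base of the pyramid: no direct harm
    LIMITED = 2       # transparency obligations apply
    HIGH = 3          # stringent safeguards required
    UNACCEPTABLE = 4  # outright prohibited

def obligations(tier: RiskTier) -> str:
    """Broad obligation the Act attaches to each tier."""
    return {
        RiskTier.MINIMAL: "no specific obligations",
        RiskTier.LIMITED: "transparency requirements",
        RiskTier.HIGH: "stringent safeguards before and after deployment",
        RiskTier.UNACCEPTABLE: "prohibited outright",
    }[tier]

# Hypothetical examples; real classification is a legal judgment, not a lookup.
examples = {
    "spam filter": RiskTier.MINIMAL,
    "customer-service chatbot": RiskTier.LIMITED,
    "CV-screening tool": RiskTier.HIGH,
    "social-scoring system": RiskTier.UNACCEPTABLE,
}

for name, tier in examples.items():
    print(f"{name}: {tier.name} -> {obligations(tier)}")
```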
Why not use this statement of principles as part of the enforcement process? My proposal, then, is to integrate a public statement of alignment with the spirit of the law into any prosecution. Any company facing charges under the AI Act or similar regulation would be required to provide one. Importantly, the statement would not be a punishment or an admission of guilt. It would simply require both the prosecutor and the accused company to articulate—within 30 days of formal charges—how the accused’s actions align (or do not) with the law’s intent.
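To make the mechanics concrete, here is a toy sketch of the rule the mechanism adds. The 30-day window comes from the proposal above; the class and field names are hypothetical assumptions, not anything drawn from the Act:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# From the proposal above: statements are due within 30 days of formal charges.
STATEMENT_WINDOW = timedelta(days=30)

@dataclass
class AlignmentStatement:
    """One party's public account of how the accused's actions align with the law's intent."""
    author: str        # "prosecutor" or the name of the accused company
    text: str          # the public statement itself
    published: date

@dataclass
class Case:
    """A prosecution under the AI Act or similar regulation (hypothetical model)."""
    company: str
    charges_filed: date

    def statement_deadline(self) -> date:
        return self.charges_filed + STATEMENT_WINDOW

    def is_on_time(self, statement: AlignmentStatement) -> bool:
        return statement.published <= self.statement_deadline()

# Example: charges filed on 1 March give a deadline of 31 March.
case = Case(company="ExamplePlatform Inc.", charges_filed=date(2025, 3, 1))
print(case.statement_deadline())  # 2025-03-31
```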
The power of this approach is that it is immediate and straightforward. Unlike protracted court proceedings or complex regulatory investigations, a public statement can be drafted and shared quickly. It brings clarity and a degree of accountability into the early phase of any legal dispute—long before outcomes are decided in court. This approach has three key advantages:
- It is not punitive. Companies would not face legal sanctions for their statements; they would face only the natural discipline of public opinion. This preserves the principle of fairness while ensuring a transparent and participatory dialogue.
- It sidesteps legal and lobbying defences. By focusing on the company’s own words, it avoids the endless procedural tactics and well-funded lobbying that can delay or dilute enforcement. The statement becomes a matter of immediate public record—one that lobbyists cannot rewrite behind closed doors.
- It invites a real-time public verdict. Citizens, journalists, investors, and consumers can weigh the company’s own words against the charges immediately, rather than waiting years for a court ruling.
For example, imagine a case in which a large social media platform is prosecuted for using an AI-based algorithm that recommends increasingly extreme or manipulative content to children, prioritising engagement over well-being. Under current legal frameworks, the case would proceed through standard litigation, a process that could take years. Meanwhile, the company’s carefully managed narrative would remain hidden behind courtroom arguments and legal filings. With this proposed mechanism in place, however, the company would be required—within 30 days of the charges—to issue a clear and public statement of how its algorithm design and deployment align (or do not) with the spirit of the EU AI Act.
By requiring these statements, we create a real-time forum for public judgment—a parallel court of opinion that can shape reputations and influence shareholder and consumer behaviour in ways that no courtroom ruling alone can match. These statements would undoubtedly be carefully crafted by communications teams. Yet history shows that corporate messaging which misrepresents the truth rarely survives long in the public domain. One need only recall recent examples like Volkswagen’s “clean diesel” emissions scandal, where the company’s public claims were eventually exposed as manipulative spin, or the repeated discrediting of social media platforms’ claims to protect privacy. Such examples reveal the power of sustained public scrutiny—an environment in which cynical PR is not a lasting refuge.
It is important to remember that the “spirit of the law” is not static, nor is it defined solely by legal experts or courts. It is rooted in public values and legislative debate, not in civil service committees or legal precedent. It is continually refined through public discussion and democratic processes, evolving at the pace of society rather than the slower rhythm of case law.
The statement would need to show whether the system in question was deployed responsibly or recklessly, and how the company weighed public well-being against profit. Crucially, this public statement would not be about punishment; it would be an opportunity for the company to demonstrate its intention to comply in both spirit and letter, showing that it recognises the public impact of its technology.
In sum, this proposal formalises what should already be the case: companies should live by their public justifications, not hide behind regulatory opacity. It is a simple mechanism—immediate, transparent, and fair—designed to ensure that the narrative of unstoppable innovation does not obscure the reality of deliberate corporate choice.