AI Has A Trust Issue. Open Source Governance Might Work
AI has a trust issue because it is being designed and driven forward by a handful of companies—just eight, by most counts—each with its own incentives, and largely unaccountable to the public.
AI is rapidly becoming the mediator of how we learn, decide, work, and even relate to one another. Used well, it's extraordinary: offering access to vast knowledge, analysis, therapeutic tools, education, and creativity—all on demand. AI can compress time, surface hidden patterns, explain complex systems, and democratize expertise. It's already reshaping science, law, and medicine. By any measure, this is one of the most significant technological developments in human history. And it is not yet trustworthy.
Governance is how we build trust. Without structures of transparency, accountability, and public oversight, AI risks becoming an unaccountable amplifier of inequality, misinformation, and manipulation—both commercially and socially. This is not a hypothetical risk. AI is already shaping what we read, how we work, what we believe, and which voices are heard. It generates fluency, but without accountability. It amplifies patterns in data—but those patterns reflect a world already shaped by bias.
To get good governance, we need to move beyond being either scared or dazzled by AI. It's here to stay, and its capabilities are accelerating. We need it to be reliable, fair, and open to revision. AI is becoming the filter through which billions access knowledge, education, therapy, entertainment, and political speech. Corporations and governments are already using it to innovate and improve—but also to reshape our lives in ways we often cannot see or challenge. AI governance needs to keep pace with that progress, rather than perpetually fixing problems only after they are embedded and the damage is done.
This is an aspect of the AI revolution we must address now. Private AI systems are not just serving us; they are studying us. To recoup their huge investments, they must commercialize—guiding decisions, nudging us, monetizing our attention. And as we've seen, monetization is content-neutral: whatever drives engagement will be amplified, unless and until companies are compelled to stop. The major AI firms do not explain themselves. They rarely submit to meaningful peer review. Their products are not designed to help us understand or govern the systems that increasingly govern us. Without a public AI—no open, inspectable, correctable version—we will have no way to verify what we are being told, contest manipulation, or defend truth against persuasion. We will have no way to check power, only to submit to it.
A public AI does not yet exist, largely because development has been expensive and clever people like to be paid very well for what they do. As a result, every major player in AI today—corporate or state-backed—faces a conflict of interest. They promise alignment, safety, and value, but their business models and political contexts often push them in the opposite direction: to recover investments, pursue profits, or achieve state-defined goals. As I argued previously, data is oil and Big Tech are the majors; we are already in an AI oligopoly of immense and unaccountable power. One firm promised “don’t be evil,” then quietly dropped it. Others have systematically bought out or excluded competition from their space before it could grow. Some are answerable to highly centralised states. One was recently revealed to be bypassing user privacy settings for strategic gain, despite public commitments not to do so. All champion the use of customer data as the price of choice—only to convert that “choice” into algorithmic feeds designed to shape, limit, and monetize attention. In the tech world, the customer is not always right.
So what’s the alternative?
We need a public AI platform—an infrastructure of intelligence owned and governed by the public, not just the market or the state. The best analogy is Wikipedia. Wikipedia isn’t flawless, but it works because it is open, editable, and governed by peer oversight. When something is wrong, it can be fixed. When bias appears, it can be debated. It doesn’t sell your data or manipulate your behaviour. Its legitimacy comes from being editable, inspectable, and collectively owned.
An AI built in that spirit would be:
- Open source and peer-reviewed
- Transparent about its logic and sources
- Designed for audit, correction, and versioning (a rough sketch of what this could look like follows the list)
- Governed by a global community of users and editors, not profit or political expediency
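To make the third principle a little more concrete, here is a minimal, purely illustrative sketch of what "designed for audit, correction, and versioning" might mean in practice: every model release ships with a machine-readable provenance card and an open revision log, in the spirit of a Wikipedia edit history. All of the names here (ModelManifest, Revision, the example.org URLs) are hypothetical and for illustration only, not a proposal for a specific implementation.

```python
# Illustrative only: a hypothetical provenance card plus open revision log
# for a publicly governed model release.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Revision:
    """One publicly logged correction to a released model."""
    version: str
    summary: str        # what changed and why
    evidence_url: str   # link to the open audit or review that prompted it
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


@dataclass
class ModelManifest:
    """Provenance card published alongside every model version."""
    name: str
    version: str
    training_sources: list[str]   # disclosed data sources
    eval_reports: list[str]       # links to peer-reviewed evaluations
    known_limitations: list[str]  # openly acknowledged failure modes
    revisions: list[Revision] = field(default_factory=list)

    def log_correction(self, new_version: str, summary: str, evidence_url: str) -> None:
        """Append a correction to the public history and bump the version."""
        self.revisions.append(Revision(new_version, summary, evidence_url))
        self.version = new_version


# Example: a community audit finds a bias problem and the fix is logged openly.
manifest = ModelManifest(
    name="public-ai-example",
    version="1.0.0",
    training_sources=["openly licensed corpus (disclosed in full)"],
    eval_reports=["https://example.org/eval/1.0.0"],
    known_limitations=["under-represents low-resource languages"],
)
manifest.log_correction(
    new_version="1.0.1",
    summary="Rebalanced training data after a community bias audit",
    evidence_url="https://example.org/audits/42",
)
print(manifest.version, len(manifest.revisions))  # -> 1.0.1 1
```

The point is not the particular schema but the property it illustrates: corrections are public, versioned, and traceable to the evidence that prompted them, so anyone can inspect not just what the system is, but how and why it changed.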
Such an approach wouldn’t make AI perfect, but it could make it honest—through process, not branding. It would give individuals somewhere to go to compare and check what they are being fed by commerce and the state. If we don’t build a public intelligence platform now, we risk losing the ability to understand or govern the systems shaping our lives. There will be no common reference point—only persuasion. No recourse—only resignation.
This is an idea, not a business pitch. I don’t know exactly how it would be funded or implemented, only that it is necessary. Wikipedia has shown what is possible. Now we need to build its next generation for the AI world.