Image: Deborah Lupton / betterimagesofai.org / CC-BY-4.0
Everyone seems to agree that we need “AI sovereignty.” But what is behind this buzzword? What does it mean for those of us who don’t work for large AI companies, i.e., “the rest of us”? A guest article by Nico Görnitz from the MLOX project.
When people talk about AI sovereignty today, it's almost always about scale: countries, industries, or entire economic blocs. The images are familiar: hyperscalers, data centers with thousands of GPUs, each worth as much as a single-family home, investments in the billions or more. Sovereignty seems like something other people talk about, something so far removed from our own situation that it doesn't affect us. This perspective falls short, however. Most AI applications are not developed in large-scale national projects, but in small teams, research institutions, NGOs, and startups, as hobby projects, or by indie developers. For them, sovereignty is not a geopolitical issue but a very practical one: can I understand, operate, and change my systems, or not?
So this is “the rest of us”: individual developers, small startups, social organizations, and research groups. They work on very specific problems: evaluating data, automating processes, supporting decisions. What they have in common is limited resources and the lack of specialized infrastructure teams. At the same time, they are often particularly dependent on functioning, trustworthy infrastructure. If sovereignty is reserved for the large players, these groups are left out, even though they develop a significant share of socially relevant AI applications.
Public debate tends to focus on models and the computing power they require, and these naturally form the core of the AI revolution. But models today are comparatively interchangeable, and data can be migrated, both within limits. The real lock-in occurs at the infrastructure level: in deployment and serving mechanisms, identity and permission models, monitoring, logging, and cost structures. Those who do not understand these layers, or cannot replace them, lose their ability to act in the long term. AI sovereignty therefore does not mean building everything yourself; it means knowing how your own systems are put together and which parts remain interchangeable.
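What interchangeability can look like in practice: the following minimal sketch (illustrative, not MLOX code) assumes only that the model backend speaks the widely adopted OpenAI-compatible chat API. The environment variables LLM_BASE_URL, LLM_API_KEY, and LLM_MODEL are names chosen for this example; pointing them at a hosted provider or at a self-hosted server such as Ollama or vLLM is a configuration change, not a rewrite.

```python
import os
import requests  # pip install requests

# Hypothetical configuration: the backend is chosen purely via environment
# variables, so switching from a hosted API to a self-hosted server
# (e.g. a local Ollama or vLLM instance) changes config, not code.
BASE_URL = os.environ.get("LLM_BASE_URL", "http://localhost:11434/v1")
API_KEY = os.environ.get("LLM_API_KEY", "none")  # local servers often ignore this
MODEL = os.environ.get("LLM_MODEL", "llama3")

def chat(prompt: str) -> str:
    """Send a chat request to any OpenAI-compatible /chat/completions endpoint."""
    resp = requests.post(
        f"{BASE_URL}/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": MODEL, "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(chat("In one sentence: what does AI sovereignty mean for small teams?"))
```

The point is not this particular snippet but the design choice behind it: the exit option is built in from the start, because the client depends on an open protocol rather than on one vendor's SDK.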
For small teams, sovereignty does not mean maximum self-sufficiency. No one expects them to run their own data centers or do without external providers entirely. Three points are crucial: first, a fundamental understanding of their own systems; second, the ability to replace components without having to rebuild everything; third, realistic exit options in case conditions change.
Open source is a key requirement for sovereign infrastructure: transparency, auditability, and collaborative development are decisive advantages. At the same time, open source alone does not solve the problem. The list of requirements for AI applications is long, and many projects are fragmented, difficult to operate, and dependent on implicit expert knowledge. Openness without usability remains theoretical. For open source to actually enable sovereignty, it needs integration work, documentation, and clear examples, work that often remains invisible.
In my own career, from research to industrial applications to open-source projects, I have seen a recurring pattern: models work, demos are convincing, but production operation fails. Not because of the algorithms, but because of the infrastructure. This experience was a major reason for me to take a closer look at how AI infrastructure can be designed so that it remains manageable even for small teams. Projects like MLOX arose from exactly this observation, as an attempt to make existing open-source building blocks more accessible. Because if only large players can operate AI systems independently, power shifts: innovation becomes concentrated, and diversity declines. That is why this is not just a technical issue, but a social one.
There is no need for a single platform or a universal solution. Instead, open standards, modular systems, and good documentation are what matter. AI sovereignty does not begin with the number of GPUs, but with whether systems can be understood, operated, and replaced. When we talk about sovereign AI, we should therefore not only ask how states or corporations can become more independent, but also how everyone else stays capable of acting. An AI future that only works for the big players is not a sovereign future.
Nico Görnitz
Dr. Nico Görnitz is a machine learning engineer and data scientist. With the Prototype Fund, he is developing MLOX, an open solution that makes AI infrastructure manageable even for small teams and independent developers.