ai.go: Data Sovereignty and Functionality Through an Orchestrated Multi-Agent System
In June 2025, Microsoft confirmed under oath before a French Senate committee that the company can no longer provide an absolute guarantee for the protection of data stored in the EU. The reason lies in the U.S. CLOUD Act, which obliges Microsoft to hand over data to U.S. authorities—even if that data is stored exclusively on servers in Europe. This renders the earlier assurance of complete protection for European data obsolete. Access by U.S. authorities remains legally possible, regardless of whether the data is stored within the EU or not. This applies not only to Microsoft but also to other major U.S. providers such as AWS.
At the same time, European companies face another challenge: while the U.S. and China dominate global AI development, European models place only mid-tier in international benchmarks. Companies must therefore meet the strictest requirements for data protection and regulatory compliance while still wanting, and needing, access to the most powerful Large Language Models (LLMs) to make their AI initiatives successful.
The central question is therefore: How can the use of modern AI models be combined with full data sovereignty and regulatory compliance?
The ai.go Approach
ai.go addresses this challenge with a multi-agent system that is not limited to a single model but relies on an orchestrated architecture of multiple AI agents. At its core is an orchestration agent that receives user requests, breaks them down into subtasks, and routes each subtask to a specialized agent backed by an LLM suited to that task. These subtasks may include extracting data from enterprise systems as well as generating programs for data analysis and visualization.
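The sketch below shows one way such an orchestration layer could be structured in Go. It is a minimal illustration, not ai.go's actual implementation: the names Orchestrator, Agent, and SubTask, the task kinds, and the hard-coded decomposition are all assumptions made for the example.

```go
package orchestration

import (
	"context"
	"fmt"
)

// SubTask is one unit of work produced by decomposing a user request.
// The Kind field decides which specialized agent handles it.
type SubTask struct {
	Kind  string // e.g. "extract-data", "generate-analysis-code"
	Input string
}

// Agent is any LLM-backed worker that can handle one kind of subtask.
type Agent interface {
	Handle(ctx context.Context, t SubTask) (string, error)
}

// Orchestrator receives a user request, breaks it into subtasks and
// routes each subtask to the agent registered for its kind.
type Orchestrator struct {
	agents map[string]Agent
}

func (o *Orchestrator) Run(ctx context.Context, request string) ([]string, error) {
	var results []string
	for _, task := range o.decompose(request) {
		agent, ok := o.agents[task.Kind]
		if !ok {
			return nil, fmt.Errorf("no agent registered for task kind %q", task.Kind)
		}
		out, err := agent.Handle(ctx, task)
		if err != nil {
			return nil, err
		}
		results = append(results, out)
	}
	return results, nil
}

// decompose is a placeholder: a real orchestration agent would use an LLM
// to split the request; here a typical two-step plan is hard-coded.
func (o *Orchestrator) decompose(request string) []SubTask {
	return []SubTask{
		{Kind: "extract-data", Input: request},
		{Kind: "generate-analysis-code", Input: request},
	}
}
```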

Controlled Use of LLMs
ai.go also leverages the leading LLMs on the market. The key principle, however, is this: any task involving customer data is processed exclusively by models hosted on servers within the European Union and owned by a German company. This effectively prevents the leakage of sensitive information. In addition, sensitive data passes through an anonymization step before entering the processing pipeline.
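The following sketch illustrates how these two safeguards, routing customer data only to EU-hosted models and anonymizing it beforehand, could be expressed in code. It is an illustration under assumptions: the types ModelEndpoint and Task, the ContainsCustomerData flag, and the simple email-masking regex stand in for whatever classification and anonymization pipeline ai.go actually uses.

```go
package routing

import "regexp"

// ModelEndpoint describes where a model is hosted; EUHosted marks endpoints
// running on EU servers owned by the German provider.
type ModelEndpoint struct {
	Name     string
	EUHosted bool
}

// Task carries the prompt plus a flag indicating whether customer data is involved.
type Task struct {
	Prompt               string
	ContainsCustomerData bool
}

// emailPattern is a deliberately simple stand-in for a real anonymization step.
var emailPattern = regexp.MustCompile(`[\w.+-]+@[\w-]+\.[\w.]+`)

// anonymize masks obviously sensitive tokens before a prompt enters the pipeline.
func anonymize(prompt string) string {
	return emailPattern.ReplaceAllString(prompt, "[REDACTED_EMAIL]")
}

// selectEndpoint enforces the core rule: tasks touching customer data may only
// go to EU-hosted models; everything else may use any permitted endpoint.
func selectEndpoint(t Task, endpoints []ModelEndpoint) (ModelEndpoint, bool) {
	for _, ep := range endpoints {
		if t.ContainsCustomerData && !ep.EUHosted {
			continue
		}
		return ep, true
	}
	return ModelEndpoint{}, false
}

// Prepare applies both safeguards: anonymize first, then pick a compliant endpoint.
func Prepare(t Task, endpoints []ModelEndpoint) (string, ModelEndpoint, bool) {
	prompt := t.Prompt
	if t.ContainsCustomerData {
		prompt = anonymize(prompt)
	}
	ep, ok := selectEndpoint(t, endpoints)
	return prompt, ep, ok
}
```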
For generic tasks—such as code generation—high-performance models outside Europe can optionally be used. The decision, however, always rests with the customer, who defines in detail which models are permitted and which are not—even on a task-by-task basis.
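A per-task allowlist of this kind could be modeled as a small policy structure, for example as sketched below. The task kinds and model names are purely illustrative and not taken from ai.go's actual configuration.

```go
package policy

// ModelPolicy captures the customer's choices: which models are allowed
// by default, and which are explicitly permitted for a specific task kind.
type ModelPolicy struct {
	DefaultAllowed []string            // models usable when no task-specific rule exists
	PerTaskAllowed map[string][]string // task kind -> explicitly permitted models
}

// Allowed reports whether a model may be used for a given task kind.
func (p ModelPolicy) Allowed(taskKind, model string) bool {
	if models, ok := p.PerTaskAllowed[taskKind]; ok {
		return contains(models, model)
	}
	return contains(p.DefaultAllowed, model)
}

func contains(list []string, s string) bool {
	for _, v := range list {
		if v == s {
			return true
		}
	}
	return false
}

// Example: analysis of customer data is pinned to an EU-hosted model, while
// generic code generation may optionally use a non-EU model.
var examplePolicy = ModelPolicy{
	DefaultAllowed: []string{"eu-hosted-model"},
	PerTaskAllowed: map[string][]string{
		"analyze-customer-data": {"eu-hosted-model"},
		"generate-code":         {"eu-hosted-model", "non-eu-frontier-model"},
	},
}
```

With a policy like this, an orchestrator along the lines of the earlier sketch would simply consult Allowed before dispatching a subtask to any model.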
A Secure Execution Model
When an analysis program is generated, for example, it is executed in a dynamically created, sandboxed Docker container on European servers owned by a German company. The data enters the container only as local parameters, so it remains fully under the customer's control at all times.
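One plausible way to launch such a run is shown below: a per-execution Docker container with networking disabled and the customer's data mounted read-only as a local parameter. The image name, directory layout, and entry point are assumptions made for the example, not details of ai.go's actual setup.

```go
package sandbox

import (
	"context"
	"fmt"
	"os/exec"
)

// RunAnalysis executes a generated analysis program inside a freshly created,
// network-isolated Docker container. The customer's data enters the container
// only as a read-only volume, i.e. as a local parameter of this single run.
func RunAnalysis(ctx context.Context, image, programDir, dataDir string) ([]byte, error) {
	args := []string{
		"run", "--rm", // container is created per run and removed afterwards
		"--network", "none", // no network access: data cannot leave the container
		"--memory", "512m", "--cpus", "1", // resource limits for the generated, untrusted code
		"-v", programDir + ":/app:ro", // the generated analysis program, read-only
		"-v", dataDir + ":/data:ro", // the customer data, read-only and local to this run
		image,
		"python", "/app/analysis.py", "/data", // entry point of the generated program (illustrative)
	}
	out, err := exec.CommandContext(ctx, "docker", args...).CombinedOutput()
	if err != nil {
		return out, fmt.Errorf("sandboxed run failed: %w", err)
	}
	return out, nil
}
```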
Conclusion
With this approach, ai.go satisfies two essential requirements:
- Maximum data sovereignty and control through processing within European infrastructure.
- Optimal functionality through flexible orchestration of the best and most suitable models for each task.
This makes ai.go a solution that bridges the gap between Europe’s strict regulatory requirements and the need for access to state-of-the-art AI technology.