Sovereignty or Chosen Dependence: Which Future Will Europe Choose?

Pierre Bongrand — Research Associate in Artificial Intelligence, Harvard Medical School.
Adrien Joly — Undergraduate, London School of Economics, Class of 2026.


There is little doubt today that power over artificial intelligence (AI) will have decisive long-term implications, ultimately defining the alliances and great confrontations of the twenty-first century. Two poles, the United States and China, have emerged as hegemons, propelled by rapid advances toward artificial general intelligence (AGI) and by remarkably capable generative language models. To confront this duopoly, Europe must urgently decide what role it intends to play in the global economy and in geopolitics alike.

For the United States, AI is not only a lever for economic growth but also a foreign policy issue. The turn taken by the Trump administration is clear: the goal has shifted from merely competing with China to establishing technological hegemony in AI.1 Under the Biden administration, the U.S. government introduced the “AI diffusion rule,” which classified countries according to the likelihood that they would grant China access to sensitive technologies—chips and large language models (LLMs)2; this regulation imposed, in particular, export controls on certain closed-weight or proprietary models that are not accessible to the general public.3 The Trump administration replaced this regulation in May 2025 with one requiring companies headquartered in the United States to maintain at least 50 percent of their total AI-dedicated computing capacity on American soil.4 Furthermore, the 25 percent tariffs imposed on advanced AI chips in January 2026 aim to push American producers to reshore their manufacturing to the United States.5 Finally, sweeping deregulation, combined with the euphoria of investment in technology companies and data center construction, has fueled American growth since early 2025.6

China pursues the same technological ends, both domestically and internationally, but with the logic of an authoritarian state and the particular instruments at its disposal. It has shielded its technology sector from foreign competition and fostered industrial development—particularly in large language models, AI-dedicated chips, and high-performance computing centers—through state support, subsidies, tax advantages, and preferential policies. Moreover, Chinese legislation compels domestic companies to make their technology openly available to the state. In foreign policy, China seeks to extend its Belt and Road strategy into the digital sphere by exporting open-weight AI models across the world. Faced with this Chinese offensive, the “Pax Silica” treaty, ratified in December 2025 by the United States and several major countries including India, underscores the significance of progressive decoupling from China to secure the AI supply chain, particularly with respect to critical minerals.

Europe, like the rest of the world, lags considerably behind the United States and China in large language models: according to the latest LMArena rankings, 97 of the top 100 AI models are American or Chinese, while the highest-ranked European model appears only at 62nd place, with all three non-Chinese and non-American models in the top 100 coming from the same French company, Mistral.7 Owing to their superior performance and capabilities, American proprietary models account for the overwhelming majority of LLM usage by European consumers. It was also revealed in December 2025 that the large language model Mistral 3 relied on an architecture8 heavily inspired by that of the Chinese open-weight model DeepSeek V3.9

Parallel to these developments, the day when AI model development is fully automated by AI itself is rapidly approaching. AI development is a long-distance race in which any delay exponentially reduces the chances of technological victory. Indeed, a key concept in the AI world is recursive self-improvement: the process by which an AI is used to build its successor more efficiently. Today, Anthropic’s Claude model (ranked first on LMArena) contributes to coding nearly 100 percent of its next version.10 This implies that a significant gap could open between the cutting-edge proprietary models of the LMArena top three, currently Google Gemini, ChatGPT, and Claude, and the remaining models in the top 100, including Mistral. It will then be impossible for Europe to close the gap by relying on American AI, because American AI companies, in order to preserve their technological advantage, prohibit the use of their models to train competing ones. Anthropic, for instance, restricts access to its services for researchers at OpenAI and xAI.11

Today, the drivers of AI are not merely technological or economic: they also concern the national security of states. AI is deeply embedded in the process of data collection and its exploitation for strategic decision-making in intelligence and defense.12 Its application in cybersecurity has now been demonstrated, since Claude Opus 4.6 was able to autonomously identify critical vulnerabilities in software used by 200 million monthly users.13 In the field of cybersecurity, where exploitable flaws appear constantly, having a more capable AI is becoming a decisive security factor. An advanced AI can identify and exploit vulnerabilities, thereby jeopardizing any computer defense system or intelligence network that does not possess an equally capable AI.

Facing a deteriorating geopolitical environment, two options present themselves to Europe: develop a sovereign AI—on the model of Mistral—to ensure its cyberdefense and any other confidential and critical use, or rely on the more capable models of foreign powers, whether American or Chinese. Recourse to foreign models raises two categories of risk.


The first risk concerns cybersecurity: if Europe depended on OpenAI for its cyberdefense and a rift arose between the United States and Europe, the American government could reimpose export controls in the spirit of the AI diffusion rule to prohibit the sale of its latest models in Europe. One could also envision a scenario in which the United States uses GPT-5.4 while Europe, lacking an equivalent sovereign model, is forced to operate with GPT-5—an American model, to be sure, but one vulnerable to an adversary’s more advanced AIs. These models could also conceal backdoors—hidden behaviors, pre-programmed and activatable on command. A model could appear perfectly normal and capable, yet a sequence of tokens14—a rare word or a particular pattern in the prompt—could suffice to trigger the malicious behavior. If such a model were deployed within a government, the backdoor could be exploited to manipulate the system’s responses, circumvent its safeguards, or facilitate the exfiltration of sensitive information.

The second risk is geopolitical: such dependence could be used as leverage against Europe in military or economic negotiations. European governments’ access to the capabilities of American large language models could be conditioned on strategic guarantees or regulatory alignment dictated by the United States. Europe could thereby lose its role as guarantor of the safety rules applicable to large language models, a role that is today at the heart of the European Commission’s agenda with the AI Act.

Without sovereign artificial intelligence, Europe exposes itself to growing technological dependence on the United States or China. This dependence would weaken its geopolitical position and could compromise its cybersecurity, whether due to the lesser performance of its models, or restricted access to cutting-edge foreign systems. The risk is all the more pressing given that we are in the acceleration phase of AI: the duration of tasks that state-of-the-art large language models can accomplish autonomously roughly doubles every six months. If Europe does not seize this pivotal moment, it may never close the gap—just like the one it accumulated by missing the Internet revolution of the late 1990s, which explains why none of the world’s major technology companies are European. Caught between American technology giants and the Chinese industrial offensive, Europe must chart its own path toward sovereignty in artificial intelligence.

The authors wish to warmly thank Anne Prost for her invaluable comments during the drafting of this article.


1. Sona Muzikárová, “Europe Cannot Avoid an AI Reckoning,” Project Syndicate, 5 February 2026.

2. Large Language Models (LLMs) are the AI systems capable of understanding and generating text, such as ChatGPT or Claude. They fall into three broad categories based on their degree of openness: open-source models, whose code and parameters are entirely public; open-weight models, whose parameters are accessible but not necessarily the training code; and closed-weight or proprietary models, whose internal architecture remains confidential—though they may still be available to the public through online interfaces.

3. Ian Bremmer, “The Politics, and Geopolitics, of Artificial Intelligence,” TIME, 11 August 2025.

4. Center for Strategic and International Studies, “AI Diffusion Framework: Securing U.S. AI Leadership While Preempting Strategic Drift,” CSIS, 18 February 2025.

5. The White House, “Fact Sheet: President Donald J. Trump Takes Action on Certain Advanced Computing Chips to Protect America’s Economic and National Security,” The White House, January 2026.

6. Jason Furman, “Data Centers Drove GDP Growth to Zero in the First Half of 2025,” Fortune, 7 October 2025.

7. LMArena, lmarena.ai.

8. Innovation in large language models rests on several levers: the model’s architecture (i.e., how its components are structured and interact to process information), but also training data, optimization techniques, and deployment strategies. A change in architecture can yield considerable gains in performance or efficiency at equivalent compute. This is why the fact that two distinct models rely on a near-identical architecture raises questions about the origin of the innovation and potential transfers of technological know-how.

9. Sebastian Raschka, “With Mistral-3 and DeepSeek-V3.2 We Got…,” LinkedIn, 2026.

10. Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, and Romeo Dean, “AI 2027,” ai-2027.com, 3 April 2025.

11. Anthropic, “Commercial Terms,” Anthropic.

12. Will Knight, “Anthropic Revokes OpenAI’s Access to Claude,” WIRED, 1 August 2025.

13. Jan Betley, Jorio Cocola, Dylan Feng, James Chua, Andy Arditi, Anna Sztyber-Betley and Owain Evans, “Weird Generalization and Inductive Backdoors: New Ways to Corrupt LLMs,” arXiv, 10 December 2025.

14. A “token” is the basic unit processed by a large language model. Before analyzing a text, the model breaks it into fragments—words, sub-words, or characters—called tokens. For instance, the word “cybersecurity” may be decomposed into several tokens (“cyber,” “secu,” “rity”). It is from these fragments that the model interprets and generates text. A specific token sequence, invisible to the user, could thus serve as an activation signal for hidden behavior.
