
How the EU Can Navigate the Geopolitics of AI

State and corporate competition threatens responsible AI practices and safety regulations. The EU will have to navigate these rivalries while staying committed to values-based AI governance.

Published on January 30, 2024

This publication is part of EU Cyber Direct – EU Cyber Diplomacy Initiative’s New Tech in Review, a collection of commentaries that highlights key issues at the intersection of emerging technologies, cybersecurity, defense, and norms.

The rapid evolution of artificial intelligence (AI) technologies over the past year has set the stage for intense global competition in capabilities and governance frameworks.

The advent of consumer-facing AI, epitomized by the meteoric rise of ChatGPT, a chatbot built on a large language model (LLM) and launched by OpenAI in November 2022, has made the workings of AI systems and the prospect of artificial general intelligence (AGI) a major focus for leaders, regulators, industry, and the public alike.

This heightened visibility has sparked policy and regulatory concerns about safety, human rights, responsible innovation, and alignment—ensuring that AGI adheres to human values and follows human intent. The inherently opaque, black-box nature of the technology, as well as the limited explainability of its outcomes, is particularly unsettling. This year will see billions of people go to the polls worldwide; in the era of post-truth and generative AI, the technology's capacity to amplify misinformation and disinformation risks undermining confidence in election results and the democratic exercise itself. The potential weaponization of AI technologies for security and defense is another worrying development.

But generative AI is only the beginning when it comes to the technology’s potential to impact international relations as well as geopolitical and corporate power. The geopolitics of AI, predominantly played out between the United States, China, and the EU, extends beyond rapid technological advancements, shaping global norms, technological standards, and even ideological narratives. The corporate hegemony of frontier AI companies like Google, OpenAI, Microsoft, and Anthropic—and the concentration of compute power among such tech giants—also raises concerns about critical infrastructure control, data commodification, and AI talent monopolization. As civilian AI innovation becomes intertwined with national security strategies, the implications for global influence and security are profound.

On the international stage, the United States and China engage in a high-stakes race for AI supremacy, leveraging technological innovation for economic dominance and military prowess. While the United States relies on tech giants and cutting-edge research institutions, China’s state-backed initiatives and vast data resources position it as a formidable contender. Both governments view AI as integral to national security, prompting concerns about a so-called AI arms race between great powers.

Yet the pressing question is not who is winning the AI race but what risks this perceived geopolitical competition carries. Fears of falling behind, among both state and corporate players, may irreversibly compromise responsible AI practices and safety regulations. The EU, while lacking the scale of the tech giants, emphasizes responsible governance, aligning its approach with democratic values and the ethical principles of trustworthy AI. But does it really have the influence to shape a global AI governance order?

The surge in AI innovation has prompted a parallel race in crafting regulatory frameworks. The absence of a comprehensive global governance structure has led to a proliferation of international, European, and national initiatives, non-binding principles and norms, and voluntary corporate codes of conduct, forming a complex regulatory and governance landscape. The EU's landmark AI Act aims to set a precedent for binding, hard-law regulation of AI, reflecting a commitment to human-centric, trustworthy, and risk-based regulation.

While the EU has enjoyed a first-mover advantage in setting the global agenda, with the AI Act serving as a blueprint for other governments, the current race to govern and regulate AI is playing out in an increasingly crowded governance landscape.

This will be difficult for the EU to navigate, as it cannot solely rely on the “Brussels effect” and extraterritorial regulations to influence international standards. The union’s unique governance model and emphasis on democratic values may clash with the diverse regulatory and innovation approaches of other regions. Further challenges facing the EU include coordinating actions across its institutions and member states while crafting a cohesive European AI foreign policy, one capable of countering both U.S.-driven innovation dominance and China’s quest to become an AI superpower. Growing AI nationalism, protectionist tendencies, and fragmentation risks within the bloc add to the difficulty.

To address the AI innovation lag in Europe, on January 24, 2024, the European Commission unveiled a comprehensive AI innovation package—an important move toward fostering a dynamic and robust AI ecosystem in Europe. To further boost the leadership of European start-ups and cultivate competitive AI ecosystems across the union, the Commission plans to establish what it terms “AI Factories,” comprising AI-dedicated supercomputers, interconnected data centers, and a skilled workforce ranging from supercomputing and AI experts to data specialists, researchers, and start-ups.

These measures, in the wake of the political consensus achieved in December 2023 on the AI Act, are explicitly designed to propel the creation, implementation, and adoption of trustworthy AI within the EU. Yet, with such initiatives, the proof of the pudding will be in the eating. The crucial test lies in translating these commitments into tangible actions, particularly in terms of fostering a vibrant and globally competitive cross-border AI innovation (start-up) ecosystem in the EU.

Establishing an AI Office within the Commission could help ensure a more streamlined development and coordination of AI policy at the European level, as well as supervise the implementation and enforcement of the AI Act.

The EU faces the challenge of managing the geopolitics of AI governance in a landscape characterized by state and corporate competition, as well as an emerging global regime of complex regulatory frameworks. While the EU’s commitment to responsible AI is commendable, building a harmonized European AI foreign policy approach, fostering strategic alliances with key partners, effectively operationalizing the AI Act, and navigating diverse governance initiatives will be crucial for shaping the future of AI on a global scale.

This publication has been produced in the context of the EU Cyber Direct – EU Cyber Diplomacy Initiative project with the financial assistance of the European Union. The contents of this document are the sole responsibility of the author and can under no circumstances be regarded as reflecting the position of the European Union or any other institution.