We live in times of high-tech euphoria punctuated by geopolitical doom-and-gloom. There seems to be no middle ground between the hype surrounding cutting-edge technologies, such as Artificial Intelligence (AI), and their impact on security and defence, and anxieties over their potentially destructive consequences. AI, arguably one of the most important and divisive inventions in human history, is now being glorified as the strategic enabler of the 21st century and the next domain of military disruption and geopolitical competition. The race in technological innovation, justified by significant economic and security benefits, is widely expected to make early adopters the next global leaders.

Raluca Csernatoni
Raluca Csernatoni is a visiting researcher at Carnegie Europe, where she works on European security and defense with a specific focus on disruptive technologies.

Technological innovation and defence technologies have always occupied central positions in national defence strategies. This emphasis on techno-solutionism in military affairs is nothing new. Unsurprisingly, Artificial Intelligence is often discussed as a potentially disruptive weapon and likened to prior transformative technologies, such as nuclear and cyber capabilities, in the context of national security. However, this framing is problematic, as it casts AI as one-dimensional. In reality, Artificial Intelligence has broad, dual-use applications and is more appropriately compared to enabling technologies such as electricity or the combustion engine.

Growing competition in deep-tech fields such as AI is undoubtedly affecting the global race for military superiority. Leading players such as the US and China are investing heavily in AI research, accelerating its use in defence. In Russia, the US, and China, political and strategic debate over AI revolutionising strategic calculations, military structures, and warfare is now commonplace. In Europe, however, less attention is being paid to the weaponisation of AI and its military applications. The European Union’s (EU) European Defence Fund (EDF) has nonetheless earmarked between 4% and 8% of its 2021-2027 budget to address disruptive defence technologies and high-risk innovation. The expectation is that such investment will boost Europe’s long-term technological leadership and defence autonomy.

Differing approaches aside, one thing is certain: to avoid the AI ‘arms race’ narrative in policy and political debates, there must be a shift in how AI is framed by international organisations, governments, tech corporations, media, civil society, and academia. This is not to deny the almost universal agreement across policy, military, and tech expert circles that a so-called global AI ‘arms race’ is already well underway, contributing to a sense of urgency among global leaders that more must be invested in the research and development of disruptive defence applications. Such framings of AI are helping to cultivate a culture of insecurity premised on antiquated Cold War rhetoric, further normalising the ‘arms race’ narrative with respect to AI-enabled weapons systems. This bellicose discourse should not drive the global competition in AI. In this regard, the EU could play a significant role in shifting the debate by putting forward policy and research frameworks and initiatives for the responsible design and governance of AI in Europe and the world, thus potentially mitigating great power competition.

While the EU has begun to take note of AI’s disruptive potential, it arguably lags behind in research and innovation when compared to American, Chinese and, to a lesser extent, Russian counterparts. Both the US and China have developed wide-ranging roadmaps for global AI leadership, are the most active countries in AI research and development, and have concentrated the highest levels of external and internal investments in AI. Equally, the Russian government has been actively leveraging its existing academic and industrial resources for AI-enabled weapons, but still falls behind Western and Chinese public-private harnessing efforts. Conversely, recent policy and funding initiatives at the EU level are shaping a distinctive European governance approach to tackle such challenges: both through increased cross-border financing opportunities to address research and development gaps at a European level; and through preventive governance mechanisms for AI’s responsible technological design and uses.

Overall, concrete and decisive actions have been taken at the EU level: promoting policy initiatives and projects, creating specialised expert groups, providing financing platforms for industry consortia, and fostering public-private partnerships in high-tech areas. In terms of financing, for example, €1.5 billion was dedicated to AI-related initiatives under Horizon 2020 between 2018 and 2020, complemented by €20 billion of combined public and private investment. In June 2019, the EU’s top experts on AI presented a set of recommendations on how to boost the European AI industry, putting forward a detailed plan and vision for how the EU should ‘catch up’ with frontrunners in the so-called race for AI supremacy. From this perspective, the EU’s own framing of the global AI ‘race’ ensures that future advances in this domain are made on the EU’s terms, and according to EU values, fairness standards, and regulatory conditions for the benefit of European citizens.

The industry recommendations follow the April 2019 release of guidelines for the ethical development and use of ‘Trustworthy AI’, written by the same experts, which map the governance conditions under which AI should be developed and applied in the EU. This preventive governance approach prioritises lawful AI that respects applicable laws and regulations; ethical AI that respects ethical principles and values; and technically robust AI that takes into account its social environment. The EU, and particularly the European Commission, appears to be a key driver and agenda-setter in galvanising a comprehensive and more human-centred approach to the R&D of Artificial Intelligence across Europe. Such an approach is grounded in technologically robust and trustworthy European AI technologies that respect basic human rights, human agency, and data privacy. These are characterised by transparency, diversity, and fairness, and engineered to mitigate potential harm, allow accountability and oversight, and ensure social and environmental well-being.

In contrast to global competitors, the EU has put forward a unique and proactive entrepreneurship approach in harnessing its market and regulatory power to define accepted standards of research, innovation, and usability for AI. Grounding R&D of new and emerging security technologies in ethics may be one way to ensure it does not exacerbate a ‘global AI arms race’ or only benefit privileged groups or states. The future of European AI or ‘made-in-Europe’ AI is being written, and the EU could indeed become a leader in ethical AI, setting the stage for global standards.

However, if the EU seriously envisions establishing a human-centric and value-based global governance of AI, as well as galvanising a common AI effort in Europe, it should focus more on consolidating its agenda-setting power both among its member states and in the wider world. The aim should be to avoid empty rhetoric and over-regulation that could impede innovation and commercialisation. A balance needs to be struck between preventive research and design measures and the development of international regulatory guidelines that evolve alongside the technological development and implementation of AI in various fields, including the military. In short, this means creating smart regulation for smart AI technologies as part of an ongoing process that adapts to fast-paced technological developments and their security and defence applications. Additionally, the concept of an ‘arms race’ is too crude a frame for understanding the impending AI revolution.

With a ‘Trustworthy AI’ branding, the EU may indeed have a unique selling proposition to distinguish itself from competitors. This approach, if further clarified, deepened, and implemented, could provide the EU with a much-needed competitive advantage for European home-grown AI products and services, inspiring more confidence in consumers and providing a roadmap for their regulation, especially in sensitive policy domains such as security and defence. From this perspective, the EU’s strategic advantage resides in its market and regulatory power, as demonstrated by the recent General Data Protection Regulation (GDPR): setting industry standards, building trust, and ensuring legal clarity and public legitimacy for AI-based applications. The EU could also contribute to a landscape in which European citizens understand and are involved in ongoing debates on ethical AI and privacy, in particular when related to data access, the prevention of data silos, and AI’s applications in security and defence.

Problematically, the ethical AI framing put forward by the EU is yet another policy narrative, ostensibly already being captured by private and corporate interests as another catchy branding scheme. This could impede the real goal of formalising ethical principles in hard law and securing robust international and European regulatory governance of Artificial Intelligence. The reality is that global powers are unlikely to constrain their pursuit of AI-enabled weapons systems as long as they believe these deliver strategic advantage. As long as the EU’s AI leadership is limited to providing ethical guidelines and is not backed by leadership in funding, research, and legislation, it runs the risk of producing empty statements while others continue the pursuit of the next shiny, deadly, AI-enabled weapon.

This article was originally published by the European Leadership Network.