From advancing cutting-edge AI systems through expertise in computer science and mathematics to exploring the political and social implications of emerging technologies, researchers at the University of Bath are at the forefront of numerous research projects helping to shape AI policy and to understand AI’s impacts on society. Our work spans a wide range of topics, including the development of accountable, responsible, and transparent AI; applications of AI in government and the third sector; regulation and governance; ChatGPT and other large language models; generative AI; and machine learning for policy.

AI is increasingly influencing all aspects of our lives, and in this mini blog series, we aim to highlight both established and innovative AI capabilities, their applications, and their implications for society and policy. We hope you find it insightful and engaging. Other blogs in the series can be found here.

If you work in policy at a senior level, you may also find our AI Policy Fellowship Programme of interest.

 

David J. Galbreath is Professor of War and Technology in the Department of Politics, Languages and International Studies and former Dean of the Faculty of Humanities and Social Sciences (2016-2022). His expertise is in defence and war studies, with a particular focus on how science and technology influence doctrine, concepts and tactics, as well as broader applications such as defence engagement and defence procurement.

 

Militaries have had a longer history with AI than one might expect, especially as practical, publicly available AI has entered the public consciousness only in the last few years. In fact, militaries have sought to use the automated and novel power of AI since the earliest days of the digitisation of warfare during the Cold War. While AI in these earlier periods was aimed primarily at automation rather than machine learning, militaries and defence industries sought to use computation as a way to overwhelm their adversaries.

 

The earliest forms of AI were used in missile guidance systems and early Unmanned Aerial Vehicles (UAVs), while AI applications in sensing and reconnaissance arrived at the end of the Cold War and into the 1990s. Today, however, AI has continued to infiltrate many different areas of the military, affecting air, naval, and land forces.

 

Predictive AI helps militaries identify missile electronic signatures at very high speed and can trigger either jamming or counter-defences, such as orbital lasers, to destroy the missile. This machine learning approach to missile defence comes at a time when missiles are becoming faster. For instance, Russia’s hypersonic missiles, which can travel faster than Mach 5, pose a threat to Ukrainian and other modern Western defence systems. With faster predictive AI, missile defence systems are better able to counter hypersonic missiles, especially when used in exoatmospheric (space-based) ballistic defence systems.
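To make the identification step concrete, here is a deliberately toy sketch in Python of the kind of pattern matching such a system performs: comparing an observed electronic signature against reference profiles and mapping the closest match to a countermeasure. Every feature, threat class, and response below is invented for illustration; real systems are classified and vastly more sophisticated.

```python
# Illustrative sketch only: a toy nearest-centroid classifier for missile
# "electronic signatures". All features, classes, and countermeasure
# mappings here are invented for illustration.
import numpy as np

# Hypothetical reference signatures: one mean feature vector per threat class
# (features: pulse repetition rate, bandwidth, Doppler shift - all invented).
REFERENCE = {
    "cruise": np.array([1.2, 0.4, 0.8]),
    "ballistic": np.array([3.5, 1.1, 2.6]),
    "hypersonic": np.array([5.0, 2.2, 4.9]),
}

COUNTERMEASURE = {  # invented mapping from threat class to response
    "cruise": "jamming",
    "ballistic": "kinetic interceptor",
    "hypersonic": "exoatmospheric interceptor",
}

def classify(signature: np.ndarray) -> tuple[str, str]:
    """Return (threat_class, countermeasure) for an observed feature vector."""
    best = min(REFERENCE, key=lambda k: np.linalg.norm(signature - REFERENCE[k]))
    return best, COUNTERMEASURE[best]

print(classify(np.array([4.8, 2.0, 5.1])))
# -> ('hypersonic', 'exoatmospheric interceptor')
```

The point of the sketch is the speed argument in the paragraph above: the decision step itself is cheap, so the bottleneck in countering hypersonic weapons lies in sensing and prediction, not in the final classification.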

 

Naval and air forces are building AI models that allow for swarming capabilities in UAVs and Unmanned Underwater Vehicles (UUVs). Swarming models have been developed in many modern militaries, with the US and China taking the lead. Swarming is an especially difficult combat capability given the volatile environments in which swarms will be deployed. As a result, navigational AI systems will need to coordinate multi-nodal systems that can cope with air or water conditions while also coordinating with other military platforms, such as the ship they are deployed to protect. In 2024, the state-owned Aviation Industry Corporation of China showcased its new ‘swarm carrier’, a jet-fighter-like UAV that releases smaller UAVs able to act together as a swarm. China has focused on swarms as a way to potentially overwhelm the US aircraft carrier fleets that could play a role in defending Taiwan against an invasion from the mainland.
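The decentralised coordination at the heart of most swarming models can be illustrated with Reynolds’ classic ‘boids’ rules, in which each drone steers using only its local neighbours: stay with the group (cohesion), match its velocity (alignment), and avoid collisions (separation). The Python sketch below is a minimal illustration under those rules; all weights and radii are invented, and real swarm controllers are far more elaborate.

```python
# Illustrative sketch only: Reynolds-style "boids" rules, the classic
# decentralised flocking algorithm that many swarming models build on.
# Parameters, weights, and radii are invented for illustration.
import numpy as np

N = 20                                # number of drones in the swarm
rng = np.random.default_rng(0)
pos = rng.uniform(-10, 10, (N, 2))    # 2-D positions
vel = rng.uniform(-1, 1, (N, 2))      # 2-D velocities

def step(pos, vel, dt=0.1):
    new_vel = vel.copy()
    for i in range(N):
        offsets = pos - pos[i]                     # vectors to all other drones
        dist = np.linalg.norm(offsets, axis=1)
        neighbours = (dist > 0) & (dist < 5.0)     # local sensing radius only
        if not neighbours.any():
            continue
        cohesion = offsets[neighbours].mean(axis=0)          # steer towards group
        alignment = vel[neighbours].mean(axis=0) - vel[i]    # match group velocity
        too_close = (dist > 0) & (dist < 1.5)
        separation = -offsets[too_close].sum(axis=0) if too_close.any() else 0
        new_vel[i] += 0.01 * cohesion + 0.05 * alignment + 0.1 * separation
    return pos + new_vel * dt, new_vel

for _ in range(100):
    pos, vel = step(pos, vel)
print("mean spread from centroid:",
      np.linalg.norm(pos - pos.mean(axis=0), axis=1).mean())
```

Notice that no drone holds a global picture: each follows local rules, which is what makes swarms cheap to scale and hard to defeat by destroying any single node.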

 

Militaries are also using AI in their battle management systems (BMSs), which are key to the future of Command and Control (C2) systems. Militaries are keen to use AI to deliver novel ways of organising and employing armed force. As an example of this AI innovation, my co-authors and I examine in the Journal of Strategic Studies how the Australian Army sought to use AI in its Semi-Autonomous Combat Team. The project was funded by the Australian Army to look at how military modernisation was being shaped by various forces, all of which are present in the military modernisation literature. Of particular interest to us was how the Australian Army was thinking about how BMSs could be used to ‘facilitate better and faster decision-making and greater flexibility in the control of activity, thereby driving up the tempo of operations.’ At its core, a BMS is a tool for managing information: military operations produce enormous amounts of data, and being able to separate signal from noise is vital for deploying forces in battle.
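To illustrate the signal-versus-noise problem in the simplest possible terms, the Python sketch below flags readings in a sensor stream that deviate sharply from recent history, using a rolling z-score. The thresholds and data are invented; real battle management systems rely on far richer data fusion, but the underlying task, surfacing the few reports worth a commander’s attention, is the same.

```python
# Illustrative sketch only: separating "signal" from "noise" in a stream of
# sensor readings with a rolling z-score. Window size, threshold, and data
# are invented; real BMS data fusion is far more sophisticated.
from collections import deque
import statistics

WINDOW, THRESHOLD = 50, 3.0
history = deque(maxlen=WINDOW)        # rolling window of recent readings

def is_signal(reading: float) -> bool:
    """Flag a reading as 'signal' if it deviates sharply from recent history."""
    if len(history) >= 10:            # need some history before judging
        mean = statistics.fmean(history)
        stdev = statistics.pstdev(history) or 1e-9
        anomalous = abs(reading - mean) / stdev > THRESHOLD
    else:
        anomalous = False
    history.append(reading)
    return anomalous

# Steady background noise, then one sharp deviation worth an analyst's time.
stream = [10.0 + 0.1 * (i % 5) for i in range(60)] + [25.0]
flagged = [i for i, r in enumerate(stream) if is_signal(r)]
print("flagged indices:", flagged)    # -> [60]
```

Trivial as it is, the sketch shows why AI appeals to BMS designers: a machine can watch thousands of such streams at once and hand human commanders only the anomalies.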

 

Our study of the introduction of AI into the Australian Army’s C2 systems shows evidence of the promise of AI systems but also illustrates the real challenges of integrating them widely across forces. While AI systems within individual platforms can bring innovation to militaries, integrating AI across the forces poses a major resource challenge: it often requires different communications infrastructures, distributed sensors with large power demands, and resilient locations for data storage. Furthermore, new systems can have a destabilising effect within militaries, as traditions and cultural practices may be disrupted or even changed altogether by the introduction of AI systems.

 

The ethics of military AI is also an important element in determining whether militaries can use such systems in combat operations. For instance, in 2023 the US moved to regulate the use of AI in targeting decision-making functions in its military: Directive 3000.09, ‘Autonomy in Weapon Systems’, lays out the appropriate levels of human judgment over the use of force in autonomous and semi-autonomous systems. While the 2023 directive still stands, it remains to be seen how this changes under the Trump administration, and whether advances in AI since 2023 will enable militaries to harness its power even further.

 

Governments now face a paradox: the AI innovation that makes their militaries faster and more agile is the very same innovation that adversaries will use for their own military purposes to similar effect. Many advanced industrial states have AI systems, but perhaps more alarming is that the theft of military data and algorithms has become an important characteristic of the politics of AI. Should a war break out between the US and China, we can assume that AI will be ubiquitous, which means that any first-mover advantage in AI development is quickly diminished. As with nuclear weapons, perhaps we need an international regime governing AI applications in war, but no country is willing to cede its advantage, however fleeting.

 

All articles posted on this blog give the views of the author(s), and not the position of the IPR, nor of the University of Bath.

