The 2023 John Gee Memorial Lecture will be delivered by Professor Toni Erskine.

War is changing rapidly – and with it the challenge of ensuring that restraint is exercised in both the resort to force and its conduct. Lethal autonomous weapons systems are able to select and engage targets, with or without human authorisation. Algorithms that rely on big data analytics and machine learning recommend targets for drone strikes and will increasingly infiltrate state-level decision-making on whether to wage war. The spectre of future iterations of these intelligent machines surpassing human capacities, and escaping human control, has recently received a surge of attention as an approaching existential threat. Yet this future-focused fear obscures a grave and insidious challenge that is already here.

A neglected danger posed by already-existing AI-enabled weapons and decision-support systems is that they change how we (as citizens, soldiers, and states) deliberate, how we act, and how we view ourselves as responsible agents. This has potentially profound ethical, political, and even geopolitical implications – well before AI evolves to the point where some fear it could initiate algorithmic Armageddon. Professor Erskine will argue that our reliance on AI-enabled and automated systems in war threatens to create the perception that we have been displaced as the relevant decision-makers – and that we may therefore abdicate our responsibilities to intelligent machines. She will conclude by asking how these risks might, in turn, affect hard-won international norms of restraint – and how they can be mitigated.

About the speaker

Toni Erskine is Professor of International Politics in the Coral Bell School of Asia Pacific Affairs at The Australian National University (ANU) and Associate Fellow of the Leverhulme Centre for the Future of Intelligence at Cambridge University. She is also Chief Investigator of the Defence-funded ‘Anticipating the Future of War: AI, Automated Systems, and Resort-to-Force Decision Making’ Research Project and a Founding Member and Chief Investigator of the ‘Humanising Machine Intelligence’ Grand Challenge at ANU. She serves as Academic Lead for the United Nations Economic and Social Commission for Asia and the Pacific (UN ESCAP)/Association of Pacific Rim Universities (APRU) ‘AI for the Social Good’ Research Project and in this capacity works closely with government departments in Thailand and Bangladesh. Her research interests include the impact of new technologies (particularly AI) on organised violence; the moral agency and responsibility of formal organisations in world politics; the ethics of war; the responsibility to protect vulnerable populations from mass atrocity crimes (‘R2P’); and the role of joint purposive action and informal coalitions in response to global crises. She is currently completing a book entitled Locating Responsibility: Institutional Moral Agency in a World of Existential Threats and is the recipient of the International Studies Association’s 2024 International Ethics Distinguished Scholar Award.


About John Gee

Dr John Gee AO served with distinction as an Australian diplomat in a number of countries. His greatest contribution, however, was in the field of disarmament, where he had a particular interest in chemical weapons. After a period as a Commissioner on the United Nations Special Commission on Iraq following the first Gulf War, he became Deputy Director-General of the Organisation for the Prohibition of Chemical Weapons in The Hague, serving there until 2003. In recognition of his achievements, Dr Gee was appointed an Officer of the Order of Australia (AO) in January 2007. He leaves behind the legacy and memory of a great Australian.


If you require accessibility accommodations or a visitor Personal Emergency Evacuation Plan please contact the event organiser.

Discussing AI, Automated Systems, and the Future of War Seminar Series

This seminar series is part of a two-year (2023-2025) research project on Anticipating the Future of War: AI, Automated Systems, and Resort-to-Force Decision Making, generously funded by the Australian Department of Defence and led by Professor Toni Erskine from the Coral Bell School of Asia Pacific Affairs.

How should states balance the benefits and risks of employing artificial intelligence (AI) and machine learning in nuclear command and control systems? Dr Ben Zala will argue that this question can be adequately addressed only by placing developments in AI against the larger backdrop of an increasingly prominent and much wider set of strategic non-nuclear capabilities. To do so, he will make the case for disaggregating the different risks that AI poses to stability, and examine the specific ways in which it may instead be harnessed to restabilise nuclear-armed relationships. Dr Zala will also identify a number of policy areas that ought to be prioritised in the short to medium term to mitigate the risks and harness the opportunities he identifies.

About the speaker
Ben Zala is a Fellow in the Department of International Relations, Coral Bell School of Asia Pacific Affairs at ANU. His work focuses on the politics of the great powers and the management of nuclear weapons. He has been a Stanton Nuclear Security Fellow at Harvard University and is currently an Honorary Fellow at the University of Leicester, UK, contributing to the Third Nuclear Age project (https://thethirdnuclearage.com/).


About the chair
Toni Erskine is Professor of International Politics in the Coral Bell School of Asia Pacific Affairs, Australian National University (ANU), and Associate Fellow of the Leverhulme Centre for the Future of Intelligence at Cambridge University. She is Chief Investigator of the Defence-funded 'Anticipating the Future of War: AI, Automated Systems, and Resort-to-Force Decision Making' Research Project and a Chief Investigator and Founding Member of the 'Humanising Machine Intelligence' Grand Challenge at ANU.


If you require accessibility accommodations or a visitor Personal Emergency Evacuation Plan please contact bell.marketing@anu.edu.au.

Discussing AI, Automated Systems, and the Future of War Seminar Series

Experts agree that future warfare will be characterised by countries’ use of military technologies enhanced with artificial intelligence (AI). These AI-enhanced capabilities are thought to help countries maintain lethal overmatch of adversaries, especially when used in concert with humans. Yet it is unclear what shapes servicemembers’ trust in human-machine teaming, wherein they partner with AI-enhanced military technologies to optimise battlefield performance. In October 2023, Dr Lushenko administered a conjoint survey at the US Army and Naval War Colleges to assess how varying features of AI-enhanced military technologies shape servicemembers’ trust in human-machine teaming. He finds that trust in AI-enhanced military technologies is shaped by a tightly calibrated set of considerations: technical specifications, namely their non-lethal purpose, heightened precision, and human oversight; perceived effectiveness in terms of civilian protection, force protection, and mission accomplishment; and international oversight. These results provide the first experimental evidence of military attitudes toward manned-unmanned teams, with implications for research, policy, and modernisation.


About the speaker
Lieutenant Colonel Paul Lushenko, PhD, is an Assistant Professor and Director of Special Operations at the US Army War College. In addition, he is a Council on Foreign Relations Term Member, Senior Fellow at Cornell University's Tech Policy Institute, Non-Resident Expert at RegulatingAI, and Adjunct Research Lecturer at Charles Sturt University. He is the co-editor of Drones and Global Order: Implications of Remote Warfare for International Society (2022), the first book to systematically study the implications of drone warfare for global politics. He is also the co-author of The Legitimacy of Drone Warfare: Evaluating Public Perceptions (2024), which examines public perceptions of the legitimacy of drones and how these perceptions affect countries’ policies on drone warfare and its global governance.

About the chair
Emily Hitchman is the Research Officer on the Anticipating the Future of War: AI, Automated Systems, and Resort-to-Force Decision Making project. Emily is a PhD scholar at the Strategic and Defence Studies Centre, focusing on the history of the Glomar (‘neither confirm nor deny’) response in the national security context. She is also a 2023 Sir Roland Wilson Scholar, has appeared on the National Security Podcast speaking about her research, and was a panellist at the 2022 Australian Crisis Simulation Summit discussing the future of intelligence. Emily has worked professionally across the national security and criminal justice public policy space, including in law enforcement and cyber policy, and holds a Bachelor of Philosophy from The Australian National University.

This seminar series is part of a two-year (2023-2025) research project on Anticipating the Future of War: AI, Automated Systems, and Resort-to-Force Decision Making, generously funded by the Australian Department of Defence and led by Professor Toni Erskine from the Coral Bell School of Asia Pacific Affairs.


If you require accessibility accommodations or a visitor Personal Emergency Evacuation Plan please contact bell.marketing@anu.edu.au.