Toni Erskine is Professor of International Politics in the Coral Bell School of Asia Pacific Affairs at the Australian National University (ANU).
This two-year research project (2023–2025) is generously funded by an Australian Department of Defence Strategic Policy Grant.
The use of artificial intelligence (AI), machine learning, and automated systems has already changed the nature of the battlefield. The further diffusion of AI-enabled systems into states’ resort-to-force decision making is unavoidable for Australia, its allies, and its adversaries. In the United States, for example, machine learning techniques are already used in some intelligence analyses, which, in turn, inform decisions about whether and when to use force. While this contribution is currently limited and indirect, trends in other realms suggest that the use of AI-driven systems will increase in this high-stakes area. Separately, there is potential for AI-enabled automated systems to initiate escalatory defensive action in contexts such as the cyber realm. If we begin to consider the possible future effects of using these technologies in resort-to-force decision-making processes now, we can develop policy to guide their development and use, promote necessary education and training, and, ultimately, mitigate risks.
This two-year research project will have an important international collaborative dimension, which will include:
• two International Workshops on Anticipating the Future of War: AI, Automated Systems and Use-of-Force Decision Making, to be co-convened by Professor Steven E. Miller (Belfer Center, Harvard University) and Professor Toni Erskine (Coral Bell School of Asia Pacific Affairs, ANU) and held at the Australian National University (ANU) in Canberra, Australia in June 2023 and July 2024;
• a Policy Roundtable, also to be held at the ANU in Canberra in June/July 2024.
An international and interdisciplinary group of leading scholars – with backgrounds in political science, international relations (IR), mathematics, psychology, sociology, computer science, and philosophy – and military and AI/machine learning practitioners will participate in these activities and explore the risks and opportunities of introducing AI, machine learning, and automated systems into state-level use-of-force decision making. (Please see 'Project Participants.')
In addition to producing a series of published outputs, the project will be supplemented by an ‘AI, Automated Systems, and the Future of War' Public Lecture Series and a 'Discussing AI, Automated Systems, and the Future of War' Seminar Series. (For details of these two series, please see 'Forthcoming Events' and 'Past News & Events'. The papers from the June 2023 workshop will be published in the Australian Journal of International Affairs in early 2024.)
This research project will analyse emerging and disruptive technologies in the form of AI-enabled and automated systems used both to inform state-level decision making on the resort to force and, in some contexts – such as defence against cyberattacks – to make and directly implement resort-to-force decisions. In the former case, human decision makers draw on algorithmic recommendations and predictions to reach resort-to-force decisions; in the latter case, decisions are reached with or without human oversight. Both are future-focused but foreseeable developments that challenge existing rules and norms surrounding states' decisions to engage in organised violence, and both warrant immediate consideration.
Machine learning techniques enhance our decision-making capacities by analysing huge quantities of data quickly, predicting outcomes, calculating opportunities and risks, and uncovering patterns of correlation in datasets that are beyond human cognition. The potential benefits of using AI-enabled systems are clear in scenarios where predictive analyses of key strategic variables – such as anticipated threat, risk of inaction, proportionality of a potential response, and mission cost – are fundamental.
Yet there are also complications that would accompany reliance on these systems. This project will focus on the following:
When programmed to calculate – or automatically implement – a response to a particular set of circumstances, intelligent machines will behave differently than human agents.
This difference complicates understandings of deterrence. Current perceptions of a state’s willingness to use force in response to aggression are based on assumptions of human judgement (and forbearance) rather than automated calculations. The use of automated systems – which would make and implement decisions at speeds impossible for human actors – could result in unintended escalations in the use of force.
Empirical studies show that individuals and teams relying on AI-driven systems often experience ‘automation bias’ – the tendency to accept computer-generated outputs without question. This tendency can make human decision makers less likely to use their own expertise and judgement to test machine-generated recommendations.
Unintended consequences include acceptance of error, the de-skilling of human actors, and decreased compliance with international rules and norms of restraint in the use of force.
Machine learning processes are frequently opaque and unpredictable. Those who are guided by them often do not understand how predictions and recommendations are reached, and do not grasp their limitations. The current lack of transparency in much AI-driven decision making – ‘algorithmic opacity’ – has led to negative consequences across a range of contexts.
As governments’ democratic – and international – legitimacy requires compelling and accessible justifications for decisions to use force, algorithmic opacity poses grave concerns.
Studies in both International Relations (IR) and organisational theory reveal the existing complexities and pathologies of organisational decision making. Introducing AI-driven decision-support and automated systems into these complex structures risks exacerbating these problems.
Without carefully developed guidelines, AI-enabled systems at the national level could distort and disrupt strategic and operational decision-making processes and chains of command.
These complications – and their potential implications for Australia’s defence policy – warrant serious attention. This project will bring together new voices and diverse perspectives – in the form of an international group of practitioners and multidisciplinary, world-leading scholars – to contribute to a comprehensive study of the risks and opportunities of introducing AI-enabled systems to state-level decisions to engage in war across these four thematic areas.
This project will initiate a much-needed, research-led discussion on the effects of AI-enabled systems in state-level resort-to-force decision making. It also seeks to significantly extend the public Australian strategic policy debate on the impacts of disruptive and emerging technologies.
28 November 2023 - Project CI Professor Toni Erskine will deliver the 2024 John Gee Memorial Lecture on ‘Before Algorithmic Armageddon: The Immediate Risks of AI in War’.
In this lecture, Professor Erskine (ANU) will argue that we are overlooking grave risks that accompany our increasing reliance on artificial intelligence (AI) in war. A neglected danger posed by AI-enabled weapons and decision-support systems is that they change how we (as citizens, soldiers, and states) deliberate, how we act, and how we view ourselves as responsible agents. This could have profound ethical, political, and even geopolitical implications – well before AI evolves to a point where some fear that it could initiate algorithmic Armageddon.
4 December 2023 - The third seminar in the ‘Discussing AI, Automated Systems, and the Future of War’ Seminar Series will be given by Dr Benjamin Zala (ANU) on ‘Should AI Stay or Should AI Go? First Strike Incentives & Deterrence Stability in the Third Nuclear Age’.
How should states balance the benefits and risks of employing artificial intelligence (AI) and machine learning in nuclear command and control systems? In this seminar, Dr Zala will argue that this question can be adequately addressed only by placing developments in AI against the larger backdrop of the increasing prominence of a much wider set of strategic non-nuclear capabilities. To do so, he will make the case for disaggregating the different risks that AI poses to stability, and examine the specific ways in which AI may instead be harnessed to restabilise nuclear-armed relationships. Dr Zala will also identify a number of policy areas that ought to be prioritised to mitigate the risks, and harness the opportunities, identified in the short to medium term.
3 April 2024 – The International Studies Association (ISA) annual convention in San Francisco, California will include a Roundtable on ‘Anticipating the Future of War: AI, Automation, and Resort-to-Force Decision Making’. This session will include eight participants from the 2023 workshop held in Canberra, who will share insights from their workshop papers (which have been revised for publication in the Australian Journal of International Affairs in 2024).
Roundtable participants: Professor Toni Erskine - Chair (ANU); Dr Justin Canfil - Discussant (Carnegie Mellon); Professor Steven E. Miller (Harvard); Dr Mitja Sienknecht (European New School of Digital Studies); Professor Marcus Holmes (William & Mary); Dr Neil Renic (Copenhagen); Professor Denise Garcia (Northeastern); Dr Sarah Logan (ANU).
The roundtable will be held from 4pm to 5:45pm on 3 April 2024 at 32-Franciscan C, Ballroom Level Hilton San Francisco Union Square.
23 - 25 July 2024 - The 2024 'Anticipating the Future of War: AI, Automated Systems and Resort-to-Force Decision Making' Workshop and Policy Roundtable will be co-convened by Professor Toni Erskine (ANU) and Professor Steven Miller (Harvard) and held on the ANU campus.
The first two days will consist of individual paper presentations by an outstanding group of international scholars. The final day will entail a ‘policy roundtable’ with representatives from Australian government departments to discuss the one-page policy recommendations that participants will circulate based on their research papers.
For further information on the workshop and policy roundtable, please contact the project’s Research Officer, Emily Hitchman.
27 September 2023 - We were fortunate to have Dr Miah Hammond-Errey, Director of the Emerging Technology Program at the United States Studies Centre, University of Sydney, present the second seminar in the 'Discussing AI, Automated Systems, and the Future of War' seminar series. She spoke about her fascinating research on 'The Impact of Big Data and Emerging Technologies on National Security, Intelligence Production, and Decision Making' to a very full Mills Room in the Chancelry Building, followed by a great discussion. Click here for the seminar flyer, speaker bio, and full abstract.
29 - 30 June 2023 - A two-day workshop, 'Anticipating the Future of War: AI, Automated Systems, and Resort-to-Force Decision Making', was co-convened by Professor Toni Erskine (ANU) and Professor Steven E. Miller (Harvard) at the Australian National University in Canberra. Eighteen leading scholars with backgrounds in computer science, AI and machine learning, mathematics, sociology, philosophy, political science, psychology, and international relations presented and discussed sixteen original papers. Each paper explored one of the four 'complications' (outlined in the project description above) of AI-enabled automated systems and decision-support systems being used in the context of state-level resort-to-force decision making. Click here to view the workshop program.
28 June 2023 - In an outstanding inaugural lecture for the new Public Lecture Series, 'AI, Automated Systems, and the Future of War', Professor Sarah Kreps, John L. Wetherill Professor of Government and Director of the Tech Policy Institute at Cornell University, spoke to a full lecture theatre on ‘Weaponizing ChatGPT? National Security and the Perils of AI-Generated Texts in Democratic Societies’. In this talk, Professor Kreps presented a range of original experimental evidence and offered guidance on how to harness the prospects, and guard against the perils, of generative AI. Click here for the lecture flyer and full abstract.
27 June 2023 - In the first seminar of the new Seminar Series, 'Discussing AI, Automated Systems, and the Future of War', Retired Major General Mick Ryan gave an illuminating and insightful talk on ‘Thinking About Future War: Drones, Mass, and Other Trends’. He explored the key military trends that have underpinned the conduct of the Russia-Ukraine war, and how explosive change in trends such as human-machine teaming, strategic influence, and mass warfare will drive further changes in the character of 21st-century competition and conflict. Click here for the seminar flyer and full abstract.
6 March 2023 - The Keynote Address at the Chancellor’s International Women’s Day Lunch, University of South Australia, was delivered by Project CI Professor Toni Erskine on ‘AI and the Risk of Misplaced Responsibility in War’. She addressed the risks that accompany our propensity to attribute to intelligent artefacts – including AI-enabled weapons and decision-support systems – capacities that they do not have, and to assume that they are able to bear moral responsibilities of restraint in war.
She currently serves as Academic Lead for the United Nations Economic and Social Commission for Asia and the Pacific/APRU ‘AI for the Social Good’ Research Project and in this capacity works closely with government departments in Thailand and Bangladesh. She is also a Chief Investigator and Founding Member of the ANU ‘Humanising Machine Intelligence’ Grand Challenge Research Project. Her research interests include the moral agency and responsibility of formal organisations in world politics; the ethics of war; the responsibility to protect (R2P); joint purposive action and informal coalitions; and the impact of new technologies on organised violence.
Previously, he was Senior Research Fellow at the Stockholm International Peace Research Institute (SIPRI) and taught Defense and Arms Control Studies in the Department of Political Science at the Massachusetts Institute of Technology. He is editor or co-editor of more than two dozen books, including, most recently, The Next Great War? The Roots of World War I and the Risk of U.S.-China Conflict. Professor Miller is a Fellow of the American Academy of Arts and Sciences, where he is a member of their Committee on International Security Studies (CISS). He currently leads the Academy’s project on Promoting Dialogue on Arms Control and Disarmament. He is also co-chair of the U.S. Pugwash Committee and a member of the Council of International Pugwash.
She is also a 2023 Sir Roland Wilson Scholar, and has appeared on the National Security Podcast speaking about her research, and as a panellist at the 2022 Australian Crisis Simulation Summit speaking about the future of intelligence. Emily has worked professionally across the national security and criminal justice public policy space, including in law enforcement and cyber policy, and holds a Bachelor of Philosophy from The Australian National University.
Bianca’s current research is on the sociopolitical and ethical impacts of autonomy and AI-enabled technologies in military and security contexts. She is examining the role of trust discourse in shaping debates about ethical military AI (arguing that machine learning algorithms naturally agitate rules- and standards-based orders, thereby challenging the possibility of trust), the changing status of soldiers’ labour in response to increasing autonomy, and the social meaning of technology demonstrations as it relates to communicating the ethical and legal potential of AI-enabled systems. Her forthcoming monograph, Governing Military Sacrifice, is one of the first books to connect the rise of drones and combat unmanning with military and security privatization, and includes original interview data from drone advocates and critics alike. Bianca holds a PhD (2018) from York University in Toronto, an MA in sociology from Simon Fraser University, and a BA in political science from Simon Fraser University. From 2019 to 2021, she was a Researcher at UNSW at the Australian Defence Force Academy.
From 2024 to 2025, he will take leave to complete a Stanton Nuclear Security Fellowship at the Council on Foreign Relations. Dr. Canfil's research interests concern the impact of emerging technologies on international law and arms control, both past and present. His research has appeared in outlets such as the Journal of Cybersecurity and the Oxford Handbook on AI Governance. He received a Fulbright Scholarship to conduct doctoral research in China and a PhD in Political Science from Columbia University. He can be reached at www.jcanfil.com or on Twitter @jcanfil.
His research looks at the ethical issues arising in all types of mathematical work, including AI, blockchain, finance, modelling, surveillance, cryptography, and statistics. He has run a seminar series on ethics in mathematics for the past seven years as part of the Cambridge University Ethics in Mathematics Society, and sat on the ethics advisory group of Machine Intelligence Garage UK for three years. He is about to release "A short guide to responsible mathematical work" and is currently writing a monograph, "Ethics for the working mathematician", both of which are the first works of their kind. Maurice comes from a background in research mathematics, holding two PhDs in mathematics, from the University of Cambridge and the University of Melbourne, on problems in algebra and computability theory. He has over 20 years' experience studying, working, and teaching in mathematics departments around the world.
Her work intersects technology studies and social psychology, with a current focus on AI and machine learning. She is Deputy Lead of the Humanising Machine Intelligence Program at the ANU, Co-Director of the ANU Role-Taking Lab, on the board for Theorizing the Web, Past Chair of the Communication, Information Technologies, and Media Sociology section of the American Sociological Association, and author of How Artifacts Afford: The Power and Politics of Everyday Things, published with MIT Press.
She writes about the use of force, executive power, secret treaties, and the intersection of national security and AI, and she is the co-author of a leading casebook on foreign relations law. She is an elected member of the American Law Institute, a member of the State Department’s Advisory Committee on International Law, and a contributing editor to the Lawfare blog. She recently served as Special Assistant to the President, Associate White House Counsel, and Deputy Legal Advisor to the U.S. National Security Council. Before joining UVA, she served for ten years in the U.S. State Department’s Office of the Legal Adviser, including as the embassy legal adviser at the U.S. Embassy in Baghdad during Iraq’s constitutional negotiations. Professor Deeks received her J.D. with honors from the University of Chicago Law School, where she was elected to the Order of the Coif and served as an editor on the Law Review. After graduation, she clerked for Judge Edward R. Becker of the U.S. Court of Appeals for the Third Circuit.
He is also co-director of the Social Science Research Methods Center and director of the Political Psychology and International Relations lab, both at William & Mary. He is Principal Investigator on a US Department of State grant exploring the effect of people-to-people exchanges in US-Japan relations through the lens of baseball diplomacy over the last 150 years. His research explores various aspects of diplomacy, including the dynamics of interpersonal relationships in international politics. He is the author of Face-to-Face Diplomacy: Social Neuroscience in International Relations, an award-winning book with Cambridge University Press. Holmes is also co-editor, with Corneliu Bjola, of a seminal volume on the use of social media in achieving diplomatic ends, Digital Diplomacy: Theory and Practice, with Routledge. His latest book project, Personal Chemistry, is co-authored with Nicholas J. Wheeler and is under contract with Oxford University Press.
Her work lies at the intersection of technology, politics, and national security, and is the subject of five books and a range of articles in academic journals such as the New England Journal of Medicine, Science Advances, Vaccine, Journal of the American Medical Association (JAMA) Network Open, American Political Science Review, and Journal of Cybersecurity, policy journals such as Foreign Affairs, and media outlets such as CNN, the BBC, the New York Times, and the Washington Post. She has a BA from Harvard University, an MSc from Oxford, and a PhD from Georgetown. From 1999 to 2003, she served as an active duty officer in the United States Air Force.
Her research interests include the future of open source intelligence; the governance of international data transfers; the development of global privacy norms; and the geopolitics of global technology standards. Her work has been funded by the Annenberg School for Communication, the Australian government, and the United Nations Economic and Social Commission for Asia and the Pacific/Association of Pacific Rim Universities. Her first book, Hold Your Friends Close: Countering Radicalization in Britain and America, was published by Oxford University Press in 2022.
Recurring themes in his work include algorithmic equity, modelling for decision support, and modelling the behaviours of social agents. Dr Osoba is currently a senior AI engineer on Fairness at LinkedIn, where he works to enable the platform’s responsible and trustworthy use of AI/ML. Prior to LinkedIn, Osoba was a senior information scientist at the RAND Corporation and a professor of public policy at the Pardee RAND Graduate School. His policy research portfolio at RAND focused on AI/ML applied to problems in social and economic well-being and national security. At the Pardee RAND Graduate School, Osoba was the Associate Director of the Tech & Narrative Lab, helping to pioneer a novel program for training the next generation of effective and creative tech policy thought leaders. Dr Osoba earned his B.Sc. in Electrical and Computer Engineering from the University of Rochester and his Ph.D. in Electrical Engineering from USC.
He was inaugural President of the Defence Entrepreneurs Forum (Australia) and is a member of the Military Writers Guild. He is a keen author on the interface of military strategy, innovation, and advanced technologies, as well as how institutions can develop their intellectual edge. In February 2022, Mick retired from the Australian Army. In the same month, his book War Transformed was published by USNI Books. He is an adjunct fellow at the Center for Strategic and International Studies in Washington DC, and a non-resident fellow of the Lowy Institute in Sydney. In January 2023, Mick was also appointed as an Adjunct Professor at the University of Queensland in Brisbane, Australia.
Renic’s current work evaluates the practical and moral challenges of emerging and evolving technologies such as drone warfare and autonomous weapons. He is the author of Asymmetric Killing: Risk Avoidance, Just War, and the Warrior Ethos (Oxford University Press, 2020) and has published extensively in academic journals, including the European Journal of International Relations, Survival, Ethics and International Affairs, and the Bulletin of the Atomic Scientists.
Her paper on the debordering of intrastate conflicts, based on her PhD, received the best paper award from the International Relations (IR) section of the German Political Science Association. Her research interests include the (digital) transformation of violence and conflict; border and boundary studies; the responsibility of state and non-state actors in world politics; and inter- and intra-organizational decision-making in security contexts. Her work is situated at the intersection of IR, peace and conflict studies, and science and technology studies (STS). In her current research, Mitja analyzes the impact of digitalization on armed conflicts and collaborates in developing and training an AI to identify argumentative structures in IR theories.
Vold specialises in Philosophy of Cognitive Science and Philosophy of Artificial Intelligence, and her recent research has focused on human autonomy, cognitive enhancement, extended cognition, and the risks and ethics of AI.
He is a non-resident Senior Fellow at BASIC (the London-based NGO that works on international trust-building and nuclear diplomacy), where he is the academic lead on the BASIC-ICCS Nuclear Responsibilities Programme. His publications include (with Ken Booth) The Security Dilemma: Fear, Cooperation, and Trust in World Politics (Palgrave Macmillan, 2008); Saving Strangers: Humanitarian Intervention in International Society (Oxford University Press, 2000); and Trusting Enemies: Interpersonal Relationships in International Conflict (Oxford University Press, 2018). He is a Fellow of the Academy of Social Sciences in the United Kingdom, a Fellow of the Learned Society of Wales, and has had an entry in Who’s Who since 2011.
His work has appeared in over a dozen different peer-reviewed journals such as Review of International Studies, Journal of Global Security Studies, Third World Quarterly and the Bulletin of the Atomic Scientists. His book Power in International Society: A Perceptual Approach to Great Power Politics is under contract with Oxford University Press and his edited volume, National Perspectives on a Multipolar Order, was published by Manchester University Press in 2021. He has been a Stanton Nuclear Security Fellow in the Belfer Center for Science & International Affairs at Harvard University and has previously held positions in the UK at the Oxford Research Group, Chatham House, and the University of Leicester where he is also currently an Honorary Fellow working with the European Research Council-funded, Third Nuclear Age project (https://thethirdnuclearage.com/).
She received the Distinguished Scholar award from the International Studies Association in 2018 and was a co-winner of the 2003 American Political Science Association Jervis and Schroeder Award for best book in International History and Politics for her book Argument and Change in World Politics: Ethics, Decolonization, Humanitarian Intervention (CUP, 2002). Professor Crawford’s most recent publication is The Pentagon, Climate Change, and War (MIT Press, 2022). She is also working on To Make Heaven Weep: Civilians and the American Way of War. She has also authored several other books including, Accountability for Killing: Moral Responsibility for Collateral Damage in America’s Post‑9/11 Wars (2013). She is a co-founder and co-director of the ‘Costs of War Project’, based at Brown University.
Director of the Defence Intelligence Organisation, and Head of the National Assessments Staff in the National Intelligence Committee.
He is the author of five books and four reports to government, as well as more than 150 academic articles and monographs on the security of the Asia-Pacific region, the US alliance, and Australia’s defence policy. He wrote the 1986 Review of Australia’s Defence Capabilities (the Dibb Report) and was the primary author of the 1987 Defence White Paper.
She develops legal and philosophical theories about how international law can be an instrument of morality in war, albeit an imperfect one. She also studies how normative considerations shape public opinion on the use of force and the attitudes of conflict-affected populations. In 2021, she won a Philip Leverhulme Prize for work on the moral psychology of war. She currently co-convenes (with Scott Sagan) a research project on the "Law and Ethics of Nuclear Deterrence," which is part of the Research Network on Rethinking Nuclear Deterrence, funded by the MacArthur Foundation.
He is a Fellow of the Academy of Social Sciences in Australia and former editor of the Australian Journal of Political Science. Working in both political theory and empirical social science, he is best known for his contributions in the areas of democratic theory and practice and environmental politics. One of the instigators of the ‘deliberative turn’ in thinking about democracy, he has published eight books in this area with Oxford University Press, Cambridge University Press, and Polity Press. His work in environmental politics and climate governance has yielded seven books with Oxford University Press, Cambridge University Press, and Basil Blackwell. He has also worked on comparative studies of democratization, post-positivist public policy analysis, and the history and philosophy of social science. His current research emphasizes global justice, governance in the Anthropocene, and confronting contemporary challenges to democracy.
She is a board member of the Japan Deep Learning Association (JDLA). She is also a member of the Cabinet Office of Japan’s Council for Social Principles of Human-centric AI, which released “Social Principles of Human-Centric AI” in 2019. Internationally, she is an expert member of the Global Partnership on AI (GPAI) working group on the Future of Work. She obtained a Ph.D. from the University of Tokyo and previously held a position as Assistant Professor at the Hakubi Center for Advanced Research, Kyoto University. She was named a University of Tokyo Excellent Young Researcher in 2021.
Institute (Tokyo) and the Institute for Economics and Peace (Sydney), Vice-chair of the International Committee for Robot Arms Control, and member of the Institute of Electrical and Electronics Engineers Global Initiative on Ethics of Autonomous and Intelligent Systems. She was the Nobel Peace Institute Fellow in Oslo in 2017. A multiple teaching award-winner, her recent publications appeared in Nature, Foreign Affairs, International Relations, and other top journals. Her upcoming book is Common Good Governance in the Age of Military Artificial Intelligence with Oxford University Press 2023.
She is also an Adjunct Professor at the Fletcher School of Law and Diplomacy, where she teaches a graduate seminar on the role of nuclear weapons in the 21st century and a core course on Technology, Public Policy, and National Security. She is the lead faculty member for the Fletcher School Executive Education course on “Negotiating Technology Agreements in Emerging Markets: Developing Strategic Capacities for Accessing Transformative Technologies”. Dr Giovannini served as a Senior Strategy and Policy Officer to the Executive Secretary of the Comprehensive Nuclear Test Ban Treaty Organization (CTBTO). Before her international appointment, she served five years at the American Academy of Arts and Sciences in Boston as the Director of the Research Program on Global Security and International Affairs. With a Doctorate from the University of Oxford, Dr Giovannini began her career working for international organisations. She has published widely in Nature, the Bulletin of the Atomic Scientists, Arms Control Today, the National Interest, and The Washington Post, among others.
His published work examines the development of the just war tradition over time and the role it plays in circumscribing contemporary debates about the rights and wrongs of warfare. These themes are reflected in his two monographs: Victory: The Triumph and Tragedy of Just War (Oxford: Oxford University Press, 2019) and The Renegotiation of the Just War Tradition (New York: Palgrave, 2008). He has also co-edited three volumes, and his work has been published in leading journals in the field, including International Studies Quarterly, the European Journal of International Relations, the Journal of Strategic Studies, the Journal of Global Security Studies, Review of International Studies, Ethics & International Affairs, and Millennium. Cian is a co-editor of the Review of International Studies.
diagnosis, and search, their integration with optimisation, machine learning, and verification, as well as their applications to energy and transport. Her recent work, which has received multiple academic and industry awards, focuses on handling constraints in planning under uncertainty, on integrating deep learning with search to solve planning problems, and on coordinating distributed energy resources to benefit their owners, the distribution grid, and energy markets. Sylvie is a fellow of the Association for the Advancement of Artificial Intelligence (AAAI) and a co-Editor in Chief of the Artificial Intelligence journal. She is a former Councilor of AAAI, co-Chair and President of the International Conference on Automated Planning and Scheduling (ICAPS), and Director of the Canberra Laboratories of NICTA, home to 150 researchers and PhD students.
He has been a Resident Associate of the Carnegie Endowment for Peace and a Guest Scholar at the Brookings Institution, and he has also served as a consultant for the Institute of Defense Analyses, the Center for Naval Analyses, and the National Defense University. He presently serves on the editorial boards of Foreign Policy, Security Studies, International Theory, International Relations, and Journal of Cold War Studies, and he also serves as Co-Editor of the Cornell Studies in Security Affairs, published by Cornell University Press. Additionally, he was elected as a Fellow in the American Academy of Arts and Sciences in May 2005. His book The Israel Lobby and U.S. Foreign Policy (Farrar, Straus & Giroux, 2007, co-authored with John J. Mearsheimer) was a New York Times best seller and has been translated into more than twenty foreign languages. His most recent book is The Hell of Good Intentions: America’s Foreign Policy Elite and the Decline of U.S. Primacy (Farrar, Straus & Giroux, 2018).