When AI Meets the Laws of War

AI is revolutionizing how wars are waged, promising enhanced precision, but a lack of oversight in military decision-making raises ethical and legal concerns, writes Carlos Batallas.

Artificial intelligence is poised to revolutionize military operations across domains, from autonomous weapons systems and decision-support tools to cyber warfare capabilities. These technologies promise enhanced precision, reduced human casualties, and improved strategic decision-making. However, they also raise profound questions about the nature of warfare and the adequacy of existing legal frameworks.

A prominent development is the integration of AI-based decision support systems in military decision-making processes. These tools are designed to assist with complex decisions, such as target selection and collateral damage assessments, by analyzing large datasets and providing recommendations.

Autonomous weapon systems represent another significant development in AI-enabled warfare. Unlike AI decision support systems, which support human decision-making, autonomous weapon systems can select and engage targets without human intervention once activated. This autonomy in critical functions raises unique ethical and legal concerns, prompting debates about their compatibility with International Humanitarian Law principles.

Opportunities Presented by AI in Warfare

AI offers several significant advantages in modern warfare. One key benefit is enhanced decision-making and strategic planning: AI decision support systems offer military commanders the ability to process vast amounts of data in real time, enabling faster, more informed decisions. This is particularly valuable in high-pressure, time-sensitive environments where the sheer volume of information can overwhelm human decision-makers.

Another promising opportunity is the potential for enhanced precision in targeting. AI systems, including some forms of autonomous weapons, can process complex datasets to identify and differentiate between military and civilian targets more effectively than humans in certain scenarios. This increased precision has the potential to significantly reduce the risk of unintended civilian casualties and improve compliance with the principle of distinction under International Humanitarian Law.

AI can also play a pivotal role in minimizing civilian harm through improved precautionary measures. By creating real-time maps of civilian infrastructure and performing ongoing risk assessments, AI can help military forces take all feasible steps to avoid or minimize incidental civilian harm. This application of AI aligns with and enhances the precautionary measures required by international law.

Challenges to International Humanitarian Law

Despite these opportunities, the core principles of International Humanitarian Law — distinction, proportionality, precaution, and military necessity — are significantly challenged in the age of AI-enabled warfare.

The principle of distinction is tested by AI’s involvement in warfare. While AI’s enhanced data processing capabilities can theoretically improve targeting accuracy, there is a risk that biases in training data can lead to misidentification. This risk is particularly acute with autonomous weapon systems, where misclassification could result in direct attacks on civilians or civilian objects.

Proportionality assessments also become more complex with AI systems, which can assess vast amounts of data to estimate the balance between military advantage and potential civilian harm. However, the ‘black box’ nature of many AI models makes it difficult to explain or validate these calculations, complicating decision-making and undermining accountability. For autonomous weapon systems, the challenge is even greater, as these systems must make real-time proportionality assessments without human input.

The principle of precaution faces its own challenges. AI-based decision support systems can assist by providing real-time risk assessments and target verification. However, the risk of ‘automation bias’ — where human operators may over-rely on AI outputs without proper critical evaluation — poses a significant hurdle. With autonomous weapon systems, ensuring that all feasible precautions are taken becomes more complex due to the absence of real-time human judgment.

Lastly, while AI’s enhanced decision-making and real-time data processing allow militaries to achieve legitimate objectives more efficiently, adhering to the principle of military necessity requires care: force enabled by AI remains lawful only to the extent required to accomplish a legitimate military objective. Its use must therefore be weighed against the other principles of International Humanitarian Law to ensure lawful conduct.

The Problem of Algorithmic Bias

Algorithmic bias presents a critical concern in military AI systems, as it can skew decision-making processes. Biases in AI decision support systems and autonomous weapon systems emerge from multiple stages, including data curation, model development, and system use. In military contexts, these biases could lead to discriminatory outcomes, undermining the principle of non-discrimination embedded in International Humanitarian Law.
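The lifecycle stages above can be made concrete with a toy illustration. This is a hypothetical sketch, not drawn from any real system, and every count in it is invented: when one group is underrepresented in training data, a classifier's error rates often diverge across groups, which is exactly the kind of skew that would undermine non-discrimination in a targeting context.

```python
def false_positive_rate(fp: int, tn: int) -> float:
    """Fraction of true negatives wrongly flagged as positive."""
    return fp / (fp + tn)

# Hypothetical evaluation counts for a classifier applied to two groups.
# Group B was underrepresented in the (imagined) training data, so the
# model misfires on it far more often.
evaluation = {
    "group_A": {"fp": 5, "tn": 95},   # well represented in training
    "group_B": {"fp": 20, "tn": 80},  # underrepresented in training
}

for group, counts in evaluation.items():
    rate = false_positive_rate(counts["fp"], counts["tn"])
    print(f"{group}: false-positive rate = {rate:.2f}")
```

A respectable aggregate accuracy can conceal a fourfold gap in error rates between groups; auditing such group-wise metrics at every stage, from data curation to post-deployment review, is one concrete form of the bias mitigation discussed later in this article.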

Legal and Ethical Considerations

The integration of AI into warfare requires a thorough examination of existing legal frameworks and may necessitate new regulations. Several issues warrant careful consideration. To start, determining accountability for actions taken by AI systems, particularly autonomous weapons systems, is complex, and traditional notions of command responsibility may need to evolve to address their semi-autonomous or fully autonomous nature.

Transparency and explainability are also significant concerns. The ‘black box’ nature of many AI algorithms poses challenges for transparency in military decision-making. This is particularly problematic for autonomous weapons systems, where the rationale behind targeting decisions may be difficult to ascertain after the fact.

Meaningful human oversight over AI systems is critical to ensure ethical and lawful decision-making in warfare. The role of human operators should not be reduced to mere rubber-stamping of AI recommendations. With autonomous weapons systems, the question of how to maintain meaningful human control becomes even more pressing.

Experts emphasize that preserving human judgment in military decision-making on the use of force is crucial to reducing humanitarian risks, addressing ethical concerns, and facilitating compliance with International Humanitarian Law. This principle is particularly challenged by autonomous weapons systems, which are designed to operate without direct human control.

Adapting International Humanitarian Law for the AI Era

As AI continues to evolve, so too must the legal frameworks that govern its use in warfare. Several steps can be taken to address the challenges AI poses to International Humanitarian Law and to harness its opportunities:

  1. International Cooperation: Multilateral discussions and agreements on the ethical use of AI in warfare are essential. This includes ongoing debates about potential regulations or bans on certain types of autonomous weapons systems.
  2. Comprehensive Legal Reviews: States must conduct thorough legal reviews of AI-enabled weapons systems to ensure compliance with International Humanitarian Law. These reviews should consider the potential for algorithmic bias and ensure rigorous testing before deployment.
  3. Human-Centric AI Design: AI systems should be designed to support, not replace, human decision-makers. For autonomous weapons systems, this might involve designing systems with varying levels of autonomy that can be adjusted based on the operational context.
  4. Mitigating Bias: Developers must prioritize bias mitigation strategies throughout the AI system lifecycle, from data curation to post-deployment review.
  5. Contextual Constraints: It may be necessary to place certain constraints on the use of AI decision support systems and autonomous weapons systems in decisions relating to the use of force, including restricting their use to certain tasks or contexts.
  6. Ongoing Research and Dialogue: Further research and dialogue are needed to better understand the measures and constraints required in the design and use of AI in military applications.

The integration of AI into warfare presents both opportunities and challenges for International Humanitarian Law. While AI has the potential to enhance compliance with International Humanitarian Law principles through improved decision-making and precision, it also raises complex legal and ethical questions. As we navigate this new frontier, the international community must collaborate to ensure that AI technologies, including autonomous weapon systems, align with the core principles of humanity and justice that underpin international law. By maintaining human oversight, addressing issues such as algorithmic bias, and implementing appropriate constraints, we can harness AI’s potential to minimize human suffering in conflict while upholding the rule of law.

© IE Insights.
