Executive Summary
Artificial Intelligence (AI) is no longer a speculative or emerging technology; it is already shaping the character of modern warfare. The speed, scale, and complexity of contemporary military operations, particularly within Multi-Domain Operations (MDO), exceed human cognitive limits when addressed through traditional manual processes alone. AI-enabled systems offer the potential to process vast volumes of data, identify patterns at machine speed, and support faster, more informed decision-making. At the same time, AI introduces new technical, operational, ethical, and legal challenges that military leaders must understand in order to employ these capabilities responsibly and effectively.
The JAPCC AI Handbook: Practical Considerations for the Warfighter is designed to bridge the gap between highly technical AI literature and the practical needs of military commanders, staff officers, and decision-makers. It does not seek to turn its audience into AI engineers. Instead, it equips leaders with sufficient conceptual understanding to ask the right questions, set realistic expectations, evaluate AI-enabled systems, and integrate AI into military operations without undermining accountability, legality, or trust.
Purpose and Scope
The handbook addresses a critical need within NATO and partner nations: enabling informed leadership decisions on AI adoption in an environment characterised by rapid technological change, increasing data saturation, and accelerating decision cycles. Many AI initiatives fail not because of technical shortcomings, but because decision-makers lack a clear understanding of what AI can and cannot do. This handbook therefore focuses on:
- Explaining core AI and machine learning (ML) concepts in accessible, non-mathematical terms.
- Demonstrating how AI systems are developed, trained, evaluated, and deployed.
- Highlighting realistic military use-cases across intelligence, operations, logistics, cyber, and autonomous systems.
- Identifying limitations, risks, and failure modes inherent to AI-enabled decision support.
- Addressing ethical, legal, and governance considerations central to NATO values and international law.
By grounding AI discussion in operational realities rather than hype, the handbook enables leaders to distinguish between credible capability and marketing-driven claims.
Understanding AI as a Military Tool
A central theme of the handbook is that AI is best understood as a data-driven decision-support tool, not an autonomous replacement for human judgment. Contemporary military AI systems are overwhelmingly examples of Narrow AI: systems optimised for specific tasks such as image recognition, anomaly detection, language translation, or predictive maintenance. While concepts such as Artificial General Intelligence (AGI) and superintelligent AI attract public attention, they remain theoretical and are not relevant to current operational planning.
The handbook explains how modern AI systems, particularly those based on ML and deep learning, derive their capabilities from data rather than explicit programming. This distinction has profound implications for military use. AI performance depends directly on data quality, representativeness, and relevance to the operational environment. As a result, AI systems can reflect biases, amplify errors, or fail unpredictably when exposed to conditions outside their training data.
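The point that capability is derived from data, and degrades outside it, can be made concrete with a deliberately simple sketch. The toy "classifier" below (a nearest-centroid rule, not anything from the handbook; all labels and numbers are illustrative) learns class centres purely from example data. It performs well on data resembling its training set and fails when the operating conditions shift, e.g. a new sensor or environment:

```python
import random

random.seed(0)

def train(samples):
    """Learn one centroid per label from (feature_vector, label) pairs."""
    centroids = {}
    for vec, label in samples:
        sums, count = centroids.setdefault(label, ([0.0, 0.0], 0))
        centroids[label] = ([s + v for s, v in zip(sums, vec)], count + 1)
    return {lab: [s / n for s in sums] for lab, (sums, n) in centroids.items()}

def predict(centroids, vec):
    """Assign the label whose centroid is nearest (squared distance)."""
    return min(centroids,
               key=lambda lab: sum((a - b) ** 2
                                   for a, b in zip(vec, centroids[lab])))

def make_data(n, shift=0.0):
    """Two noisy classes; 'shift' simulates changed operating conditions."""
    data = []
    for _ in range(n):
        label = random.choice(["vehicle", "clutter"])
        base = (2.0, 2.0) if label == "vehicle" else (-2.0, -2.0)
        data.append(([base[0] + random.gauss(0, 1) + shift,
                      base[1] + random.gauss(0, 1) + shift], label))
    return data

model = train(make_data(500))
in_dist = make_data(200)             # conditions matching the training data
shifted = make_data(200, shift=4.0)  # conditions outside the training data

acc = lambda data: sum(predict(model, v) == y for v, y in data) / len(data)
print(f"in-distribution accuracy: {acc(in_dist):.2f}")
print(f"shifted-data accuracy:    {acc(shifted):.2f}")
```

No code was changed between the two evaluations; only the data moved. This is the failure mode commanders must anticipate: the system did not "break", the world simply stopped matching its training data.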
Understanding this dependency allows commanders to better assess risk, demand transparency from vendors, and avoid over-reliance on automated outputs.
From Concept to Capability: The Machine Learning Pipeline
To demystify AI development, the handbook introduces the ML pipeline, a structured end-to-end process that transforms raw data into an operational capability. This includes:
- Defining the operational problem and determining whether AI is an appropriate solution.
- Collecting, labelling, and preparing data suitable for modelling.
- Selecting and training models aligned with mission requirements.
- Evaluating performance using meaningful operational metrics.
- Deploying models responsibly and monitoring them over time.
This framework highlights that AI success is as much an organisational and human challenge as a technical one. Domain expertise, interdisciplinary collaboration, and sustained oversight are essential. Military personnel, often serving as domain specialists, play a decisive role in shaping AI systems that are operationally relevant, trustworthy, and aligned with commander's intent.
Operational Opportunities and Risks
The handbook surveys current and emerging military applications of AI, including data fusion, intelligence, surveillance, and reconnaissance (ISR), autonomous systems, logistics optimisation, cyber defence, and decision support for command and control. Across these mission areas, AI can enhance speed, reduce workload, and enable decision advantage when employed appropriately.
However, the handbook gives equal emphasis to limitations and risks, including:
- Data bias and incomplete situational representation.
- Algorithmic opacity and limited explainability.
- Automation bias and over-trust in machine outputs.
- Vulnerability to adversarial manipulation and deception.
- Challenges in testing, validation, and certification (especially for adaptive systems).
These risks are not theoretical. In military contexts, they can contribute to misidentification, escalation, or unintended operational consequences. The handbook therefore stresses the importance of human-in-the-loop or human-on-the-loop control, rigorous validation, and conservative assumptions when deploying AI in high-stakes environments.
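One way to operationalise human-in-the-loop control is a routing gate that escalates machine outputs to an operator whenever confidence is low or consequences are high. The sketch below is a hypothetical illustration; the threshold value and field names are assumptions, and any real threshold would require rigorous validation:

```python
CONFIDENCE_THRESHOLD = 0.85  # illustrative value; must be validated in practice

def route(detection):
    """Return 'auto' only for high-confidence, low-consequence outputs."""
    if detection["consequence"] == "high":
        return "human_review"  # high-stakes calls always go to a human
    if detection["confidence"] < CONFIDENCE_THRESHOLD:
        return "human_review"  # uncertain outputs escalate as well
    return "auto"

detections = [
    {"id": 1, "confidence": 0.97, "consequence": "low"},
    {"id": 2, "confidence": 0.97, "consequence": "high"},
    {"id": 3, "confidence": 0.60, "consequence": "low"},
]
for d in detections:
    print(d["id"], "->", route(d))
```

Note that the consequence check comes first: no confidence score, however high, bypasses human review for a high-stakes decision.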
Ethics, Law, and Responsible Use
AI adoption in military operations cannot be separated from ethical and legal obligations. The handbook reinforces that compliance with International Humanitarian Law (IHL), NATO principles, and national legal frameworks remains the responsibility of human commanders. AI systems do not bear accountability; humans do.
Key ethical considerations addressed include meaningful human control, transparency, accountability, proportionality, and the dual-use nature of AI technologies. The handbook situates military AI within ongoing international discussions on governance and norms, underscoring NATO’s role in promoting responsible use while maintaining strategic advantage.
Strategic Value
Beyond immediate operational utility, the handbook positions AI literacy as a strategic imperative. Adversaries are actively developing and exploiting AI-enabled capabilities, including disinformation, autonomous systems, and cyber operations. A failure to understand AI, both its power and its limits, risks strategic surprise and loss of credibility.
By fostering informed leadership, organisational learning, and realistic expectations, this handbook supports NATO’s long-term readiness. It empowers commanders to integrate AI as a force multiplier rather than a liability, ensuring that innovation proceeds in step with responsibility, legality, and operational effectiveness.