The JAPCC AI Handbook

Practical Considerations for the Warfighter

By Colonel (ret.) Antonios Chochtoulas, GRC AF

Joint Air Power Competence Centre (2023-2026)

By Lieutenant Colonel Nakul Nayyar, CA AF

Joint Air Power Competence Centre

Published: March 2026

Abstract

AI is already transforming modern warfare, but its true power lies in supporting, not replacing, human decision-making. As operations grow more complex and data-driven, leaders must understand both the capabilities and limits of AI. The JAPCC AI Handbook provides a clear, practical guide for commanders and decision-makers, cutting through hype to explain how AI works, where it adds value, and what risks it introduces. It is not about building AI, but about using it wisely, responsibly, and effectively in military operations.

Executive Summary

Artificial Intelligence (AI) is no longer a speculative or emerging technology; it is already shaping the character of modern warfare. The speed, scale, and complexity of contemporary military operations, particularly within Multi-Domain Operations (MDO), exceed human cognitive limits when addressed through traditional manual processes alone. AI-enabled systems offer the potential to process vast volumes of data, identify patterns at machine speed, and support faster, more informed decision-making. At the same time, AI introduces new technical, operational, ethical, and legal challenges that military leaders must understand in order to employ these capabilities responsibly and effectively.

The JAPCC AI Handbook: Practical Considerations for the Warfighter is designed to bridge the gap between highly technical AI literature and the practical needs of military commanders, staff officers, and decision-makers. It does not seek to turn its audience into AI engineers. Instead, it equips leaders with sufficient conceptual understanding to ask the right questions, set realistic expectations, evaluate AI-enabled systems, and integrate AI into military operations without undermining accountability, legality, or trust.

Purpose and Scope

The handbook addresses a critical need within NATO and partner nations: enabling informed leadership decisions on AI adoption in an environment characterised by rapid technological change, increasing data saturation, and accelerating decision cycles. Many AI initiatives fail not because of technical shortcomings, but because decision-makers lack a clear understanding of what AI can and cannot do. This handbook therefore focuses on:

  • Explaining core AI and machine learning (ML) concepts in accessible, non-mathematical terms.
  • Demonstrating how AI systems are developed, trained, evaluated, and deployed.
  • Highlighting realistic military use-cases across intelligence, operations, logistics, cyber, and autonomous systems.
  • Identifying limitations, risks, and failure modes inherent to AI-enabled decision support.
  • Addressing ethical, legal, and governance considerations central to NATO values and international law.

By grounding AI discussion in operational realities rather than hype, the handbook enables leaders to distinguish between credible capability and marketing-driven claims.

Understanding AI as a Military Tool

A central theme of the handbook is that AI is best understood as a data-driven decision-support tool, not an autonomous replacement for human judgment. Contemporary military AI systems are overwhelmingly examples of Narrow AI: systems optimised for specific tasks such as image recognition, anomaly detection, language translation, or predictive maintenance. While concepts such as Artificial General Intelligence (AGI) and superintelligent AI attract public attention, they remain theoretical and are not relevant to current operational planning.

The handbook explains how modern AI systems, particularly those based on ML and deep learning, derive their capabilities from data rather than explicit programming. This distinction has profound implications for military use. AI performance depends directly on data quality, representativeness, and relevance to the operational environment. As a result, AI systems can reflect biases, amplify errors, or fail unpredictably when exposed to conditions outside their training data.

Understanding this dependency allows commanders to better assess risk, demand transparency from vendors, and avoid over-reliance on automated outputs.
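This data dependency can be made concrete with a toy example. The sketch below is illustrative only: the single-threshold "detector", the synthetic sensor readings, and all names are hypothetical. It fits a simple model on one signal distribution, then evaluates it on a shifted one, showing how performance can degrade once conditions move outside the training data.

```python
# Hypothetical sketch: a model fitted on one data distribution degrades
# when operating conditions shift beyond its training data.

import random

def make_data(n, threat_mean, seed):
    """Generate labelled synthetic 'sensor' readings."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        label = rng.random() < 0.5                              # ground truth: threat?
        signal = rng.gauss(threat_mean if label else 0.0, 1.0)  # noisy reading
        out.append((signal, label))
    return out

def fit_threshold(data):
    """'Train' a trivial detector: pick the alert threshold with best accuracy."""
    candidates = [x / 10 for x in range(-20, 80)]
    return max(candidates, key=lambda t: sum((s > t) == y for s, y in data))

def accuracy(threshold, data):
    """Fraction of correct threat / no-threat calls at this threshold."""
    return sum((s > threshold) == y for s, y in data) / len(data)

train = make_data(500, threat_mean=4.0, seed=1)    # training environment
shifted = make_data(500, threat_mean=1.0, seed=2)  # new environment: weaker signatures

t = fit_threshold(train)
print(f"in-distribution accuracy:     {accuracy(t, train):.2f}")
print(f"out-of-distribution accuracy: {accuracy(t, shifted):.2f}")
```

The same learned threshold that performs well in the training environment misclassifies heavily once threat signatures weaken, with no warning from the model itself: exactly the failure mode commanders must anticipate.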

From Concept to Capability: The Machine Learning Pipeline

To demystify AI development, the handbook introduces the ML pipeline, a structured end-to-end process that transforms raw data into an operational capability. This includes:

  • Defining the operational problem and determining whether AI is an appropriate solution.
  • Collecting, labelling, and preparing data suitable for modelling.
  • Selecting and training models aligned with mission requirements.
  • Evaluating performance using meaningful operational metrics.
  • Deploying models responsibly and monitoring them over time.

This framework highlights that AI success is as much an organisational and human challenge as a technical one. Domain expertise, interdisciplinary collaboration, and sustained oversight are essential. Military personnel, often serving as domain specialists, play a decisive role in shaping AI systems that are operationally relevant, trustworthy, and aligned with the commander's intent.
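The pipeline stages listed above can be illustrated with a minimal sketch. Everything here is hypothetical: the synthetic "sensor" data, the single-threshold model, and the function names. Real programmes involve far richer data engineering, model selection, and validation, but the stage structure is the same.

```python
# Hypothetical end-to-end sketch of the ML pipeline stages.

import random

def collect_data(n=300, seed=7):
    """Stage 2: collect and label data (synthetic sensor readings here)."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        label = rng.random() < 0.5                      # ground truth: contact of interest?
        signal = rng.gauss(3.0 if label else 1.0, 1.0)  # noisy sensor value
        data.append((signal, label))
    return data

def train_model(train_set):
    """Stage 3: 'train' a trivial model by choosing the best alert threshold."""
    candidates = [x / 10 for x in range(0, 60)]
    def fit(t):
        return sum((s > t) == y for s, y in train_set)
    return max(candidates, key=fit)

def evaluate(threshold, test_set):
    """Stage 4: evaluate with operationally meaningful metrics, not just accuracy."""
    tp = sum(1 for s, y in test_set if s > threshold and y)
    fp = sum(1 for s, y in test_set if s > threshold and not y)
    fn = sum(1 for s, y in test_set if s <= threshold and y)
    precision = tp / (tp + fp) if tp + fp else 0.0  # how many alerts were real?
    recall = tp / (tp + fn) if tp + fn else 0.0     # how many real contacts were caught?
    return precision, recall

# Stage 1 (framing the operational problem) happens before any code is written;
# stage 5 is deployment plus continuous monitoring, e.g. periodically
# re-running evaluate() on fresh operational data.
data = collect_data()
train_set, test_set = data[:200], data[200:]
threshold = train_model(train_set)
precision, recall = evaluate(threshold, test_set)
print(f"threshold={threshold:.1f} precision={precision:.2f} recall={recall:.2f}")
```

Note that precision and recall answer different operational questions (false alarms versus missed contacts); which metric matters more is a command decision made in stage 1, not a purely technical one.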

Operational Opportunities and Risks

The handbook surveys current and emerging military applications of AI, including data fusion, ISR, autonomous systems, logistics optimisation, cyber defence, and decision support for command and control. Across these applications, AI can enhance speed, reduce workload, and enable decision advantage when employed appropriately.

However, the handbook gives equal emphasis to limitations and risks, including:

  • Data bias and incomplete situational representation.
  • Algorithmic opacity and limited explainability.
  • Automation bias and over-trust in machine outputs.
  • Vulnerability to adversarial manipulation and deception.
  • Challenges in testing, validation, and certification (especially for adaptive systems).

These risks are not theoretical. In military contexts, they can contribute to misidentification, escalation, or unintended operational consequences. The handbook therefore stresses the importance of human-in-the-loop or human-on-the-loop control, rigorous validation, and conservative assumptions when deploying AI in high-stakes environments.

Ethics, Law, and Responsible Use

AI adoption in military operations cannot be separated from ethical and legal obligations. The handbook reinforces that compliance with International Humanitarian Law (IHL), NATO principles, and national legal frameworks remains the responsibility of human commanders. AI systems do not bear accountability; humans do.

Key ethical considerations addressed include meaningful human control, transparency, accountability, proportionality, and the dual-use nature of AI technologies. The handbook situates military AI within ongoing international discussions on governance and norms, underscoring NATO’s role in promoting responsible use while maintaining strategic advantage.

Strategic Value

Beyond immediate operational utility, the handbook positions AI literacy as a strategic imperative. Adversaries are actively developing and exploiting AI-enabled capabilities, including disinformation, autonomous systems, and cyber operations. A failure to understand AI, both its power and its limits, risks strategic surprise and loss of credibility.

By fostering informed leadership, organisational learning, and realistic expectations, this handbook supports NATO’s long-term readiness. It empowers commanders to integrate AI as a force multiplier rather than a liability, ensuring that innovation proceeds in step with responsibility, legality, and operational effectiveness.

Author
Colonel (ret.) Antonios Chochtoulas
Joint Air Power Competence Centre (2023-2026)

Colonel Chochtoulas graduated from the Hellenic Air Force (HAF) Academy in 1999 with a degree in Logistics. He holds a Master of Science in Computer Science from the Hellenic Open University, and his subject matter expertise is in the areas of information systems security and database design and administration. He initially served as a programmer and thereafter as a Database and System Administrator of the HAF's proprietary Logistics Information System at Elefsis Air Base. His previous assignment was at the HAF Supply Depot as Director of the IT Department. While at the JAPCC, he was the Cyberspace SME. He now works as an Information Security Officer at a European Union agency.

Information provided is current as of March 2026
Author
Lieutenant Colonel Nakul Nayyar
Joint Air Power Competence Centre

Lieutenant Colonel Nakul Nayyar joined the Royal Canadian Air Force in 2004 as a communications and electronics engineer. He most recently served as the Canadian Communications & Electronics Defense Attaché in Washington, D.C. (2020–2024). In this role, he provided expertise, assistance, and advice in the information and cyber defence domains affecting the US-Canada bilateral defence sector partnership. Concurrently, he served as Permanent Secretary of the Combined Communications Electronics Board (2021–2024), leading modernisation efforts for Five Eyes (FVEY) digital interoperability between Australia, Canada, New Zealand, the United Kingdom, and the USA. Academically, Lieutenant Colonel Nayyar holds a bachelor's degree in electrical engineering from the University of Toronto and two master's degrees, in business administration and defence studies, from the Royal Military College of Canada. He currently serves as a Cyber SME at the JAPCC, leveraging his experience to enhance NATO's cyber capabilities.

Author
Major Lucas J. Stensberg
Joint Air Power Competence Centre

Major Stensberg is a space and cyber SME in the JAPCC’s C5ISR & Space branch, furthering the Alliance’s understanding of the two domains via concept development, exercises, wargames, doctrine, and training. Before this role, he served in the US Space Force’s Enterprise Talent Management Office, and prior to that, as a Cyber Operations Planner at Headquarters 16th Air Force. There, he aligned strategies with US Cyber Command and notably the newly stood-up US Space Command. Other previous assignments include Flight Commander of Tactical Communications for the 485th Intelligence Squadron, managing C4ISR capabilities for 29 partner nations and over 900 intelligence analysts, as well as Integrated Project Management supporting the 694th ISR Group in Osan, Republic of Korea. Major Stensberg commissioned as a Cyberspace Operations Officer in 2016 from the United States Air Force Academy. His formative years were spent undergoing cyberspace warfare training at Keesler Air Force Base.

Author
Ms Laura Samsó Pericón
Centurion Technologies Consulting LLC

Laura Samsó Pericón is a researcher and strategist focused on cyber resilience, AI-enabled autonomy, and trusted human-machine teaming, including manned-unmanned teaming (MUM-T) and high-altitude platform concepts. With more than 15 years of international experience across civil and defence environments and multi-domain operational settings, her work examines AI-enabled decision-making in MUM-T contexts and cognitive and neuro-adaptive technologies. She previously served as an Executive Vice President and co-founded a technology venture, adding business and leadership depth to her technical expertise. She has also contributed to regulatory and standards bodies, alongside expert groups and international defence forums addressing autonomy, cyber resilience, and emerging technologies. As Founder of Synarea Insights, she advances an assurance-by-design approach – integrating cyber, autonomy, AI, HMI/BCI, ethics, and secure human integration – to strengthen trust and mission performance in future defence capabilities. Her background in electronics and aerospace, plus expedition leadership in high-challenge field environments, informs her perspective on resilience, trust, and decision-making under uncertainty.


Contact Us


Joint Air Power Competence Centre
Römerstrasse 140
47546 Kalkar
Germany

+49 (0) 2824 90 2201
