Terabytes of Unprocessed Data or Superior Pieces of Info

Turning Airborne ISR into Multi-Domain Operations

By Colonel Gianmarco Di Loreto, IT AF
By Lieutenant Colonel Roberto Diana, IT AF

Published: June 2022 in Warfare Domains: Multi-Domain Operations

ISR: from Data to Comprehension

The info-operational environment in which we are immersed is characterized by conflicts that span the entire spectrum of the competition continuum,1 including all possible combinations of conventional, asymmetric and hybrid operations.

Our military organizations have faced the changing intelligence and C2 environments by following a specific guideline: enhancing information and decision-making processes while progressively decentralizing them. This approach was based upon a powerful assumption: information enables understanding, and understanding enables decision-making.

The Italian Air Force (ITAF), like other Air & Space Components, follows this guideline, especially in the ISR field. As a pioneer in this evolution, it has discovered counter-intuitive evidence: the more the quality and quantity of information are improved and decentralized, the more evident it becomes that information doesn’t necessarily enable understanding, and understanding doesn’t necessarily enable decision advantage.

Hopping into the ‘Rabbit Hole’: a Theoretical Approach to ISR

One of the most interesting passages of J. Boyd’s thinking is where he identifies the ‘Synthesis-Analysis’ or ‘Induction-Deduction’ interaction,2 which is the starting point of the understanding process we use ‘to develop and manipulate concepts to represent observed reality’.3 By orienting the subsequent Decisions and Actions, this idea brings us to the so-called ‘induction problem’, long debated before Boyd: how many observations are required to arrive at a synthesis capable of predicting how a scenario will develop (in order to orient decisions and actions)? One, ten, a hundred, a thousand?

K. Popper stated that ‘the belief that we can start with pure observations alone […] is absurd’,4 because ‘Observation is always selective. […] It needs a chosen object, a definite task, a point of view, a problem’;5 thus, it is impossible to understand reality inductively. In Popper’s view, the creative, intuitive element is at the beginning of any attempt at understanding, so even if we are not directly aware of it, the OODA loop never really starts from an observation.

Proceeding with the thought process, we can hence identify a more realistic (i6)(a7)O(s8)ODA loop: there is always an ‘Ideate’ phase before an observation, even if implicit or hidden. In his words, ‘[…] it is the […] theory which leads to, and guides, our systematic observations […]. This is what I have called the “searchlight theory of science”, the view that science itself throws new light on things; that […] it not only profits from observations but leads to new ones.’9

Figure 1: (i)(a)O(s)ODA loop.

Before an Observation, another process also intervenes where we start from an idea, a postulate, or a general theory and then we draw conclusions on the phenomenon that should logically derive from the initial idea. K. Popper identified this process and we may refer to it as the ‘deduct’ or ‘analyse’ phase, yet bearing in mind there is an essential distinction from the term used by J. Boyd. For K. Popper, deduction precedes observation, as ‘without waiting, passively, for repetitions to impress or impose regularities on us, we actively try to impose regularities upon the world. We try to […] interpret it in terms of laws invented by us’.10

Only then, can the observation phase start fulfilling its core role, namely disproving our assumptions. ‘These may have to be discarded later, should observation show that they are wrong. This was a theory of trial and error, of conjectures and refutations’.11 The most important information is the information that falsifies the hypotheses, inspiring the most correct decisions. The rest could be useless data at best, toxic details at worst. Finally, as anticipated at the beginning of the chapter, the ‘Synthesize’ phase comes into play to enable the Orient phase.

So, why is this (i)(a)O(s)ODA loop so difficult?

Process of Understanding Human Cognitive Bias Barriers

The last few decades of progress in cognitive psychology have allowed us to identify the main biases hindering our process of understanding. These range from the most famous, ‘confirmation bias’, to the ‘WYSIATI12 bias’ and the inability to correctly frame statistical problems (e.g. regression to the mean13 and the law of small numbers14), ending with heuristics and other biases (e.g. substitution,15 cause and chance,16 and affect17).

Furthermore, as humans, we cannot reliably convert information into understanding because:

  • we tend to underestimate the chance and irrationality of occurrences;
  • we often fall into the ‘narrative fallacy’ trap;18
  • we are at the mercy of the ‘ludic fallacy’, the tendency to assess real-world risks and opportunities as if they obeyed the well-defined odds of gambling.19

Finally, among other powerful human biases, we should not forget Taleb’s ‘round-trip fallacy’,20 meaning the systemic logic confusion between statements made in similar terms but with totally different meanings.21

The analysis of cognitive biases helped us identify why the starting assumption22 is now at stake. Before the digital revolution and the rise of high-density info-ops environments, human cognitive biases were thwarted by military-specific organizational workarounds: a centralized and pyramidal model with an interaction at the top between a Commander and their Headquarters. This model successfully stood the test of time. It was the filter of the different hierarchy levels and the dialectic between the two figures that mitigated, most of the time, the cognitive biases leading to potentially flawed decisions.

As we previously said, the advent of the digital revolution led us to think that the resulting huge information density could be managed by decentralizing and accelerating the decision-making process at ‘the speed of relevance’.23 Nonetheless, this is a partial solution that introduces an even greater issue: in doing so, we lose an effective dam against cognitive biases.

Obstacles to Understanding: AI24 is Not the Silver Bullet

Given the framework described in the previous paragraph, great expectations25 are placed upon the use of Artificial Intelligence (AI) in the ISR and decision-making processes. We must be aware of the risk that, while trying to avoid human biases, we could fall headlong into those biases typical of AI. AI biases have the potential to be even more dangerous and subtle than human biases, and fall into two distinct areas:

  • AI predictions26,27 always derive from processing pre-existing data and are thus blind to novelty and exceptions (again… the induction problem)! They will always be a future version of… the past. This means that even the best AI algorithm, if not properly handled, could be of little use when we need it the most (i.e. to prevent a surprise on the battlefield).
  • Human biases can be ‘exported’ in their entirety into the AI tool that we are counting on (i.e. through coding) to rescue us from those very same biases. This risk, theoretically identified by K. Popper almost a century ago,28 has materialized today with particularly harsh social consequences.
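The first bullet’s ‘future version of the past’ problem can be sketched with a toy, purely illustrative example (the scenario and all numbers are invented, not drawn from any real system): a predictor fitted only on regular past observations extrapolates that regularity with full confidence and is structurally blind to a regime change.

```python
# A toy illustration of the 'induction problem' described above: a predictor
# fitted purely on past observations projects the past into the future and
# is structurally blind to a regime change. All data here are invented.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b (pure Python)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# 'History': ten days of a perfectly regular pattern (e.g. observed sortie counts).
history_x = list(range(10))
history_y = [2 * x + 1 for x in history_x]   # stable trend: y = 2x + 1

a, b = fit_line(history_x, history_y)
forecast_day_10 = a * 10 + b                 # the model confidently predicts 21

# Day 10 brings a 'black swan': behaviour changes entirely.
actual_day_10 = 100
surprise = actual_day_10 - forecast_day_10   # maximal error, exactly when it matters most
print(forecast_day_10, surprise)
```

However sophisticated the fitting procedure, the forecast can only restate the regularities already present in the training data; the surprise itself is invisible until after it has happened.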

ISR: A New Paradigm to Orient Decisions and Actions

For information to lead to a decision advantage, airborne ISR must be enlightened by new awareness: current organizational and training models have noticeable limits.

The conceptual guideline behind our intelligence and C2 process is based upon the necessity to accelerate and decentralize decision-making: static Command and Control chains are outdated tools and need to be replaced by web chains capable of adapting rapidly and autonomously based on a single priority: mission understanding and operational environment comprehension. To realize such a change, it is necessary (both conceptually and technically) to transform the quantity of information into quality of understanding. Although easier said than done, ‘technological capabilities depend on complementary institutions, skills and doctrines’;29 thus, new skills must be developed so that the contribution of AI reduces, rather than increases, the potential effects of toxic information. Furthermore, militaries must be informed and trained in cognitive psychology to be able to effectively diagnose human intelligence biases and leverage AI to compensate for our weaknesses.

The human element could then fully exploit Big Data and AI, properly assisted by ‘graceful degradation’ systems,30 becoming the main character in designing new theories, hypotheses, and scenarios to orient Analysis and Observation, detecting what is ‘normal’ (confirmation) and what is potentially an ‘anomaly’ (refutation).

These ‘anomalies’ will be our ‘superior pieces of information’, allowing us to predict events evolving along completely new and previously unknown scenarios (i.e., scenarios where Machine Learning is irrelevant). To identify them, we must train and utilize the human mind for its most peculiar and irreplaceable expertise: emotional intelligence, creativity, empathy, and the ability to consider elements of irrationality, randomness, and madness.31 These characteristics are ultimately aimed at ‘creating’ and ‘identifying’ exceptions, and hence predictions.
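As a purely illustrative sketch of the confirmation/refutation triage described above (the data, the threshold, and the use of simple z-score thresholding are all assumptions standing in for whatever detector an actual system would employ), the machine’s role is to separate the routine from the exceptional, and the human’s role is to judge the exceptions:

```python
# A minimal sketch: observations consistent with the working hypothesis
# 'confirm' it; statistical outliers are flagged as potential 'anomalies',
# the candidate superior pieces of information handed to human judgement.
# Data and threshold are illustrative assumptions.
import statistics

def triage(observations, threshold=3.0):
    """Split observations into 'normal' (confirmation) and 'anomaly' (refutation)."""
    mean = statistics.fmean(observations)
    stdev = statistics.pstdev(observations)
    normal, anomalies = [], []
    for obs in observations:
        z = abs(obs - mean) / stdev if stdev else 0.0
        (anomalies if z > threshold else normal).append(obs)
    return normal, anomalies

# e.g. daily radar contact counts: mostly routine, one striking deviation
contacts = [12, 11, 13, 12, 12, 11, 13, 12, 55, 12, 11, 13]
normal, anomalies = triage(contacts, threshold=2.0)
print(anomalies)  # the exception that deserves human judgement
```

The point of the sketch is the division of labour, not the statistics: the machine compresses terabytes of ‘normal’ into a short list of exceptions, and the human decides whether an exception is noise or the refutation of the working hypothesis.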

To conclude, to find ‘superior pieces of info’ starting from terabytes of raw and unprocessed data, it is necessary to exit from the legacy dichotomy between human and artificial intelligence. We must bring the human back to the centre toward forms of ‘humanly enhanced Artificial Intelligence’, or the so-called human-machine teaming.32 Machine augmentation will ultimately forge a more cognizant human being.33

‘A world of enduring competition conducted through a mixture of cooperation, competition below armed conflict, and armed conflict’. Joint Chiefs of Staff, Joint Doctrine Note 1-19 ‘Competition Continuum’ (3 June 2019), p. V.
Frans P. B. Osinga, Science, Strategy and War – The Strategic theory of John Boyd, Routledge, Eburon Academic Publishers, Delft 2005, pp. 177–179.
Ibid., p. 177.
Karl R. Popper, Conjectures and Refutations, Basic Books, New York/London, 1962, p. 47.
Ibid.
Ideate.
Analyze.
Synthesize.
Ibid., pp. 127–128.
Ibid., p. 47.
Ibid.
What You See Is All There Is: the tendency to focus on available information rather than expanding the field of view to include unavailable information or data that doesn’t support our thesis.
Daniel Kahneman, Thinking, Fast and Slow, Farrar, Straus and Giroux, New York, p. 201.
Ibid., p. 125.
Ibid., p. 112. Refers to the cognitive process of substituting complex judgements with more linear mental shortcuts (i.e., to access information stored in memory).
Ibid., p. 131. Refers to the cognitive process that always looks for a cause-effect relation between events without accepting the randomness of reality.
Ibid., p. 119. Refers to the dominance of conclusions over arguments that tends to occur when we let emotions determine our beliefs.
Nassim Nicholas Taleb, The Black Swan, Random House Trade Paperbacks, New York, p. 107.
Ibid., p. 181.
Ibid., p. 94.
In the Intelligence field, we perfectly know the difference between saying ‘there is no evidence consistent with the hypothesis’ and ‘there is evidence that the hypothesis is not consistent’, don’t we? For a doctor, stating that ‘there is no evidence of a tumor reappearance’ is very different from stating ‘there is evidence that the tumor has disappeared’. Still, we very often fall victim to this fallacy.
Information enables understanding and understanding enables decision-making.
US Department of Defense, National Defense Strategy, 2018.
For the scopes of this research, with the term ‘AI’ the authors intend ‘narrow AI’, or Machine Learning.
See, among others, A. Goldfarb and J. Lindsay, Prediction and Judgement – Why Artificial Intelligence Increases the Importance of Humans in War, International Security 46: 3, 2022.
On the topic of AI and decision-making, an interesting insight is given by A. Goldfarb and J. Lindsay, Ibid., pp. 8–9: ‘Rapid advances in machine learning have improved statistical prediction, but prediction is only one aspect of decision-making. Two other important elements of decision-making – data and judgement – represent the complements to prediction. […] When quality data are available and an organization can articulate clear judgements, then AI can improve decision-making.’
Another appealing insight on AI decision-making is given by S. Fortmann-Roe and Scharre: ‘[…] If the data behind the AI system are incomplete or biased, the quality of decision-making is degraded. Adversaries might be able to corrupt the data or hack into the AI system itself’ in RAND, Military Trends and the Future of Warfare, Chapter Seven, Trend 6: AI as a Class of Potentially Disruptive Technologies, p. 63, ISBN: 978-1-9774-0297-4.
‘We may consider the idea of building an induction machine. Placed in a simplified “world” (for example, one of sequences of coloured counters) such a machine may through repetition “learn”, or even “formulate”, laws of succession which hold in its “world”. […] In constructing an induction machine, we, the architects of the machine, must decide a priori what constitutes its “world”. In other words, we must build into the machine a framework determining what is relevant or interesting in its world: the machine will have its “inborn” selection principles. The problem of similarity will have been solved for it by its makers who thus have interpreted the “world” for the machine.’ Popper Karl R., Ibid., p. 48.
Ibid., p. 10.
The ability of a system to highlight to the human operator an error in the nominal parameters on which it is designed (potentially affecting the end result and/or the overall performance) and also capable of avoiding an exponential and non-manageable degradation.
‘The role of ‘genius’ in mission command becomes particularly important, and particularly challenging, in modern combined arms warfare and multi-domain operations’. Biddle, Military Power, in A. Goldfarb and J. Lindsay, Ibid., p. 24.
A. Goldfarb and J. Lindsay, Ibid., p. 12 ‘[…] For intelligence and operational tasks that have quality data but difficult judgements, teams of humans and machines can distribute the cognitive load of decision-making. We expect many if not most military AI tasks (including ISR, authors’ note) to fall […] into the category which we describe as human-machine teaming’.
Of note, to the same general conclusion came A. Goldfarb and J. Lindsay, Ibid., p. 9: ‘Increasing reliance on AI […] will make human beings even more vital for military power, not less.’
Author
Colonel Gianmarco Di Loreto

Colonel Gianmarco Di Loreto joined the Air Force Academy in 1997, where he graduated first in his class. On completion of the NATO Flying Training in Canada (NFTC) School, he flew the Electronic Warfare TORNADO fighter jet. In 2010 at Edwards AFB (US), he became an Experimental Test Pilot and accumulated 10 years of experience in the field of acquisition and aircraft experimentation on more than 80 different aeroplanes and helicopters. Among other duties, he completed the first transoceanic crossing of the F-35 and was a demo pilot of the C-27J Spartan transport aircraft. He attended the Senior Course at the NATO Defence College in 2019, graduated ‘with honour’ and won the Eisenhower Prize with his academic team. After serving as Section Chief of the General Planning and Transformation Office in the Aerospace Planning Department, he became Vice Head of the Office, responsible for the coherent development of the AF Main Capability Areas. He is currently assigned to the Ministry of Defence Cabinet – Office for Military Policy.

Information provided is current as of July 2022
Author
Lieutenant Colonel Roberto Diana

Lieutenant Colonel Roberto Diana joined the Air Force in 1999 as an Intelligence Officer, graduated in Political Science and was awarded the ‘Badge of Honour’ at the end of the Officers Course. Assigned to the Italian Defence General Staff, he completed his Intelligence training and gained extensive operational experience in key Intelligence positions in Italy, Iraq, Afghanistan, Kuwait and the United States. He attended the Italian Air Force War College in 2015 and the ‘Ecole de Guerre’ (Paris) in 2019, where his dissertation ‘The Black Swan Theory facing History: Garibaldi and the Expedition of the Thousand’ was ranked among the School’s best research works.

Information provided is current as of July 2022

