The JAPCC Conference 2019

Conference Proceedings

Theme 4 | Trusted Autonomy

The use of the term ‘cyber’ rather than ‘cyberspace’ was rightly cautioned against. The word ‘cyber’ is sometimes used as an all-encompassing term to include cyberspace, cyber-attacks, cyberwar etc. However, using ‘cyber’ as a stand-alone term, rather than as a prefix to a specific noun, risks imprecision. This comes back to General Townsend’s entreaty to strive to use the right words in the right context – something that is even more important in an alliance where not everyone (native speakers included) has a complete encyclopaedic command of the common language.

Cyberspace is sometimes seen as a place inhabited and only fully understood by ‘bright young techno-geeks’ – a term that could be used to describe only a minority of conference delegates. There is a need for all decision-makers to educate themselves about this often misunderstood domain. Just as senior leaders grasped the nettle of learning more about space, so they must now do the same for cyberspace. A concept that is often useful in understanding how readily people accept computer-based technologies is that of ‘digital natives versus digital immigrants’. This comes back to the word ‘young’ in the earlier phrase ‘bright young techno-geeks’.

In their lifetimes, digital natives never knew a time when things like email, social media and the internet did not exist. The rest of us – the digital immigrants – still remember sending memos and reading about things that happened yesterday from quaint sheets of paper that made our hands dirty.

It is, perhaps, easy to jump to the conclusion that digital natives will be more accepting of technology making decisions on behalf of humans; that digital natives believe what the machine says, without questioning the decisions made – by AI – on our behalf. However, as one memorable exchange at the conference showed, our young people are well educated and more than happy to question not just the decisions made by machines but also the assumptions of their elders. This presents its own unique opportunity, whereby NATO leadership and staff (led by the NAC), necessarily composed of senior leaders, will be challenged to adapt to new technologies and to leverage younger expertise. This theme of ‘trusted autonomy’ is explored in more detail later in this paper.

Discussions as a result of questions from the floor are often fertile ground for new ideas. One such discussion was based on the premise that the ‘plethora of effects available to NATO’ demands that we get the C2 right. Multi-domain C2 suitable to meet the demands of MDO can be likened to the OODA loop of a person riding a bike: countless continuous inputs are being made, with constant feedback, far faster than conscious thought would allow. Multi-domain C2 is clearly more complex than simply deciding which effect to apply to which target, but that choice does illustrate the C2 dilemma in deciding (for example) whether to use an effect in the cyberspace domain to jam the lifting system of a bridge rather than bombing it. Whilst the first effect may enable NATO forces to use the unscathed bridge themselves to facilitate manoeuvre, the Battle Damage Assessment (BDA) dilemma is made much easier if the kinetic effect is chosen – it is easier to identify a destroyed bridge than a bridge that cannot be opened or closed because of some invisible disruption to its electronic control system.
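
To make the OODA analogy concrete, the minimal sketch below runs a continuous observe–orient–decide–act cycle that chooses between a notional cyberspace effect and a kinetic one. It is purely illustrative: every function, data item and decision rule is a hypothetical placeholder, not a representation of any real NATO C2 system.

```python
# Purely illustrative OODA loop; all functions and data are hypothetical placeholders.
import time

def observe():
    # Gather inputs from notional multi-domain sensors.
    return {"bridge_status": "intact", "own_forces_need_bridge": True}

def orient(picture, observations):
    # Fuse new observations into the existing situational picture.
    picture.update(observations)
    return picture

def decide(picture):
    # A trivial rule stands in for real C2 judgement: prefer the non-kinetic
    # effect if we want to use the bridge ourselves, accepting the harder BDA.
    if picture.get("own_forces_need_bridge"):
        return "jam bridge lifting system (cyberspace effect)"
    return "strike bridge (kinetic effect)"

def act(decision):
    print(f"Executing: {decision}")

picture = {}
for _ in range(3):                 # a real loop runs continuously, far faster
    picture = orient(picture, observe())
    act(decide(picture))
    time.sleep(0.1)
```

The point is not the code itself but the shape of the cycle: observation, orientation, decision and action repeat continuously rather than as a single deliberate choice.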

As airmen, we must learn to move beyond the traditional ways of thinking that rely solely on kinetic force to degrade capabilities – the ‘old way of doing things’. Mastery of the cyberspace domain will enable NATO to be smarter and to use ‘new’ ways of doing things.

Exercises and wargames are the ideal places to test out such dilemmas and to learn lessons for these new ways of operating. However, as later conference panels observed, whilst training makes up much of what NATO does, it is not always as realistic as it could be. MDO-tailored training – both live and synthetic – is needed so that NATO can truly make the transition from lessons observed to lessons learned. Very cost-effective and realistic synthetic training has become a reality – through advances in AI and augmented reality (AR) supported by so-called ‘Big Data’. Big data has been characterized as the art of finding one particular snowflake in a blizzard. Again, the JAPCC is already taking a leading role in synthetic training. As this is being written, a team of JAPCC SMEs is providing OPFOR for Exercise TRIDENT JUPITER at the Joint Warfare Centre (JWC) in Stavanger.

Human-on-the-Loop versus Human-in-the-Loop

One of the key takeaways from the earlier theme of human factors and military culture was that humans will often differ in their willingness to allow technology to help them solve problems – particularly when, to do so, they must allow machines to have some level of autonomy. Sometimes, a lack of willingness to do this stems from a belief that autonomy is a case of ‘all or nothing’ – when, in actual fact, there exist clearly defined levels of autonomy and our daily lives already depend on them.

A 2016 article in Resilience Week by Nothwang et al.6 is one of the early published uses of these terms in a military context, and it also gives useful definitions and examples. The paper ‘investigates the contributors to success/failure in current human-autonomy integration frameworks, and proposes guidelines for safe and resilient use of humans and autonomy with regard to performance, consequence, and the stability of human-machine switching’. It classifies four levels of autonomy and defines them, from lowest to highest, as:

  • Human – where ‘the human is actively involved in all aspects of an agent’s task’.
  • Human-in-the-loop (HITL). This is where humans ‘actively (often continuously) engage in control decisions’.
  • Human-on-the-loop (HOTL). This implies ‘supervisory control where the human monitors the operation of autonomy, taking over control only when the autonomy encounters unexpected events or when failure occurs’.
  • Complete autonomy (CA) where ‘the human has a minimal task load for decision-making, is not the ultimate arbiter on decisions, and is only minimally involved in agent decision-making’.

These definitions are of use to us, not least in furthering our understanding of when the HITL/HOTL terms are being used incorrectly.
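
To show how cleanly the four levels separate, here is a minimal sketch that encodes them as a simple enumeration. The names, values and helper function are illustrative assumptions, not drawn from Nothwang et al.’s paper.

```python
# A minimal, illustrative encoding of the four autonomy levels described above.
# Names and the helper below are assumptions, not taken from Nothwang et al.
from enum import Enum

class AutonomyLevel(Enum):
    HUMAN = 1              # human actively involved in all aspects of the task
    HUMAN_IN_THE_LOOP = 2  # human actively (often continuously) engaged in control decisions
    HUMAN_ON_THE_LOOP = 3  # human supervises, taking over only on unexpected events or failure
    COMPLETE_AUTONOMY = 4  # human minimally involved and not the ultimate arbiter

def requires_human_command(level: AutonomyLevel) -> bool:
    """True when a control decision cannot proceed without a human command."""
    return level in (AutonomyLevel.HUMAN, AutonomyLevel.HUMAN_IN_THE_LOOP)

assert requires_human_command(AutonomyLevel.HUMAN_IN_THE_LOOP)
assert not requires_human_command(AutonomyLevel.HUMAN_ON_THE_LOOP)
```

Seen this way, the question of trusting autonomy becomes one of which level is acceptable for which task, rather than an all-or-nothing choice.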

Nahavandi’s 2017 paper, ‘Trusted Autonomy between Humans and Robots’, takes our understanding one stage further when it states that:

‘Machines that carry out a task for a time period, then stop and wait for human commands before continuing are known as “HITL systems”, while machines that can execute a task completely but have a human in a monitoring or supervisory role, with the ability to interfere if the machine fails, are known as HOTL systems.’7

This is not science fiction. These machines are well known to us. That annoying disclaimer that pops up every time we try to use our in-car satnav (and demands that we press our grubby fingers on a screen icon before it will deign to continue) is a crude example of a HITL system. Commercial airline pilots can rely on their aircraft to autoland at suitably equipped airports, but they remain in their seats, alert and ready (we hope) to intervene if something out of the ordinary happens. This is a HOTL system. HOTL technology with military applications is becoming increasingly relevant with the advent of autonomous wingmen and swarming concepts.
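
The contrast can be captured in a few lines of code. The sketch below is illustrative only: the task steps and the human-input functions are hypothetical placeholders chosen to mirror the satnav and autoland examples above, not any fielded system.

```python
# Illustrative HITL versus HOTL control, following the definitions quoted above.
# Task steps and human-input functions are hypothetical placeholders.

def run_hitl(steps, human_approves):
    """HITL: the machine pauses and waits for a human command before each step."""
    for step in steps:
        if not human_approves(step):            # nothing proceeds without approval
            print(f"Halted, awaiting human command before: {step}")
            return
        print(f"Completed: {step}")

def run_hotl(steps, human_interrupts):
    """HOTL: the machine executes the whole task; the human monitors and may take over."""
    for step in steps:
        if human_interrupts(step):              # intervention is the exception
            print(f"Human took over at: {step}")
            return
        print(f"Completed: {step}")

# Toy usage: a satnav-style confirmation versus an autoland-style supervisor.
route = ["plan route", "start guidance", "reach destination"]
run_hitl(route, human_approves=lambda step: True)        # human confirms each step
run_hotl(route, human_interrupts=lambda step: False)     # human never needs to step in
```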

It is, perhaps, the concept of CA – complete autonomy – that scares us and brings to mind the ‘killer robots’ scenario so beloved of the movies. Robotics and autonomous systems (RAS) are among the emergent technologies of the 21st century and, as yet, may not be widely understood. RAS are enabled by artificial intelligence and machine learning, technologies that concern some of the great minds of our time. The late cosmologist Professor Stephen Hawking sounded this note of caution as recently as 2014:

‘The development of full artificial intelligence could spell the end of the human race … It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.’8

Whilst we should heed Professor Hawking’s warning, we should not let it prevent us from harnessing AI and machine learning to help NATO to ensure the collective defence and security of our Alliance.
