The Department of the Navy (DON) 30-Year Research and Development (R&D) Plan (distribution D), approved in January 2017, projects the key battlespace technological concepts. In 2025, these concepts are projected to extend from known systems, while by 2035 and 2045, this variety of concepts is projected to be replaced by a single technological framework, including:
- Command-guided robotic-augmentation swarms.
- Swarm of swarms artificial intelligence (AI) warfare.
However, this article proposes that command-guided swarm (CGS) technology may be achievable much sooner, perhaps by the middle of the next decade, a full decade or two ahead of the DON R&D Plan. The two nations now dominating the AI field are, not surprisingly, the United States and China. If China fields single-human-operator CGS technology in 2025, a full decade before the DON intends, the U.S. position in multidomain warfare may be decisively compromised. This article also discusses an R&D approach for developing a cyber-physical CGS during the coming decade.
BACKGROUND AND HISTORICAL PERSPECTIVE
A CGS is a multisensor, multiweapon, multiplatform, single-human-operator system-of-systems (SoS). The SoS is a multidomain force comprising multiple unmanned domain systems (UxS) (with x equaling space, air, ground, surface, or undersea), under the mission-oriented tactical coordination of a single human operator or swarm tactician-supervisor. The SoS is equipped for sensing plus kinetic/nonkinetic fires that, in concert, function as a single Warfighter’s engagement capability. A single-human-operator CGS is a natural evolutionary end state of the original conception of a multisensor/multiweapon SoS discussed in the 1996 milestone paper “The Emerging U.S. System-of-Systems,” by Adm. William Owens, Vice Chairman, Joint Chiefs of Staff.
Adm. Owens discusses a revolution in military affairs for intelligence, surveillance, and reconnaissance (ISR) and command, control, communications, computers, and intelligence (C4I). The concept consists of ISR (sensing and collection), advanced C4I (converting sensor awareness to battlespace understanding and mission formulation), and precision force (the resultant weapon control). Adm. Owens writes:
It is easy to miss the powerful synergy which exists between ISR, advanced C4I and precision force . . . . We tend to plan, program and budget for these things as if they were discrete capabilities. We are more adept at seeing the individual trees than that vast forest of a military capability which the individual systems, because of their interactions, are building for our fighting forces.
The concept of a multisensor/multiweapon SoS was further clarified in 1998 by the late Vice Adm. Arthur Cebrowski—former president of the Naval War College and later director of the Department of Defense (DoD) Office of Force Transformation—with John Garstka in their seminal paper on network-centric warfare (NCW). To illustrate NCW, the authors discuss the Cooperative Engagement Capability (CEC), which is a multisensor data fusion system for surface ship air and missile defense that processes radar data from individual cooperating platforms and provides each cooperating platform the composite air track information. CEC also includes cooperative integrated fire control. Hence, Cebrowski and Garstka equate NCW with multisensor data fusion and with multiweapon control data diffusion. They write:
At the structural level, network-centric warfare requires an operational architecture with three critical elements: sensor grids and transaction (or engagement) grids hosted by a high-quality information backplane . . . . Sensor grids rapidly generate high levels of battlespace awareness and synchronize awareness with military operations. Engagement grids exploit this awareness and translate it into increased combat power.
The cooperative engagement capability (CEC) combines a high-performance sensor grid with a high-performance engagement grid. The sensor grid rapidly generates engagement quality awareness, and the engagement grid translates this awareness into increased combat power . . . . The CEC sensor grid fuses data from multiple sensors to develop a composite track with engagement quality, creating a level of battlespace awareness that surpasses whatever can be created with stand-alone sensors. The whole clearly is greater than the sum of the parts.
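Cebrowski and Garstka’s composite-track description can be illustrated with a minimal sketch of inverse-variance weighted fusion, a standard data-fusion technique. The sensor values below are invented, and CEC’s actual algorithms are far more elaborate; this is only a toy instance of combining independent estimates of the same track.

```python
# Minimal sketch of multisensor track fusion: combine independent
# position estimates by inverse-variance weighting. All values are
# illustrative, not drawn from any fielded system.

def fuse_tracks(estimates):
    """estimates: list of (position, variance) pairs from cooperating sensors.
    Returns the fused position and its (smaller) fused variance."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    fused_pos = sum(w * pos for (pos, _), w in zip(estimates, weights)) / total
    fused_var = 1.0 / total
    return fused_pos, fused_var

# Two radars observe the same contact with different accuracies.
radar_a = (102.0, 4.0)   # position estimate, variance
radar_b = (98.0, 1.0)    # the more accurate sensor dominates the fusion
pos, var = fuse_tracks([radar_a, radar_b])
```

Note that the fused variance is smaller than that of the best individual sensor, a simple quantitative sense in which the whole exceeds the sum of the parts.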
Both NCW and CEC concepts naturally culminate in a single-human-operator cyber-physical CGS tactical SoS, which leverages cutting-edge AI and advanced human partnering concepts to bring about fusion of information originating with the swarm’s multiple sensors and diffusion of control out to the swarm’s multiple platforms, sensors, and weapons. This article outlines a design and development approach for the AI and the advanced human-machine interface (HMI) to prototype such a swarm SoS within the next decade.
ADOPTION OF THE CYBER-PHYSICAL SoS (CPSoS) PARADIGM
In general, a CPSoS may be defined as an SoS,
where physical and software components are deeply intertwined, each operating on different spatial and temporal scales, exhibiting multiple and distinct behavioral modalities, and interacting with each other in a myriad of ways that change with context.
The cyber-physical CGS SoS in particular is a complex network of software and digital hardware operating in cyberspace, with platforms, sensors, and weapons operating within the physical battlespace environment. (Note that the Internet of Things [IoT] is an instance of CPS that uses the Internet as its communications network. Cyber-physical CGS is an instance of CPS that does not use the Internet. Cyber-physical CGS and IoT are each an instance of an SoS.)
In the CGS case, the networking is wireless, adding additional complexity and interplay with the environment. The modeling and design of a CPSoS attempts to merge the discrete synchronized world of sequential programming with the continuous asynchronous world of physical laws. The differences between and within these two worlds present substantial challenges for the cyber-physical CGS design and verification. Further adding to these inherent challenges is the swarming operation of the CGS, where multiple component systems interact with each other and their human controller.
Table 1 highlights the eight key challenges of cyber-physical CGS SoS design and development. Note that this table (which incorporates concepts and ideas presented by Rajkumar et al.) excludes the many challenges of design/development of the cyber or physical components themselves and addresses only the critical overarching SoS issues.
Table 1: Eight Overarching Challenges for Cyber-Physical CGS Design and Development
ARCHITECTURE OF THE CYBER-PHYSICAL CGS SoS
The cyber-physical CGS SoS architecture centers on a population of semiautonomous intelligent agents operating in parallel, neither tightly coupled via a built-in command structure nor completely independent and autonomous. The CGS SoS is neither a rigidly orchestrated system nor an ensemble of statistically independent and autonomous functional entities, and therefore the SoS is an instantiation of what may be termed organized complexity. The regime between a highly structured SoS and an SoS populated by fully autonomous agents is the regime in which complex behavior may emerge. Emergent complex behavior forms the collective intelligence or swarm intelligence of the CGS.
The swarm intelligence of the CGS SoS arises from the distribution of information processing and engagement control across the SoS’s AI agent population, from the use of active machine-learning technologies, and from the human-in-the-loop User-Defined Operating Picture (UDOP) interface that fully enables the human-machine partnership. The UDOP concept (illustrated in Figure 1) extends the Common Operating Picture (COP) such that the human operator is able to supervise the processing, exploitation, and dissemination of information for situation awareness. The UDOP allows rendering and visualization of data analytics services tailored to the operator’s immediate needs for enhanced and efficient command decision-making within the context of the present mission state.
Figure 1: UDOP (Source: NSWC).
The command-guided nature of the swarm, as human-on-the-loop, means the resulting SoS is not completely autonomous but is under the real-time command of a single human swarm tactician-supervisor. The swarm tactician-supervisor functions at a high cognitive and decision-making level, establishing overall SoS mission objectives, providing mission direction, and routinely interjecting mission execution guidance/corrections, while delegating lower-level sensing and control functionalities to the constituent systems of the SoS. The constituent systems of the SoS are intelligent cyber-physical systems composed of multi-sensing and/or multi-control capabilities. Hence, rather than the human operator interfacing with the constituent systems via a fixed peripheral device, we say the operator is infused into the cyber-physical CGS SoS as the high cognitive and decision-making constituent.
The CGS approach is built on AI agents. In general, an AI agent mines data, processes information, and stores results in a distributed space. At the highest level of abstraction, the AI agents of CGS fall into one of three classes (shown in Figure 2): information fusion, control diffusion, and operator infusion.
Figure 2: CGS SoS Intelligent Agent Types.
The mining and processing of information that originates in the external environment is captured by an abstraction or class denoted <information fusion>. The decomposition of high-level mission objectives that originate with the human operator, coupled with the generation of plans and allocation of tasking out to specific constituent systems of the SoS, is captured by a high-level abstraction denoted <control diffusion>. This theoretical framework leverages control theory’s representational duality between observation and control, which is manifested in CGS by the representational duality between the abstractions of <information fusion> and <control diffusion>. The third high-level abstraction, denoted <operator infusion>, places a human on the loop within CGS for interpreting/assessing processed information/data, establishing mission objectives or making engagement decisions, and interacting within CGS for purposes of machine learning (ML) and fusion/diffusion augmentation/refinement.
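As a minimal sketch, the three agent classes might be organized as follows. The class and method names are hypothetical, chosen only to mirror the abstractions described above, not taken from any actual CGS design.

```python
# Minimal sketch of the three CGS agent abstractions. All names and
# data shapes are illustrative, not drawn from any fielded system.
from abc import ABC, abstractmethod

class CGSAgent(ABC):
    """Base class for all CGS SoS intelligent agents."""
    @abstractmethod
    def step(self, inputs):
        ...

class InformationFusionAgent(CGSAgent):
    """Mines and processes data originating in the external environment."""
    def step(self, sensor_reports):
        # Combine raw sensor reports into a common battlespace picture.
        return {"fused_tracks": sensor_reports}

class ControlDiffusionAgent(CGSAgent):
    """Decomposes high-level mission objectives into tasking allocated
    to specific constituent systems of the SoS."""
    def step(self, mission_objectives):
        # Each (system, task) pair is tasking diffused to one UxS.
        return [{"system": s, "task": t} for s, t in mission_objectives]

class OperatorInfusionAgent(CGSAgent):
    """Places the human on the loop: presents processed information and
    accepts objectives, engagement decisions, and advice."""
    def step(self, operator_input):
        return {"advice": operator_input}
```

The duality noted in the text shows up here as mirrored data flows: fusion agents aggregate inward from many sensors, while diffusion agents fan outward to many effectors.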
The cyber-physical CGS SoS architecture challenge arises when multiple UxS (with x again equaling space, air, ground, surface, or undersea) comprise different classes of CGS agents and operate in a swarm to accomplish the mission objectives while infused with the UDOP. The CGS SoS architecture defines the swarm operation and its behavior by specifying the information flow and interactions between the CGS agents, between the UxS, and with the single human operator. Using the three AI agent classes previously described, Figure 3 illustrates three conceptual architectures for the single-human-guided cyber-physical CGS SoS derived from the information fusion SoS architecting described in Raz et al.
Figure 3. Conceptual Cyber-Physical CGS SoS Architectures (Source: NSWC).
Each architecture in Figure 3 represents the capabilities of the individual UxS, their relationship to the human commander, and the information exchange among the UxS. The purpose of this figure is to highlight that a variety of SoS architectures can be conceived by varying the autonomy and information exchange of the cyber-physical systems within the swarm. These architectures differ in their construction, their operation, and the opportunities the human commander can exploit through varying allocations of AI agents to the different UxS. Although the design and development of the AI agents that provide the CGS functionality is of significant importance, these architectures introduce a myriad of operational considerations and SoS challenges for fielding the CGS. To provide a timely tactical capability, it is imperative to develop a CGS SoS-level design and analysis capability alongside the development of the individual AI agents.
The objective of CGS SoS design and analysis is to characterize the emerging swarm behavior due to interactions of the AI agents, as well as to identify architectures that maximize the CGS advantage under both normal and contested operating conditions. The SoS analysis directly addresses the key challenges for the CGS design and development discussed previously in Table 1 and describes the drivers and root causes of the resulting swarm behavior, which are then attributed to the design of AI agents, allocation of AI agents to UxS, and the CGS SoS architecture. Examples of the overarching questions that fall under CGS SoS design and analysis are:
- Who should determine the critical systems, and what is the appropriate approach to studying the integrity of the swarm?
- What conditions can violate the swarm integrity, and how can those violations be mitigated by AI agent design and/ or dynamic configuration of the cyber-physical CGS SoS architecture?
- To what extent should CPSoS theory be applied to existing systems interacting with the CGS SoS?
- When and how do faults (cyber, physical, functional, malicious intent) propagate through the CGS SoS architecture?
- Why and how do the performances of different CGS SoS architectures vary (i.e., what design features and interactions of the individual AI agents lead to what emergent behavior)?
- How autonomous, robust, and resilient are different CGS SoS architectures?
The tactical capabilities enabled by the cyber-physical CGS will depend upon the answers to SoS-level analysis. Nevertheless, at the core of the CGS SoS functionality are the AI agents for information fusion, control diffusion, and operator infusion. The design of these agents using ML and statistical reasoning is described next.
AI AGENTS OF THE <information fusion> CLASS
ML is a subfield of AI. Learning machines may be roughly categorized into six broad model types, shown in Table 2.
Table 2: Various Types of ML
Symbolic ML, based on first-order logical models, allows for highly expressive representations of possible worlds, is excellent for implementing machine reasoning, and is able to provide the human operator with the steps in its logical reasoning. Symbolic models are the basis of what has been termed good old-fashioned AI (GOFAI). However, a first-order logic knowledge base is brittle in that sentences are either true or false, with no possibility of compromise. When logical systems fail, they do so blatantly or catastrophically. The problem of catastrophic failure has led to what has been termed the “AI Winter,” a period noted for its lack of progress in developing a true artificial intelligence. Another important issue is that learning these models is nontrivial as the search space includes multiple levels of abstraction.
On the other hand, probabilistic ML, based on probabilistic graphical models, avoids this brittleness by softly modeling relationships as conditional probability distributions. Yet, while offering robustness not found in logical models, probabilistic models lack the rich representations and reasoning prowess of logic.
ML for the CGS agents of the <information fusion> class is based on a novel hybrid of symbolic and probabilistic ML. This hybrid ML approach combines Bayesian graphical models with first-order logic, which in the AI research community is referred to as statistical relational learning (SRL). Within the SRL approach, logical symbolic representations capture the underlying rich structure of the problem domain, while the probabilistic methods manage the uncertainty and error in the data. These SRL models have seen real success from both the learning and the reasoning perspectives. State-of-the-art methods of this kind have been applied to solve real problems, including natural language understanding, image processing, and biomedical sensing.
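The SRL idea can be sketched in the style of a Markov logic network: first-order rules carry weights, and the probability of a query grows with the total weight of satisfied rules. The rules, weights, and predicates below are invented solely for illustration.

```python
# Minimal Markov-logic-style sketch of statistical relational learning:
# weighted logical rules score a possible world, and higher total weight
# of satisfied rules yields higher probability. Rules/weights are invented.
import math

# Each rule: (weight, world -> bool), where a world is a set of ground
# facts such as ("Hostile", "track7"). Implication A => B is encoded
# as (not A) or B.
rules = [
    (1.5, lambda w: ("FastInbound", "track7") not in w
                    or ("Hostile", "track7") in w),   # FastInbound => Hostile
    (0.8, lambda w: ("NoIFF", "track7") not in w
                    or ("Hostile", "track7") in w),   # no IFF reply => Hostile
]

def world_score(world):
    """Sum of weights of rules satisfied in this world."""
    return sum(wt for wt, rule in rules if rule(world))

def prob_hostile(evidence):
    """P(Hostile | evidence) over the two worlds differing only in the query."""
    w_yes = world_score(evidence | {("Hostile", "track7")})
    w_no = world_score(evidence)
    return math.exp(w_yes) / (math.exp(w_yes) + math.exp(w_no))

evidence = {("FastInbound", "track7"), ("NoIFF", "track7")}
p = prob_hostile(evidence)   # both rules favor the "hostile" world
```

With both evidence facts present, the hostile world satisfies both rules and the probability rises well above one-half; with no evidence, the rules constrain nothing and the probability falls back to 0.5. The soft weights, rather than hard truth values, are what avoid the brittleness of pure logic described above.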
Recently, logical reasoning has been replaced with database systems to scale learning to petabytes of data. Bringing in the contextual information from big data analytics, the CGS SRL approach uses probabilistic, symbolic, and contextual information. The Defense Advanced Research Projects Agency (DARPA) has recently identified contextual adaptation, in which systems construct explanations of situations, as the future of AI. The CGS SRL approach to information fusion falls within DARPA’s “third wave” in the historical development of AI, illustrated in Figure 4.
Figure 4: DARPA’s Three Waves of AI (Source: NSWC).
AI AGENTS OF THE <control diffusion> CLASS
Dual to the observational information processing side of AI is the planning side of AI, which implements the concept of control diffusion. A tactical swarm SoS engages with its environment and by definition is equipped with multiple engagement capabilities: effectors, sensors, and platforms. The swarm SoS must decompose its high-level mission objectives into specialized tasking or actions for each of its many engagement capabilities. This decomposition of high-level mission objectives, coordinated with the allocation or diffusion of tasking/actions out to specific engagement capabilities (the constituent systems of the CGS SoS), is captured by the concept of control data diffusion.
Control data diffusion is implemented by enabling an AI to undertake planning. Planning is an AI’s effort to generate a sequence of actions based on observations. At its simplest, AI planning is implemented as a search-based agent. The agent searches the space of all possible action sequences to select the optimal sequence that reaches the goal. To make the search more efficient, a heuristic function that reduces the size of the search space may be computed using various techniques. In many cases, applying a good computed heuristic to the search problem will produce a reasonable estimate of the exact planning solution.
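A minimal sketch of such a search-based planning agent is A* search, which orders candidate action sequences by cost so far plus an admissible heuristic estimate of the cost remaining. The grid world, actions, and heuristic below are illustrative only.

```python
# Minimal A* planning sketch: search the space of action sequences,
# ordered by cost-so-far plus a heuristic estimate of remaining cost.
# The grid world, action names, and heuristic are purely illustrative.
import heapq

ACTIONS = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}

def heuristic(state, goal):
    # Manhattan distance: an admissible (never-overestimating) guess
    # of the steps remaining, which prunes the search space.
    return abs(state[0] - goal[0]) + abs(state[1] - goal[1])

def plan(start, goal):
    """Return an optimal action sequence from start to goal."""
    frontier = [(heuristic(start, goal), 0, start, [])]
    visited = set()
    while frontier:
        _, cost, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        for name, (dx, dy) in ACTIONS.items():
            nxt = (state[0] + dx, state[1] + dy)
            heapq.heappush(
                frontier,
                (cost + 1 + heuristic(nxt, goal), cost + 1, nxt, path + [name]),
            )
    return None

route = plan((0, 0), (2, 1))   # an optimal 3-step sequence: two E moves, one N
```

Because the heuristic never overestimates, the first sequence popped at the goal is guaranteed optimal, illustrating how a good computed heuristic approximates the exact planning solution.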
Because the aforementioned planning approach seeks a single linear sequence from start to goal, it is termed total-order planning. A principal disadvantage of total-order planning is its inability to break or decompose the planning problem into separate subproblems. Alternatively, the approach termed partial-order planning does break the problem into subproblems, some of which may be solved in parallel. A partial-order planning solution forms a graph or network of actions as opposed to the linear sequence of actions of total-order planning.
The decomposition idea employed in partial-order planning may be carried further using a hierarchical approach. In hierarchical task network (HTN) planning, the highest-level action in the hierarchy is an overarching description of what is to be accomplished, which at the start of a CGS mission is the set of mission objectives. Via the process of action decomposition, each higher-level action is decomposed into a plan consisting of several lower-level actions, such as decomposition of overall mission objectives into detailed mission plans. The decomposition process continues down the hierarchy to lower levels, such as individual sensor management, and down to the lowest level of primitive actions. These primitive actions are the actuator/servo control signals transmitted directly to effectors, sensors, and platforms. Hence, the HTN planning process diffuses or fans out the high-level mission objectives to the swarm’s constituent systems, terminating in these lowest-level control signals for individual effectors, sensors, and platforms.
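The HTN fan-out can be sketched minimally as recursive decomposition through a method table until only primitive actions remain. The mission, task names, and decompositions below are invented for illustration.

```python
# Minimal HTN planning sketch: compound tasks are recursively decomposed
# via a method table until only primitive actions remain. The mission,
# task names, and decompositions are invented, not from any real system.

# Method table: compound task -> ordered list of subtasks.
METHODS = {
    "mission": ["establish_isr", "engage"],
    "establish_isr": ["task_sensor", "fuse_tracks"],
    "engage": ["assign_weapon", "fire_control_signal"],
}

def decompose(task):
    """Fan a task out, depth-first, into its primitive actions."""
    if task not in METHODS:            # primitive: actuator-level action
        return [task]
    plan = []
    for subtask in METHODS[task]:
        plan.extend(decompose(subtask))
    return plan

mission_plan = decompose("mission")
# -> ["task_sensor", "fuse_tracks", "assign_weapon", "fire_control_signal"]
```

The recursion mirrors the text: mission objectives at the root, intermediate plans in the middle, and primitive control signals for individual effectors, sensors, and platforms at the leaves.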
AI AGENTS OF THE <operator infusion> CLASS
Because of the complexity of the cyber-physical CGS SoS, the interfacing for the swarm tactician-supervisor differs from the traditional COP and hand controls. The UDOP interface allows the human operator to reconfigure the interface in real time throughout mission execution, thereby tailoring his/her information exposure not only to high-level threat summaries and projections but also to instantaneous states of affairs or situations and to individual object states/tracks, depending upon the nature of the mission and the immediate stage of mission execution. In the dual sense, the UDOP enables the human operator to focus his/her decision-making at the highest level of establishing/updating mission objectives, or to expand and extend his/her decision-making involvement to include details of instantaneous coordination/integration among engagement groups within the swarm, or even to decision-making down at the level of individual sensor/weapon/platform management. This rich human operator access to, and interaction with, the entire CGS SoS suggests the human operator is infused into the SoS.
In conjunction with the UDOP paradigm, the operator infusion agents implement recent AI and ML innovations to accomplish true partnering of the human operator with the CGS intelligence. One typically thinks of ML as the processing of preexisting training data during system development. Yet the idea of machine learning may also be applied to accomplish the interaction and partnering between the swarm tactician-supervisor and the CGS SoS. One of the key advantages of a symbolic representation such as first-order logic is the representation of knowledge in a format that facilitates human interaction with the AI. Specifically, this human interaction may include a human advising the CGS SoS throughout mission execution.
The SRL process may be augmented to accept and exploit advice from a human domain expert, thereby infusing the operator into the swarm. This capability may be extended not only to any probabilistic logic learning model for accomplishing information fusion but also to any HTN planning model for accomplishing control diffusion.
Taking this ML approach a bit further, ML may also be accomplished via active advice-seeking by the machine, by which the CGS SoS solicits advice from its human operator throughout mission execution. The upshot is that ML by the swarm, in part, becomes a responsibility of the swarm tactician-supervisor, both in garrison and throughout the execution of missions. In other words, the Warfighter’s role is that of an advisor and teacher to his/her cyber-physical CGS SoS, and this role is the basis of the human-swarm partnership.
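Active advice-seeking might be sketched as a confidence-gated query loop: when the machine's own confidence is low, it solicits a ruling from the operator and remembers it for future decisions. The threshold, labels, and operator callback below are hypothetical.

```python
# Minimal sketch of active advice-seeking: the agent defers to its human
# operator whenever its own confidence falls below a threshold, then
# remembers the advice. Threshold and labels are purely illustrative.

class AdviceSeekingClassifier:
    def __init__(self, ask_operator, threshold=0.8):
        self.ask_operator = ask_operator   # callback to the tactician-supervisor
        self.threshold = threshold
        self.advice = {}                   # remembered operator rulings

    def classify(self, track_id, confidence, label):
        if track_id in self.advice:        # prior advice overrides the model
            return self.advice[track_id]
        if confidence < self.threshold:    # too uncertain: solicit advice
            ruling = self.ask_operator(track_id)
            self.advice[track_id] = ruling
            return ruling
        return label

clf = AdviceSeekingClassifier(ask_operator=lambda tid: "hostile")
confident = clf.classify("t1", 0.95, "neutral")   # model's own call stands
uncertain = clf.classify("t2", 0.40, "neutral")   # operator is consulted
```

In this sketch the operator teaches the machine exactly as the text describes: each low-confidence query becomes a labeled lesson the swarm retains for the remainder of the mission.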
The future of autonomous swarms promises to leverage numerous AI techniques that can help provide situational awareness, require fewer Warfighters, extend mission operations, and respond to ever-changing conditions. As part of this future, probabilistic, symbolic, and contextual information will be used to support a cyber-physical single-human-operator CGS SoS for multidomain operations. And whether it takes three decades or less than one to successfully field technologies such as these, it continues to be critical for the U.S. military to aggressively pursue these technological advancements and maintain its dominance in multidomain warfare.