The dust lay thick upon the ruins of bombed-out buildings. Small groups of soldiers, leaden with their cargo of weaponry, bent low and scurried like beetles between the wrecked pillars and remains of shops and houses.
Intelligence had indicated that enemy troops were planning a counterattack, but so far, all was quiet across the heat-shimmered landscape. The allied soldiers gazed intently at the far hills, squinting their weary, dust-caked eyes against the glare coming off the sand.
Suddenly, the men became aware of a low humming sound, like thousands of angry bees, coming from the northeast. The sound was felt more than heard, and the buzzing intensified with each passing second. The men looked up to see a dark, undulating cloud approaching: a swarm of hundreds of drones, dropped from a distant unmanned aircraft, heading to their precise location in a well-coordinated group, each turn and dip a nuanced dance in close collaboration with its nearest neighbors.
Although it seems like a scene from a science fiction movie, the technology already exists to create weapons that can attack targets without human intervention. This technology is increasingly pervasive, and artificial intelligence, as a transformational technology, shows virtually unlimited potential across a broad spectrum of industries.
In health care, for instance, robot-assisted surgery allows doctors to perform complex procedures with fewer complications than surgeons operating alone, and AI-driven technologies show great promise in aiding clinical diagnosis and automating workflow and administrative tasks, potentially saving billions of health care dollars.
In a different arena, we are all aware of the emergence of autonomous vehicles and the steady march toward driverless cars becoming a ubiquitous sight on U.S. roadways. We trust that all this technology will be safe and ultimately in the public's best interest.
Warfare, however, is a different animal.
In his new book, Army of None, Paul Scharre asks, “Should machines be allowed to make life-and-death decisions in war? Should it be legal? Is it right?” It is with these questions and others in mind, and in light of the advancing AI arms race with Russia and China, that the Pentagon has announced the creation of the Joint Artificial Intelligence Center, which will have oversight of most of the AI efforts of U.S. service and defense agencies. The timeliness of this venture cannot be overstated; automated warfare has become a “not if, but when” scenario.
In the fictional account above, it is the enemy combatant that, in a “strategic surprise,” uses advanced AI-enabled autonomous robots to attack U.S. troops and their allies. Only a few years ago, we might have dismissed such a scenario — an enemy of the U.S. having more and better advanced technology for use on the battlefield — as utterly unrealistic.
Today, however, few would question such a possibility. Technology development is accelerating worldwide. China, for example, has announced that it intends to overtake the United States within a few years and to dominate the global AI market by 2030. Given the pace and scale of the Chinese government's investment in this and other advanced technologies, such as quantum information systems, such a scenario is entirely feasible.
Here, the Defense Department has focused much of its effort on courting Silicon Valley to accelerate the transition of cutting-edge AI into the warfighting domain. While it is important for the Pentagon to cultivate this exchange and encourage nontraditional businesses to help the military solve its most vexing problems, universities are uniquely suited to a role in this evolving landscape of arming decision makers with new levels of AI.
Universities like Purdue attribute much of their success in scientific advancement to the open, collaborative environment that enables research and discovery. As the Joint Artificial Intelligence Center experiments with and implements new AI solutions, it must have a trusted partner. It needs a collaborator, free of a profit motive, whose mission is to verify and validate trustworthy, explainable AI algorithms, and whose interests include cultivating a future workforce capable of employing and maintaining these new technologies.