The Warfare Renaissance: AI-Based Human–Machine Teaming

The 1860s marked a period of renaissance in firearms development. Muzzle-loading rifles and revolvers gave way to cartridge-firing, breech-loading weapons that allowed the user to fire and reload three to four times faster than their muzzle-loading counterparts. Gatling debuted the first machine gun, with multiple hand-cranked rotating barrels, while Reffye debuted the mitrailleuse volley gun in Europe. Colt engineers Richards and Mason converted Colt’s 1851 and 1860 cap-and-ball revolvers to fire metallic center-fire cartridges, leading to the famous Colt Model P, while Sharps drastically shortened the barrels on his infantry model to 22 inches for cavalry use and later converted those rifles from firing paper cartridges to firing metallic cartridges. Henry debuted the lever-action, self-loading rifle that gave its users a decisive rapid-fire battlefield advantage. The United States was embroiled in its bloody Civil War, while the Europeans held the First Geneva Convention for the Amelioration of the Condition of the Wounded in Armies in the Field. Adopted in 1864, the treaty became the first of the four Geneva Conventions. This, in turn, established the Red Cross and the international rules of war. So—why is this important?

The short answer is that the Geneva Conventions could not have imagined robotic warfare with AI-controlled, autonomous machines (warbots) killing human combatants or machines killing machines. Therefore, the rules of war as established by the first of four Geneva Conventions back in the 1860s no longer provide suitable guidelines to control robotic warfare. Where does that leave us?


Today the United States, Great Britain, China and Russia exemplify some of the 40-plus countries developing a new generation of fully autonomous robotic weapons (warbots). These warbots operate using artificial intelligence (AI) and can be programmed to autonomously seek out and destroy enemy targets without human interaction, intervention or control. Some military proponents argue AI-controlled warbots will provide pinpoint accuracy, and this, in turn, will avoid collateral damage and civilian casualties. Others point to the life-saving qualities of warbot employment, because warbot soldiers will keep human soldiers out of harm’s way. Opponents of this emerging technology fear it will prompt a high-technology, AI-based arms race resulting in a new world order that uses force (absent Geneva Conventions guidelines) before diplomacy. Just as the 1860s were a renaissance for firearms, we are now in the renaissance period of warbots.

The New Teams

The most likely scenario is the development of warbot teams composed of human soldiers working in conjunction with weaponized autonomous warbots that possess specialized capabilities. The current buzzword for teams that combine humans and warbots working side by side is “human–machine teaming.” The relationship might best be understood by examining the one between “R2-D2” and Luke Skywalker in the science fiction movie “Star Wars.” That human–machine partnership depicts R2-D2 as a fully autonomous robot who knows when and how to save people from desperate situations by using his artificial intelligence. R2-D2 is a “good” robot.

But Hollywood screenwriters and novelists more often embellish the “bad” robot scenario, as in the movie “I, Robot.” Critics warn of an AI apocalypse in which AI becomes so autonomous, powerful and out of control that it threatens the existence of mankind. Regardless of the scenario, it all boils down to the level of algorithm sophistication and computing power that can realistically be achieved, which in turn affords AI the ability to think, understand and learn—to be intelligent.

On February 12, 2019, the Pentagon publicly released (for the first time) its master plan for speeding the incorporation of AI into advanced battlefield-related technologies. This master plan for military AI spending runs the gamut of defense functions, including weapon development, interoperability, sustainment, operations, force protection, training, healthcare and even recruiting.

General James Mattis, the Trump Administration’s Secretary of Defense during 2017 and 2018, repeatedly stated his primary goal was to make the U.S. military “more lethal.” This goal included the development of AI-based advanced weaponry and human–machine teaming.  Clearly, General Mattis was attempting to provide U.S. forces the technological edge beyond the battlefield—in this case, think battlespace.


However, opposition groups like the Campaign to Stop Killer Robots, which holds a conflicting understanding of the evolving world order, have been gaining public support by insisting upon an arms control ban on autonomous weapon technologies (arms control only works if everyone plays by the rules). A January 2019 poll they sponsored reported that 52 percent of Americans opposed the idea of AI-driven autonomous armed weapons systems. Whether those polled possessed an accurate technical understanding of AI, or a skewed Hollywood fictional version (or some of both), was undetermined by the poll. Hollywood movie scenarios about AI are usually scary, so the trickle-down reasoning goes: AI is scary, and scary is bad. Therefore, AI must be bad.

In the Mattis context, maintaining the warfighting technical advantage over our competitors, peers and potential enemies is all about winning in new-generation warfare. We should take whatever preparedness measures are necessary to ensure we win, i.e., a more lethal military. And that translates simply to the development of AI-based advanced weaponry and human–machine teaming.

Where We Stand

Where are we on the AI warbot developmental scale compared to our potential enemies? Russia and China are investing heavily in AI development across a broad base of applications, and they are showing credible results in numerous areas, from hypersonic weapons to their space programs. A 2017 report released by China’s State Council set China on course to become the global AI leader by 2030. China’s AI developmental strategy includes broad applications extending through its domestic industry, with a target worth $150 billion annually. Some high-ranking members of the U.S. House and Senate have publicly stated concerns that we are falling behind. While that claim may be arguable, the bottom-line question is, how long will it take us to develop and field formidable AI-driven warbot teams that can win in new-generation warfare? When it comes to winning, it is not about keeping up with our competitors; it’s about doing what it takes to stay ahead of them.

The Pentagon has also recently published a new artificial intelligence strategy that reveals the U.S. military’s shift away from “heavy-metal hardware” like ships, tanks and planes toward a world where AI makes the difference between winning and losing. In concert with this strategy, a Joint Artificial Intelligence Center, or JAIC, has been established. The JAIC will work with the concerned military departments, the uniformed services, and government and non-government entities to leverage enterprise cloud adoption and shepherd the development and execution of new AI mission initiatives. “The JAIC will work closely with individual components [of the Defense Department] to help identify, shape and accelerate their component-specific AI deployments, called ‘Component Mission Initiatives’ or ‘CMIs.’” Remember the term CMI because you’ll hear it a lot in the coming years.

In addition to the JAIC, the strategy names two additional organizations residing within the DoD. One is the Defense Advanced Research Projects Agency, or DARPA, which operates under the 6.1 (basic research) budget function. DARPA’s claim to fame is pursuing fundamental technology breakthroughs by conducting high-risk, high-reward research (the 6.1 function, by definition) aimed at revolutionizing the future. DARPA then hands its successes off to other agencies for further specific DoD development and/or other applications.

The other organization mentioned is the Defense Innovation Unit, or DIU (previously abbreviated DIUx, with the “x” standing for experimental). DIU consists of a small, lean and technically mean staff whose purpose is to bring Silicon Valley innovation to the armed forces. At only three years old, DIU has enjoyed genuine success and demonstrated its value-added strategies.

While U.S. Special Operations Command (SOCOM) is not specifically named, the strategy suggests that any technological holes stemming from the developmental shift toward AI-empowered human–machine teaming will be filled by SOCOM. But why SOCOM?

The answer is simple. SOCOM possesses a unique, tried-and-proven combination of attributes: a record of meeting urgent operational needs, a relative lack of bureaucracy, special acquisition authorities and an institutional culture less risk-averse than the mainstream military’s. Secondly, SOCOM’s Science and Technology (S&T) department is superbly proficient at “adaptive engineering.” This means it takes off-the-shelf technology and combines it with other cutting-edge technologies to create wholly new capabilities that none of the contributing technologies individually possesses. SOCOM has a remarkable track record of achieving capabilities through this type of cost-effective rapid development. SOCOM is no doubt already playing a key role in developing AI-based advanced weaponry and human–machine teaming for conducting special operations in high-threat and human-inhospitable (including contaminated and outer-space) environments.

Critics of SOCOM’s S&T capabilities being used in this manner say SOCOM’s success record is based on “small stuff”: up-gunning aircraft, applying stealth and mechanical modifications to things like boats and planes, creating specialized off-road vehicles and mini-drones, even sophisticated cyber operations aimed at locating, identifying and neutralizing specific targets. While these are small things compared to big things like the Ballistic Missile Defense Program or the Air Mobility Program, small things count for a great deal when you’re operating in the ambiguous “grey zone” of proxy war, where direct-action surgical strikes and deniable cyberattacks populate the multi-domain warfare areas that SOCOM calls home. In this environment, it makes sense that SOCOM’s S&T nimbleness can be relied upon to fill any gaps in AI-empowered human–machine teaming that the mainstream military can’t.

The Human Element

For all that is good, bad or ugly about the development of human–machine teaming, it is the human element that must remain central to the partnership and retain ultimate control—the AI failsafe, if you will. Hollywood’s fictional human–machine hybrid, or cybernetic human, may not be too distant from reality. Regardless, as part of the human-centered adoption of human–machine teaming, a level of trust will need to develop commensurate with the technology. Even between humans, however, trust is neither automatic nor unconditional, nor should it be.

At the most basic level, human civilization is based on trust, and language is how it is communicated. Trust is especially paramount in combat, where seconds determine life and death actions. As battlefield warbot presence becomes increasingly commonplace, so must its ability to communicate with its human counterparts.


In recognition of the necessity for warbots to quickly gain their human counterparts’ trust, DARPA has launched an ambitious program to accomplish precisely that.

Officially named “Competency-Aware Machine Learning,” the program aims to develop machine-learning systems that continuously assess their own performance in time-critical, dynamic situations and communicate that information to human team members in an easily understood format. Further, just as humans learn to anticipate each other’s behavior in particular events, so must AI “learn” to anticipate individual human behavior in particular circumstances, and vice versa.

Achieving this seemingly simple goal requires a suite of sophisticated sensors and the ability for the AI to assess its own situational awareness. This real-time data stream must simultaneously be converted into language and actions that can be anticipated (and trusted) by humans, in human terms. In conjunction, the machine-learning program must also master object recognition, navigation, action planning and decision-making, and each of these elements must have adequate programmed limitations to ensure both control and human understandability.
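The core idea, a machine that reports its own competency in plain language, can be sketched in a few lines. The thresholds, phrasing and function name below are illustrative assumptions, not part of DARPA’s actual program design:

```python
# Minimal sketch of competency-aware reporting: a machine translates its own
# raw confidence score into language a human teammate can act on.
# The 0.90/0.60 thresholds and message wording are hypothetical choices.

def report_competency(label: str, confidence: float) -> str:
    """Convert a model's confidence in a classification into a
    human-readable statement, with an explicit abstention band."""
    if confidence >= 0.90:
        return f"High confidence: target classified as {label}."
    if confidence >= 0.60:
        return f"Moderate confidence in {label}; recommend human confirmation."
    return f"Low confidence ({confidence:.0%}); deferring to human operator."

# Example: a detector output with 72 percent confidence
print(report_competency("friendly vehicle", 0.72))
```

The design point worth noticing is the explicit abstention band: below a stated confidence, the machine defers rather than acts. That kind of predictable, self-announced limitation is precisely the behavior a human teammate can learn to anticipate and trust.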


In the near future, the U.S. military will rapidly transition to AI-based human–machine teaming, and that will lead the way for similar development throughout the U.S. defense industry and its civil-sector subsets as a whole. The requirements of future warfare are already driving much of today’s human–machine technology development, and those advances will be applied across mankind’s every venture, from farming to medicine to manufacturing to space exploration. Human–machine teaming in everyday life will become as commonplace in the coming decade as PCs, self-checkout, online shopping and smartphones are today. You will either choose to be a team member and embrace AI, or you will be marginalized by default. A non-decision is still a decision.

Warbots will come in all forms and all sizes depending upon their purpose. Some may resemble something out of a science fiction movie; others may appear life-like.

Credit: Boston Dynamics


Launched from a B-52’s wing pylon, the U.S. Air Force’s experimental X-51A Waverider is an unmanned, autonomous, scramjet-powered hypersonic (five times the speed of sound, or faster) technology demonstrator. Its nearly wingless, 25-foot-long body and shark nose are aerodynamically designed to ride its own shockwave. The X-51A is envisioned to carry a payload array of capabilities including reconnaissance and surveillance, electronic warfare, cyber countermeasures and even the launching of orbital devices like cube satellites.

Credit: Boeing


The Echo Voyager technology demonstrator (shown here) preceded Orca. Boeing was recently awarded a $43 million contract to build four Extra-Large Unmanned Underwater Vehicles (XLUUVs) for the U.S. Navy. These giant drone subs, nicknamed Orcas, will undertake long-range autonomous missions that include intelligence collection, electronic warfare, mine laying, countermining, torpedoing surface ships and submarines and clandestine support of special operations missions. Measuring 51×8.5×8.5 feet, with a 50-ton displacement, this unmanned, autonomous diesel-electric submarine can be launched and recovered from a pier. Of course, an amphibious ship’s well deck would likely work too, as might a top-deck carry on a submarine (attached behind the sail), similar to the SEALs’ Dry Deck Shelter (DDS). If Orca possesses the capabilities of its prototype predecessor, the Echo Voyager, it will dive to 11,000 feet with a range of 6,500 nautical miles and run stealthily at a submerged speed of 8 knots.

Credit: Boeing


The U.S. Navy’s Sea Hunter represents an entirely new class of unmanned, autonomously operated ocean-going vessel. Developed under the Defense Advanced Research Projects Agency (DARPA)’s Anti-Submarine Warfare Continuous Trail Unmanned Vessel (ACTUV) program, in conjunction with the Office of Naval Research (ONR), it carries several modular “plug-and-play” payloads that provide anti-submarine, countermining, reconnaissance and surveillance (R&S) and electronic warfare capabilities. Measuring 130 feet long, with a range of several thousand miles, the autonomous vessel can sustain continuous operations at sea for months without care and feeding.

Credit: U.S. Navy photo by John F. Williams/Released


Using autonomous guidance and terrain navigation, Boston Dynamics’ LS3 “pack mule” can follow its human foot-soldier leader or travel on its own to a designated location using onboard terrain sensing, obstacle avoidance and GPS. It responds to basic voice commands like “sit,” “stay” and “follow.” LS3 carries 182 kg of gear and enough fuel for a 32-km mission lasting 24 hours. Its negatives are that it is noisy and vulnerable to small arms fire.

Credit: Boston Dynamics
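The LS3’s small voice-command vocabulary amounts to a simple command-to-mode dispatcher, which can be sketched as follows. Only the command words “sit,” “stay” and “follow” come from the description above; the mode names and handler function are hypothetical illustrations, not Boston Dynamics’ actual control software:

```python
# Illustrative dispatcher mapping recognized voice commands to robot modes.
# Unknown or misheard words leave the current mode unchanged, a conservative
# default for a machine operating alongside humans.

from enum import Enum, auto

class Mode(Enum):
    IDLE = auto()
    SITTING = auto()     # "sit": settle in place
    HOLDING = auto()     # "stay": hold position
    FOLLOWING = auto()   # "follow": track the human leader

COMMANDS = {
    "sit": Mode.SITTING,
    "stay": Mode.HOLDING,
    "follow": Mode.FOLLOWING,
}

def handle_command(current: Mode, spoken: str) -> Mode:
    """Return the new mode for a spoken command; ignore unrecognized input."""
    return COMMANDS.get(spoken.strip().lower(), current)

mode = handle_command(Mode.IDLE, "Follow")
print(mode)  # Mode.FOLLOWING
```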


Standing 6 feet tall and human-like in movement, Boston Dynamics’ Atlas exemplifies an autonomously controlled, advanced bipedal humanoid robot. Its autonomous whole-body mobile manipulation system coordinates human-like motions of the arms, torso and legs. Atlas uses multiple body and leg sensors for balance, and LIDAR and stereo-vision sensors in its head to avoid obstacles and map terrain for navigation. The advantage of bipedal locomotion (walking upright) is its compact footprint. Atlas is to humanoid robots as the Model T was to automobiles.

Credit: Boston Dynamics