Steve, August 26, 2020
Artificial (Un)intelligence and the U.S. Military

Originally posted at TomDispatch.

Just when you thought it couldn’t get any worse, the U.S. military, as
TomDispatch regular Michael Klare informs us today, has had a brilliant
idea – robot generals (!) – into which to sink yet more billions of our tax
dollars, even as disinvestment in our infrastructure, schools, health care,
and the like only continues in the age of Trump and in the midst of a grim
pandemic. Of course, given American generalship in the “forever wars” of the
twenty-first century, who doesn’t feel that almost anyone or anything could
have done better? Still, to turn the potential destruction of the planet
itself (via nuclear arms) over to computers? I mean, what could possibly
go wrong?

Well, actually, let Klare fill you in on just what could prove to be a Terminator
moment for humanity. ~ Tom


Robot Generals: Will They Make Better Decisions Than Humans – Or Worse?

By Michael T. Klare

With Covid-19 incapacitating startling numbers of US service members and
modern weapons proving increasingly lethal, the American military is relying
ever more frequently on intelligent robots to conduct hazardous combat
operations. Such devices, known in the military as “autonomous weapons
systems,” include robotic sentries, battlefield-surveillance drones, and
autonomous submarines. So far, in other words, robotic devices are merely
replacing standard weaponry on conventional battlefields. Now, however, in a
giant leap of faith, the Pentagon is seeking to take this process to an
entirely new level – by replacing not just ordinary soldiers and their
weapons, but potentially admirals and generals, with robotic systems.

Admittedly, those systems are still in the development stage, but the Pentagon
is now rushing their future deployment as a matter of national urgency. Every
component of a modern general staff – including battle planning, intelligence-gathering,
logistics, communications, and decision-making – is, according to the Pentagon’s
latest plans, to be turned over to complex arrangements of sensors, computers,
and software. All these will then be integrated into a “system of systems,”
now dubbed the Joint All-Domain Command-and-Control, or JADC2 (since acronyms
remain the essence of military life). Eventually, that amalgam of systems may
indeed assume most of the functions currently performed by American generals
and their senior staff officers.

The notion of using machines to make command-level decisions is not, of course,
an entirely new one. It has, in truth, been a long time coming. During the
Cold War, following the introduction of intercontinental ballistic missiles
(ICBMs) with extremely short flight times, both military strategists and science-fiction
writers began to imagine mechanical systems that would control such nuclear
weaponry in the event of human incapacity.

In Stanley Kubrick’s satiric 1964 movie Dr. Strangelove, for example, the
Soviet ambassador reveals that his country has installed a “doomsday
machine” capable of obliterating all human life, one that would detonate
automatically should the country come under attack by American nuclear
forces. Efforts by crazed anti-Soviet US Air Force officers to provoke a war
with Moscow then succeed in triggering that machine and so bring about human
annihilation. In reality,
fearing that they might experience a surprise attack of just this sort, the
Soviets later did install a semi-automatic retaliatory system they dubbed
“Perimeter,” designed to launch Soviet ICBMs in the event that sensors
detected nuclear explosions and all communications from Moscow had been silenced.
Some analysts believe that an upgraded version of Perimeter is still in operation,
leaving us in an all-too-real version of a Strangelovian world.

In yet another sci-fi version of such automated command systems, the 1983 film
WarGames, starring
Matthew Broderick as a teenage hacker, portrayed a supercomputer called the
War Operations Plan Response, or WOPR (pronounced “whopper”), installed at
North American Aerospace Defense Command (NORAD) headquarters in Colorado.
When the Broderick character hacks into it and starts
playing what he believes is a game called “World War III,” the computer
concludes an actual Soviet attack is underway and launches a nuclear retaliatory
response. Although fictitious, the movie accurately depicts many aspects of
the US nuclear command-control-and-communications (NC3) system, which was then
and still remains highly automated.

Such devices, both real and imagined, were relatively primitive by today’s
standards, being capable solely of determining that a nuclear attack was
under way and ordering a catastrophic response. Now, as a result of vast
improvements in artificial intelligence (AI) and machine learning, machines
can collect and assess massive amounts of sensor data, swiftly detect key
trends and patterns, and potentially issue orders to combat units as to
where to attack and when.

Time Compression and Human Fallibility

The substitution of intelligent machines for humans at senior command levels
is becoming essential, US strategists argue, because an exponential growth in
sensor information combined with the increasing speed of warfare is
making it nearly impossible for humans to keep track of crucial battlefield
developments. If future scenarios prove accurate, battles that once unfolded
over days or weeks could transpire in the space of hours, or even minutes, while
battlefield information will be pouring in as multitudinous data points, overwhelming
staff officers. Only advanced computers, it is claimed, could process so much
information and make informed combat decisions within the necessary timeframe.

Such time compression and the expansion of sensor data may apply to any form
of combat, but especially to the most terrifying of them all, nuclear war.
When ICBMs were the principal means of such combat, decision-makers had up
to 30 minutes between the time a missile was launched and the moment of
detonation in which to determine whether a potential attack was real or
merely a false satellite reading (as did sometimes occur during the Cold
War). Now, that may not sound like much time, but with the recent
introduction of hypersonic missiles, such assessment times could shrink to
as little as five minutes. Under such circumstances, it’s a lot to expect
even the most alert decision-makers to reach an informed judgment on the
nature of a potential attack. Hence the appeal (to some) of automated
decision-making systems.
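
To see where a five-minute figure might come from, consider a minimal
back-of-the-envelope sketch in Python. The Mach number and launch distance
below are illustrative assumptions, not figures from any published system
specification; the roughly 30-minute ICBM window reflects a long ballistic
arc rather than constant-speed flight, so the sketch covers only the
hypersonic case.

    SPEED_OF_SOUND_M_S = 343.0  # sea-level reference value, meters per second

    def decision_window_minutes(distance_km: float, mach: float) -> float:
        """Flight time, in minutes, to cover distance_km at a constant Mach."""
        speed_km_per_s = mach * SPEED_OF_SOUND_M_S / 1000.0
        return distance_km / speed_km_per_s / 60.0

    # A hypothetical Mach 10 weapon launched from roughly 1,000 km away:
    print(round(decision_window_minutes(1000.0, 10.0), 1))  # -> 4.9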

“Attack-time compression has placed America’s senior leadership
in a situation where the existing NC3 system may not act rapidly enough,”
military analysts Adam Lowther and Curtis McGiffin argued
at War on the Rocks, a security-oriented website. “Thus, it
may be necessary to develop a system based on artificial intelligence, with
predetermined response decisions, that detects, decides, and directs strategic
forces with such speed that the attack-time compression challenge does not
place the United States in an impossible position.”

This notion, that an artificial intelligence-powered device – in essence,
a more intelligent version of the doomsday machine or the WOPR – should be
empowered to assess enemy behavior and then, on the basis of “predetermined
response options,” decide humanity’s fate, has naturally produced some unease
in the community of military analysts (as it should for the rest of us as
well). Nevertheless, American strategists continue to argue that battlefield
assessment and decision-making – for both conventional and nuclear warfare
– should increasingly be delegated to machines.

“AI-powered intelligence systems may provide the ability to integrate
and sort through large troves of data from different sources and geographic
locations to identify patterns and highlight useful information,” the
Congressional Research Service noted
in a November 2019 summary of Pentagon thinking. “As the complexity
of AI systems matures,” it added, “AI algorithms may also be capable
of providing commanders with a menu of viable courses of action based on real-time
analysis of the battlespace, in turn enabling faster adaptation to complex
events.”

The key wording there is “a menu of viable courses of action based
on real-time analysis of the battlespace.” This might leave the impression
that human generals and admirals (not to speak of their commander-in-chief)
will still be making the ultimate life-and-death decisions for both their
own forces and the planet. Given such anticipated attack-time compression
in future high-intensity combat with China and/or Russia, however, humans
may no longer have the time or ability to analyze the battlespace themselves
and so will come to rely on AI algorithms for such assessments. As a result,
human commanders may simply find themselves endorsing decisions made by machines
– and so, in the end, become superfluous.

Creating Robot Generals

Despite whatever misgivings they may have about their future job security,
America’s top generals are moving swiftly to develop and deploy that
JADC2 automated command mechanism. Overseen by the Air Force, it’s proving
to be a computer-driven amalgam of devices for collecting real-time
intelligence on enemy forces
from vast numbers of sensor devices (satellites, ground radars, electronic
listening posts, and so on), processing that data into actionable combat information,
and providing precise attack instructions to every combat unit and weapons
system engaged in a conflict – whether belonging to the Army, Navy, Air Force,
Marine Corps, or the newly formed Space Force and Cyber Command.

What, exactly, the JADC2 will consist of is not widely known, partly because
many of its component systems are still shrouded in secrecy and partly because
much of the essential technology is still in the development stage. Delegated
responsibility for overseeing the project, the Air Force is working with
Lockheed Martin and other large defense contractors to design and develop
key elements of the system.

One such building block is the Advanced Battle Management System (ABMS), a
data-collection and distribution system intended to provide fighter pilots
with up-to-the-minute data on enemy positions and help guide their combat
moves. Another key component is the Army’s Integrated Air and Missile
Defense Battle Command System (IBCS), designed to connect radar systems to
anti-aircraft and missile-defense launchers and provide them with precise
firing instructions. Over time, the Air Force and its multiple contractors
will seek to integrate ABMS and IBCS into a giant network of systems
connecting every sensor, shooter, and commander in the country’s armed
forces – a military “internet of things,” as some have put it.

To test this concept and provide an example of how it might operate in the
future, the Army conducted a live-fire artillery exercise this August in
Germany using components (or facsimiles) of the future JADC2 system. In the
first stage of the test, satellite images of (presumed) Russian troop
positions were sent to an Army ground terminal, where an AI software program
called Prometheus combed through the data to select enemy targets. Next,
another AI program called SHOT computed the optimal match of available Army
weaponry to those intended targets and sent this information, along with
precise firing coordinates, to the Army’s Advanced Field Artillery Tactical
Data System (AFATDS) for immediate action, where human commanders could
choose to implement it or not. In the exercise, those human commanders had
the mental space to give the matter a moment’s thought; in a shooting war,
they might just leave everything to the machines, as the system’s designers
clearly intend them to do.
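
At its core, the matching step attributed to SHOT is a classic weapon-target
assignment problem. The sketch below is a deliberately toy Python
illustration of that general technique – a greedy matcher that pairs each
target with the nearest free battery, using made-up names and coordinates –
and bears no relation to the actual SHOT software, whose internals are not
public.

    import math
    from typing import Tuple

    # Toy, entirely hypothetical data: grid coordinates in kilometers.
    targets = {"T1": (10.0, 42.0), "T2": (3.0, 7.0)}
    weapons = {"battery_A": (0.0, 0.0), "battery_B": (12.0, 40.0),
               "battery_C": (5.0, 5.0)}

    def distance(a: Tuple[float, float], b: Tuple[float, float]) -> float:
        """Straight-line distance between two grid positions."""
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def greedy_assign(targets: dict, weapons: dict) -> dict:
        """Pair each target with the closest weapon not yet assigned."""
        available = dict(weapons)
        plan = {}
        for t_name, t_pos in targets.items():
            w_name = min(available, key=lambda w: distance(available[w], t_pos))
            plan[t_name] = w_name
            del available[w_name]
        return plan

    print(greedy_assign(targets, weapons))
    # -> {'T1': 'battery_B', 'T2': 'battery_C'}

Even this crude matcher produces a plausible-looking “plan”; a real planner
would also have to weigh range, ammunition, and desired effects, which is
exactly where hidden modeling assumptions can creep in.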

In the future, the Army is planning even more ambitious tests of this
evolving technology under an initiative called Project Convergence. From
what’s been said publicly about it, Convergence will undertake ever more
complex exercises involving satellites, Air Force fighters equipped with the
ABMS system, Army helicopters, drones, artillery pieces, and tactical
vehicles. Eventually, all of this will form the underlying “architecture” of
the JADC2, linking every military sensor system to every combat unit and
weapons system – leaving the generals with little to do but sit by and watch.

Why Robot Generals Could Get It Wrong

Given the complexity of modern warfare and the challenge of time compression
in future combat, the urge of American strategists to replace human commanders
with robotic ones is certainly understandable. Robot generals and admirals
might theoretically be able to process staggering amounts of information in
brief periods of time, while keeping track of both friendly and enemy forces
and devising optimal ways to counter enemy moves on a future battlefield.
But there are many good reasons to doubt the reliability of robot decision-makers
and the wisdom of using them in place of human officers.

To begin with, many of these technologies are still in their infancy, and
almost all are prone to malfunctions
that can neither be easily anticipated nor understood. And don’t forget
that even advanced algorithms can be fooled, or “spoofed,” by
skilled professionals.

In addition, unlike humans, AI-enabled decision-making systems will lack
an ability to assess intent or context. Does a sudden enemy troop deployment,
for example, indicate an imminent attack, a bluff, or just a normal rotation
of forces? Human analysts can use their understanding of the current political
moment and the actors involved to help guide their assessment of the situation.
Machines lack that ability and may assume the worst, initiating military action
that could have been avoided.

Such a problem will only be compounded by the “training” such
decision-making algorithms will undergo as they are adapted to military
situations. Just as facial-recognition software has proved to be tainted by
an over-reliance on images of white males in the training process – making
such systems less adept at recognizing, say, African-American women –
military decision-making algorithms are likely to be distorted by an
over-reliance on the combat-oriented scenarios selected by American military
professionals for training purposes. “Worst-case thinking” is a natural
inclination of such officers – after all, who wants to be caught unprepared
for a possible enemy surprise attack? – and such biases will undoubtedly
become part of the “menus of viable courses of action” provided by
decision-making robots.
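
To make that training-bias mechanism concrete, here is a deliberately
minimal Python sketch. Everything in it – the labels, the nine-to-one mix,
the “classifier” – is a hypothetical toy, not a depiction of any real
military system.

    from collections import Counter

    # Toy, entirely hypothetical training set: nine ambiguous troop movements
    # labeled "attack" by worst-case-minded planners, one labeled as routine.
    training_labels = ["attack"] * 9 + ["routine_rotation"]

    def predict(observation: str, labels: list) -> str:
        """A maximally naive classifier: ignore the observation entirely and
        return whichever label dominated the training data."""
        return Counter(labels).most_common(1)[0][0]

    print(predict("ambiguous troop deployment", training_labels))  # -> attack

The model “learns” nothing about the world here; it simply echoes the skew
of whoever assembled its training set, which is the heart of the problem
just described.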

Once integrated into decision-making algorithms, such biases could, in turn,
prove exceedingly dangerous in any future encounters between US and Russian
troops in Europe or American and Chinese forces in Asia. A clash of this sort
might, after all, arise at any time, thanks to some misunderstanding or local
incident that rapidly gains momentum – a sudden clash between US and Chinese
warships off Taiwan, for example, or between American and Russian patrols in
one of the Baltic states. Neither side may have intended to ignite a full-scale
conflict and leaders on both sides might normally move to negotiate a cease-fire.
But remember, these will no longer simply be human conflicts. In the wake of
such an incident, the JADC2 could detect some enemy move that it determines
poses an imminent risk to allied forces and so immediately launch an all-out
attack by American planes, missiles, and artillery, escalating the conflict
and foreclosing any chance of an early negotiated settlement.

Such prospects become truly frightening when what’s at stake is the onset
of nuclear war. It’s hard to imagine any conflict among the major powers
starting out as a nuclear war, but it’s far easier to envision a scenario
in which the great powers – after having become embroiled in a conventional
conflict – reach a point where one side or the other considers the use of atomic
arms to stave off defeat. American military doctrine, in fact, has always
held out the possibility of using so-called tactical nuclear weapons in
response to a massive Soviet (now Russian) assault in Europe. Russian military
doctrine, it is widely assumed,
incorporates similar options. Under such circumstances, a future JADC2 could
misinterpret enemy moves as signaling preparation for a nuclear launch and order
a pre-emptive strike by US nuclear forces, thereby igniting World War III.

War is a nasty, brutal activity and, given almost two decades of failed conflicts
that have gone under the label of “the war on terror,” causing
thousands of American casualties (both physical and mental), it’s easy
to understand why robot enthusiasts are so eager to see another kind of mentality
take over American war-making. As a start, they contend, especially in a
pandemic world, that it’s only humane to replace human soldiers on the
battlefield with robots and so diminish human casualties (at least among
combatants).
This claim does not, of course, address the argument
that robot soldiers and drone aircraft lack the ability to distinguish between
combatants and non-combatants on the battlefield and so cannot be trusted
to comply with the laws of war or international humanitarian law – which,
at least theoretically, protect civilians from unnecessary harm – and so
should be banned.

Fraught as all of that may be on future battlefields, replacing generals
and admirals with robots is another matter altogether. Not only do legal and
moral arguments arise with a vengeance, as the survival of major civilian
populations could be put at risk by computer-derived combat decisions, but
there’s no guarantee that American GIs would suffer fewer casualties
in the battles that ensued. Maybe it’s time, then, for Congress to ask
some tough questions about the advisability of automating combat decision-making
before this country pours billions of additional taxpayer dollars into an
enterprise that could, in fact, lead to the end of the world as we know it.
Maybe it’s time as well for the leaders of China, Russia, and this country
to limit or ban the deployment of hypersonic missiles and other weaponry that
will compress life-and-death decisions for humanity into just a few minutes,
thereby justifying the automation of such fateful judgments.

Michael T. Klare, a TomDispatch regular, is the five-college professor
emeritus of peace and world security studies at Hampshire College and a
senior visiting fellow at the Arms Control Association. He is the author of
15 books, the latest of which is All Hell Breaking Loose: The Pentagon’s
Perspective on Climate Change.


Copyright 2020 Michael T. Klare
