Artificial intelligence and war – Mind control

By Matt Dallisson, 04/10/2019

THE CONTEST between China and America, the world’s two superpowers, has many dimensions, from skirmishes over steel quotas to squabbles over student visas. One of the most alarming and least understood is the race towards artificial-intelligence-enabled warfare. Both countries are investing large sums in militarised artificial intelligence (AI), from autonomous robots to software that gives generals rapid tactical advice in the heat of battle. China frets that America has an edge thanks to the breakthroughs of Western companies, such as their successes in sophisticated strategy games. America fears that China’s autocrats have free access to copious data and can enlist local tech firms into national service. Neither side wants to fall behind. As Jack Shanahan, a general who is the Pentagon’s point man for AI, put it last month, “What I don’t want to see is a future where our potential adversaries have a fully AI-enabled force and we do not.”

AI-enabled weapons may offer superhuman speed and precision. But they also have the potential to upset the balance of power. To gain a military advantage, armies will be tempted to allow such systems not only to recommend decisions but also to give orders. That could have worrying consequences. Able to think faster than humans, an AI-enabled command system might cue up missile strikes on aircraft carriers and airbases at a pace that leaves no time for diplomacy and in ways that are not fully understood by its operators. On top of that, AI systems can be hacked and tricked with manipulated data.
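How “manipulated data” can trick a model is concrete enough to sketch. Below is a minimal, self-contained Python illustration of a fast-gradient-sign-style perturbation against a toy logistic-regression classifier; the weights, input and epsilon are all invented for illustration, and real attacks target far larger models.

```python
# Minimal sketch of an adversarial-input attack (fast-gradient-sign style)
# against a toy logistic-regression classifier. All weights and data are
# invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=8)   # hypothetical trained weights
b = 0.1                  # hypothetical bias
x = rng.normal(size=8)   # a "clean" input the model classifies

def predict(x):
    """Probability the model assigns to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# Gradient of the class-1 log-loss with respect to the INPUT:
# for true label 1, d/dx[-log p(x)] = -(1 - p) * w
p = predict(x)
grad_x = -(1.0 - p) * w

# Nudge every feature a small step in the direction that most increases
# the loss; epsilon bounds how large (how visible) the change is.
epsilon = 0.5
x_adv = x + epsilon * np.sign(grad_x)

print(f"clean input:     p(class 1) = {predict(x):.3f}")
print(f"perturbed input: p(class 1) = {predict(x_adv):.3f}")
```

The point of the sketch is that the perturbation is computed from the model’s own gradients, which is why any system expecting adversaries must be hardened against inputs crafted this way.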

During the cold war, deterrence rested on the consensus that nuclear weapons, if used, would pose catastrophic risks to both sides. But the threat posed by AI is less lurid and less clear. It might aid surprise attacks or confound them, and the death toll could range from none to millions. Likewise, cold-war arms control rested on transparency, the ability to know with some confidence what the other side was up to. Unlike missile silos, software cannot be spied on from satellites. And whereas warheads can be inspected by enemies without reducing their potency, showing the outside world an algorithm could compromise its effectiveness. The incentive may be for both sides to mislead the other. “Adversaries’ ignorance of AI-developed configurations will become a strategic advantage,” suggests Henry Kissinger, who led America’s cold-war arms-control efforts with the Soviet Union.

That leaves the last control: safety. Nuclear arsenals involve complex systems in which the risk of accidents is high. Protocols and technologies were developed to guard against accidents and to ensure weapons could not be used without authorisation, such as fail-safe mechanisms that stop bombs detonating if they are dropped prematurely. More thinking is required on how analogous measures might apply to AI systems, particularly those entrusted with orchestrating military forces across a chaotic and foggy battlefield.
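By way of analogy only, here is a toy sketch (my construction, not any real military protocol) of the “no use without authorisation” principle in code: a recommendation from an AI system is executed only with explicit, timely human sign-off, and the default on refusal or timeout is inaction.

```python
# Toy illustration of a fail-safe authorisation gate: an AI-generated
# recommendation is never executed without explicit human approval
# received before a deadline, and the safe default is to do nothing.
# The scenario and names are hypothetical.
import time
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    confidence: float

def execute(action: str) -> None:
    print(f"executing: {action}")

def fail_safe_gate(rec: Recommendation, human_approved: bool,
                   issued_at: float, deadline_s: float) -> None:
    """Act only on timely, explicit human authorisation."""
    expired = (time.time() - issued_at) > deadline_s
    if expired or not human_approved:
        # Fail safe: decline rather than act on a stale or unapproved order.
        print(f"declined: {rec.action} "
              f"(approved={human_approved}, expired={expired})")
        return
    execute(rec.action)

rec = Recommendation(action="reposition surveillance drone", confidence=0.92)
fail_safe_gate(rec, human_approved=False, issued_at=time.time(), deadline_s=30.0)
fail_safe_gate(rec, human_approved=True, issued_at=time.time(), deadline_s=30.0)
```

The design choice the sketch encodes is the one nuclear fail-safes embody: when in doubt, the system does nothing, rather than something irreversible.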

The principles that these rules must embody are straightforward. AI will have to reflect human values, such as fairness, and be resilient to attempts to fool it. Crucially, to be safe, AI weapons will have to be as explainable as possible so that humans can understand how they take decisions. Many Western companies developing AI for commercial purposes, including self-driving cars and facial-recognition software, are already testing their systems to ensure that they exhibit some of these characteristics (a miniature example of such a check appears below). The stakes are higher in the military sphere, where deception is routine and the pace is frenzied. Amid a confrontation between the world’s two big powers, the temptation will be to cut corners for temporary advantage. So far there is little sign that the dangers have been taken seriously enough, although the Pentagon’s AI centre is hiring an ethicist. Leaving warfare to computers will make the world a more dangerous place.
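The explainability testing mentioned above can be sketched in miniature as an occlusion-style sensitivity check on a toy linear scorer. The feature names, weights and input below are hypothetical, and production systems use richer attribution methods, but the question asked is the same: which inputs does the decision lean on?

```python
# Minimal sketch of an occlusion-style explainability check on a toy
# linear scorer. Feature names, weights and input are all hypothetical.
import numpy as np

feature_names = ["speed", "heading", "altitude", "radar_cross_section"]
weights = np.array([0.8, -0.1, 0.3, 1.5])   # invented model weights

def score(x: np.ndarray) -> float:
    """A stand-in for a model's decision score."""
    return float(weights @ x)

x = np.array([0.9, 0.2, 0.5, 1.1])          # invented input
baseline = score(x)

# Zero out each feature in turn and record how much the score moves;
# large shifts flag the inputs the decision leans on most.
for i, name in enumerate(feature_names):
    occluded = x.copy()
    occluded[i] = 0.0
    delta = baseline - score(occluded)
    print(f"{name:>20}: contribution = {delta:+.2f}")
```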
