
Long before computers, man was dreaming of intelligent machines that could fight a war. H. G. Wells’ novel “The War of the Worlds” envisioned such machines, built by Martians, conquering humans.

Military men still dream of war machines driven by artificial intelligence (AI). However, the dream remains just that – a dream. Even as computer technology improves, scientists discover that the concept of intelligence is more complicated than imagined and that merging such technology with a lethal machine is a recipe for disaster.

In fact, the idea of what artificial intelligence is has evolved over the decades. The first step was defining a set of rules for computers: if it is raining, bring an umbrella. In a wartime setting, that could mean: if someone is wearing an enemy uniform, shoot him.

This type of rule-based AI is common in tax-preparation programs.
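A minimal sketch of what such first-wave, rule-based logic looks like is shown below; the rules, function names, and responses are purely illustrative and are not drawn from any real system.

```python
# Illustrative sketch of first-wave, rule-based "AI": a fixed set of
# hand-written if/then rules. All rules and names here are hypothetical.

def weather_rule(is_raining: bool) -> str:
    # Everyday example: one condition, one hard-coded response.
    return "bring an umbrella" if is_raining else "no umbrella needed"

def engagement_rule(wearing_enemy_uniform: bool) -> str:
    # Wartime analogue: the rule knows nothing beyond its single condition.
    return "engage" if wearing_enemy_uniform else "hold fire"

print(weather_rule(True))        # bring an umbrella
print(engagement_rule(False))    # hold fire
```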

The second step was replicating higher-order human thinking skills such as problem solving. This would be like a drone detecting a person and then weighing the evidence to determine whether that person is an enemy: no uniform, but located in enemy-occupied territory, a young male, and pointing a weapon at you. However, does the response change if it is a young female pointing the weapon?
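A hypothetical sketch of this kind of second-wave judgment might weigh several observed cues against a threshold; the features, weights, and cut-off below are invented for illustration and show how much rides on choices the programmer made in advance.

```python
# Hypothetical sketch of second-wave reasoning: combining several observed
# cues into one judgment. The features, weights, and threshold are invented
# for illustration; nothing here reflects a real targeting system.

def threat_score(has_uniform: bool, in_enemy_territory: bool,
                 pointing_weapon: bool, is_young_male: bool) -> float:
    score = 0.0
    score += 0.2 if has_uniform else 0.0
    score += 0.2 if in_enemy_territory else 0.0
    score += 0.5 if pointing_weapon else 0.0
    score += 0.1 if is_young_male else 0.0   # should this cue matter at all?
    return score

ENGAGE_THRESHOLD = 0.75   # arbitrary cut-off chosen by the programmer

# The article's scenario: no uniform, enemy territory, weapon raised.
print(threat_score(False, True, True, True) >= ENGAGE_THRESHOLD)    # True
print(threat_score(False, True, True, False) >= ENGAGE_THRESHOLD)   # False, same weapon
```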

The third wave will attempt to merge these technologies. However, much remains to be solved. True artificial intelligence requires merging machine learning, symbolic reasoning, statistical learning, search and planning, data, cloud infrastructure, and algorithms. Even with high order computing, scientists find the problems immense.

There is also the question: can a machine replicate the human calculation for waging war – whether in something as simple as a drone or as complex as a computer in a headquarters?

Probably not.

Here are some of the problems:

How do we develop the right algorithms and thinking processes for military AI?

The reality is that who we are shapes how we think and how we solve a problem. That, in turn, shapes the eventual answer and the likelihood that the answer is right.

One problem the Navy faced decades ago was developing an algorithm that could estimate the cost of warships before they were built. One approach took the engineering direction and postulated that the cost of a warship was the sum of the costs of its components – propulsion, radar, electronics, etc. The problem was that most Navy ships include new technology that has not fully matured during the planning phase; how do you determine the cost of a new-technology propulsion system that exists only on the drawing board?

Another algorithm took an economics path. Rather than determining the cost of each component in the warship, it used the principles of supply and demand – estimating cost by how much more efficient the new warship was than the current design.
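The contrast between the two approaches can be sketched roughly as follows; the functions and figures are hypothetical and illustrate only the difference in reasoning, not the Navy’s actual models.

```python
# Hypothetical sketch of the two costing approaches described above.
# The functions and numbers are invented for illustration only.

def engineering_cost(component_costs: dict) -> float:
    # Engineering approach: the ship costs the sum of its parts.
    # Components that only exist on the drawing board have no reliable cost.
    return sum(component_costs.values())

def economics_cost(current_ship_cost: float, efficiency_gain: float) -> float:
    # Economics approach: infer what the new design is worth from how much
    # more efficient it is than the design it replaces.
    return current_ship_cost * (1.0 + efficiency_gain)

components = {"hull": 400.0, "propulsion": 250.0, "radar": 150.0, "electronics": 200.0}
print(engineering_cost(components))   # 1000.0, but only as good as each estimate
print(economics_cost(900.0, 0.25))    # 1125.0, no component breakdown needed
```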

The economics algorithm was unpopular with the engineers at the Naval Systems Command. They did not understand the thinking behind that solution and opted for the engineering route, even though the economics algorithm produced more accurate estimates.

The author of the algorithms has an overwhelming impact on the AI. In military AI, how the machine thinks will depend on who designs it. Is the AI developer a software engineer without military experience, a military man with desert fighting experience against militants, a military man with experience in conventional war, a counterinsurgency specialist, or an anti-war activist?

AI also has problems adjusting to the differing behaviors of its opponent. An example can be found in DARPA’s efforts to develop driverless vehicles. Engineers quickly discovered that the way drivers behave in different parts of the United States affected how the computer had to react.

In Great Plains states like Minnesota, drivers are much more courteous and let other drivers merge easily on busy roads. In New York City, however, where drivers are more aggressive, they do not allow other drivers to merge in heavy traffic.

The result was that a vehicle programmed for Minnesota driving would be unable to operate in New York City, where a degree of aggressiveness is required.
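One way to picture the problem: regional driving culture ends up as a tuning parameter in the merge logic, as in the hypothetical sketch below (the thresholds and values are invented for illustration).

```python
# Hypothetical sketch of why regional driving culture becomes a tuning
# parameter. The thresholds and values are invented for illustration.

def should_merge(gap_seconds: float, aggressiveness: float) -> bool:
    # A more aggressive profile accepts a smaller gap before merging.
    required_gap = 4.0 - 3.0 * aggressiveness   # seconds
    return gap_seconds >= required_gap

minnesota = 0.2   # courteous traffic: large gaps are usually offered anyway
new_york = 0.9    # heavy traffic: waiting for a 3.4-second gap means never merging

print(should_merge(2.0, minnesota))   # False, the car waits indefinitely
print(should_merge(2.0, new_york))    # True, it accepts the tighter gap
```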

The same problem occurs in military AI. A Russian officer trained in conventional warfare with modern military equipment will react differently than a commander in another country whose soldiers are poorly trained and whose equipment is obsolete.

All warfare also requires flexibility. Can AI recognize a potential problem before it occurs and rapidly come up with another plan? This is what makes great generals stand out from their contemporaries.

One example was General Patton’s decision to rotate his army 90 degrees to drive into the German flank at the Battle of the Bulge, despite the logistical problems it presented. AI might very well have opted for the conventional solution, which the other Allied generals recommended but which would have taken longer. The result of Patton’s initiative was the quick relief of the town of Bastogne by Patton’s subordinate, Creighton Abrams (for whom the American M1 Abrams tank is named).

However, a daring AI program could be equally disastrous. In Operation Market Garden in 1944, British Field Marshal Montgomery tried a Patton-like strategy to outflank the German defenses. The result was a costly operation that failed in its goal of crossing the Rhine and outflanking the Siegfried Line.

This brings up another problem with military AI – some types of AI might be better than others in certain situations.

General Erwin Rommel was a master of desert warfare who ran rings around the British in North Africa even though he had a smaller army. Yet his strategy for defeating the Allies at the invasion of Normandy was criticized by many senior German generals (including Field Marshal von Rundstedt) and proved inadequate in the end.

AI, like many generals, also tends to focus on tangibles rather than intangibles. In May 1940, an AI would undoubtedly have looked at British and French tank quality and numbers and forecast that they would easily defeat the Germans. It would have discounted General Manstein’s plan to strike through the dense Ardennes Forest with obsolete tanks.

Would an American AI overestimate US weapons capability and underestimate the enemy’s?

Would “expertise” overrule the brilliance that military geniuses have? The operational commander in the two critical naval battles of WWII in the Pacific was Admiral Fletcher, a surface-fleet admiral with no real experience fighting an aircraft carrier battle. Yet Fletcher won both battles, and the US Navy was able to claim naval supremacy in the Pacific.

Would AI programmers pick Fletcher’s problem-solving processes over those of Admiral Halsey, who had experience in aircraft carrier operations and would have commanded the task forces had he not been ill? Probably not.

Since differing AI algorithms can come up with differing solutions, how would this conflict be resolved? How would a General Patton AI interface with an AI focused on military logistics? Would one of the AIs be “senior,” or would the system try to come up with a compromise – a sort of General Eisenhower AI?

Although the military speaks confidently about AI, it is no closer to a practical solution than it was 40 years ago.

“We’re in the very early days of a very long history of continued very rapid development in the AI field,” said William Scherlis, director of the Information Innovation Office at the Defense Advanced Research Projects Agency. He was speaking at a virtual panel discussion at the Defense One Genius Machines 2021 summit.

Artificial intelligence remains a mirage – just on the horizon, but out of reach. Whether the AI is linked to a smart weapon or to a strategic computer at the general’s side, the problems remain too great for anyone to rely upon it. The fate of nations and of innocent victims depends on it too much.

The art of war is just that – an art. Very few men have mastered it: Napoleon Bonaparte, the Duke of Wellington, George Patton, Erwin Rommel, Gustavus Adolphus, and Thomas “Stonewall” Jackson had the brilliance to win battles and wars. The idea that a lesser man can develop a military artificial intelligence to mimic them remains difficult to believe.
