AI beat a fighter pilot in a simulated dogfight — was it just ‘AI theater’?

An F-15E Strike Eagle painted in the heritage colors of its P-47 Thunderbolt predecessor takes off from Royal Air Force Lakenheath, England, Feb. 6. The 48th Fighter Wing officially unveiled the aircraft publicly during a ceremony on Jan. 31. (U.S. Air Force photo/Tech. Sgt. Matthew Plew)

A tournament of artificial intelligence systems pitted against an Air Force fighter pilot ended in a now-predictable outcome: Time after time, the AI agent easily shot down the pilot’s simulated plane.

Despite the dominant showing of the AI systems, experts are pretty evenly split on whether the tests are meaningful to the future of air combat or just for show.

The AlphaDogfight Trials competition, held by the Defense Advanced Research Projects Agency (DARPA) through its Air Combat Evolution (ACE) program, began by matching the algorithms of eight defense contractors against other AI pilots in Top Gun-like dogfights. The overall winner of the AI-on-AI dogfights was Heron Systems, an AI defense contractor with offices in Maryland and Virginia that used deep learning and machine learning to train its system.

Then, in the final stage of the competition, Heron’s AI fighter pilot faced off against a human F-16 pilot, who flew in a virtual reality flight simulator and attempted to escape the algorithm on his tail. The human pilot lost every dogfight, for a final score of 5-0.

The results fall in line with a series of high-profile tests that showcase machine learning’s rapid advancements and, almost always, end with AI winning out against humans — from chess to Jeopardy.

But they also come with caveats in real-world applications that experts have been quick to point out. For instance, Missy Cummings, director of the Humans and Autonomy Laboratory at Duke University and a former Navy pilot, told FedScoop the test was “totally AI theater.”

“I appreciate that the DOD wants to show the world that it is on the cutting edge of AI deployment, but this simply is not it,” Cummings said. She told FedScoop she suspects dogfighting was chosen for the test because it’s cool and also relatively easy to program, a fact admitted by DARPA itself.

Ian McCulloh, chief data scientist for Accenture Federal Services, disagreed that the test was mainly for show, calling the notion that it was AI theater an “oversimplification.” Having another example of a machine outperforming a human will motivate wider applications for AI, he said.

“It is going to spur a range of research that is going to benefit the world,” McCulloh said in an interview. He acknowledged that the application doesn’t prove that algorithms are ready to take over for humans in the cockpit, but pairing machines and humans together could have significant benefits, he said.

The goal of the test was to increase trust in AI systems by demonstrating them in an activity that is commonly understood, or at least commonly heard of, in the military.

“Regardless of whether the human or machine wins the final dogfight, the AlphaDogfight Trials is all about increasing trust in AI,” said Col. Dan “Animal” Javorsek, ACE program manager. “If the champion AI earns the respect of an F-16 pilot, we’ll have come one step closer to achieving effective human-machine teaming in air combat, which is the goal of the ACE program.”

Javorsek has previously talked about the need to hold AI systems and humans to similar standards and pair operators and engineers together to improve development. The AlphaDogfight Trials put engineers and operators in the same arena, achieving one of Javorsek’s goals of pairing the two communities in an AI-for-warfare application.

Cummings agreed on the importance of such work. “I always think doing this kind of developmental work is critical as it builds knowledge and capabilities, so it is important that the DOD and related companies keep challenging themselves in similar settings,” she said. “However, I would hope that the effort has much tougher tasks and tests ahead of them, something that is truly DARPA-hard and pushes the envelope.”

ACE program documents show that the next phase of environments DARPA wants to test includes strike escorts and suppression of enemy fire, operations more difficult than a dogfight.

Another critique from experts on the simulation was that it gave the AI system a full set of information about the environment, which would not be guaranteed in the real world.

Retired Lt. Gen. Jack Shanahan, the former leader of the DOD’s Joint AI Center, did not work on the project but applauded the effort to push forward with AI in warfare in the face of fast-developing AI programs in China.

“[T]he rationale for rapid movement is made even more compelling by China’s society-wide AI adoption plans,” Shanahan tweeted following the event. However, he cautioned against over-hyping the competition and urged greater state-to-state dialogue among the U.S., Russia and China on AI in war.
