DOD must pay more attention to building people’s trust in AI, researchers say

The Pentagon plans to use AI to assist on the battlefield, not to replace humans entirely. But to successfully pair human and machine, the military needs to understand trust, CSET says.
U.S. Army paratroopers assigned to Bravo Company, 54th Brigade Engineer Battalion, 173rd Airborne Brigade prepare the Dragon Runner 10 robot for operation in Grafenwoehr Training Area during the 2019 Saber Junction exercise in Germany. (DOD / Spc. Ryan Lucas)

The Department of Defense sees artificial intelligence ultimately as a troop-helping technology, but so far it appears to lack research on how to build humans’ trust in the machines that might make it to the battlefield, a new report says.

Startlingly few research programs are dedicated to studying how to improve trust in AI-enabled machines, given the military's growing interest in such systems, according to the research from the Center for Security and Emerging Technology (CSET) at Georgetown University. The problem is that while the DOD focuses on building technology that can pair with humans, such as autonomous vehicles or decision-support algorithms, it is not gaining insight into how that pairing will actually work, the report says.

“There really is a consensus within military circles that trust is important to the relationship,” Margarita Konaev, the lead author on the project, told FedScoop in an interview. “It is something that we were expecting to see, but it really was not something that we found.”

The research combed through the available descriptions in DOD's vast science and technology portfolio, which includes work by agencies dedicated to emerging technology, like the Defense Advanced Research Projects Agency (DARPA), as well as offices within each military service, like the Army's AI Task Force. The portfolio also covers the Pentagon's Joint AI Center (JAIC), which serves as a fusion and fielding office for AI technology.

The study acknowledges it did not have a comprehensive view of the military's research on trust and AI, since much of that work could be classified. But it is highly unlikely that all research on the subject would be shielded from public view, Konaev said.

Finding the right amount of trust

Trust at either extreme can be a problem. The military wants personnel to trust AI-driven technology that is designed to provide an advantage, like an unmanned ground vehicle trained to follow a convoy of manned vehicles. But over-trust carries its own risks: if troops place too much faith in a system, they could effectively abdicate their position "in the loop" of a battlefield decision.

“If the person doesn’t trust the system that is providing recommendations, then we are losing a lot of money that went into developing these technologies,” Konaev said.

“Proper calibration” is critical, Konaev said, and to start to understand how to calibrate, the institution needs to devote more research dollars to the subject.

Building trust with ‘safety and security’

Trust also comes with a technical component. Investing in parts of AI development that encourage trust, like testing and explainability, will also help ensure the successful teaming of humans and machines. If a person can better understand how a machine arrived at a decision and know it was reliably tested, they will be more likely to trust it. Konaev said the data she and her team analyzed did include many references to the technical side of trust, but there was little consistency on the subject.

“We couldn’t conclude that that is a consistent theme through U.S. military research,” she said of research into parts of AI development like testing, explainability and reliability.

The technical angle is about certifying the "safety and security" of the technology, Konaev said. For example, if a database that DOD is using to train an algorithm is not secure and is potentially exposed to manipulation, the resulting algorithm is not likely to be trusted. While DOD leaders show an overall interest in guaranteeing the integrity of data, the department does not appear to make the topic a research priority, the CSET report said.

Both elements of trust, calibrating the human side and ensuring technical safety and security, will only become more critical as AI-enabled systems get closer to the battlefield. Many of the DOD's AI research projects started with low-risk, back-end systems, such as automating financial processes. But now the Pentagon's Joint AI Center has put the majority of its funding behind its "Joint Warfighting Mission Initiative," and the Army has renewed its push for manned-unmanned teamed vehicles.

“We are without a doubt talking about AI-enabled systems that are going to go into the field,” Konaev said.
