But so far, with artificial intelligence still developing, the forerunners of those autonomous teammates have been hard to operate. Soldiers currently need to be “heads down, hands full,” operating clunky joysticks or working with other robotic hardware that doesn’t allow for seamless integration into operations.
So the Army is trying another idea for operating the existing technology: conversation. Researchers are developing a voice-controlled “dialog system” called Joint Understanding and Dialogue Interface (JUDI), which uses natural language processing to allow two-way, hands-free communication between a soldier and robotic technology. The system, once scaled, could be a critical part of realizing the military’s multi-domain operations concept by having robots receive and deliver integrated information about an operating environment in easy-to-understand formats.
“We want to just prove that robots can interpret language,” Matthew Marge, an Army Research Lab scientist working on the JUDI project, said in an interview.
At the core of JUDI is a machine learning-based natural language processing system that is paired with other forms of AI to derive intent and turn commands into action. That action, through more AI-enabled systems, can then be voiced back to the user’s headset, so a soldier does not have to look at a screen to know where the robot is.
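The loop described above (spoken command in, intent out, action executed, spoken status back) can be pictured with a minimal sketch. Everything here is hypothetical and invented for illustration; it is not JUDI's architecture or API, and simple keyword matching stands in for the machine-learning NLP model the article describes.

```python
# Hypothetical sketch of a voice-command loop: interpret a spoken
# command, map it to a robot action, and return a short status message
# to be voiced back to the soldier's headset. Names are invented.

def interpret(utterance: str) -> dict:
    """Very rough intent extraction: keyword matching stands in for
    a trained natural language processing model."""
    words = utterance.lower().split()
    if "move" in words or "go" in words:
        return {"intent": "navigate", "target": words[-1]}
    if "stop" in words:
        return {"intent": "halt"}
    return {"intent": "unknown"}

def execute(intent: dict) -> str:
    """Turn an intent into a (simulated) robot action and produce the
    spoken feedback relayed back to the user."""
    if intent["intent"] == "navigate":
        return f"Moving toward the {intent['target']}."
    if intent["intent"] == "halt":
        return "Stopping."
    return "Say again? I did not understand."

# Example exchange:
print(execute(interpret("Move to the building")))  # Moving toward the building.
print(execute(interpret("Stop")))                  # Stopping.
```

The point of the sketch is the round trip: the same pipeline that derives intent also generates the spoken confirmation, which is what keeps the interaction hands-free and eyes-free.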
“JUDI can also relay messages back from a robot,” Marge said.
It is trained on a data set of voice samples collected from a series of experiments designed to simulate how service members would talk in real-world scenarios. Marge said that collecting training data that is as close as possible to “natural” language is critically important for the system’s overall accuracy in the field.
“We shouldn’t make assumptions about how people might speak,” he said.
JUDI is being designed to run completely at the edge, with no need to access cloud-based data. But as the system grows more accurate, it could offer new capabilities supporting the Army’s multi-domain operations concept of warfare, Marge said. The idea is to simultaneously handle the complexity of air, land, sea, space and cyberspace.
A fully functional, scaled dialog system could support command centers, too. Robots that can sense their surroundings and turn them into coherent data will be critical for the Army, Marge said.
As the team at the research lab continues to test the system, it is measuring how many back-and-forths it takes to get a message across to the robot and have it turned into an executed maneuver. The research team is currently collaborating with the Army’s office for Artificial Intelligence for Maneuver and Mobility (AIMM) and the University of Southern California on data collection.
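The metric described above, counting exchanges until a command becomes an executed maneuver, can be sketched as a simple turn counter over a dialog transcript. This is an illustrative sketch only; the transcript format and function are invented, not the lab's actual instrumentation.

```python
# Hypothetical sketch: count how many soldier turns occur before a
# command first results in an executed maneuver. The transcript format
# (speaker, text, executed_flag) is invented for illustration.

def turns_to_execution(transcript):
    """Return the number of soldier turns spoken before the robot
    first executes a maneuver, or None if it never executes."""
    soldier_turns = 0
    for speaker, _text, executed in transcript:
        if speaker == "soldier":
            soldier_turns += 1
        if executed:
            return soldier_turns
    return None  # the command never became an executed action

transcript = [
    ("soldier", "Go to the doorway", False),
    ("robot",   "Which doorway?", False),
    ("soldier", "The one on the left", False),
    ("robot",   "Moving to the left doorway", True),
]
print(turns_to_execution(transcript))  # 2
```

Fewer turns per executed maneuver would indicate the system is getting a soldier's meaning across with less clarification, which is one plausible way to quantify the accuracy gains the researchers are after.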
“We are looking into ways robots can be taught things on the fly,” Marge said, “like [understanding] novel objects or novel actions.”