The Defense Advanced Research Projects Agency has issued a set of tools to help artificial intelligence researchers improve the security of their algorithms.
The Guaranteeing AI Robustness against Deception (GARD) program includes several evaluation tools for developers, including a platform called Armory that tests code against a range of known attacks. The tools are open for any researcher to use, and are intended to help prevent foreign powers from accessing databases and code that may be used in weapons.
“Other technical communities – like cryptography – have embraced transparency and found that if you are open to letting people take a run at things, the technology will improve,” said Bruce Draper, the program manager leading GARD, in a release.
As DOD moves closer to using AI in battle, its coders have been focusing on how to ensure its data and algorithms are secure from attack. Hackers who can alter training data, adjust the weights of a neural network or otherwise change an algorithm without being detected could wreak havoc on systems that rely on AI.
Research funded by the Army has identified ways in which subtle modifications to databases could dramatically alter an AI’s ability to perform accurately. It’s a problem other services and the Joint AI Center are also focused on.
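To see why small changes to training data matter, consider a minimal sketch (hypothetical numbers, not drawn from the Army research): relabeling just two training points shifts a simple nearest-centroid classifier enough that a test input it previously handled correctly is now misclassified.

```python
# Toy data-poisoning illustration: flipping two training labels
# moves a class centroid and changes a prediction.

def centroid(points):
    # mean of a list of 2-D points
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(2)]

def dist2(a, b):
    # squared Euclidean distance
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def predict(train, x):
    # nearest-centroid: assign x to the class whose mean is closest
    c0 = centroid([p for p, y in train if y == 0])
    c1 = centroid([p for p, y in train if y == 1])
    return 0 if dist2(x, c0) <= dist2(x, c1) else 1

clean = [([0, 0], 0), ([1, 0], 0), ([0, 1], 0),
         ([4, 4], 1), ([5, 4], 1), ([4, 5], 1)]

# Poison the training set: relabel two class-1 points as class 0
poisoned = [(p, 0) if p in ([4, 4], [5, 4]) else (p, y) for p, y in clean]

x_test = [3, 3]
print(predict(clean, x_test))     # 1 (correct)
print(predict(poisoned, x_test))  # 0 (flipped by the poisoned labels)
```

The attack here is crude, but it shows the mechanism: the model’s decision boundary is a function of its training data, so an attacker who can quietly edit that data can move the boundary without touching the model code itself.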
The GARD program includes tools to test against data poisoning, according to DARPA. One set of tools, the Adversarial Robustness Toolbox (ART), started as an academic project but has since been picked up by DARPA for further research.
“Often, researchers and developers believe something will work across a spectrum of attacks, only to realize it lacks robustness against even minor deviations,” the DARPA release states.
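The “minor deviations” problem can be sketched with a toy evasion attack (hypothetical model and numbers, not from the DARPA release): against a fixed linear classifier, a small input perturbation chosen in the direction that most hurts the model is enough to flip its prediction, even though the input barely changes.

```python
# Toy evasion illustration: a gradient-sign-style perturbation
# with a small budget flips a linear classifier's output.

def predict(w, x):
    # linear score: non-negative -> class +1, negative -> class -1
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s >= 0 else -1

def sign(v):
    return 1 if v >= 0 else -1

w = [1.0, -1.0]   # fixed model weights (assumed for illustration)
x = [0.6, 0.4]    # clean input, score 0.2 -> class +1
eps = 0.15        # perturbation budget per coordinate

# Push each coordinate against the weight's sign, the worst-case
# direction for a linear model
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]

print(predict(w, x))      # 1
print(predict(w, x_adv))  # -1
```

A defense that looks robust to random noise of the same magnitude can still fail against this kind of worst-case deviation, which is why GARD’s evaluation tools test against deliberately chosen attacks rather than average-case perturbations.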
Researchers from Two Six Technologies, IBM, MITRE, the University of Chicago, and Google Research are working on the GARD program.