DOD organizations plot implementation of ethical AI in new guidance

As the Department of Defense expands its use of artificial intelligence, two key innovation offices have developed guidance on implementing the ethical use of the technology, covering both the department and its contractors.

The DOD’s Joint AI Center (JAIC) plans to deliver finalized departmentwide implementation guidance on the ethical use of AI by Dec. 15 to the deputy secretary of defense, center spokeswoman Cmdr. Sarah Flaherty told FedScoop.

Meanwhile, the Defense Innovation Unit this week released detailed artificial intelligence ethics guidance for contractors that want to work with it, the first such guidelines in the DOD procurement ecosystem.

The JAIC’s overarching implementation guidance document was due in mid-October, per a memo signed by the deputy secretary of defense. Around the same time that the center missed the original deadline, the JAIC’s chief of responsible AI, Alka Patel, departed the organization. She has since been replaced in the interim by Sunmin Kim, who also serves as the JAIC’s policy chief.

Both frameworks come after the JAIC issued ethical AI principles for the department in February 2020. Those principles — that AI should be responsible, equitable, traceable, reliable and governable — form the basis of the guidance for both organizations.

DIU’s 33-page document was developed in coordination with the JAIC and provides step-by-step guidance on what DIU wants to see from contractors that develop AI. The guidance applies only to DIU acquisitions, which take the form of emerging-technology prototype contracts and make up a small fraction of the military’s procurement.

“The DIU [Responsible AI] Guidelines are intended to operationalize the high-level, department-wide DoD Ethical Principles in a detailed, but adaptable way on acquisition programs run by DIU,” a DIU spokesperson told FedScoop.

DIU’s guidelines break down ethical AI development into three phases: planning, development and deployment. Each contains expectations for what developers should do to ensure proper outcomes for AI. Many of the steps focus on implementing best practices around protecting data from attack or preventing biases.

“More data is not always better: training an algorithm on historical data may recreate historical biases around sex or race,” the document states.

Other steps listed in the document include continuous validation and functional testing. Jared Dunnmon, DIU’s AI tech director, said the guidance can be incorporated into statements of work as well as contractual milestones as means to ensure it is followed.

The guidance is clear that DIU’s risk-friendly mindset does not extend to the ethics of AI applications.

“Some systems that are proposed for national security use cases may have no route to responsible deployment—deciding not to pursue an AI capability should be an acceptable outcome of adhering to the RAI Guidelines,” the document states.

As the central coordinator of ethical AI across the department, the JAIC is promoting a “bottom-up execution,” similar to what DIU has done, Flaherty said.

It’s unclear if other units are working on different ways to “operationalize” the ethics principles, but the JAIC will remain the central coordinator of policy and implementation guidance for the department, said Flaherty.

“The JAIC’s role is to scale these different efforts and ensure they are coordinated under the [Responsible AI] Working Council and forthcoming DoD-wide Strategy & Implementation Plan,” she said.
