Pentagon adopts ethical principles for artificial intelligence

The final principles are largely unchanged from the Defense Innovation Board’s recommendations submitted to Secretary of Defense Mark Esper in October.
The Department of Defense has officially adopted principles for the ethical use of artificial intelligence, with a focus on ensuring the military retains full control over, and understanding of, how machines make decisions, the Pentagon announced Monday.

The final principles are largely unchanged from the Defense Innovation Board’s recommendations submitted to Secretary of Defense Mark Esper in October. The DOD made some tweaks to meet the department’s legal standards and changed a few “shoulds” to “wills,” Lt. Gen. Jack Shanahan, the director of the Joint AI Center, said at a press conference Monday.

“We believe the nation that successfully implements AI principles will lead in AI for many years,” Shanahan said.

The five principles are:

  • Responsible. DOD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.
  • Equitable. The Department will take deliberate steps to minimize unintended bias in AI capabilities.
  • Traceable. The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.
  • Reliable. The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life-cycles.
  • Governable. The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

Adopting AI has been one of Esper’s top technological priorities since before his confirmation. Formally adopting these principles helps the DOD move forward in fielding its own AI applications and gives it a solid framework for working with private industry, Shanahan said.

“The United States, together with our allies and partners, must accelerate the adoption of AI and lead in its national security applications to maintain our strategic position, prevail on future battlefields, and safeguard the rules-based international order,” Secretary Esper said in a press release.

The DOD already has layers of internal policies and laws it must abide by in developing AI for combat use, but these principles apply across the board to all AI applications and development. The principles will also shape how the department works with the private sector, which has at times balked at DOD initiatives such as Project Maven.

“I almost look at these as confidence-building measures,” Shanahan said.

The DOD plans to begin including non-enforceable language that reflects the new principles in contracts that involve AI.

DOD’s internal AI developments remain in the early stages, with the department’s Joint AI Center still struggling with basic foundations, according to a RAND report from late last year. The JAIC will lead the coordination of the principles across the department and its work with the private sector. The military services also have their own AI incubators and task forces, which will now have formal ethical principles to work with.

According to the Pentagon, the principles also align closely with the White House’s American AI Initiative, which President Donald Trump launched last February in an executive order.
