
NIST methodically releasing guidance on trustworthy AI

A terminology and taxonomy document expected out later this fall will begin a steady stream of reports helping agencies grasp the technical requirements needed to manage AI risk.

The government’s primary agency for technology standards plans to issue a series of foundational documents on trustworthy artificial intelligence in the coming months, after spending the summer reaching out to companies, researchers and other federal agencies about how to proceed.

The National Institute of Standards and Technology’s AI Program created a testbed for assessing AI vulnerabilities and plans to release its findings sometime in 2021, said Elham Tabassi, chief of staff at the agency’s Information Technology Laboratory.

The agency hopes to establish standards accepted by the international AI community but needs more time to understand the dangers of bias within data and algorithms, as well as how to measure it, Tabassi said.

“While we understand the urgency, we want to take time to make sure that we build the needed scientific foundations,” Tabassi said during an AI in Government event Thursday. “Otherwise developing standards too soon can hinder AI innovations that allow for evaluations and conformity assessment programs.”


Rushing the process wastes resources and hurts the public’s confidence in AI, Tabassi added.

NIST continues to identify technical requirements for trustworthy AI worth measuring. So far those fall into the categories of accuracy, security, robustness, explainability, objectivity and reliability.

A final draft of the NIST Interagency or Internal Report (NISTIR) 8269 on AI terminology and taxonomy is expected out in late fall, while NISTIR 8312 on AI explainability and interpretability was open for public comment until Oct. 15.

NIST held a Bias in AI workshop on Aug. 18 engaging scientists, engineers, psychologists and lawyers, and a review of the findings began this week. NIST hopes to publish the results before the end of 2020, or by early February at the latest, Tabassi said.

Still more guidance and workshops are in the works.


“My wish list, how I see this program succeeding, is that we build a resource center — I call it a metrologist’s guide to AI — that talks about everything that you need to consider,” Tabassi said.

Agencies need to know not only the technical requirements behind trustworthy AI but the tradeoffs associated with, say, improving explainability at the expense of accuracy, she added.

NISTIR 8269 will provide the AI community with a common language, but one word you won’t find is “ethical.” While discussions of ethical AI are “warranted,” the concept is inherently cultural and shaped by societal or corporate environments, Tabassi said.

NIST’s guidance is intended to let agencies decide the levels of trustworthiness and risk they’re comfortable with for each technical requirement in order to come up with the best solution for their respective missions.

“Every user or implementer can make the right choice for themselves,” Tabassi said.

Written by Dave Nyczepir

Dave Nyczepir is a technology reporter for FedScoop. He was previously the news editor for Route Fifty and, before that, the education reporter for The Desert Sun newspaper in Palm Springs, California. He covered the 2012 campaign cycle as a staff writer for Campaigns & Elections magazine and Maryland’s 2012 legislative session as the politics reporter for Capital News Service at the University of Maryland, College Park, where he earned his master’s in journalism.
