NIST releases expanded artificial intelligence risk management framework draft

The National Institute of Standards and Technology released an expanded second draft of its artificial intelligence risk management framework with more details on developing trustworthy and responsible AI systems, a spokesperson said Friday.

NIST consulted experts, held discussions and workshops, and solicited comments before clarifying AI Risk Management Framework core outcomes, adding an audience section tied to the Organisation for Economic Co-operation and Development AI system life cycle, and explaining how AI and traditional software risks differ within the guidance.

The OECD AI system life cycle is part of a classification framework developed by the intergovernmental organization to help policymakers and regulators assess opportunities and risks that different types of AI systems present.

The latest iteration follows the first draft, released in March for voluntary use, in which NIST recognized for the first time that a socio-technical approach to building and deploying AI systems is needed. NIST plans to officially publish AI RMF 1.0 in January 2023.

NIST’s second draft simplifies the risk section, which explains how agencies and other organizations can establish or reconfigure their risk thresholds, as well as the framework’s trustworthy AI characteristics and categories.

In addition to spelling out the AI RMF’s benefits, the latest draft invites evaluations of its own effectiveness and contributions of use cases, which would shed light on how risk is being managed in specific sectors or applications.

Along with the second draft, NIST released a new playbook that recommends actions framework users can take to ensure trustworthiness in AI systems’ design, development, deployment and use.

“We include guidance for two of the four functions in the AI RMF and are working on the other two functions,” a NIST spokesperson told FedScoop. “Stakeholder feedback on the early draft will help us as we complete the draft of this online resource.”

Public comments are due via email to AIframework@nist.gov by Sept. 29, 2022, or respondents can save their feedback for a third workshop Oct. 18 and 19, 2022.
