Task force shares its preliminary vision for a National AI Research Resource

Lynne Parker speaks March 11, 2020, at the IT Modernization Summit presented by FedScoop. (Scoop News Group)

The task force charged with making recommendations for a National Artificial Intelligence Research Resource made its vision for such an entity available for public comment in an interim report released Wednesday.

A NAIRR should be a federated cyber-infrastructure ecosystem accessed via an integrated portal and run by a nongovernmental management entity, as well as operations and oversight advisory bodies, per the report.

“AI is transforming our world, and a growing resource divide between those who have access to the resources needed to pursue cutting-edge AI and those who don’t threatens our nation’s ability to cultivate a research community and workforce that reflects America’s rich diversity,” Lynne Parker, director of the National AI Initiative Office and task force co-chair, said on a press call. “It could also prevent us from harnessing AI in a manner that benefits all Americans.”

The National AI Initiative Act of 2020 directed the task force to develop recommendations and an implementation plan by December 2022. Obtaining public feedback on the interim report is the next step toward finalizing a NAIRR roadmap.

“The researchers that will be using the resource are the people that it’s very important for us to hear from,” Parker said. “This resource is intended for the research community, primarily academia.”

That’s especially true of people whom the NAIRR would enable to enter the AI research-and-development space, but also of startups that have received federal grants through the Small Business Innovation Research and Small Business Technology Transfer programs, Parker added.

Within its vision the task force recommends that multiple agencies receive funding to support NAIRR researchers and management, with their representatives cooperating on oversight.

Both government and the private sector should make computational systems, data and testbeds available through the NAIRR user access portal.

Those systems should include both on-premises and commercial computational services: conventional compute, computing clusters, and high-performance, cloud and edge computing.

The NAIRR should incentivize contribution of high-quality data for AI R&D from a network of trusted providers while establishing a value system, search and discovery, and an access model if sensitive and confidential data are ultimately permitted.

AI testbeds and test datasets should be cataloged and made accessible.

The task force further recommends that users be able to access software training through the portal, receive help desk and solutions consulting, and join communities of practice for peer-to-peer support.

A NAIRR should rely on a zero-trust architecture, routine monitoring and a dedicated staff of experts for security along with resource management standards devised by external advisory groups. Researchers should receive hands-on security training and have their access to NAIRR resources subject to use agreement, according to the report.

Whatever management entity the government selects for the NAIRR should first demonstrate that research will be approved, performed and reviewed with privacy, civil rights and civil liberties in mind, drawing from existing agency policies. An ethics review process should also be established for both resources and research.

“Once we enable the research, we want to monitor it and in a transparent way make sure that these things continue to be addressed as the research proceeds and in the research results that come out of it,” said Manish Parashar, director of the Office of Advanced Cyberinfrastructure at the National Science Foundation and task force co-chair. “So a big part of our thinking is transparency and oversight in the process.”

The task force also recommends users complete regular ethics training modules before accessing NAIRR resources, and that resources be allocated for the research of AI trustworthiness and the development of best practices for working responsibly with data and models.

Parashar said he couldn’t name current examples of nongovernmental management entities handling computing infrastructure, but precedents exist for large, shared instrumentation, such as the National Science Foundation’s (NSF) major facilities structure.

The White House Office of Science and Technology Policy and NSF will hold a public listening session on the task force’s findings on June 23, and people have until June 30 to respond to their request for information on the interim report.
