
Policy experts see missing pieces in agencies’ AI use case disclosures

The Biden administration’s release of AI use case inventories is “a step” in the right direction, but tech policy experts have suggestions for ways to improve the government’s database.
Senate Majority Leader Chuck Schumer, D-N.Y., hosts a bipartisan Artificial Intelligence Insight Forum at the U.S. Capitol in Washington, D.C., on Sept. 13, 2023. (Photo by Stefani Reynolds / AFP via Getty Images)

As the government seeks to regulate artificial intelligence, its work to track applications of the technology across federal agencies would benefit from improvements in formatting, transparency and information included, experts and researchers told FedScoop.

Understanding AI systems — which can be integrated into everything from health care software to public housing security systems — and how they’re used is critically important. Yet federal agencies’ AI use case inventories, which were required by a 2020 executive order and remain one of the primary government programs to facilitate information sharing about this technology, are imperfect, those policy experts tell FedScoop.

“Inventories are a step, right?” said Frank Torres, a civil rights technology fellow at the Leadership Conference on Civil and Human Rights. “We want to make sure that when inventories are done, that there’s some transparency there, that people can understand how the technology is being used and see the pathway for how it was decided that that particular technology was best for that particular purpose.” 

The inventories have been the subject of growing attention, especially in the wake of a 2022 Stanford report highlighting poor and patchwork compliance. FedScoop has also reported on myriad issues with the disclosures. Still, they could be an important part of the White House’s planned executive order and the Office of Management and Budget’s coming guidance on the technology. Notably, one of the initial intentions for inventories was to help inform policymaking, a former White House official who helped craft the executive order recently told FedScoop. 


Michael Kratsios, managing director at Scale AI and the White House’s chief technology officer under former President Donald Trump, pointed to the importance of previous AI legislation and executive orders during a Wednesday House hearing before the Oversight & Investigations and Research & Technology subcommittees of the Committee on Science, Space and Technology.

“I think the biggest challenge is that a lot of the requirements from the executive orders and from the legislation are important foundational pieces that future regulatory structures are built on top of,” Kratsios said in response to a question from Rep. Frank Lucas, R-Okla., about the unfulfilled aspects of those existing policies.

Without a clear understanding of potential use cases in agencies or assessments of where AI will impact the regulatory regime, it’s harder to proceed with crafting the regulation itself, Kratsios said.

The Biden administration recently released a consolidated list of the more than 700 AI use cases disclosed by federal agencies in their individual inventories that, for the first time, makes the information available in a single spreadsheet. Prior to that, agencies’ public inventories varied in terms of format, making them difficult to compare and analyze. But while the new consolidated inventory makes the information available in one place, it lacks standardized responses and omits some of the information that agencies previously reported.

Notably, the number of large, parent-level agencies with known AI use cases that have published an inventory has grown since the initial Stanford report, according to Christie Lawrence, a co-author of the report. Still, just 27 agencies are listed on AI.gov for these inventories, compared to the over 70 parent-level agencies that the researchers estimated were subject to this requirement. There are other issues with the inventories, too. 


“Some agency inventories do not include all the required — and instructive — elements detailed in the CIO guidance,” Lawrence said in an email to FedScoop. “Furthermore, the federal government shouldn’t lose the benefits from its work to-date — keeping inventories from prior years online can help the public, agencies, and researchers understand the trends of the federal government’s use of AI.”

Experts say there needs to be more clarity about how agencies develop and compile these use case disclosures. Guidance from the CIO Council states that federal agencies impacted by the executive order are supposed to exclude “sensitive use cases” that might jeopardize privacy, law enforcement, national security or “other protected interests.” At the same time, agencies are ultimately responsible for the accuracy of their inventories, an OMB spokesperson told FedScoop in August. 

There’s also concern, highlighted by the Stanford report authors as well as other researchers, that many agencies are excluding use cases from their inventories that they’ve publicly disclosed elsewhere. And while the Department of Transportation lists use cases that have been redacted from its public inventory, it’s not clear how many AI use cases, in total, most agencies have at their disposal.

“There’s a question about how agencies are deciding what gets included and what doesn’t, and especially when there’s other sources of information that suggests that the federal agency is using AI in a particular way,” said Elizabeth Laird, the Center for Democracy & Technology’s director of equity in civic technology. 

Another potential area of improvement is the amount of information provided about use cases. In advance of an upcoming executive order on AI, a collection of civil rights organizations wrote to the Biden administration in August urging the White House to expand AI inventories to include details gesturing to compliance with the Blueprint for an AI Bill of Rights, a nonbinding set of principles for the technology released by the Office of Science and Technology Policy in October 2022. 


The letter also suggested that each agency be required to submit an annual report documenting its progress in following those principles. 

“As the federal government increasingly thinks about AI procurement, including to fulfill requirements from the AI in Government Act, the inventories can help the federal government gain a better understanding of how to better acquire AI tools,” Lawrence added.

Relatedly, tech policy experts also want more information about vendors and contractors who might be brought in by the government to build or provide AI tools. While this information is included in some agencies’ individual inventories — and is required by the most updated guidance from the CIO Council — it is not included in the consolidated database recently published on AI.gov. 

“A contract ID or the company should make that [more] valuable,” said Ben Winters, senior counsel at the Electronic Privacy Information Center. “This should be able to be like a fiscal accountability tool, and as well as like an operational accountability tool.” 

Inventories would be better, Torres added, if they included more information about how an AI system was tested. Listing all collaborators on a project would also be helpful, noted Anna Blue, a researcher at the Responsible Artificial Intelligence Institute who has reviewed these inventories. 


Of course, broader transparency into government use of AI remains a challenge. Several experts have pointed out that records requests that members of the public can file under the Freedom of Information Act work well for documents, but don’t inherently lend themselves to accessing information about AI systems. 

“Another thing I would call out is the need for a process by which the public can solicit more information,” Blue said. “A solicitation or comment process isn’t part of the EO, but I think it would help create a cycle of helpful feedback between agencies and the public.”
