Federal agencies have seen some early success adopting artificial intelligence but still face major challenges as they look to apply the technology to business processes and service delivery — things that require explainability — U.S. CIO Suzette Kent said Thursday.
“The journey now is to the much more difficult things,” she said.
So far, most agencies have focused on automating simple manual workloads, taking baby steps in exploring AI. Others have used the technology to “leapfrog” into addressing some more “exciting” but complicated applications, Kent said at a Partnership for Public Service event. “There is not a single agency that’s not doing something.”
It’s in the middle of that adoption spectrum, however, that the federal government is “making a little bit slower progress” and will face the biggest challenges AI presents in the public sector.
The middle is “where we have to cross many silos,” Kent said. “It’s easy to automate a manual task. But when you significantly change a business process, you have to look at the human resources, you have to look at how we engage with customers, you have to look at how we collect data and what the regulatory environment looks like for how we’re collecting it and what we’re using or how we’re using it. We have to retrain a broader swath of employees. The work in the middle that really changes the business process and the service delivery is the tougher work.”
On top of that, there’s the difficulty of providing transparency and explainability when AI produces a solution. With a human at the other end, an agency must be able to show how it reached a decision using artificial intelligence, Kent said.
“As the use cases move from the manual tasks to that middle and the higher end — where we are closer to delivery of services to citizens and decisions about things that we do have more human impact — we have to have a mechanism for explaining and being transparent that is very different than what we have today,” she said.
Using AI to monitor for and identify fraud, for example, is “pretty explainable,” Kent said. “We know how that was supposed to go. But when we have to explain the construct of a complex decision with many different data sets, that changes the paradigm.”
Levels of communication
The need for explanation will require better communication skills among the technical teams working with AI in the federal government.
“How we might explain something to a group of scientists and what’s explainable in that population is very different from how we might explain why someone was chosen to receive a food supplement benefit,” Kent said. “That is a set of skills … beyond the technical, beyond the data disciplines, the next step in how we interact with citizens we’re serving.”
The Office of Management and Budget is working through this challenge and developing policy to “serve as guideposts for AI,” Kent said. That includes “controls and expected examination of responsibilities, both of data and what’s in the algorithms, so that we’re starting with safety and we’re starting with careful thought.”
Nevertheless, agencies are doing a “phenomenal job” with the “easy stuff” of AI, she said, urging them to continue on that path while exploring how to tackle the next, thornier middle piece. Teams using AI are also “much happier because the work they’re doing is a lot more interesting.”
“It’s driving more valuable work. And it’s actually showing us where we need to make more investments in skills,” Kent said. “In almost every agency, what you’ll hear them say is, ‘Yes we are starting with the manual, repetitive tasks.’ But those same individuals are now doing more value-added work.”
The Partnership for Public Service released a report Thursday on building trust and managing risk around the adoption and use of AI.