
Getting AI to ‘do more for less money’ will take up-front investment, experts tell lawmakers

It's not just about funding research and development; it's also about training the right people, witnesses told a House subcommittee.

In order to leverage artificial intelligence and reap any cost-saving benefits, the government must invest more in the research and the people who can propel the technology, academics and private sector experts told lawmakers at a hearing Wednesday.

It’s not just a matter of making things better for federal agencies — it’s also about the global race for developing the technology, witnesses said at the hearing, which was the first of three in a series the House Oversight Subcommittee on IT is holding in the coming months on how the government can best promote, adopt and regulate AI.

“I’m most excited by the possibility of using AI in the government to defend our infrastructure and have better decision-making because of the analytics that AI can run,” said subcommittee Chairman Will Hurd, R-Texas. “In an environment of tightening resources, artificial intelligence can help us do more for less money and help to provide better citizen-facing services.”

The panelists were in agreement that the field needs more government funding.


“Much more than China,” said Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence.

It’s also about training people who know what to do with the technology, said Amir Khosrowshahi, vice president of Intel’s AI division.

“Federal funding for basic scientific research that flows through agencies like the National Science Foundation to universities and helps to both train graduate-level scientists and contribute to our scientific knowledge base on which industry can build is key,” Khosrowshahi said.

Other governments are indeed aggressively increasing such research funding, said Ian Buck, vice president and general manager of accelerated computing at NVIDIA, while U.S. federal funding has remained relatively flat.

“We should boost research funding through agencies like the NSF, [the National Institutes of Health] and [the Defense Advanced Research Projects Agency],” Buck said.


“Whatever you come up with, it’s 10 times that number,” Charles Isbell, senior associate dean of Georgia Tech’s College of Computing, told the lawmakers. Isbell also stressed the need to spur growth of education in AI. “Even if you spend all of that money, we are not going to be able to have enough teachers who are going to be able to reach enough 10th graders in the time that we’re going to need to develop the next generation workforce.”

The largest obstacles to implementing AI range from organizing the government’s open data to dealing with global adversaries, the witnesses said.

“We just spent a decade talking about big data. And as far as I can tell, we’ve largely just collected data and not really done much with it. You now have a tool that can take all that data you’ve collected and really make some meaningful insights,” Buck said. “I certainly would make sure not only that the federal government has an AI policy but also that it has a sister data policy to organize and make that data actionable and consumable by AI.”

Robin Kelly of Illinois, the subcommittee’s ranking Democrat, expressed concern about the potential for unintentional discrimination in the way AI processes information about people, especially when it comes to race.

“Research has found considerable flaws and biases can exist in the algorithms that support AI systems, calling into question the accuracy of such systems and their potential for the unequal treatment of some Americans,” she said.


Buck sought to allay Kelly’s concerns, saying such biases can be mitigated when the people behind an AI system account for them.

“There’s nothing biased about AI in itself as a technique. The bias comes from the data that is presented to it,” Buck responded. “A good data scientist never rests until they’ve looked at every angle to discover that bias.”
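
What that kind of audit looks like in practice varies, but one common check is to compare a model’s error rates across demographic groups. The sketch below is illustrative only: the labels, predictions and group attribute are all synthetic, with an error skew injected so the audit has something to find.

```python
# Illustrative bias audit: compare error rates across groups.
# All data here is synthetic; in practice y_true, y_pred and the
# group labels would come from a real model evaluation set.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
group = rng.choice(["A", "B"], size=n)     # hypothetical demographic attribute
y_true = rng.integers(0, 2, size=n)        # ground-truth labels
y_pred = y_true.copy()

# Inject an artificial error skew against group B so the audit is visible.
flip = (group == "B") & (rng.random(n) < 0.15)
y_pred[flip] = 1 - y_pred[flip]

for g in ("A", "B"):
    mask = group == g
    fpr = y_pred[mask & (y_true == 0)].mean()      # false positive rate
    fnr = 1 - y_pred[mask & (y_true == 1)].mean()  # false negative rate
    print(f"group {g}: FPR={fpr:.3f}  FNR={fnr:.3f}")
```

A gap between the two groups’ error rates is the kind of signal Buck’s “good data scientist” would be looking for before deploying a system.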

Another challenge, according to Isbell, is the need for long-lasting AI systems that are resilient against malicious actors looking to exploit or compromise them.

“We typically build AI systems that don’t worry about adversaries,” Isbell said. “We’re not spending a lot of energy thinking about what happens when we send these things into the wild to deploy them, and other people know that they’re out there and they’re changing their behavior in order to fool them.”
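
A concrete version of what Isbell describes is the adversarial example: a small, deliberate change to an input that flips a model’s decision. The sketch below is a minimal illustration with a made-up linear classifier; the fast-gradient-style perturbation is a standard textbook technique, not anything presented at the hearing.

```python
# Illustrative evasion attack on a toy logistic classifier (FGSM-style).
# The weights and input are invented for demonstration.
import numpy as np

w = np.array([1.0, -2.0, 0.5])  # hypothetical trained weights
b = 0.1
x = np.array([0.3, -0.4, 1.2])  # input the model classifies as class 1

def predict(x):
    return 1 / (1 + np.exp(-(w @ x + b)))  # probability of class 1

# For logistic loss, the gradient w.r.t. the input is (p - y) * w.
# Stepping by eps * sign(gradient) flips the score with a small change.
y = 1       # true label
eps = 0.8   # perturbation budget
grad_x = (predict(x) - y) * w
x_adv = x + eps * np.sign(grad_x)

print("clean score:      ", predict(x))      # ~0.86, correctly class 1
print("adversarial score:", predict(x_adv))  # ~0.27, now misclassified
```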

Despite that challenge, the experts said AI holds a lot of promise for cybersecurity. Rep. Gerry Connolly, D-Va., asked whether AI can detect cyberthreats before they manifest.


“There are many startups trying to apply AI to this problem. It’s a natural fit,” Buck said. “AI is, by nature, pattern matching. It can identify patterns and call out when things don’t match that pattern. Malware is exactly that way. Suspicious network traffic is that way.”
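
As a rough illustration of that framing, the sketch below, using synthetic stand-ins for per-flow traffic features, fits an anomaly detector to “normal” traffic and flags flows that break the learned pattern. The feature choices and thresholds are assumptions for demonstration, not a real intrusion-detection pipeline.

```python
# Illustrative anomaly detection on synthetic "network traffic" features
# (bytes transferred, connection duration). Not a real IDS.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
normal = rng.normal(loc=[500.0, 2.0], scale=[50.0, 0.3], size=(1000, 2))
suspicious = np.array([[5000.0, 0.1],   # huge burst, near-zero duration
                       [4800.0, 0.2]])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))   # -1 = doesn't match the learned pattern
print(model.predict(normal[:3]))   # mostly 1 = consistent with normal traffic
```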

The subcommittee plans to hold a hearing in March to ask government officials how they’re already adopting AI. A hearing in April will focus on how best to regulate the use of AI.
