
Experts advise lawmakers on how to overcome AI’s ethical challenges

AI experts told lawmakers that the U.S. is in a leadership position to develop ethical norms for AI, but could lose that advantage if it fails to act.

The prospect of expanding the use of artificial intelligence both inside and outside the U.S. government has lawmakers searching for ways to overcome the ethical and privacy challenges that come with the technology, as well as to compete with the rest of the world.

Representatives from technology trade groups and think tanks told members of the House Oversight and Government Reform IT Subcommittee at a Wednesday hearing that the U.S. is in a leadership position to develop ethical norms surrounding AI, but it’s an advantage easily lost if the country sleeps on it.

“We must develop a broad set of ethical norms governing the use of this technology, as I believe existing regulatory tools are and will be insufficient,” said Jack Clark, strategy and communications director at the AI research lab OpenAI. “The technology is simply developing far too quickly.”

The lawmakers and witnesses concurred that the government needs metrics to understand AI’s full capabilities and determine when it can be an advantage versus a threat. As a way to come up with those metrics, Clark promoted the idea of government-sponsored competitions to see the technology in action.


“It’s not clear yet whether [AI’s capabilities] favor the defender or the attacker, and this is why I think that hosting competitions, having government measure these capabilities as they develop, will give us a kind of early warning system,” he said. “You know, if there’s something really bad that’s about to happen as a consequence of an AI capability, I’d like to know about it and I’d like an organization or an agency to be telling us about that.”

Drawing on this duality of AI, Rep. Darrell Issa, R-Calif., asked whether the U.S. should place export controls on the technology to avoid having it weaponized against the country. When the U.S. saw itself as holding an advantage in satellite and nuclear technology, for example, it placed export controls on those technologies, Issa said.

Clark argued that such a practice would be futile for a concept as free-flowing and accessible as AI.

“It runs on consumer hardware, it’s embodied in software, it’s based on math that you can learn in high school. You can’t really regulate a lot of aspects of the fundamental AI development because it comes from technology which 17-year-olds are taught in every country in the world,” he said.

The subcommittee also wanted to know about AI’s implications for data privacy, as the technology is increasingly used by companies to amass marketable information about consumers.


AI tends to work better the more data it is fed, but that comes with a caveat, said Ben Buchanan, a fellow at Harvard University’s Belfer Center for Science and International Affairs.

“Much of that data that might — and I emphasize ‘might’ — be useful for future machine learning systems is intensely personal, revealing and appropriately private,” Buchanan said. “Too frequently the allure of gathering more data to feed a machine learning system distracts from the harms that collecting that data brings. There is a risk of breaches by hackers, of misuse by those who collect or store the data, and of secondary use, in which data is collected for one purpose and later re-appropriated for another.”

Facebook has recently been embroiled in a scandal in which the political data firm Cambridge Analytica was found to have improperly obtained a trove of data profiles on 87 million Facebook users for political purposes. This week, a similar firm, LocalBlox, was revealed to have left 48 million of its records exposed on an open server.

Ranking subcommittee member Robin Kelly, D-Ill., asked Buchanan whether companies that currently amass private consumer data for AI purposes have those risks in mind. Buchanan suggested there’s a ways to go.

“Certainly the number of breaches that we’ve seen in recent years, including very large data sets, such as Equifax, suggests to me that there’s a lot more work that needs to be done in general for cybersecurity and data protection,” Buchanan said. “Frankly, I think some companies want to have the data, want to aggregate the data and see the data, and that’s part of their business model. And that’s an incentive for those companies not to pursue these.”


The hearing was the final one in a three-part series the subcommittee held on AI. In the previous hearings, lawmakers heard from industry representatives and government agencies about how Congress can promote the development of AI and leverage it to tackle the government’s technological challenges. Chairman Will Hurd, R-Texas, said the subcommittee plans to release a report soon outlining lessons learned from the series on how the government can help advance AI in a beneficial way.
