
Congress wants the Intelligence Community to weigh in on how to counter ‘deepfakes’

Among the lawmakers' questions: Has the technology already been used deceptively by a foreign entity? And what can Congress do?

Artificial intelligence can be used to deceptively edit photos or videos in a way that could spell trouble for public confidence in information, and Congress wants to know what’s being done to stop this.

A bipartisan group of representatives, including Reps. Adam Schiff, D-Calif., Stephanie Murphy, D-Fla., and Carlos Curbelo, R-Fla., sent a letter to Director of National Intelligence Dan Coats last week requesting a report on the “implications” of so-called “deepfake” technology.

“You have repeatedly raised the alarm about disinformation campaigns in our elections and other efforts to exacerbate political and social division in our society to weaken our nation,” the letter states. “We are deeply concerned that deep fake technology could soon be deployed by malicious foreign actors.” When used nefariously, the letter asserts, the technology could “undermine public trust in recorded images and videos as objective depictions of reality.”

The requested report, along with an unclassified version for public review, should include an overview of how foreign actors could use deepfake technology, any confirmed or suspected instances of foreign actors using the technology to date, information on countermeasures the government is taking or could take, and an assessment of the intelligence community’s responsibilities in this arena. The representatives also want Coats’ take on what Congress itself can do to rein in the use of deepfakes.


Deepfakes might seem scary, but there are researchers working to combat the use of this technology. The Defense Advanced Research Projects Agency’s (DARPA) Media Forensics (MediFor) team is working to develop a tool that can spot deepfakes even when the human eye can’t.

One way the team can spot a manipulated video, using a method developed by Siwei Lyu, a professor at the University at Albany, State University of New York, is to look at how the face blinks. Because deepfakes are trained on still images, the resulting video often shows a face that blinks unnaturally or doesn’t blink at all.
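To illustrate the idea, here is a minimal Python sketch of blink-rate checking. It is not Lyu’s detector or DARPA’s MediFor tool; it assumes dlib’s publicly available 68-point facial landmark model and OpenCV for video decoding, and the thresholds and file names are made up for demonstration. It simply flags clips whose faces blink implausibly rarely.

```python
# Illustrative sketch only -- not the MediFor tool or Prof. Lyu's method.
# Assumes dlib's 68-point landmark model and example thresholds/file names.
import cv2
import dlib
from scipy.spatial import distance as dist

LEFT_EYE = list(range(36, 42))    # landmark indices in the 68-point model
RIGHT_EYE = list(range(42, 48))
EAR_THRESHOLD = 0.21              # assumed: eye counts as "closed" below this
MIN_BLINKS_PER_MINUTE = 2         # assumed: real faces blink far more often

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def eye_aspect_ratio(pts):
    # EAR = (|p2-p6| + |p3-p5|) / (2*|p1-p4|); it drops sharply during a blink.
    a = dist.euclidean(pts[1], pts[5])
    b = dist.euclidean(pts[2], pts[4])
    c = dist.euclidean(pts[0], pts[3])
    return (a + b) / (2.0 * c)

def count_blinks(video_path):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    blinks, closed, frames = 0, False, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for face in detector(gray):
            shape = predictor(gray, face)
            pts = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
            ear = (eye_aspect_ratio([pts[i] for i in LEFT_EYE]) +
                   eye_aspect_ratio([pts[i] for i in RIGHT_EYE])) / 2.0
            if ear < EAR_THRESHOLD:
                closed = True
            elif closed:          # eye re-opened: count one blink
                blinks += 1
                closed = False
    minutes = frames / fps / 60.0
    return blinks, (blinks / minutes if minutes else 0.0)

blinks, rate = count_blinks("suspect_clip.mp4")
if rate < MIN_BLINKS_PER_MINUTE:
    print(f"Only {blinks} blinks ({rate:.1f}/min) -- unnaturally low blink rate")
else:
    print(f"{blinks} blinks ({rate:.1f}/min) -- blink rate looks plausible")
```

A blink-rate check like this is only one signal, and newer generation methods can fake plausible blinking, which is why production forensics tools combine many such cues.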

The challenge with counter-deepfake research, though, is that it’s a race against time — each advancement is an opportunity for the forger to get savvier as well.

Meanwhile, on Capitol Hill, the representatives request that Coats deliver his report “as soon as feasible,” or by Dec. 14 at the latest.
