
DARPA is racing against time to develop a tool that can spot ‘deepfakes’

The program is midway through four years of funding.
A screenshot from BuzzFeed's video demonstrating deepfakes. (YouTube)

Back in April, BuzzFeed published a video of former President Barack Obama saying some things that Obama simply never said. With that video, “deepfakes” (the practice of using artificial intelligence to map one person’s face, or in this case mouth, onto someone else’s body) suddenly burst into the public consciousness.

BuzzFeed revealed that the process is fairly easy: the Obama video was created in about 56 hours using a free application called FakeApp. In an era rife with information manipulation, big and small, this is scary.

The good news, though, is that there are researchers out there looking at the other side of the equation too — trying to figure out how to use AI to spot these inauthentic images more quickly and effectively than a human can. And the Defense Advanced Research Projects Agency (DARPA) is leading the charge.

DARPA’s Media Forensics (MediFor) team is dedicated to building a tool that can spot deepfakes. The goal, the team’s webpage states, is “to level the digital imagery playing field, which currently favors the manipulator, by developing technologies for the automated assessment of the integrity of an image or video and integrating these in an end-to-end media forensics platform.”


The program kicked off in 2016 and is funded through 2020. David Doermann, the DARPA program manager who leads MediFor, told NBC News that after 2020, the team hopes to work with Silicon Valley tech companies to integrate the tool directly on their platforms.

It’s a race against time and technology, though. Each advance in the technology for spotting fake videos is mirrored by an advance in the technology that creates them.

For example, one simple method for spotting deepfakes, developed by Siwei Lyu, a professor at the University at Albany, State University of New York, is to look at how the face blinks. Because deepfake models are typically trained on still images, the resulting video often shows a face that doesn’t blink, or that blinks in an unnatural way.
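
The article doesn’t spell out Lyu’s implementation, but the general idea of blink-rate analysis is straightforward to sketch. The snippet below is a minimal illustration, not Lyu’s or MediFor’s actual detector: it uses dlib’s 68-point facial landmarks and the standard eye-aspect-ratio (EAR) measure to count blinks in a clip, then flags videos whose blink rate is implausibly low. The libraries, thresholds, and the pretrained landmark file named here are assumptions made for the sketch.

```python
# Illustrative blink-rate check for a video (assumed thresholds, not a production detector).
import sys
import cv2
import dlib
import numpy as np

EAR_THRESHOLD = 0.21      # eye treated as "closed" below this EAR (assumed value)
MIN_CLOSED_FRAMES = 2     # consecutive closed frames that count as one blink

detector = dlib.get_frontal_face_detector()
# Pretrained 68-point landmark model, downloadable from dlib.net
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def eye_aspect_ratio(pts):
    """EAR = (sum of vertical eye distances) / (2 * horizontal distance); drops toward zero when the eye closes."""
    a = np.linalg.norm(pts[1] - pts[5])
    b = np.linalg.norm(pts[2] - pts[4])
    c = np.linalg.norm(pts[0] - pts[3])
    return (a + b) / (2.0 * c)

def count_blinks(video_path):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back to 30 fps if metadata is missing
    blinks, closed_frames, frames = 0, 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector(gray)
        if not faces:
            continue
        shape = predictor(gray, faces[0])
        pts = np.array([[shape.part(i).x, shape.part(i).y] for i in range(68)])
        # Landmarks 36-41 and 42-47 are the left and right eyes in the 68-point scheme
        ear = (eye_aspect_ratio(pts[36:42]) + eye_aspect_ratio(pts[42:48])) / 2.0
        if ear < EAR_THRESHOLD:
            closed_frames += 1
        else:
            if closed_frames >= MIN_CLOSED_FRAMES:
                blinks += 1
            closed_frames = 0
    cap.release()
    minutes = frames / fps / 60.0
    return blinks, (blinks / minutes if minutes else 0.0)

if __name__ == "__main__":
    blinks, rate = count_blinks(sys.argv[1])
    print(f"{blinks} blinks detected ({rate:.1f} per minute)")
    if rate < 5:
        print("Unusually low blink rate -- possible deepfake.")
```

A real person blinks roughly 15 to 20 times per minute, so a talking-head clip with a near-zero blink rate stands out.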

But this weakness could be remedied simply if the video forger also trained the AI on images of people blinking, so this method won’t work forever. Lyu told the MIT Technology Review that his team has developed another, better method, one he’s keeping secret for now. “I’d rather hold off at least for a little bit,” he said. “We have a little advantage over the forgers right now, and we want to keep that advantage.”

MediFor is just one of DARPA’s many ongoing AI projects. The agency also has initiatives around “explainable AI” (the idea that, as artificial intelligence takes on more decision-making roles, it will need to be able to explain to a human user how it reached a given conclusion), as well as efforts to use big data and machine learning to understand group biases, and more.
