There is plenty of legitimate concern about the privacy and rights implications of facial recognition technologies and what will happen when, by bias or by technological weakness, the algorithm fails. But what about when it works?
The Department of Homeland Security’s Customs and Border Protection agency put out a triumphant press release this week announcing that new biometric facial matching technology installed at Dulles International Airport caught its first “imposter.”
“On August 22, 2018, a 26-year-old man traveling from São Paulo, Brazil presented a French passport to the CBP officer conducting primary inspections,” at the Northern Virginia airport, the release reads. “The officer utilized CBP’s new facial comparison biometric technology which confirmed the man was not a match to the passport he presented.” Later, “a search revealed the man’s authentic Republic of Congo identification card concealed in his shoe.”
Using another person’s identity document to enter the U.S. is a violation of immigration law.
Dulles is one of 14 airports around the country piloting facial recognition technology as a way to speed up inspection of arriving international travelers. The pilot began on Aug. 20, and this is the first imposter caught under it.
“Facial recognition technology is an important step forward for CBP in protecting the United States from all types of threats,” Casey Durst, CBP’s Director of the Baltimore Field Office, said in a statement. The tech “virtually eliminates” the opportunity for travelers to falsely use another person’s identity, she added.
It’s an important early success story for a technology that, proponents argue, will eventually make airports safer and more efficient for travelers. But it remains to be seen whether it will assuage critics’ concerns.