Study finds biases in Amazon Rekognition’s facial analysis tool

The main focus of the study is on whether publicly calling out biased algorithms actually leads companies to make improvements.

A recently released study by researchers at the MIT Media Lab tested Amazon Rekognition’s “facial analysis” tool on photos of men and women with a range of lighter and darker skin tones. The software misidentified dark-skinned women as men 31 percent of the time. Error rates were higher for women overall than for men, while the software had no trouble correctly identifying light-skinned men.

The study raises still more questions about race- and gender-based bias in the algorithms Amazon continues to pitch to federal agencies and police departments.

Facial analysis is a service that Amazon Rekognition offers, but it’s not the same as facial recognition. Per Rekognition’s FAQ page, “facial analysis is the process of detecting a face within an image and extracting relevant face attributes from it.” The software can, for example, estimate the gender and emotions of a detected face, as well as details like whether the person is wearing glasses or has facial hair.
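For concreteness, this is roughly what such an attribute-extraction call looks like through the AWS SDK for Python (boto3). The region, image path, and printed fields below are illustrative placeholders, sketched from Rekognition’s documented DetectFaces operation rather than from anything in the study:

```python
import boto3

# Placeholder region and image path; valid AWS credentials are assumed.
client = boto3.client("rekognition", region_name="us-east-1")

with open("face_photo.jpg", "rb") as f:
    image_bytes = f.read()

# Attributes=["ALL"] asks DetectFaces for the full set of face attributes,
# not just the default bounding box and pose information.
response = client.detect_faces(
    Image={"Bytes": image_bytes},
    Attributes=["ALL"],
)

for face in response["FaceDetails"]:
    # Each attribute carries its own confidence score.
    print("Gender:", face["Gender"]["Value"], face["Gender"]["Confidence"])
    print("Eyeglasses:", face["Eyeglasses"]["Value"])
    print("Beard:", face["Beard"]["Value"])
    print("Emotions:", [(e["Type"], round(e["Confidence"], 1)) for e in face["Emotions"]])
```

It is this gender attribute, returned with its own confidence score, that the MIT researchers found skewed badly for darker-skinned women.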

Facial recognition, by contrast, “is the process of identifying or verifying a person’s identity by searching for their face in a collection of faces.”

This distinction is the first of several objections Amazon raised to the study’s results. “It’s not possible to draw a conclusion on the accuracy of facial recognition for any use case — including law enforcement — based on results obtained using facial analysis,” Matt Wood, the general manager of artificial intelligence at Amazon Web Services, said in a statement emailed to FedScoop. He added that the study used an outdated version of Rekognition, and that the algorithms have since been updated and improved.

Wood said that a similar internal test of the facial recognition component yielded very different results. “Using an up-to-date version of Amazon Rekognition with similar data downloaded from parliamentary websites and the Megaface data set of 1 million images, we found exactly zero false positive matches with the recommended 99 percent confidence threshold,” he said.
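The “confidence threshold” Wood refers to corresponds to a parameter on Rekognition’s face-search API. As a rough sketch (the collection ID and probe image here are hypothetical), setting it to 99 through boto3 looks like this:

```python
import boto3

client = boto3.client("rekognition", region_name="us-east-1")

with open("probe_photo.jpg", "rb") as f:
    probe_bytes = f.read()

# SearchFacesByImage compares the largest face in the probe image against
# a previously indexed collection of faces. FaceMatchThreshold=99 discards
# any candidate whose similarity score falls below 99 percent, the
# threshold Wood cites as recommended.
response = client.search_faces_by_image(
    CollectionId="example-face-collection",  # hypothetical collection name
    Image={"Bytes": probe_bytes},
    FaceMatchThreshold=99,
    MaxFaces=5,
)

for match in response["FaceMatches"]:
    print(match["Face"]["FaceId"], match["Similarity"])
```

Raising this threshold trades recall for precision: a higher value produces fewer matches but fewer false positives, which is why the choice of threshold figures prominently in Amazon’s response.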

But this distinction is unlikely to assuage the concerns of civil rights and civil liberties advocates, who worry that there isn’t enough regulation in place to protect against a kind of compounding human and algorithmic bias.

“Even if the Amazon Rekognition services and products you are selling to police departments were completely flawless, the potential for abuse on historically marginalized communities would not be reduced,” MIT Media Lab researcher Joy Buolamwini wrote in a letter to Amazon in June.

Interestingly, the MIT study isn’t about bias in facial analysis algorithms per se, but about how companies respond to the publication of studies that show biased results. It shows how Microsoft, Face++ and IBM, all purveyors of software that Buolamwini tested in a previous study, responded to that study by releasing improved algorithms. In this context, Amazon Rekognition and Miami-based company Kairos represent a kind of control group.

The study finds that publicly discussing bias seems to be an effective mechanism to get companies to reduce that bias. “By highlighting the issue of classification performance disparities and amplifying public awareness, the study was able to motivate companies to prioritize the issue and yield significant improvements within 7 months,” it states.

The study also reflects a growing interest in auditing facial recognition and artificial intelligence algorithms. The National Institute of Standards and Technology recently released the results of such an audit of selected facial recognition algorithms, finding that the algorithms are getting better over time.

Wood expressed Amazon’s support for this line of study: “We continue to seek input and feedback to constantly improve this technology, and support the creation of third-party evaluations, data sets, and benchmarks,” he said. “Accuracy, bias, and the appropriate use of confidence levels are areas of focus for AWS, and we’re grateful to customers and academics who contribute to improving these technologies.”
