A US government study confirms most face recognition systems are racist



Almost 200 face recognition algorithms—a majority of the industry—had worse performance on non-white faces, according to a landmark study.

What they tested: The National Institute of Standards and Technology (NIST) tested every algorithm on two of the most common tasks for face recognition. The first, known as “one-to-one” matching, involves matching a photo of someone to another photo of them in a database. This is used to unlock smartphones or check passports, for example. The second, known as “one-to-many” searching, involves determining whether a photo of someone has any match in a database. This is often used by police departments to identify suspects in an investigation.
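
To make the two tasks concrete, the sketch below frames them as comparisons between face embedding vectors, which is how most modern systems work under the hood. This is an illustration only, not NIST's evaluation code; the similarity threshold and the gallery of identities are hypothetical placeholders.

```python
# Illustrative sketch only: this is not NIST's evaluation code. The embeddings,
# similarity threshold, and gallery used here are hypothetical placeholders.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity score between two face embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def verify(probe: np.ndarray, reference: np.ndarray, threshold: float = 0.6) -> bool:
    """One-to-one matching: does the probe photo match the reference photo?
    This is the task behind unlocking a phone or checking a passport."""
    return cosine_similarity(probe, reference) >= threshold


def search(probe: np.ndarray, gallery: dict, threshold: float = 0.6) -> dict:
    """One-to-many searching: find every identity in a database whose stored
    embedding is similar enough to the probe, as in a police suspect search."""
    scores = {name: cosine_similarity(probe, emb) for name, emb in gallery.items()}
    return {name: s for name, s in scores.items() if s >= threshold}
```

In both cases the threshold controls the trade-off between false positives and false negatives, which is where the disparities described below show up.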

The agency used four face datasets currently used in US government applications: mugshots of people living in the US; application photos from people applying for immigration benefits; application photos from people applying for visas; and photos of people as they crossed the border into the US. In total, the datasets included 18.27 million images of 8.49 million people.

What they found: NIST shared some high-level results from the study. The main ones were:

1. For one-to-one matching, most systems had a higher rate of false positive matches for Asian and African American faces than for Caucasian faces, sometimes by a factor of 10 or even 100. In other words, they were more likely to find a match when there wasn’t one (a minimal sketch of how such false match rates are tallied follows this list).

2. This pattern did not hold for face recognition algorithms developed in Asian countries, which showed very little difference in false positives between Asian and Caucasian faces.

3. Algorithms developed in the US were all consistently bad at matching Asian, African American, and Native American faces. Native Americans suffered the highest false positive rates.

4. For one-to-many matching, systems had the worst false positive rates for African American women, which puts them at the highest risk of being falsely accused of a crime.
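
To show what a higher false positive rate means in practice, here is a minimal sketch of how a false match rate could be tallied per demographic group. The record format, group labels, and numbers are made up for illustration and are not NIST's data or methodology.

```python
# Minimal illustrative sketch; the record format, group labels, and numbers
# below are made up and are not NIST's data or methodology.
from collections import defaultdict


def false_match_rates(comparisons):
    """comparisons: iterable of (group, same_person, matched) tuples, where
    same_person is ground truth and matched is the algorithm's decision.
    Returns the false positive (false match) rate per demographic group."""
    impostor_trials = defaultdict(int)  # pairs of photos of different people
    false_matches = defaultdict(int)    # ...that the algorithm matched anyway
    for group, same_person, matched in comparisons:
        if not same_person:
            impostor_trials[group] += 1
            if matched:
                false_matches[group] += 1
    return {g: false_matches[g] / impostor_trials[g] for g in impostor_trials}


# Toy numbers: a rate of 0.01 versus 0.001 is the kind of factor-of-10
# disparity the study describes.
example = ([("group_a", False, True)] + [("group_a", False, False)] * 999
           + [("group_b", False, True)] * 10 + [("group_b", False, False)] * 990)
print(false_match_rates(example))  # {'group_a': 0.001, 'group_b': 0.01}
```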

Why this matters: The use of face recognition systems has rapidly spread in society, including in law enforcement and border control. While several academic studies have previously shown popular commercial systems to be biased across race and gender, NIST’s study is the most comprehensive evaluation to date and confirms these earlier results. The findings call into question whether these systems should continue to be so widely used.

Next steps: It’s now up to policymakers to figure out the best way to regulate these technologies. NIST also urges face recognition developers to conduct more research into how these biases could be mitigated.

To have more stories like this delivered directly to your inbox, sign up for our Webby-nominated AI newsletter The Algorithm. It’s free.
