Opinion
Facial Recognition Tech Perpetuates Racial Bias. So Why Are We Still Using It?
Most iPhone users unlock their phones with a quick glance. Many of us have Ring video doorbells to see who’s outside when there’s a knock. We take for granted how Facebook knows every single person to tag in a posted photo.
While this use of facial recognition technology is seemingly convenient—and cool, like something in a science fiction movie—the industry is currently completely unregulated by the federal government.
In this ever-evolving technological world, it is time for both grassroots solutions and federal regulation. Or, at the very least, lawmakers ought to require transparency from the producers of the deeply problematic technology.
How Facial Recognition Fuels Racial Profiling
The most serious danger may be the technology’s use by law enforcement, which puts the lives of Black and Brown communities at risk. If we don’t act swiftly, we may be in for a real-life episode of Black Mirror, a sci-fi TV show depicting the consequences of a high-tech future, with communities of color at the center of the dystopia.
Here’s why: Black Americans are already more likely than White Americans to be arrested and locked up for minor crimes. As a result, Black people are overrepresented in mug shot data, which is used by facial recognition software to identify suspects accused of committing crimes.
This ultimately creates a feedback loop: racial profiling by police leads to the disproportionate arrest of people of color; facial recognition software, in turn, relies on arrest data (mug shots) born of that discrimination; and that data fuels still more racial discrimination through the surveillance of communities of color.
In a real-world example of the racist use of the technology, the city of Detroit enacted Project Green Light in 2016, installing cameras with facial recognition software to scoop up data from across the city and stream it directly to the police department. These PGL systems were disproportionately located in majority-Black areas, and reports show that the surveillance is linked to the criminalization of Black and Brown residents and subsequently the loss of public benefits and housing.
How the Technology Is Inherently Racist
Not only is the geographic placement of facial recognition technology by law enforcement blatantly racist; the software itself shows significant bias. A study by the Massachusetts Institute of Technology called “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification” found that the software consistently had the most inaccurate results for people who are female, ages 18–30, and Black. Specifically, facial recognition software performs worst on darker-skinned women, with error rates of more than 40%, far higher than for White males. This holds true across platforms, from IBM to Microsoft to Amazon, and has been corroborated by the federal government.
Another study by the National Institute of Standards and Technology found that the algorithms work best at recognizing middle-aged White men and don’t work as well on children, the elderly, people of color, or women. In fact, error rates tend to be highest for Black women, just as the MIT study found.
The Problematic History of Facial Recognition
The roots of facial recognition technology date back to the 1960s, when Woodrow Wilson Bledsoe began developing a system of measurements to classify photos of faces.
By 2001, law enforcement was using the technology on crowds entering the Super Bowl, comparing the faces of people who walked through the turnstiles to mug shots of known criminals.
In 2014, Facebook unveiled its DeepFace photo-tagging software, and by 2017, Apple had introduced the iPhone X, which used the technology to let people unlock their devices. According to the Georgetown Law Center on Privacy and Technology, half of all American adults are in a law enforcement face recognition network. So, if you’re sitting on a bus next to someone else, chances are one of you is in the system.
Over the past several years, major tech players like Amazon, IBM, and Microsoft have been selling their facial recognition software to law enforcement for mass surveillance. This unregulated and unchecked system has served to enhance discriminatory practices by law enforcement and further endanger the lives of communities of color.
How the Public Is Fighting Back
On the bright side, there has been serious backlash from privacy rights groups, the general public, universities, and some members of Congress against racial bias in the use of the technology. One creative protest consisted of political activists in London wearing asymmetric makeup in patterns designed to keep their faces from being matched to a database. The idea was developed by artist Adam Harvey, who coined the term “computer vision dazzle,” a modern take on the dazzle camouflage the Royal Navy used in World War I.
Responding to increasing public outrage, in June 2020, IBM, Microsoft, and Amazon said they would not sell their technology to law enforcement agencies for a year. The year is now up, and Amazon has extended its ban until further notice, but the fight is far from over.
Meanwhile, cities like San Francisco, Oakland, Boston, and Portland, Oregon, have gone further than the private sector and banned government use of facial recognition technology, with more cities and states sure to follow suit.
However, because there are currently no federal laws that regulate facial recognition technology, we are depending on piecemeal legislation in cities and states across the country—a flawed solution to a complicated problem.
If the federal government does not step in and officially ban the technology that is disproportionately impacting the lives of Black and Brown communities, at the very least it must require that big-tech companies be transparent about the stark, racist biases in their algorithms.
If not, the storylines on Black Mirror won’t stay fiction.
Annika Olson
is the Assistant Director of Policy Research for the Institute for Urban Policy Research and Analysis (IUPRA) at UT Austin. She received a dual master’s degree in Psychology and Public Policy from Georgetown University and her bachelor’s degree in Psychology from the Commonwealth Honors College at UMass Amherst. Annika previously served as an AmeriCorps member working with at-risk youth in rural New Mexico and Austin, Texas. She can be reached via email ([email protected]), Twitter, and LinkedIn.