Excerpt from Machine See, Machine Do: How Technology Mirrors Bias in Our Criminal Justice System, Copyright © 2021 by Patrick K. Lin. Reprinted here with permission.


One of facial recognition’s first appearances on the U.S. public stage was at Super Bowl XXXV in 2001, where law enforcement officials scanned the faces of the crowd and compared them to criminal mugshots. That year also saw the first widespread police use of facial recognition, through a database operated by the Pinellas County Sheriff’s Office that is now one of the largest local databases in the country.

In 2011, facial recognition was used to confirm the identity of Osama bin Laden. In 2014, Edward Snowden released documents showing the extent to which the U.S. government was collecting images to build a federal facial recognition database. In 2017, President Donald Trump issued an executive order expediting the use of facial recognition at U.S. borders and ports of entry, including airports.

Over the course of two decades, facial recognition went from a novelty to an everyday staple. Thanks to improvements in computing power and advances in machine learning, particularly neural networks, it became a standard feature.

The Center on Privacy and Technology at Georgetown Law studied the widespread use of facial recognition, particularly in the law enforcement context. In 2016, it published a report titled “The Perpetual Line-up.” The key takeaway? One in two American adults is in a facial recognition database network that can be searched by police departments without a warrant. Already in 2016, law enforcement facial recognition affected 117 million American adults. The technology was also unregulated and, even today, many systems have not been tested for accuracy on different groups of people. What happened to Robert Williams in 2020 already tells us how misidentification can subject innocent people to police scrutiny and erroneous criminal charges.

Since this report, the facial recognition market has only grown bigger and perhaps even more opaque. Clare Garvie, one of the authors of “The Perpetual Line-up,” is a senior associate with the Center on Privacy & Technology at Georgetown Law whose research focuses on the use of facial recognition-derived evidence in criminal cases and on the ways activists, public defenders, and policymakers can keep the technology in check.

Garvie found that right around the time the report was published, many cities and police departments were interested in piloting different face surveillance programs. Parallel to the growing interest in its surveillance capabilities, facial recognition was also being used more and more as an investigative tool. When I asked Garvie about how law enforcement use of facial recognition technology has changed since the 2016 report, she said she noticed a new trend.

“Because of widespread public pressure, particularly condemning face surveillance, many face surveillance programs have pretty much disappeared from public agencies,” Garvie said. “I think law enforcement agencies would rather drop the novel biometric surveillance system in favor of retaining the investigative system.”

“Investigative” may not have the same Orwellian ring to it as “surveillance,” but there’s still plenty to be worried about. The infamous Clearview AI has been making headlines the last few years for developing a facial recognition app that goes further than anything ever built by the U.S. government or Silicon Valley tech giants. Clearview’s system relies on a database of more than three billion images scraped from Facebook, YouTube, Venmo, and millions of other websites. Users upload a picture of someone, and the app returns public photos of that person as well as links to where those photos appeared, typically social media profiles.

Clearview has been selling this app to law enforcement agencies at the state and federal level, including the FBI and Department of Homeland Security.


But what exactly is facial recognition? And how does it work?

First and foremost, facial recognition is a type of biometric identification. Biometrics are unique markers that identify or verify someone’s identity using their intrinsic physical or behavioral qualities. Fingerprints, for example, are probably the most well-known biometric, and law enforcement agencies have regularly used them to identify people for well over a century. Other biometrics that are becoming more and more common include DNA, iris scans, voiceprints, a person’s gait, and, of course, the face.

Facial recognition algorithms extract identifying features from the face. The peaks and valleys that make up these features are called “nodal points,” and the algorithm identifies and measures them to determine an individual’s identifying characteristics, such as the distance between the eyes, the width of the nose, the shape of the cheekbones, and the length of the jawline.
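As a rough illustration, here is a minimal Python sketch of that idea. The landmark names and pixel coordinates are invented for illustration, and modern systems learn their own features rather than relying on hand-picked distances, but the basic move of turning nodal points into a numeric signature is the same:

```python
from math import dist

# Hypothetical 2D landmark coordinates (in pixels) that a landmark
# detector might return for a single face image. All values invented.
landmarks = {
    "left_eye":   (120, 95),
    "right_eye":  (180, 96),
    "nose_left":  (135, 140),
    "nose_right": (165, 140),
    "jaw_left":   (95, 170),
    "jaw_right":  (205, 170),
}

# Turn raw nodal points into identifying measurements.
eye_distance = dist(landmarks["left_eye"], landmarks["right_eye"])
features = {
    "nose_width": dist(landmarks["nose_left"], landmarks["nose_right"]),
    "jaw_width":  dist(landmarks["jaw_left"], landmarks["jaw_right"]),
}

# Dividing by eye distance makes the signature robust to image scale:
# the same face photographed closer or farther yields similar ratios.
signature = {name: value / eye_distance for name, value in features.items()}
print(signature)  # e.g. {'nose_width': 0.49..., 'jaw_width': 1.83...}
```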

Many facial recognition systems define which features are the best indicators of similarity through machine learning. During this learning process, an algorithm designed for facial recognition is fed pairs of face images of the same person. By repeatedly comparing faces, the algorithm learns to pay more attention to the features that are the most reliable signals that two images contain the same person.
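One common way this pairwise training is implemented is with a contrastive loss, sketched below. This is a standard technique from the research literature, not necessarily what any particular vendor uses, and the names and embedding vectors are toy values standing in for a real network’s output:

```python
import numpy as np

def contrastive_loss(emb_a, emb_b, same_person, margin=1.0):
    """Pull same-person embeddings together; push different-person
    embeddings at least `margin` apart. Lower loss = better."""
    d = np.linalg.norm(emb_a - emb_b)
    if same_person:
        return d ** 2                    # penalize distance between matches
    return max(0.0, margin - d) ** 2     # penalize closeness between non-matches

# Toy embeddings standing in for a network's output on three photos.
anna_photo_1 = np.array([0.10, 0.92, 0.35])
anna_photo_2 = np.array([0.12, 0.90, 0.33])   # same person, different photo
ben_photo    = np.array([0.85, 0.10, 0.60])   # a different person

print(contrastive_loss(anna_photo_1, anna_photo_2, same_person=True))   # ~0.001
print(contrastive_loss(anna_photo_1, ben_photo,    same_person=False))  # 0.0
```

During training, the network’s weights are nudged to drive this loss down across millions of image pairs; that is what “paying more attention” to reliable features means in practice.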

The diversity of faces used to train an algorithm can influence the kinds of photos and faces that the algorithm is most adept at examining. If the set of training images skews toward a certain race, the algorithm may be better at identifying members of that group than members of other groups.

Facial recognition systems are generally designed to perform one of three tasks. The first type of system may be designed to identify an unknown person. For instance, a police officer would use a system like this to identify an unknown person in surveillance camera footage. Second, a facial recognition system can be designed to verify the identity of a known person. Smartphones use this type of system to enable users to rely on facial recognition to unlock their phones. The third type of facial recognition system is set up to look for multiple specific, previously identified faces. These types of systems may be used to recognize wanted persons on a crowded street or subway platform.
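In the research literature these three tasks are often called 1:1 verification, 1:N identification, and watchlist search. The Python sketch below illustrates the distinction; the embedding vectors, gallery names, and the 0.5 match threshold are all invented for illustration, and a real system’s model and thresholds would differ:

```python
import numpy as np

# Known faces (e.g., a mugshot or license-photo gallery), represented as
# embedding vectors a face recognition model might produce. All invented.
gallery = {
    "person_A": np.array([0.10, 0.90, 0.30]),
    "person_B": np.array([0.80, 0.20, 0.60]),
    "person_C": np.array([0.40, 0.40, 0.70]),
}
THRESHOLD = 0.5  # maximum embedding distance that counts as a match

def distance(a, b):
    return np.linalg.norm(a - b)

def verify(probe, claimed_id):
    """Task 2 -- 1:1 verification: is this probe who it claims to be?"""
    return distance(probe, gallery[claimed_id]) < THRESHOLD

def identify(probe):
    """Task 1 -- 1:N identification: who is this unknown probe, if anyone?"""
    best = min(gallery, key=lambda pid: distance(probe, gallery[pid]))
    return best if distance(probe, gallery[best]) < THRESHOLD else None

def scan_watchlist(probes, wanted):
    """Task 3 -- watchlist search: flag probes matching wanted identities."""
    return [pid for pid in map(identify, probes) if pid in wanted]

probe = np.array([0.12, 0.88, 0.32])          # new photo, e.g., from a camera
print(verify(probe, "person_A"))              # True (phone-unlock scenario)
print(identify(probe))                        # person_A (unknown-suspect scenario)
print(scan_watchlist([probe], {"person_B"}))  # [] (no wanted person seen)
```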

Facial recognition and other surveillance tools have been getting a lot of buzz lately, but state and local police began using facial recognition technology in the early 2000s. Those early systems were notoriously unreliable; today, law enforcement agencies have either acquired or are actively considering more sophisticated surveillance camera systems.

Some surveillance camera systems can capture the faces of passersby and identify them in real-time. Police officers can also submit images of people’s faces, taken in the field or lifted from photos or video, and instantaneously compare them to photos in government databases, including mugshots, jail booking records, and driver’s licenses.

With the click of a button, today’s police departments can identify a suspect caught committing a crime on camera, verify a driver’s identity when they do not produce a license, or search for suspected fugitives in a state driver’s license database.

The Pinellas County Sheriff’s Office’s facial recognition program, known as the Face Analysis Comparison & Examination System (FACES), searches over 33 million faces, including 22 million Florida driver’s license and ID photos and over 11 million law enforcement photos. Florida’s database is searched 8,000 times per month, and Florida police do not need reasonable suspicion to run a search.

Unlike DNA evidence, which is costly and can take a laboratory days to produce, facial recognition is inexpensive and convenient once a system is installed. This relatively lower barrier to entry enables the police to incorporate the technology into their day-to-day work. Instead of reserving it for serious or high-profile cases, officers are using facial recognition to solve routine crimes and to quickly identify people perceived to be suspicious.

The FBI quietly developed a massive facial recognition system of its own, which became fully operational in April 2015. A U.S. Government Accountability Office report (GAO-19-579T) published in June 2019 indicates that the FBI can draw on over 641 million photos in its facial recognition database, and the bureau regularly uses the system to identify individuals in the course of its investigations.

As of July 2019, twenty-one states allow federal agencies, like the FBI, to run searches of driver’s license and identification photo databases. In February 2020, the Department of Homeland Security said that more than 43.7 million people in the U.S. have been scanned by facial recognition technology, primarily to check the identity of people boarding flights and cruises and crossing borders.

Market research firm Grand View Research published a report in May 2021 predicting that the market for facial recognition technology will grow at an annual rate of 14.5 percent between 2020 and 2027, driven by “rising adoption of the technology by the law enforcement sector.” In spite of this rapid adoption over the past two decades, facial recognition systems used by police are not required to undergo public or independent testing for accuracy or bias before being deployed on everyday citizens. Worse yet, when vendors do agree to have their products tested by government agencies like the National Institute of Standards and Technology (NIST), many products used by police are found to exhibit a pattern of racial bias.

“If you look at the top three companies [in the field], none of them perform with 100% accuracy. So, we’re experimenting in real time with real humans,” said Rashida Richardson, director of policy research at the AI Now Institute. Amazon’s facial recognition system, Rekognition, once identified Oprah Winfrey as male, while Microsoft’s facial recognition system made the same error with Michelle Obama. Rekognition also incorrectly matched twenty-eight members of Congress with people who had been arrested for a crime.

Looking at instances in which an algorithm wrongly identified two different people as the same person, a 2019 study published by NIST found that for facial recognition systems developed in the U.S., error rates were highest for West African, East African, and East Asian people, and lowest for Eastern European individuals.

Repeating this exercise across a U.S. mugshot database, NIST researchers found that algorithms had the highest error rates for Indigenous people as well as high rates for Asian and Black women. Given how often facial recognition systems get it wrong, this technology can entrench and enhance systemic bias in policing.
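To make that metric concrete, here is a minimal sketch of how an audit like NIST’s tallies a per-group false match rate: the fraction of impostor pairs (photos of two different people) that a system wrongly calls a match. The records below are invented for illustration:

```python
from collections import defaultdict

# Hypothetical audit records from running a system on labeled impostor
# pairs -- photos of two *different* people: (group, system_said_match).
trials = [
    ("group_a", False), ("group_a", False), ("group_a", True),  ("group_a", False),
    ("group_b", True),  ("group_b", True),  ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [false matches, impostor trials]
for group, said_match in trials:
    counts[group][1] += 1
    if said_match:  # calling two different people a match is an error
        counts[group][0] += 1

for group, (false_matches, total) in counts.items():
    print(f"{group}: false match rate = {false_matches / total:.2f}")
# group_a: 0.25, group_b: 0.50 -- a system can look accurate on average
# while making errors at sharply different rates across groups.
```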

Bias in facial recognition is especially disturbing given that policing practices, such as stop and frisk and the “war on drugs,” have historically and systematically harmed poor communities of color, particularly Black communities.

In the U.S., Black people are more than twice as likely to be arrested as people of any other race and, by some estimates, up to two-and-a-half times more likely to be targeted by police surveillance. Not only are Black people more likely to be misidentified by facial recognition systems used by police, they are also more likely to be enrolled in those systems and subjected to their processing. This overrepresentation in both mugshot databases and surveillance photos results in algorithms that consistently perform worse on Black people than on white people.

Even if false-positive match rates improve, “unfair use of facial recognition technology cannot be fixed with a software patch.” Accurate facial recognition can still be used in disturbing and nefarious ways. For instance, the Baltimore police department used facial recognition to identify and arrest people who attended the 2015 protests against police misconduct that followed Freddie Gray’s death in Baltimore. Additionally, ICE is interested in driver’s license databases because several states issue driver’s licenses to residents regardless of their immigration status.

For example, in Maryland, a state that grants special driver’s licenses to undocumented immigrants, ICE has used facial recognition software to scan millions of Maryland driver’s license photos without a warrant or any other form of state or court approval. “These states have never told undocumented people that when they apply for a driver’s license they are also turning over their face to ICE,” Harrison Rudolph, from Georgetown Law’s Center on Privacy and Technology, said. “That is a huge bait and switch.”


Copyright © 2021 by Patrick K. Lin. This excerpt originally appeared in Machine See, Machine Do: How Technology Mirrors Bias in Our Criminal Justice System published by New Degree Press. Reprinted here with permission.



Patrick K. Lin is a New York City–based author focused on researching technology law and policy, artificial intelligence, surveillance, and predictive algorithms.