Photo Credit: Frederic Bisson / Flickr
Over the last three and a half years, the City of Detroit has greatly expanded Project Green Light, a joint initiative of the Detroit Police Department (DPD), local businesses, and other organizations that uses video surveillance and digital technology to fight crime. Since the first cameras went live at eight gas stations on January 1, 2016, the system has grown to nearly 700 locations across the city as of April 2020.
Though it is billed by proponents as a “real-time crime-fighting” solution, others, including the DSA, see it as a mass-surveillance system that disproportionately singles out communities of color. In particular, critics cite flaws in the technology behind the project that are part of what sociologist Ruha Benjamin, in her study Race After Technology, terms the “New Jim Code.”
“Real-Time Technology” for Public Safety
Project Green Light has been touted as a highly effective, advanced, technological fix to ensure public safety for the citizens of Detroit and thereby contribute to the city’s revitalization. The system works by connecting a network of high-definition video cameras to a “Real-Time Crime Center” where activity at participating locations is monitored around the clock. Businesses and other organizations pay to install the cameras and agree to connect to the system, which is operated in conjunction with the FBI, the Department of Homeland Security, and private partners, such as DTE Energy, the Downtown Detroit Partnership, and the Gilbert and Ilitch organizations.
In addition to enabling rapid DPD response to situations in progress, the system is said to deter crime by marking participating locations with flashing green lights, signage, and other onsite identification. Among the other preventive measures used by DPD are “Big Data” technologies that model potential crime “hot spots” and can be used to help identify suspects. About three years ago, DPD integrated facial recognition software into Project Green Light.
Artificial Intelligence and Misrecognition
In late 2017 the City of Detroit signed a contract with the Greenville, South Carolina, software company DataWorks Plus to implement its FACE Plus facial recognition technology, which uses artificial-intelligence algorithms to search video and image databases. (An algorithm is a set of coded instructions that enables a computer to complete a specific task.) Several studies have suggested that the algorithms used in many facial recognition technologies tend to reinforce preexisting bias, particularly with respect to race and gender.
In one widely cited study conducted by the ACLU, Amazon’s Rekognition software mistakenly identified 28 members of Congress as matching persons arrested for a crime in a mugshot database. On June 10, 2020, Amazon announced a one-year moratorium on police use of the system. Microsoft has also suspended sales of facial recognition software to police, as has IBM.
A U.S. government study found that even the best facial recognition algorithms misidentify Blacks at five to ten times the rates of whites. In Detroit, at least two individuals, Michael Oliver and Robert Williams, have been wrongly accused, through Project Green Light surveillance technologies, of crimes they had nothing to do with.
Technology and the New Jim Code
The bias encoded within artificial intelligence and other data technologies underlies Benjamin’s concept of the “New Jim Code,” which draws on research gathered by, among others, the Detroit Community Technology Project. Benjamin identifies key aspects of the New Jim Code implicit in systems such as Project Green Light, including “engineered inequity,” “default discrimination,” and “coded exposure.”
Artificial intelligence and other information technologies operate through something known as “deep learning” (also called deep structured learning), a process that uses layered algorithms loosely modeled on the structure and functions of the human brain. These algorithms pass data through successive layers, building up increasingly abstract representations that enable sophisticated machine-based decision-making, for example, identifying “matches” based on a comparison of digital images in facial recognition. Social-media filtering is a common application of deep learning, as when Facebook tracks an individual’s online history to decide which ads or news items to push.
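To make the matching step concrete, the following is a minimal sketch, in Python, of the comparison at the heart of such systems. It is not DataWorks Plus’s actual code: the embedding vectors, the mugshot database, and the similarity threshold are all invented for illustration. A deep network reduces each face image to a numeric feature vector (an “embedding”), and two faces are declared a match when their vectors are sufficiently similar.

```python
# Minimal sketch of facial recognition "matching" (illustrative only;
# not DataWorks Plus's method). Each face image is reduced by a deep
# network to a numeric vector; here the vectors are invented toy values.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two embeddings: 1.0 means identical direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

probe = np.array([0.9, 0.1, 0.4, 0.2])  # face pulled from a video still
mugshot_db = {
    "person_A": np.array([0.88, 0.12, 0.41, 0.19]),
    "person_B": np.array([0.10, 0.90, 0.30, 0.70]),
}

MATCH_THRESHOLD = 0.95  # arbitrary here; real systems tune this value,
                        # and a looser threshold yields more false matches

for name, embedding in mugshot_db.items():
    score = cosine_similarity(probe, embedding)
    if score >= MATCH_THRESHOLD:
        print(f"{name}: possible match (similarity {score:.3f})")
```

Everything downstream, including whether someone becomes a suspect, rests on that similarity score, the very step shown to go wrong more often for darker-skinned faces.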
As Benjamin notes, however, the algorithms underlying deep learning are created by humans whose experiences and outlooks can shape the decisions they make in writing the code that “instructs” the machine-learning process; the lack of diversity in Silicon Valley is well known. Machine-learning technologies also draw their inputs from large preexisting data sets that are themselves permeated with racial, economic, and gender bias.
The databases from which Project Green Light draws are no exception. People of color are over-represented in criminal-offender databases. Couple that with facial recognition software’s tendency to misidentify people with darker skin tones, and it is not surprising that errors such as those in the Oliver and Williams cases occur.
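The compounding effect can be shown with simple arithmetic. The numbers below are invented for illustration, not measured Detroit figures: a group that is over-represented in a mugshot database and subject to a higher false-match rate accumulates many times more false matches every time the database is searched.

```python
# Illustrative arithmetic only: database composition and error rates are
# invented, not measured figures. The point is how over-representation
# and a higher false-match rate compound.
db_size = 100_000  # mugshots in the database

share_in_db = {"group_A": 0.70, "group_B": 0.30}         # over-representation
false_match_rate = {"group_A": 0.005, "group_B": 0.001}  # per comparison

for group in share_in_db:
    candidates = db_size * share_in_db[group]
    expected_false = candidates * false_match_rate[group]
    print(f"{group}: ~{expected_false:.0f} expected false matches per search")
```

With these made-up figures, group_A faces roughly twelve times as many false matches per search as group_B, before any human judgment enters the process.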
Computer systems, Benjamin observes, are part of the larger system of structural racism. The Stop LAPD Spying Coalition, in its 2018 report “Before the Bullet Hits the Body,” uncovers the bias built into what the coalition terms “the architecture of surveillance” of the Los Angeles Police Department’s crime-fighting system. Algorithms draw on historical crime data by neighborhood to identify trends and map potential crime hot spots. The information is then used to develop “Chronic Offender Bulletins,” which identify “persons of interest” for surveillance and potential intervention. Surveys conducted by the coalition reveal more intensive policing in neighborhoods that tend to have higher populations of people of color, which skews the data used for modeling.
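The feedback loop the coalition describes can be sketched in a few lines. The setup below is invented: two neighborhoods have identical underlying crime, but one starts with more recorded incidents, so a simple “hot spot” rule keeps sending patrols there, and the extra patrols generate extra records that confirm the rule’s choice.

```python
# Toy simulation of a predictive-policing feedback loop (all numbers
# invented). Patrols go to the neighborhood with the most recorded
# incidents, but recording depends partly on patrol intensity.
underlying_crime = {"north": 10, "south": 10}  # identical true rates
recorded = {"north": 15, "south": 10}          # north starts over-policed

DETECTION_BOOST = 0.5  # extra incidents recorded per unit of patrol

for year in range(5):
    hot_spot = max(recorded, key=recorded.get)  # the "hot spot" rule
    for hood in recorded:
        patrol = 10 if hood == hot_spot else 2
        recorded[hood] += underlying_crime[hood] + int(DETECTION_BOOST * patrol)
    print(f"year {year}: recorded = {recorded} (patrolled: {hot_spot})")
```

Although both neighborhoods have the same underlying crime, the recorded gap widens every year, and the data fed back into the model looks like objective confirmation that the patrolled neighborhood is more dangerous.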
Community members in Detroit, including those in DSA, have expressed concern over the adoption of similar technologies by DPD. As reported by the Detroit Metro Times, DPD has invested more than $1 million in facial recognition software with little to no public input. When questioned, City of Detroit spokesman John Roach said: “The only time it is used is after a crime is committed and investigators go back through video to pull a still image of a suspect to try and determine his or her identity.” But given the technology’s demonstrated tendency to misidentify people of color and Detroit’s status as an overwhelmingly Black city, that response might offer little comfort.
People of color are indistinguishable to some technologies while being overly visible to others, as evidenced by their misidentification by facial recognition technology and their over-representation in police databases. As Benjamin notes: “Who is seen and under what terms holds a mirror onto more far-reaching forms of power and inequality.”
In 2016, the Georgetown University Center on Privacy and Technology released a study of more than 100 police departments across the country to determine the extent of facial recognition use and its community impact. Key findings: the law-enforcement facial recognition network includes more than 117 million Americans; the technology disproportionately affects Blacks; and police are experimenting with real-time facial recognition that would potentially allow people on the street to be surveilled without their knowledge.
Even more troubling are the findings about the unregulated nature of police surveillance. Most law enforcement agencies do little to ensure that the information they gather is accurate. At the time the study was completed, there were no state laws regulating police use of facial recognition, no requirements to obtain warrants, and no limits on its application.
The Center further identifies a serious risk to free speech, given the history of police and other government surveillance of civil rights protests. Worst of all, many agencies (including, it appears, DPD) are keeping critical information from the public under the rationale that releasing it might help the “bad guys.”
Concern over Project Green Light, and in particular facial recognition, emerged among some in the community not long after the system began expanding. The ACLU, the Detroit Digital Justice Coalition, the DSA, and individual residents voiced these concerns at public venues such as the Detroit City Council Public Health and Safety Committee, the Detroit Board of Police Commissioners, and District Neighborhood Meetings.
In June 2019, the Detroit Digital Justice Coalition released a widely circulated study, “A Critical Summary of Detroit’s Project Green Light and its Greater Context.” As a result, the Detroit Board of Police Commissioners passed a policy in 2019 governing the use of facial recognition software. Under the new policy, DPD cannot use facial recognition on live feeds or for immigration enforcement, nor can it use the software on mobile devices.
There are additional safeguards, including procedures for requesting investigations that use facial recognition and for reporting them to the Board. DPD is complying with the directive to report facial recognition investigations to the Board of Police Commissioners, though the reports consist of aggregated statistics with no narrative on specific investigations, and many of the probe images appear to come from social media, a source with a high degree of inaccuracy. In June, Detroit City Councilwoman Mary Sheffield introduced the Community Input Over Government Surveillance Ordinance to further regulate DPD activity. As of this writing, Council has not acted on the ordinance.
What Is to Be Done?
Given the documented concerns over police surveillance technology, the Metro Detroit DSA joins other members of the community in calling upon the City of Detroit to immediately abandon Project Green Light and reallocate its funding to community-based programs and resources, such as housing, education, and healthcare, that can have a more positive and lasting effect for all Detroit citizens and for the city’s revitalization. For more information, visit Metro Detroit DSA’s Defund the Police Campaign webpage.
Vince Carducci is a cultural critic, essayist, and Dean Emeritus at the College for Creative Studies, Royal Oak, Michigan. This essay was originally published by The Detroit Socialist.