

Patrick K. Lin’s book Machine See, Machine Do: How Technology Mirrors Bias in Our Criminal Justice System discusses the urgent need to scrutinize how artificial intelligence is being used in our criminal justice system. Without public and private oversight, these tools can perpetuate biases that fall hardest on minorities and give law enforcement power that can be, and has been, easily abused. Public Seminar intern Miko Yoshida recently spoke with Lin about his book.

Miko Yoshida [MY]: Your book takes the reader on a journey in which you break down the otherwise complex concepts behind artificial intelligence (AI) and its impact on the criminal justice system. You explain how biases can be baked into our data and systems, and how minorities are the ones who suffer the most. You call attention to the fact that although these biases are known and documented, the public and private sectors fail to account for them.

First, can you go over a few of the terms we should understand? AI, algorithms, machine learning?

Patrick K. Lin [PKL]: AI refers to machines capable of learning, reasoning, and acting for themselves. Algorithms are a series of instructions that are followed step by step to complete a task or solve a problem. Machine learning is a subset of AI in which the computer creates its own algorithms instead of being explicitly programmed by humans.

Nowadays, the main purpose of an algorithm is automation. By performing tasks automatically, these systems make processes more efficient and can handle more information with little to no human intervention.

What sets AI apart from more analog machines, which are only capable of mechanical or predetermined responses, is that AI algorithms are designed to make decisions based on data that is provided.

I just wanted people to recognize that just because something is scientific or mathematical or technical doesn’t mean it is inherently good or fair or objective.
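
To make that distinction concrete, here is a minimal sketch in Python (not drawn from the book, with invented data and labels) contrasting a rule fixed by a programmer with a rule whose threshold is derived from historical examples, which is the basic move machine learning makes:

```python
# Minimal, hypothetical sketch: a hand-written rule versus a rule whose
# threshold is "learned" from historical examples. All data is invented.

# 1) Conventional algorithm: the decision logic is fixed by a programmer.
def hand_written_rule(income: float) -> str:
    return "low risk" if income > 40_000 else "high risk"

# 2) Toy machine learning step: derive the threshold from labeled examples
#    instead of having a human choose it.
def learn_threshold(examples: list[tuple[float, str]]) -> float:
    low = [x for x, label in examples if label == "low risk"]
    high = [x for x, label in examples if label == "high risk"]
    # Midpoint between the two groups' average incomes.
    return (sum(low) / len(low) + sum(high) / len(high)) / 2

historical_data = [
    (55_000, "low risk"), (62_000, "low risk"),
    (21_000, "high risk"), (18_000, "high risk"),
]
threshold = learn_threshold(historical_data)

def learned_rule(income: float) -> str:
    # The decision now depends entirely on what the historical data looked like.
    return "low risk" if income > threshold else "high risk"

print(hand_written_rule(30_000))  # rule chosen by a person
print(learned_rule(30_000))       # rule inferred from historical data
```

If the historical labels reflect biased decisions, the learned threshold inherits that bias automatically, which is the dynamic the conversation turns to next.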

MY: You write that in the criminal justice system, an algorithm can be used for criminal risk assessment, which can impact bail and sentencing. A judge could be lazy and simply go with what the machine recommends, or use its output to justify their decision. And if people want to contest it, they can’t access the technology because it is protected as a trade secret or intellectual property.

PKL: Right. As an example, when we created and developed autopilot for commercial flights, we didn’t suddenly start sending passengers into the sky with no one in the cockpit. We still have two pilots. They might not be in control a hundred percent of the time, but they’re still making decisions, and when there’s turbulence or some sort of anomaly in the flight, you expect the pilot to take over.

We have this legal system where we’re supposed to be able to face our accusers, where, if you’re accused of doing something, you have the opportunity to refute it.

MY: Let’s talk about these algorithms more specifically. Walk us through what you were arguing in the book regarding what we need to be careful of.

PKL: I would say biases seep into these systems in a lot of different ways. I think the one that I really focus on for the bulk of the book is the reliance on historical data and not scrutinizing the context and the baggage that comes with that historical data. In the case of sentencing algorithms, for example, or criminal risk assessments, what you have is, “All right, we’re going to look at historically how we’ve sentenced different people,” and things like socioeconomic status will be used, zip codes, things like that. Oftentimes, especially in public-facing algorithms, race won’t be explicitly used, but there are proxies that exist. So all of a sudden you have algorithms that will be more punitive towards people who are lower income, who might be unemployed, or who live in certain parts of the city.

Throughout our history, we’ve seen a very systemic and very deliberate policing strategy that’s meant to break down and destroy already marginalized communities, especially Black and Brown communities. So we’re going to see those same data points being reflected in systems that are used to train AI and algorithms. And if we’re not thinking critically about it, we’re just going to see those same cycles repeated over and over again. It’s just that now we’re calling it scientific, we’re calling it mathematical, we’re calling it technical, and it’s still going to be the same problems.
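
As a purely illustrative sketch of the proxy dynamic Lin describes above, consider the following Python example with invented numbers: race is never an input to the scoring rule, but because zip code is correlated with race in the synthetic data, the “race-blind” score still produces a racial disparity.

```python
# Hypothetical sketch with invented data: a "race-blind" risk score that
# looks only at zip code can still reproduce a racial disparity when
# zip code is correlated with race in the historical record.
import random

random.seed(0)

records = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    # Residential segregation makes zip code a strong proxy for group.
    if group == "A":
        zip_code = "11111" if random.random() < 0.9 else "22222"
    else:
        zip_code = "11111" if random.random() < 0.1 else "22222"
    records.append({"group": group, "zip": zip_code})

def risk_score(record: dict) -> int:
    # Zip 11111 has more recorded arrests in the historical data,
    # a product of where police were sent in the past.
    return 8 if record["zip"] == "11111" else 2

for g in ("A", "B"):
    scores = [risk_score(r) for r in records if r["group"] == g]
    print(g, round(sum(scores) / len(scores), 2))
# Group A ends up with a far higher average score than group B,
# even though "race" never appears in the scoring rule.
```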

MY: One of the more complex examples of AI you write about is facial recognition and how it’s being used. Facial recognition is essentially AI that has used machine learning to create algorithms that help identify faces, right? And the problem is that the historical data fed to the AI was already biased, so the result has issues like misidentifying people of color, which leads to wrongful arrests.

PKL: Right, there’s a lot of documentation and there are studies showing that this technology is the worst for Black and Brown individuals. And so for the same individuals who are being policed the most frequently, who are being discriminated against and targeted the most by police, the technology is also the most inaccurate. Not only are you subjecting these communities to more policing and more violence, you’re also getting it wrong more often than not and putting people in really unsafe and dangerous situations.
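
The pattern Lin points to, a system that looks accurate in the aggregate while failing one group far more often, can be seen in a toy evaluation like the one below. The numbers are invented for illustration and are not taken from any real audit.

```python
# Invented numbers, for illustration only: an aggregate accuracy figure
# can hide a large gap in error rates between demographic groups.
results = (
    [("lighter-skinned", True)] * 970 + [("lighter-skinned", False)] * 30 +
    [("darker-skinned", True)] * 650 + [("darker-skinned", False)] * 350
)

overall = sum(correct for _, correct in results) / len(results)
print(f"overall accuracy: {overall:.1%}")  # looks respectable in aggregate

for group in ("lighter-skinned", "darker-skinned"):
    subset = [correct for g, correct in results if g == group]
    print(f"{group}: {sum(subset) / len(subset):.1%}")
# The group-level breakdown reveals an error rate more than ten times
# higher for one group, the kind of disparity audits have documented.
```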

MY: How is this okay? Can you talk about how this is being controlled, regulated, or enforced?

PKL: The short answer is that it’s not being regulated, especially at the federal level. There’s no regulation. Neither the court system nor Congress nor any kind of regulatory body is limiting facial recognition. And I think there needs to be a big push for that. I think it’s going to take a long, long time for there to be a general consensus at the federal level on any kind of meaningful laws or rules around when facial recognition or AI-enabled systems can be used, especially with respect to law enforcement, because the FBI, for example, uses facial recognition databases so frequently in its work that there’s no incentive, I think, to inhibit its ability to do what it does.

State-level unemployment agencies in more than half of the country also use ID.me to verify someone’s identity before they can claim unemployment benefits. Then, when the facial recognition system [flags you as the wrong person or a criminal], they lock you out of your account. These can be very vulnerable people who need these benefits, their payments have been frozen, and the appeals process is long and tedious. So without federal regulation, there’s still a lot of very serious harm being inflicted on people, especially [those] who rely on these benefits and government programs the most.

MY: Throughout this conversation, we’ve been talking about a kind of doomsday scenario where AI is evil, not really benefiting the private citizen, just enriching the tech companies and allowing law enforcement to do whatever they want. But toward the end of your book you also offer some hope: your proposed solutions, where you think this is going, and where you want to see it going.

PKL: I do think there is actually quite a bit that can be done. I don’t want people to be skeptical of technology in and of itself, because technology does do a lot of really great things for us. I think what I want people to start asking is: can we make this technology better serve us? Can we make technology do more to actually be exonerative rather than punitive? How can we remove or reduce the biases in algorithms that are used for different purposes?

MY: You preface this in your book and you also end with it, but my takeaway was that these issues go beyond technology. It’s more about the structure, about how we have built society in the way that we have, that goes unscrutinized. A lot of the time the individual feels helpless. So what can individual citizens do when this feels like such an overwhelming problem?

PKL: I think the first step is to try to take ownership of your own data. The next stage is finding other people who are critical. A lot of the movement is being done at the local level . . . organizing and finding advocates and activists who care about this issue deeply, following what they do, staying abreast of what is going on in your own community and in communities at large, and seeing what we can do to make things safer for people next door or people in our city, in our state. The big thing is we should be asking our technology, and by extension our institutions, to be better. And if we’re not asking those questions, we’re not going to demand those things.

We’re realizing that technology isn’t just about tech. It’s also about civil liberties. It’s also about humanity. It’s about being a part of a community, and our relationship to these different institutions. I want people who aren’t in the computer science or software engineering space to realize that they actually have a lot of say here.

There are so many amazing nonprofits and organizations doing really impressive work to push back on this false sense of tech solutionism, the idea that technology is inherently going to fix all of our problems. Maybe we shouldn’t be using technology in every single space.


Read an excerpt from Machine See, Machine Do: How Technology Mirrors Bias in Our Criminal Justice System, courtesy of Patrick K. Lin and New Degree Press.


Patrick K. Lin is a New York City-based author focused on researching technology law and policy, artificial intelligence, surveillance, and predictive algorithms. While completing his law degree at Brooklyn Law School, he wrote an approachable and informative account of the intersection of technology, policy, and criminal justice in his best-selling book Machine See, Machine Do: How Technology Mirrors Bias in Our Criminal Justice System.

Miko Yoshida is an MA candidate at the New School for Social Research studying Creative Publishing and Critical Journalism.