Surveillance technology is here.

What does this mean for you?

History was made this summer when a man was arrested based on facial recognition alone

In June 2020, the first known arrest based solely on evidence from Facial Recognition (FR) technology occurred in Detroit, Michigan. Five watches had been stolen from a Shinola store. Police reviewed security footage of the incident and zoomed in on the perpetrator to capture an image of him. Despite the shoddy image quality, they ran the image through the department's FR software, which returned a potential suspect. Officers pursued that suspect without any further investigation: they arrived at his house and arrested him in his front yard without questioning him, offering no explanation of what was happening as his family and neighbors stood by and watched. He was detained for 30 hours before being released on bail to await his court hearing.

The man arrested lived 25 miles north of Detroit, had a solid alibi, and owned none of the clothing worn by the person in the security footage. Aside from the FR software's output declaring him a match, there was no indication that he could have committed this crime.

The charges were later dropped due to insufficient evidence. However, a person was still wrongfully accused of and arrested for a crime based on erroneous identification from security footage analysis. While some may argue that FR technology is simply a tool used to aid police in finding leads, that explanation is insufficient when innocent people's lives and dignities are on the line. The Michigan State Police's Investigative Lead Report, shown above, clearly states that the document contains a possible lead to be investigated, and that it is not probable cause for arrest. No matter how clearly stated, bolded, and underlined this guideline is, it was not followed.

This software is still in use in Detroit today.

How could this wrongful arrest happen? Surveillance technology.

Surveillance technology takes many forms. Visual, audio, textual, biometric, or location data can all be collected by technologies designed to surveil us. In the Detroit case discussed above, the initiative behind the cameras is called Project Green Light.

Project Green Light

Project Green Light is a community policing effort in the city of Detroit, Michigan. Cameras are installed throughout the city at designated locations and have a direct feed to the Detroit Police. This system has been marketed to the citizens of Detroit as a way to prevent crime in the name of "public safety".

Project Green Light partners with local businesses to set up cameras at their locations, along with accompanying Project Green Light signs. The cameras give the Detroit Police Department a live stream of the city from many different viewpoints, and in turn offer the businesses extra support from the police and a potential crime deterrent in the branded sign. Police “virtually patrol” each affiliated location once a day and physically patrol roughly 184 affiliated locations each day.

Like many police surveillance projects before it, Project Green Light appears to disproportionately surveil Black residents of Detroit. We overlaid a map of racial demographics in Detroit on top of Project Green Light's map of its camera network. Each camera is represented by a large dot; small, color-coded dots denote residents of different racial groups. Note that the cameras sit in predominantly Black and Hispanic neighborhoods.

Surveillance technology and predictive policing lead to increased overpolicing of Black and brown communities, which already suffer discrimination at the hands of law enforcement. According to the Vera Institute of Justice, even though Black men make up only 13% of the US male population, they account for 35% of incarcerated men. Furthermore, 1 in 3 Black men born today can expect to be incarcerated in his lifetime. For Latino men, it's 1 in 6. For white men? It's 1 in 17.

Clearly, the placement of these cameras in majority Black and Hispanic neighborhoods (according to US Census data) means Project Green Light has significant potential to contribute to this systemic overpolicing problem.

Facial Recognition Technology

FR technology is part of Project Green Light. The project uses a database containing 8 million criminal images and 32 million DMV images; almost every resident of Michigan is represented in it. You are likely represented in a database just like this without your knowledge or consent.

There are no standards that FR technology must meet before being released to the market. This means a system can misidentify people 98% of the time and still be marketed as a reliable tool for law enforcement. In fact, this is exactly what's happening.

These systems especially fail at correctly identifying people of color, and Black people in particular, which is problematic in a city whose population is approximately 80% Black. Under Project Green Light, this FR technology is likely to misidentify the majority of people whose images are captured, leading to false charges and arrests that disproportionately affect Black residents of Detroit.

How FR Tech Works

FR technology works by learning highly individualized traits from a large collection of face photos. Such traits can range from the distances between so-called "face landmarks" — e.g. eyes, nose, mouth — to extremely subtle traits expressed on a handful of pixels that, in theory, distinguish one person from all others.
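To make this concrete, here is a minimal sketch using the open-source face_recognition Python library (a wrapper around dlib). To be clear, this illustrates the general technique rather than the software used in Detroit, and the filename is hypothetical.

```python
# pip install face_recognition  (open-source wrapper around dlib's models)
import math
import face_recognition

# Load a photo containing one face ("face.jpg" is a hypothetical filename).
image = face_recognition.load_image_file("face.jpg")

# Locate 68 face landmarks: eyes, eyebrows, nose, lips, and chin contour.
landmarks = face_recognition.face_landmarks(image)[0]

# One toy "trait": the distance between the centers of the two eyes.
def center(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

eye_distance = math.dist(center(landmarks["left_eye"]),
                         center(landmarks["right_eye"]))
print(f"Inter-eye distance: {eye_distance:.1f} pixels")

# Modern systems go further: a deep model maps the whole face to a vector
# of 128 numbers whose subtle values are meant to distinguish individuals.
encoding = face_recognition.face_encodings(image)[0]
print(len(encoding))  # 128
```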

Try it yourself: below you can experiment with a webcam-based face tracking widget that works within your browser window. If you allow this page to access your webcam, you can see how it tracks your face with a bounding box. By checking the option "Detect Face Landmarks" you can see how it traces your mouth, nose, eyes, and eyebrows, as well as the contour of your face.
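If you'd like to reproduce a similar bounding-box tracker outside the browser, here is a minimal sketch using OpenCV's bundled Haar-cascade face detector. Note the assumption: this is a classical detection method, not the same model that powers the widget above.

```python
# pip install opencv-python
import cv2

# Load the Haar-cascade frontal-face detector that ships with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # 0 = default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detect faces and draw a bounding box around each one.
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("Face tracking demo (press q to quit)", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```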

To identify a specific person, FR tech relies on labeled photos. The more photos of an individual it has, the more accurate the software becomes. For this reason, uploading labeled photos or entire albums to social media substantially improves computers' ability to identify specific individuals. For the same reason, the use of a single photo, such as the one on a driver's license, can lead to inaccurate matches.
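As a sketch of how identification then works, the same face_recognition library can compare a labeled photo against an unknown one (both filenames are hypothetical, and the 0.6 tolerance is simply the library's default):

```python
# pip install face_recognition
import face_recognition

# Encoding from a labeled photo of a known person.
known = face_recognition.face_encodings(
    face_recognition.load_image_file("alice_labeled.jpg"))[0]

# Encoding from an unknown face, e.g. a still from security footage.
unknown = face_recognition.face_encodings(
    face_recognition.load_image_file("security_still.jpg"))[0]

# Smaller distances mean "more likely the same person".
distance = face_recognition.face_distance([known], unknown)[0]
match = face_recognition.compare_faces([known], unknown, tolerance=0.6)[0]
print(f"distance={distance:.2f}, match={match}")
```

With several labeled photos per person, the comparison can be made against many encodings at once, which is why large labeled albums improve accuracy and why a single driver's-license photo is a weak basis for identification.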

It's worth noting that FR tech has benefited tremendously from recent advances in an area of Machine Learning called "Computer Vision", in which computers are taught to recognize real-world entities within images. In this graph we can see how much progress has been made on object recognition in ImageNet, the best-known open data set used by the Computer Vision community.

In 2011, algorithms misclassified about 25% of the images in ImageNet; four years later, in 2015, this error rate dropped below 5%, which is roughly how well humans perform at this task. This period coincides with the emergence of Deep Learning, a branch of Machine Learning that leverages ever-greater computing resources and ever-larger data sets to train increasingly accurate models, so long as the data is sufficiently accurate (!).

The face tracker you just tried above is a product of Deep Learning. It was trained on over 14,000 photos manually labeled with bounding boxes, and the resulting model is only 190KB.

This is happening in Detroit. How does it affect me?

Surveillance technology is deployed all over the country with very little publicity or oversight. While complete statistics have proven difficult to come by, there are documented examples in cities in California, New York, and Illinois, as well as partnerships between the FBI and at least 25 states involving some form of FR (see map below). Due to the current regulatory vacuum, there is little transparency into how FR is actually used for surveillance. We don't know whether images of our faces currently sit in a database somewhere, or who has access to them.

Moreover, you don't need to consciously hand over your data to have your face wind up in an FR database — Clearview AI is a facial recognition company that scrapes the web for its face photos. Most notably, the company pulls photos from blogs and social media sites like Facebook, Instagram, and YouTube, amassing more than 3 billion photos without the subjects' knowledge. Clearview AI's CEO recently announced that the company has contracts with more than 2,400 police agencies.

Last year, hackers stole a database of traveler photos from a subcontractor of US Customs and Border Protection, and the agency has declined to disclose much about the extent of the breach or the people affected by it. This is particularly problematic because face photos are unlike any other biometric data: a face is such a salient trait that, well beyond enabling identity theft, leaked face images can be unprecedentedly effective at retrieving a person's digital breadcrumbs. In a world where police officers have a history of abusing far less powerful tools to spy on exes, neighbors, and journalists, the rapid spread of FR tech represents a risk to all citizens.

Is there any legal protection against surveillance tech?

While there are no regulations or user protections at the federal level, a few states are taking action to protect their citizens. Check out the map below to explore where FR technologies are being used in the US, and where people and local governments are taking legal action. What's happening in your area?

What can I do as a regular citizen to protect my community?

We may feel small, but together we form public opinion. If enough people are vocal, we can demand that our legislators take action and protect our privacy. The right to privacy is critical for everyone, but especially for vulnerable communities that already suffer from racial injustice in policing.

Check out what actions are being taken in your city and state on the map above. You can also sign this petition to urge Congress to ban the use of FR in law enforcement.

In the meantime, new tools have started to emerge to help each of us resist FR technology. Recently, a team at the University of Chicago released a desktop app, called Fawkes, that allows anyone to "poison" their own photos by making changes that are imperceptible to humans, but destructive to Deep Learning models. If these "poisoned" images are added to databases for FR algorithms to learn from, they can confuse the FR algorithm's understanding of the patterns that represent a person's face. In other words, your photos can become "Trojan horses to deliver that poison to any facial recognition models of you". In this vein, Project Fawkes — as in the Guy Fawkes mask — serves as an individual form of resistance while a collective fight for regulation is underway.
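To make the "imperceptible change" idea concrete, here is a toy numpy sketch of the pixel budget such tools work within. To be clear, this is not the Fawkes algorithm: Fawkes computes its perturbation by optimizing against a face-recognition feature extractor, whereas this sketch only shows how small a bounded, random change can be.

```python
import numpy as np

rng = np.random.default_rng(0)

# A stand-in for a face photo (224x224 RGB pixels with values 0-255).
photo = rng.integers(0, 256, size=(224, 224, 3)).astype(np.float32)

# Budget for the per-pixel change: small enough to be hard to notice.
epsilon = 8.0
perturbation = rng.uniform(-epsilon, epsilon, size=photo.shape)

# The "cloaked" photo stays within epsilon of the original everywhere.
cloaked = np.clip(photo + perturbation, 0.0, 255.0)
print(np.abs(cloaked - photo).max())  # never exceeds epsilon
```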

The world of FR is evolving, but so is our response to it. As more people learn about the dangers of surveillance technology and FR, we can gain more power in our movement to protect ourselves and those we care about.

The issue of surveillance technology has not been given the attention we believe it deserves, so please share this page with your family and friends to spread the word.

This was written by graduate students at Northwestern University. If you have any questions or feedback, please feel free to reach out to us.