Last week, Evan Selinger, an RIT philosophy professor and expert on the ethical and privacy implications of technology, spoke to a group of lawyers in Rochester, New York, about the dangers presented by facial recognition software. The presentation, “Who Stole My Face? The Privacy Implications of Facial Recognition Technology,” was hosted by the Monroe County Bar Association’s Technology and Law Practice Committee, which I chair, and was the brainchild of committee member Aleksander Nikolic, a Rochester IP attorney.
During his talk, Selinger contended that facial recognition technology should be banned across the board until regulations are enacted that are designed to control when and how it is used, and by whom. As he explains in a recent New York Times Op-Ed that he coauthored, facial recognition technology is unique in its invasiveness and in its potential for causing harm:
Facial recognition is truly a one-of-a-kind technology — and we should treat it as such. Our faces are central to our identities, online and off, and they are difficult to hide. People look to our faces for insight into our innermost feelings and dispositions. Our faces are also easier to capture than biometrics like fingerprints and DNA, which require physical contact or samples. And facial recognition technology is easy to use and accessible, ready to plug into police body cameras and other systems.
According to Selinger, the use of facial recognition technology by law enforcement is particularly problematic due to its invasiveness and increasing pervasiveness. In that same article, Selinger outlines the risks presented when law enforcement officers seek to use facial recognition tools as part of their investigatory, screening, and crime prevention arsenals:
The essential and unavoidable risks of deploying these tools are becoming apparent. A majority of Americans have functionally been put in a perpetual police lineup simply for getting a driver’s license: Their D.M.V. images are turned into faceprints for government tracking with few limits. Immigration and Customs Enforcement officials are using facial recognition technology to scan state driver’s license databases without citizens’ knowing. Detroit aspires to use facial recognition for round-the-clock monitoring. Americans are losing due-process protections, and even law-abiding citizens cannot confidently engage in free association, free movement and free speech without fear of being tracked.
Another particularly concerning issue with facial recognition technology is that its underlying programming often results in biased outcomes that can have life-altering effects for those being screened by it. For example, as explained in an ACLU blog post on the issue, a study conducted by the ACLU revealed bias in the programming behind Amazon’s facial surveillance technology, Rekognition.
In the study, the ACLU used the software to compare photos of every member of Congress against a database of mugshots. Rekognition incorrectly matched 28 members of Congress to mugshot photos. As explained in the blog post, some members of Congress were affected by these errors more often than others:
The false matches were disproportionately of people of color, including six members of the Congressional Black Caucus, among them civil rights legend Rep. John Lewis (D-Ga.). These results demonstrate why Congress should join the ACLU in calling for a moratorium on law enforcement use of face surveillance.
The same software has also been shown to have a gender bias and has incorrectly identified women as men.
Because of these issues, some lawmakers are fighting back and are introducing bills designed to combat the bias inherent in facial recognition software. For example, in October, U.S. Congressional Representative Brenda Lawrence announced her plan to introduce legislation that would mandate the study of racial bias in facial recognition systems. And in July, U.S. Congressional Representative Rashida Tlaib introduced the “No Biometric Barriers Act of 2019,” which proposed banning the use of facial recognition technology in housing units funded by the Department of Housing and Urban Development, largely due to bias concerns.
Similarly, four cities have already banned law enforcement use of facial recognition tools: San Francisco, Somerville, Berkeley, and Oakland. And a statewide ban is in the works in California.
In his article, Selinger contends that the legislation passed thus far is a step in the right direction, but more drastic measures are required in order to combat the threat posed by the use of facial surveillance software by law enforcement and other public entities:
We support a wide-ranging ban on this powerful technology. But even limited prohibitions on its use in police body cams, D.M.V. databases, public housing and schools would be an important start.
Of course, the likelihood that far-reaching bans will be imposed before facial surveillance becomes ubiquitous is minimal. Let’s face it: the genie is already out of the bottle, and the legislative process moves at a snail’s pace while technology advances at rates never before seen.
Facial recognition technology is already so pervasive that it will be incredibly difficult to unring that bell. The implications of our newfound reality are already quite apparent, and many assert that facial recognition technology is being misused by public and private entities alike. For evidence of that trend, you need look no further than the $35 billion class-action lawsuit currently pending against Facebook based on its alleged misuse of facial recognition data.
Who knows what extremes we’ll go to in order to camouflage ourselves in a world where facial surveillance is the norm? Lines of clothing and other devices designed to confuse facial surveillance technology are already being released, and no doubt there’s more of that to come. In fact, the very thought gives the 1997 movie Face/Off newfound relevance. No wonder there’s a reboot in the works.
The bottom line: The future is already here, folks, and we’re all hapless participants in this reckless social experiment. Welcome to our newfound reality.
Nicole Black is a Rochester, New York attorney and the Legal Technology Evangelist at MyCase, web-based law practice management software. She’s been blogging since 2005, has written a weekly column for the Daily Record since 2007, is the author of Cloud Computing for Lawyers, co-authors Social Media for Lawyers: the Next Frontier, and co-authors Criminal Law in New York. She’s easily distracted by the potential of bright and shiny tech gadgets, along with good food and wine. You can follow her on Twitter @nikiblack and she can be reached at niki.black@mycase.com.