Wouldn’t you want to shorten the time it takes to get through an airport, let concert venues spot potentially dangerous people, or open a safe with only your face, like in the movies? Airports, banks, Walmart, and even our own phones use facial recognition software for identification and security. With this technology that seemingly safeguards our most important information, why has the city of San Francisco banned it?
There are about 50 million surveillance cameras around the US, in addition to the millions of cameras we all carry in our pockets (1). As the technology’s capabilities advance, those cameras can become devices used for facial recognition. Facial recognition software can use video, photos, and real-time footage to detect a face, extract its features, and translate them into code that can be compared against a database to find a match. The databases used for comparison can come from photos users voluntarily upload for security purposes; they can also draw on mugshots, driver’s license photos, and passport photos. According to Georgetown Law’s Center on Privacy & Technology, approximately 117 million people’s photos are included in facial recognition databases (2).
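The matching step described above can be sketched in a few lines. This is a simplified, hypothetical illustration, not any vendor’s actual algorithm: it assumes each face has already been reduced to a numeric feature vector (an “embedding”), and the vectors, names, and threshold below are made-up values.

```python
import math

def euclidean_distance(a, b):
    """Distance between two face embeddings; smaller means more similar."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def find_match(probe, database, threshold=0.6):
    """Return the closest database entry under the threshold, or None."""
    best_name, best_dist = None, float("inf")
    for name, embedding in database.items():
        dist = euclidean_distance(probe, embedding)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None

# Toy database of pre-computed embeddings (e.g., from license photos).
database = {
    "alice": [0.1, 0.9, 0.3],
    "bob": [0.8, 0.2, 0.5],
}

probe = [0.12, 0.88, 0.31]  # embedding extracted from new footage
print(find_match(probe, database))  # prints "alice"
```

Note that the threshold choice is exactly where the misidentification problems discussed later come in: set it too loosely and the system returns confident but wrong matches.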
The biggest user of facial recognition software is the government. Law enforcement agencies use it for surveillance, to find suspects in crimes, and to track down crime victims. For example, in 2015 police used the technology to find and arrest protesters in Baltimore after the death of Freddie Gray (3). This use of facial recognition software was criticized for infringing on people’s First Amendment right to protest.
In 2019, San Francisco became the first city in the United States to ban facial recognition software for local public entities: “It shall be unlawful for any Department to obtain, retain, access, or use: 1) any Face Recognition Technology; or 2) any information obtained from Face Recognition Technology.” The ban only reaches city government agencies, including the San Francisco Police Department (4). It does not apply to state and federal agencies operating within the city, nor to private entities such as Facebook, Apple, and banking institutions.
San Francisco’s decision to ban the technology for local law enforcement agencies was meant to counteract its many known problems and its disproportionate impact on certain communities. The ordinance cites inequalities in surveillance technology and their effects on civil liberties and civil rights. The bill’s sponsor, Aaron Peskin, said its purpose is to “keep law enforcement from a burgeoning technology that has been blamed for inaccuracies — particularly when it comes to identifying minorities — and is largely unregulated in the United States.” (5)
In the Baltimore case, where facial recognition software was used to identify and arrest protesters after the death of Freddie Gray, the vast majority of the protesters were African-American, exercising their right to protest the death of an unarmed Black man in police custody. Many felt the use of the technology was another way for police to encroach on an already marginalized community and to further erode trust between the police and that community.
The technology itself has flaws that disproportionately affect certain communities: the software struggles to recognize women and people of color. MIT researcher Joy Buolamwini studies these weaknesses. Her testing of different facial recognition systems found an error rate of about 1% when identifying white men, but up to 30% when identifying women of color (2). The programs also have trouble properly identifying transgender people. These misidentifications have serious, real-life consequences, including incarceration, deportation, and even death.
There are also many concerns with how police departments use the technology. Facial recognition software works best when comparing high-quality, well-lit photos. Unfortunately, many police departments do not have access to such photos when searching for a suspect, so they use what they do have, including sketches and celebrity look-alikes (2). The technology cannot work properly for policing if these are the images used to identify an individual.
The main civil-liberties problem with facial recognition software is that it is used without consent and without any requirement of probable cause. The Fourth Amendment requires probable cause for a warrant to search a person or their property, and a person’s face arguably falls under that protection, yet there are no regulations as to when, or for what reasons, law enforcement agencies can use facial recognition.
China offers an example of how far facial recognition software can go in relation to civil liberties. One of the biggest implementers of the technology, the Chinese government monitors citizens through some 170 million CCTV cameras. In China, facial recognition enables citizens to withdraw money, check in at the airport, and pay for goods with a scan of their face. But while much of the technology streamlines daily tasks, it is also used for intense state surveillance, recently including the persecution of Muslim minorities (6).
Law enforcement agencies have pushed back on the San Francisco ban, citing the technology’s value for policing: software can identify a suspect far faster than paging through hundreds of mugshot books. It was useful to the FBI in 2014 in identifying a fugitive living in Nepal (7). It can also help government agencies identify disaster victims, allowing families to be informed and reunited (8). Still, even many agencies that argue for facial recognition acknowledge its negative impacts and welcome regulation, or even a moratorium, until the misidentification problems can be resolved.
Many of the issues with facial recognition can be solved over time, including teaching the software to better identify women, people of color, and transgender people through more diverse data sets and more contributors from those communities in the technology’s creation. There should also be more regulation, including requirements of consent and probable cause. Policing with facial recognition cannot be fully effective unless departments adopt policies dictating what types of images may be fed into the software and requiring additional evidence to prove a person’s guilt. Facial recognition can be a useful supplementary tool for convictions, but it should not be the sole source of evidence.
In addition to San Francisco, other governments in the US and around the world have begun regulating facial recognition software, though none to the extent of San Francisco’s ban. In 2008, Illinois passed the Biometric Information Privacy Act, which requires private entities to obtain explicit consent from consumers before using their biometric data, including their face (9). Illinois’ law pertains only to private entities and does not apply to the government. It served as a reference for the European Union’s General Data Protection Regulation (GDPR), which likewise prohibits the processing of personal data (including genetic and biometric data) unless “the data subject has given explicit consent to the processing of those personal data for one or more specified purposes” (10).
Oakland and the state of Massachusetts both have bills in the works, similar to San Francisco’s, that would ban the use of facial recognition software by government officials. A bill introduced in Congress by Senators Roy Blunt (R-MO) and Brian Schatz (D-HI) would also regulate the use of facial recognition software at the federal level (11).
There are many unknowns with this new technology, and San Francisco’s ban is just the start of a global conversation about the uses and impacts of facial recognition. I will be keeping an eye out for new developments and forthcoming legislation. Governments need to catch up to ever-growing, ever-evolving technology and its impact on citizens’ lives.
References:
1. Gonzalez, Oscar. “Millions of Surveillance Cameras Could Become AI Security Guards, ACLU Warns.” CNET, CNET, 13 June 2019, www.cnet.com/news/millions-of-surveillance-cameras-could-become-ai-security-guards-aclu-warns/
2. Garvie, Clare, et al. “The Perpetual Line-Up: Unregulated Police Face Recognition in America.” Center on Privacy & Technology, Georgetown Law, 18 Oct. 2016.
3. Facial Recognition Technology — Hearing before the Committee on Oversight and Reform. House of Representatives, 116th Congress (2019).
4. San Francisco Board of Supervisors, Administrative Code – Acquisition of Surveillance Technology. 190110, Enacted 7/1/2019
5. Thadani, Trisha. “SF Could Ban Facial Recognition Software – Opinion Is Divided over Whether That’s Good.” SFChronicle.com, San Francisco Chronicle, 14 May 2019, www.sfchronicle.com/politics/article/SF-could-ban-facial-recognition-software-13842657.php.
6. The Economist, director. China: Facial Recognition and State Control | The Economist. YouTube, YouTube, 24 Oct. 2018, www.youtube.com/watch?v=lH2gMNrUuEY.
7. “Long-Time Fugitive Neil Stammer Captured.” FBI, FBI, 12 Aug. 2014, www.fbi.gov/news/stories/long-time-fugitive-neil-stammer-captured.
8. Broach, John, et al. “Use of Facial Recognition Software to Identify Disaster Victims With Facial Injuries.” Disaster Medicine and Public Health Preparedness, vol. 11, no. 5, 2017, pp. 568–572., doi:10.1017/dmp.2016.207.
9. Illinois General Assembly, Civil Liabilities: Biometric Information Privacy Act. (740 ILCS 14/) 2009
10. EU General Data Protection Regulation (GDPR): Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016. Chapter II Article 9.
11. United States. Cong. Senate. Commercial Facial Recognition Privacy Act of 2019. 116th Cong. S.847.