A plurality of Americans support the widespread use of facial recognition by law enforcement to monitor crowds and track down people who might have committed a crime. That’s one of the surprising findings from a new Pew survey on U.S. adults’ views of AI, which touched on topics including the use of AI by social media platforms to find misinformation and the development of AI-powered autonomous vehicles. Of those responding to the survey, nearly half say that they’re equally concerned and excited about AI, with those in support believing that AI’s ascent will transform industries and detractors expressing concerns about privacy and job loss.
Facial recognition, which has long been a flash point for controversy, reentered the public debate after the killing of George Floyd in May 2020. A 2021 report from the Government Accountability Office revealed that six federal agencies applied facial recognition to images of the ensuing protests, including the U.S. Park Police, which used a photo from Twitter to charge someone with felony civil disorder and two counts of assault on a police officer.
Despite an increasing number of bans on facial recognition at the local and state levels and a pledge from tech giants including Google, Amazon, Microsoft, and IBM not to sell access to the technology, governments — including the U.S. — continue to adopt facial recognition under the guise of maintaining law and order. In Detroit, which began piloting facial recognition software in 2017, police in 2020 used the technology to conduct upwards of 100 searches of suspects. Vendors like AnyVision and Gorilla Technologies are alleged suppliers for Taiwanese prisons and Israeli army checkpoints. And startup Clearview, which has scraped over 10 billion photos from the web to develop its facial recognition systems, claims to have 3,100 law enforcement and government customers, including the FBI and U.S. Customs and Border Protection.
Public opinion on facial recognition
Pew’s report, which surveyed 10,260 U.S. adults in early November 2021, found that roughly one-third — 34% — think the widespread use of facial recognition by officers would make policing more fair, despite evidence to the contrary. On the other hand, a majority — 57% — say that if facial recognition deployment by police were to become more common, crime rates would stay about the same. Moreover, 66% say that police would definitely or probably use facial recognition to monitor Black and Hispanic neighborhoods much more often than other neighborhoods (although Black and Hispanic adults are more likely than white adults to say this).
Recent history is filled with examples of facial recognition abuse, including software developed by Huawei that can reportedly recognize the face of a member of the Uyghur minority group. (The Chinese government continues to target Uyghurs, whom it accuses of subversion, imprisoning as many as two million in internment camps throughout the country.) At least three people in the U.S. — all Black men — have been wrongfully arrested based on poor facial recognition matches. Overseas, the facial recognition technology used by the U.K.’s Metropolitan Police in 2019 was found to be 81% inaccurate — four out of five people it flagged as wanted suspects were innocent.
Interestingly, only 53% of U.S. adults responding to the Pew survey say false arrests would “probably or definitely” be made if use of facial recognition technology was widespread among police. Black respondents were nearly three times as likely as white respondents to predict false arrests, while Hispanic respondents were close to twice as likely.
In the U.S., facial recognition use by police is likely to inflict particular harm on Black Americans, writes Alex Najibi, a Ph.D. candidate studying bioengineering at Harvard’s School of Engineering and Applied Sciences. Black Americans have a higher chance of being arrested and incarcerated for minor crimes than white Americans, he notes, and consequently, Black people are overrepresented in mugshot data — which facial recognition employs to make predictions.
“The Black presence in such systems creates a feed-forward loop whereby racist policing strategies lead to disproportionate arrests of Black people, who are then subject to future surveillance,” Najibi wrote in a 2020 blog post. “For example, the [New York Police Department (NYPD)] maintains a database of 42,000 ‘gang affiliates’ — 99% Black and Latinx — with no requirements to prove suspected gang affiliation. In fact, certain police departments use gang member identification as a productivity measure, incentivizing false reports. For participants, inclusion in these monitoring databases can lead to harsher sentencing and higher bails — or denial of bail altogether.”
Differences in views
Among the Pew survey respondents who believe facial recognition would be a force for good in police’s hands, the majority said that it would result in “more missing persons being found by police,” better crowd control, and “crimes being solved more quickly and efficiently.” There’s some reporting that supports this — Indian police in 2020 used a facial recognition app to reunite missing children with their families, according to Reuters — but even organizations embracing facial recognition for these purposes advocate regulatory guardrails or frameworks curtailing the technology’s use.
The survey explores this, with Pew finding that “substantial shares” of the respondents would find police use of facial recognition more acceptable if “certain conditions” were met — like training officers in how the technology can make errors in identifying people. Indeed, independent benchmarks of vendors’ systems by the Gender Shades project and others have revealed that facial recognition technologies’ biases can be exacerbated by misuse. A report from Georgetown Law’s Center on Privacy and Technology detailed how police feed facial recognition software flawed data, including composite sketches and pictures of celebrities who share physical features with suspects. The NYPD and others reportedly edit photos with blur effects and 3D modeling software to make them more conducive to algorithmic face searches.
As Wired’s Khari Johnson reports, some police departments have adopted policies governing their respective uses of facial recognition. In Detroit and New York, two analysts must review the results of a facial recognition scan before the results are turned over to detectives, and facial recognition alone can’t be used to justify an arrest. But best practices aren’t always followed. Clare Garvie, a former senior associate at Georgetown’s Center on Privacy and Technology, told Johnson that some law enforcement analysts in Nebraska and Florida were allowed to specify a lower facial recognition accuracy rate to find matches in police databases.
When it comes to applications of facial recognition that don’t involve law enforcement, like enhancing credit card payment security and controlling entry to apartment buildings, over half of people told Pew that they favor the use of the technology (outnumbering those who approve of its use by law enforcement). Conversely, over half oppose social media sites like Facebook automatically identifying people in photos and companies tracking the attendance of employees.
The public’s stances on facial recognition are likely to evolve further as the technology becomes commonplace — absent regulations. The Internal Revenue Service this year adopted — then backed away from — a plan to force taxpayers to use facial recognition software before they could gain access to certain online services. The Government Accountability Office reports that 10 federal agencies, including the Departments of Agriculture, Commerce, Defense, and Homeland Security, plan to expand their use of facial recognition between 2020 and 2023 as they implement as many as 17 different facial recognition systems. And by the Georgetown study’s estimates, half of American adults’ faces are already in law enforcement’s facial recognition databases.
According to documents obtained by The Washington Post, Clearview AI is telling investors that it’s on track to have 100 billion facial photos in its database within a year — enough to ensure “almost everyone in the world will be identifiable.”
“Notable portions of people’s lives are now being tracked and monitored by police, government agencies, corporations and advertisers … Facial recognition technology adds an extra dimension to this issue because surveillance cameras of all kinds can be used to pick up details about what people do in public places and sometimes in stores,” the coauthors of the Pew study write.