Police use of facial recognition technology must be governed by stronger legislation

Courtesy of Joe Purshouse, University of East Anglia and Liz Campbell, Monash University

Automated facial recognition technology has been used at a number of crowd events in England and Wales over the past two years to identify suspects and prevent crime. The technology can recognise people by comparing their facial features in real time with an image already stored on a “watch list”, which could be from a police database or social media account.
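In broad terms, such a system converts each detected face into a numerical "embedding" and compares it against the embeddings of watch-list images, flagging a match when the similarity clears a chosen threshold. Below is a minimal sketch of that comparison step in Python; the embedding inputs, threshold value and function names are illustrative assumptions, not any police force's actual system:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, ranging from -1 to 1."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_watch_list(face_embedding: np.ndarray,
                             watch_list: dict[str, np.ndarray],
                             threshold: float = 0.6) -> str | None:
    """Return the watch-list identity whose embedding is most similar to
    the detected face, provided it clears the threshold; otherwise None.
    The threshold is a policy choice: lowering it catches more genuine
    matches but also flags more innocent passers-by."""
    best_name, best_score = None, threshold
    for name, reference in watch_list.items():
        score = cosine_similarity(face_embedding, reference)
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```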

Such technology is becoming increasingly popular with police forces around the world. Where successful, it can have positive and headline-grabbing effects – for example, tracing missing children in India. But facial recognition technology is controversial, with research showing that it can be inaccurate and discriminatory. San Francisco is even considering a complete ban on its use by police.

Several British police forces have ongoing facial recognition trials. Our new research into the legal challenges posed by police use of facial recognition technology suggests that, going by the data made publicly available, arrests are few and far outweighed by the number of incorrect matches made in live public surveillance operations. This creates a risk that innocent people may be stopped and searched, which can be a daunting experience.

Such trials are also costly. South Wales Police received a £2.6m government grant to test the technology, and the Metropolitan Police has so far spent over £200,000 on its ongoing trial.

The police have also been criticised for questionable practices in the use of facial recognition technology. The Metropolitan Police built and used a watch list of “fixated individuals” on Remembrance Sunday in 2017. Reports suggest these people were identified, in some cases, on criteria relating to their mental ill-health, raising concerns that the technology was used in a discriminatory manner.

In June 2017 at the UEFA Champions League final in Cardiff, South Wales Police reportedly deployed facial recognition technology using low-quality images provided by the football governing body, UEFA, and the system produced more than 2,000 false positive matches. Its accuracy improved in subsequent deployments, but false positive matches still frequently outnumber successful identifications.
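The arithmetic behind such figures is worth spelling out: when a system scans tens of thousands of faces, very few of whom are on a watch list, even a small per-face error rate produces false alerts that swamp the genuine ones. A back-of-the-envelope sketch in Python, using assumed numbers for illustration rather than any force's actual figures:

```python
# Base-rate arithmetic with assumed, illustrative numbers.
crowd_size = 100_000          # faces scanned at a large event
on_watch_list = 50            # people present who are genuinely wanted
true_positive_rate = 0.90     # chance a wanted face is correctly flagged
false_positive_rate = 0.001   # chance an innocent face is wrongly flagged

true_alerts = on_watch_list * true_positive_rate                    # 45
false_alerts = (crowd_size - on_watch_list) * false_positive_rate   # ~100

precision = true_alerts / (true_alerts + false_alerts)
print(f"Genuine alerts: {true_alerts:.0f}")
print(f"False alerts:   {false_alerts:.0f}")
print(f"Share of alerts that are genuine: {precision:.0%}")         # ~31%
```

Even with the optimistic rates assumed above, roughly two out of every three alerts would point at an innocent person.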

Impact on human rights

When justifying their use of facial recognition technology in terms of its effectiveness in crime control and prevention, senior police figures tend to suggest they are mindful of human rights concerns, and that their deployments of the technology are lawful and proportionate. However, the courts have not yet tested these claims, and parliament has not debated the appropriate limits of police use of this technology.

Facial recognition technology breaches social norms of acceptable conduct in public space. When in public, we might expect to be subject to a passing glance from others, including police officers. But we expect to be free from sustained or intensive scrutiny, involving cross-referencing back to our social media feeds. Facial recognition technology allows the police to extract such personal information from us, and use this information in ways we cannot control.

The limited independent testing and research conducted so far indicates that numerous facial recognition systems misidentify ethnic minorities and women at higher rates than the rest of the population. South Wales Police has suggested, without publishing a detailed statistical breakdown, that its system does not suffer from these drawbacks. Despite calls from the scientific community for rigorous testing of the performance of facial recognition systems, the Metropolitan Police has not published how its system has performed relative to the gender, ethnicity or age of those subject to its use.

This creates a risk that minority groups, who are already arrested at much higher rates than white people, will be further over-policed following false positive matches.

Need for tighter regulation

As questions over its accuracy remain, it's too early for the police to be using facial recognition surveillance in live policing operations. Accuracy isn't the only issue with the technology, though, and as it improves it's important to think about how it should be regulated. While police deployments of facial recognition technology must comply with the Data Protection Act 2018 and the Surveillance Camera Code of Practice, these legal regimes don't provide guidelines or rules specifically regulating its use by the police. As a result, the regulatory framework gives little guidance on the proper threshold at which inclusion on a watch list is lawful.

In their trials, police forces have been collecting, comparing and storing data in different ways. In 2018 the UK’s Information Commissioner expressed concern about the absence of national-level co-ordination and a comprehensive governance framework to oversee facial recognition deployment. Most images used to populate watch lists are gathered from police databases, often from when people are taken into custody. There is a particular risk that people with old and minor convictions, or even those who have been arrested or investigated but have no convictions at all, may find themselves stigmatised through facial recognition surveillance.

Given the impact of facial recognition technology on human rights, its use by police should be limited to serious crimes or threats to public safety, rather than being deployed as pervasively as public CCTV currently is. Inconsistent practices between police forces also point to the need for a tighter regulatory framework, one that keeps watch lists small and sets quality requirements for the systems used and for how watch-list images are compiled and stored. As some police forces have already begun to embrace facial recognition surveillance, legislators must keep pace so that human rights are respected.

Joe Purshouse, Lecturer in Criminal Law, University of East Anglia and Liz Campbell, Francine McNiff Professor of Criminal Jurisprudence, Monash University

This article is republished from The Conversation under a Creative Commons license. Read the original article.
