
03.05.2023

Facial recognition policing: It’s not just about fixing algorithmic inaccuracies

Focusing on the role of private technology companies and on operational procedures, PhD researcher Tyler Dadge highlights several key questions about the police’s use of facial recognition that are yet to be answered, and argues that algorithmic accuracy is just one issue amongst many when it comes to police use of this technology

On 5 April 2023, the National Physical Laboratory (NPL) released the findings of its study into the accuracy of NEC’s Neoface V4, the facial recognition software that both South Wales Police and the Metropolitan Police (the Met) use in their deployments of Live Facial Recognition (LFR), Retrospective Facial Recognition (RFR) and Operator Initiated Facial Recognition (OIFR). The NPL found that RFR and OIFR had a True Positive Identification Rate (the rate at which an unknown face is correctly matched with a known face in the database) of 100%, and that there was no significant difference in this rate by gender or ethnicity. In simple terms, the algorithm matched faces correctly every time, and your gender and/or ethnicity had no real effect on how accurate it was. This finding is positive and does address some of the concerns people have about the police using facial recognition technology. However, it is worth noting that not all the findings in the report were this positive, and the technology still has some way to go before it is accurate across the board: LFR, for example, was accurate only 89% of the time.
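For readers who want to see what sits behind a figure like “100%”: the True Positive Identification Rate is simply the share of probe faces, whose identities really are in the database, that the system matches to the right person. The short Python sketch below illustrates the arithmetic with invented data; nothing in it comes from the NPL report or from NEC’s software.

    # Illustrative sketch of a True Positive Identification Rate (TPIR)
    # calculation. The probe data below is invented for this example and
    # has no connection to the NPL study or to NEC's Neoface V4.

    # Each probe is a face shown to the system whose identity genuinely is
    # enrolled in the watchlist; matched_id is the identity the algorithm
    # returned (None means it reported no match at all).
    probes = [
        {"true_id": "A", "matched_id": "A"},   # correct identification
        {"true_id": "B", "matched_id": "B"},   # correct identification
        {"true_id": "C", "matched_id": None},  # missed match
        {"true_id": "D", "matched_id": "A"},   # wrong identity returned
    ]

    # TPIR = correctly identified probes / all enrolled probes
    true_positives = sum(p["matched_id"] == p["true_id"] for p in probes)
    tpir = true_positives / len(probes)
    print(f"TPIR: {tpir:.0%}")  # 2 of 4 correct -> "TPIR: 50%"

Note that, by definition, this metric only covers people who are in the database; how often the system wrongly flags people who are not in it is a separate error type, measured separately.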

Let’s imagine we live in a world in which whatever facial recognition algorithm the police use, in whatever setting, is never inaccurate (we probably aren’t far from this being reality). On the surface, this could make police use of facial recognition seem fairly appealing and unproblematic. Yet there are still concerns that have nothing to do with the accuracy of the algorithms, and they can be seen in many other areas of policing.

Private technology companies

Private companies providing services to police forces aren’t new: the forensic sciences, the Police National Database (PND) and police drones are just a few examples. Each has its problems, but one overarching problem is dependency on these private companies, and the police’s use of facial recognition will be no different. Once we create a dependency on facial recognition, and on the company that provides the system, we are in the hands of a commercial market. As with the PND, access to most of these services requires the payment of licensing fees, which are set by the technology companies. If those fees exceed what is deemed acceptable by way of public funding, and the decision is made to stop using the facial recognition system, what happens to our data? Do the facial images and the recordings of deployments belong to the police or to the technology companies? Are there already agreements in place regarding the data protection rights over our images? If so, what are they? Will these agreements be regulated? Will new regulations need to be created to meet this need? Can the algorithm used to apprehend a criminal be disclosed as part of the right to a fair trial, or is it bound by trade secrets? Are private technology companies allowed trade secrets when their product is being used by the police? The police haven’t answered any of these questions yet, nor have they made any attempt to have a public discussion about these concerns.

Operational procedures

Facial recognition algorithms do not operate in a vacuum; they exist as part of policing procedures. Yet there is a lack of clarity as to what this policing process is, and it often varies across police forces. Facial recognition provides a wide range of assistance in policing: it can help provide intelligence in an investigation, but it can also help officers identify and arrest wanted criminals. There has been no national agreement about how facial recognition is used and why, nor are there plans to have the relevant discussions, with the result that police forces take different stances. This is a vital discussion that needs to be had, and it is arguably more important than ensuring algorithmic accuracy. Without a clear understanding of what the police want to achieve from their use of facial recognition, we cannot begin to understand what our rights as citizens are. It also makes it hard to understand how facial recognition can be used in line with policing principles, especially concepts like proportionality. Do I have the right to refuse to walk past an LFR camera? (The Met have fined people for covering their faces when passing these cameras.) Can I request that another method of identification be used, such as fingerprinting? Can I request information about the images used to make a match against me: where the images were obtained, who provided them, and how old they are? What happens to my data if no further action is taken? What if the data is out of date and I am wanted for an old offence? Again, the police have not satisfactorily answered these questions.

The questions raised here are just some of those that have not been answered, or that seem not to have been carefully thought about or publicised, and, worryingly, they don’t seem to be on any agenda. Most importantly, these questions have nothing to do with algorithmic accuracy. The way the Met presented the NPL’s findings in a tweet as justification for their use of facial recognition demonstrates that they don’t fully understand the scope of the public’s concerns around the police’s use of the technology. Algorithmic accuracy is a valid concern which needs addressing and continual monitoring, but assuming it is the primary concern simplifies a complex issue with many different aspects that need consideration and public discussion. Reducing the debate about police use of facial recognition to the accuracy of the algorithm does little to build trust and support, especially during a period when trust in the police is in short supply. Little research has been done into the police’s use of facial recognition technology that would help to address these questions, and with a police force that seems focused on the technology itself rather than on how it will change policing, the questions could remain unanswered for a while.


All blogposts are published with the permission of the author. The views expressed are solely the author’s own and do not necessarily represent the views of StopWatch UK.

For more information on facial recognition technology and its use by the police, check out Big Brother Watch’s ‘Stop facial recognition’ campaign and Liberty’s webpage ‘What is facial recognition and how do we stop it?’

About the author

Tyler Dadge is a PhD researcher.
