Have UK Police Gone Too Far With Facial Recognition Technology?

London’s Metropolitan Police has recently completed the roll-out of a new system called Retrospective Facial Recognition (RFR). RFR uses artificial intelligence technology to analyze video footage, automatically identifying suspects and persons of interest for detectives.

How RFR works

The role of RFR is quite simple – detectives provide the software with photos of the people they are interested in, along with video from any source, such as CCTV or social media. RFR scans the footage – often hundreds of hours' worth – to identify people who may match the specified targets. Detectives can then trace the movements and whereabouts of suspects, or build accurate timelines of the events leading up to a crime.
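To make that workflow concrete, here is a minimal sketch of how such a matching pipeline could work, using the open-source Python libraries face_recognition and OpenCV. The file names and the one-frame-per-second sampling rate are illustrative assumptions – the Met's actual RFR software is proprietary and not publicly documented.

# A minimal sketch of a retrospective face-matching pipeline.
# Assumes the open-source `face_recognition` and `opencv-python` packages;
# this is NOT the Met's actual system, which is not publicly documented.
import cv2
import face_recognition

# 1. Build encodings for the suspects detectives are interested in.
#    (Hypothetical file names; assumes each photo shows one clear face.)
suspect_photos = ["suspect_a.jpg", "suspect_b.jpg"]
known_encodings = [
    face_recognition.face_encodings(face_recognition.load_image_file(path))[0]
    for path in suspect_photos
]

# 2. Scan the footage, sampling roughly one frame per second to keep
#    hundreds of hours of video tractable.
video = cv2.VideoCapture("cctv_footage.mp4")  # hypothetical source file
fps = int(video.get(cv2.CAP_PROP_FPS)) or 25  # fall back if FPS is unknown
frame_index = 0

while True:
    ok, frame = video.read()
    if not ok:
        break
    if frame_index % fps == 0:  # one sampled frame per second of video
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV uses BGR
        for encoding in face_recognition.face_encodings(rgb):
            matches = face_recognition.compare_faces(known_encodings, encoding)
            if any(matches):
                print(f"Possible match at {frame_index / fps:.0f}s")
    frame_index += 1

video.release()

Even in this toy version, the final step is only a "possible match" flag – in practice a human investigator would still need to review each hit, which is exactly where the accuracy concerns discussed below come in.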

The police claim that RFR will help them analyze more video footage faster, allowing detectives to follow up more leads, improve the outcomes of investigations and reduce the cost of policing.

Why are people concerned about Retrospective Facial Recognition?

According to the Metropolitan Police, RFR should help to reduce crime and increase successful prosecutions, based on the perceived success of a related technology – Live Facial Recognition (LFR). LFR works on a similar principle – the software monitors live CCTV feeds in real time, scanning for specific individuals who have been flagged as “of interest” to the police.

LFR is useful in principle, but it has already been ruled unlawful in Wales, where judges found that its use was a serious infringement of personal privacy. The United Nations High Commissioner for Human Rights has also called for countries to delay new LFR deployments because the technology is considered particularly invasive.

Civil liberties campaign group Big Brother Watch is also concerned about the accuracy of LFR and RFR, pointing to several cases in the USA where people have been prosecuted and jailed based on misidentifications made by automated recognition software.

An ethical dilemma

The Information Commissioner’s Office (the body responsible for enforcing data protection laws in the UK) has reminded the Metropolitan Police that they must produce a Data Protection Impact Assessment (DPIA). This document outlines the potential risks to personal data and privacy and how the police will manage them.

The police insist that the new RFR technology will “ensure a privacy by design approach”. But without the DPIA, which has not yet been published, it is unclear exactly what this approach looks like. And so the concerns about privacy persist – particularly as the new software has been rolled out and is already in use in London.

There are also calls from politicians to draw up new legislation governing the way LFR and RFR are used. They want rules that ensure personal privacy is protected – and that appropriate safeguards are implemented for the benefit of citizens who may be “caught up” in an automated investigation.

The police are convinced the system works, but this latest technology upgrade raises an important ethical question – how much of our personal privacy are we willing to sacrifice to help police solve more crimes?
