Only a few years ago, facial recognition technology seemed like science fiction, but now it’s very much a reality. I mentioned in a recent blog a conference I attended about three years ago where the facial recognition technology on show worked, but not reliably. Since then it has improved significantly and is now being used by police forces and other organisations to protect the national infrastructure – at our borders and at other places of significance, such as power stations.
It was only in August this year that the Metropolitan Police used this technology at the Notting Hill Carnival – using a database comprising individuals who were forbidden from attending the Carnival, as well as individuals wanted by the police who, it was believed, might attend the Carnival to commit offences. It was well publicised and one of a number of measures to keep revellers at the Carnival safe. Whilst it did attract some press interest, it was seen as a good use of technology.
As this technology advances, I can see its uses growing rapidly beyond policing: organisations using it to control access to buildings, advertisers using it to identify a person’s gender and change adverts on billboards accordingly, and retailers using it to identify ‘high-worth’ shoppers to give them special treatment.
Facial recognition is based on the premise of matching faces against those stored on a database. These databases must be accurate and kept up to date – principle 12 of the Surveillance Camera Code of Practice covers this in detail. So it worries me when I hear stories of organisations using facial recognition – where are they getting their images from? Equally, some police databases hold images of people who have never been convicted of, or even charged with, a crime – people who are not known criminals. Why are they on a database? Should they be?
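As a rough illustration of that premise (a toy sketch, not any force’s actual system), matching typically works by reducing each face image to a numeric ‘embedding’ and comparing it against the embeddings held on the watch-list database. All names, vectors and the threshold below are invented for illustration – real systems use neural networks producing vectors with hundreds of dimensions.

```python
import math

def euclidean(a, b):
    """Distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Toy watch-list: person ID -> pre-computed face embedding.
# (Illustrative values only.)
watchlist = {
    "person_A": [0.10, 0.80, 0.30],
    "person_B": [0.90, 0.20, 0.50],
}

def match(probe, threshold=0.25):
    """Return the closest watch-list entry within the threshold, else None."""
    best_id, best_dist = None, float("inf")
    for pid, emb in watchlist.items():
        d = euclidean(probe, emb)
        if d < best_dist:
            best_id, best_dist = pid, d
    return best_id if best_dist <= threshold else None

print(match([0.12, 0.78, 0.31]))  # close to person_A -> matched
print(match([0.50, 0.50, 0.50]))  # no confident match -> None
```

The threshold is the crux: set it too loosely and innocent passers-by are flagged as watch-list matches; too tightly and genuine matches are missed – which is exactly why the accuracy and currency of the underlying database matter so much.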
Do people know they are on these databases, or that automatic facial recognition is being used? That is what I find concerning. Generally, people support the use of CCTV, but when that camera is linked to an algorithm matching their face against a database, does that support wane? I think it probably does.
I’m by no means a Luddite – we should be using technology to keep people safe, but only where it’s done in a transparent and open way. You only have to look at the backlash against the use of automatic facial recognition by Leicestershire Police at the Download Festival last year. Prior publicity about its use was minimal, people didn’t know why it was being used, and the media focused on that rather than on the rock bands playing at the festival. If people don’t know why a new technology is being used, they begin to criticise it fairly quickly.
As this technology becomes more widespread, organisations must learn from incidents like this to make sure it’s used effectively, efficiently and transparently.