It sounds like a book that the genius neurologist, the late Dr. Oliver Sacks, might have written, but it’s a true surveillance story that caught the attention of industry professionals last week. Suppliers, manufacturers and installers at the Global MSC conference on surveillance and AI came together to share their expertise, experience and knowledge in a dynamic, pre-lockdown-style gathering. Presenters from oversight, research, policing and local authority backgrounds posed some challenging questions, of themselves as much as of others, and one that kept resurfacing was, predictably, the acceptability of smarter surveillance technology within our lives. Less predictable was the news report telling how a traffic enforcement camera had just mistaken a pedestrian in a knitted jumper for a car. Having decided that the word on the woman’s sweater was a number plate, the camera system issued an immediate fine to the bewildered registered keeper of a vehicle hundreds of miles away for driving in a bus lane; his cherished plate is vaguely similar to the word “knitter” as displayed on the pedestrian’s woollen jumper. While somehow farcically reassuring that the rise of the smart machine to replace us all is perhaps further off than some might think, the incident reminded us that, while human surveillance decision-making tends to be approximately right, when the tech fails it is precisely wrong. (This is also one of the five reasons why “if you’ve done nothing wrong you’ve nothing to worry about” is probably the poorest surveillance reassurance you can receive – but back to the conference…)
The camera that unbelievably mistook a woman’s jumper for a car was a whimsical aside from the floor but raised some profound questions for the surveillance community at the conference. The programme for this return to ‘old school’ conferencing was energetic and interactive after more than a year of Teaming and Zooming and Webexing. It covered a range of more conventional topical subjects around AI and surveillance with exhibitors and sponsors showcasing what can now be done with the state-of-the-surveillance-art.
The principal surveillance question that refuses to go away was naturally facial recognition (plainly not fitted to the traffic camera), and the conference coincided with significant public interest in the use of the technology in schools. I’ve said many times that, in my view, there’s nothing intrinsically wrong with facial recognition technology – live, retrospective or otherwise – and in the context of law enforcement it has a legitimate role to play, like any other tactical option. We’ve been using facial recognition for a while now in our private lives, and in other settings it offers some wonderful opportunities to enrich people’s lives, from ensuring the safety of care home residents with dementia to helping visitors navigate around museums and receive a commentary on the exhibit they’re looking at. The exam question this week was the proper place for live facial recognition within a school setting. It’s an interesting one, particularly as I have no direct responsibility for this aspect of intrusive surveillance, but the answer remains the same. As ever, it’s the use and the user that are the key, and we know from elsewhere in the world that this type of surveillance technology can be used in some very sinister ways under the banner of ‘education’.
The conference considered the regulation of public space surveillance more widely. In England and Wales, if the police or local authorities want to use surveillance cameras in public space they have a legal duty to follow the government’s Code of Practice. That’s because Parliament recognised the difference between your decision to sign up to new technology for your own personal convenience and the State’s using it without your express – or even implied – agreement. The Code says that if there’s a less intrusive way of achieving the same objective, the police or local authority should use it. The same rule would apply in schools: if you can achieve largely the same result by using, say, a PIN or QR code, you should.
There are two real challenges with biometric surveillance technology. The first is that it’s so useful that, once you install it for one function, you’ll probably want to expand it into other obvious areas. This is why your mobile phone has become the digital Swiss Army knife. Organisationally, if you’ve accepted the case for facial recognition as an appropriate tool in one transactional area – say, paying for school meals – there will be other equally compelling use cases involving the same considerations and the same arguments: paying for other things like school events or equipment, or the never-ending fundraising. It’s difficult to resist those arguments if you’ve already accepted them once; it’s even harder to resist them when you’ve invested good money in the technology and it can be expanded readily and cheaply. Of course, once you’ve installed the new system, you no longer need the old analogue ways of doing things – tills, cashiers, desk sergeants – and so you strip them out. This means there is no “less intrusive” alternative now, because you got rid of it, which (artificially) strengthens your case for moving to the new technology in any future settings. This combination of ‘function creep’ and self-fulfilling proportionality is not so much a problem in your own home, but a little more significant when it’s your school or local authority or the police. Even in the school scenario there is an inequality of bargaining power in any consideration of whether to ‘opt in’ at the start. What does ‘opting out’ really look like? Is it a once-and-for-all choice, or will it come round again in light of experience with the new technology? You can reverse out of many consumer decisions if you change your mind; can you in this case? What are the implications for your opted-out child standing in the dinner queue with their peers?
It’s easy to see how tasks such as registration and attendance or safeguarding needs in schools would benefit from facial recognition capabilities but ‘function creep’ would mean you inevitably arrive at a point where the camera in the dining area can recognise who's playing Joseph in the nativity play. With the police your bargaining position may feel even less balanced. You also have to factor in the tech going wrong – sometimes in ways you hadn’t predicted.
The second challenge with biometric surveillance is that it’s so discussible. People tend to have a view on what’s acceptable in the use of intrusive technology for different purposes, and when it’s done by the State with all its apparatus of enforcement some feel surveillance monitoring to be highly questionable – especially as the Government doesn’t yet follow its own Surveillance Camera Code; when children are involved, the sensitivities and risks are amplified. The law has also recognised the specific vulnerability of children and young people when you capture and retain their biometrics – in fact this is the reason behind the Protection of Freedoms Act that created my roles and functions almost 10 years ago.
In a cautionary tale of how viral narratives can take some recovering from, even when disproven beyond all doubt, Professor Martin Innes told the Parable of the Zombie Raccoons and how fake-news panic had created not one but two social media frenzies in the US. The attendant lessons on how a great story will not always be quieted by proven facts began to make me doubt the camera-and-jumper story.
But the facts from the last decade show a huge increase in public surveillance. Measured in cameras-to-people, London was recently ranked the 3rd most surveilled city on Earth, with an estimated 691,000 cameras for 9,425,622 people – 73.31 cameras per 1,000 population; measured in cameras per square mile, that’s 1,138.48 cameras, making London the 2nd most surveilled city in the world. Add in mobile camera platforms such as drones and wearable devices and it gets more speculative – and when privately owned and operated cameras are factored in, it’s anyone’s guess (literally). This increase in surveillance has been matched by the increase in public attention it has drawn. Much harder to quantify, public concern still counts. It doesn’t lend itself to worries-per-1,000 parents or miles-by-anxious-motorist, but it is as real and present as the cameras in our streets, bus lanes, workplaces and schools.
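For the curious, the two rankings above are internally consistent, which a quick back-of-the-envelope check confirms (the figure of roughly 607 square miles for Greater London’s area is an assumption of mine, not something stated at the conference):

```python
# Back-of-the-envelope check of the reported London surveillance figures.
cameras = 691_000
population = 9_425_622

# Cameras per 1,000 people, as quoted in the ranking.
per_1000 = cameras / population * 1000
print(f"{per_1000:.2f} cameras per 1,000 people")  # 73.31

# The quoted density of 1,138.48 cameras per square mile implies an area
# very close to Greater London's roughly 607 square miles.
implied_area = cameras / 1138.48
print(f"implied area: {implied_area:.0f} square miles")
```

The two published figures, in other words, describe the same camera count viewed through two different denominators.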
Conference attendees also heard the noise that doorbells are currently making and how their inbuilt cameras are often pointing at more than just the owner’s property. Last week’s county court judgment against a householder who was being sued by his neighbour for interfering with her privacy raised some further, highly topical surveillance discussion. The expansion in privately owned cameras has brought with it a corresponding increase, not just in the sharing of stories but in the sharing of images, with the police and other bodies, either of the citizen’s own volition or in response to the now ubiquitous appeals for dashcam, GoPro, doorbell or other captures. When it needed a fallible human with a finite attention span to monitor and analyse it, all this newly enabled aggregated surveillance data had limited practical use: there is simply too much of it. But advances in video analytics and in systems for combining, categorising and editing these datasets now allow very significant uses of this new surveillance capability. Again, that’s not a bad thing in itself: every day there are significant investigations of heinous criminality that have been greatly helped by this pooled technology, and there will be many more ahead.
As the conference also heard, the rush towards omniveillance means shifting from on-premises solutions to ‘the cloud’ (a brilliantly conceived fluffy euphemism for putting your data and your faith in someone else’s computer). This suggests a future where real-time biometric surveillance allows the State to crowdsource video data from companies and public services (like schools), adding in CCTV feeds and AI capabilities. These technological advances will coalesce to allow commercial businesses and householders to ‘plug’ their cameras into police and local authority networks, offering the power for total public surveillance.
From the debates and presentations we are, it seems to me, already building a dependency on aggregated surveillance imagery in high-harm areas such as terrorism and serious organised crime – our public services’ reliance on it may soon mean that CCTV forms part of the country’s Critical National Infrastructure, or at least our critical local infrastructure. And like most of the established critical national infrastructure, it’s largely in private ownership. When the Surveillance Camera Code of Practice was first published, the BSIA put the ratio of private to public cameras at 70:1; it is a reasonable hypothesis that this imbalance has only grown since. Security and intelligence journalist Philip Ingram led us through a fascinating presentation about the critical role that public space surveillance played in the investigation into the Novichok poisoning of UK citizens in Salisbury by Russian agents, and spoke of Chinese cameras being installed around the country with ‘hidden’ latent functionality such as the ability to read number plates (and, presumably, our clothing). Against this background, the risks highlighted by the Foreign Secretary over the weekend about dependency within parts of our critical national infrastructure begin to move closer to our high streets, our hospitals and our schools.
If we attain it – out of necessity or inadvertence – ‘total public surveillance’ will be delivered by the commercial sector operating under managed contracts with public services such as the police and local authorities, augmented by citizen-generated data feeds. But to what standards, and at what cost? The risks from cyber attack and other security-related issues are central to the professional manufacture, installation and operation of surveillance systems and are directly addressed in the Surveillance Camera Code. Companies are expected to conduct and report the results of technical penetration tests, but what about the ethics of operators and suppliers? How many carry out ‘pen testing’ of their ethical and corporate social responsibility arrangements at all, let alone with the same degree of transparency that they apply to their technical systems?
Closing with a panel session covering a range of challenging issues – technological, homological and biometric – the conference left me better informed and wanting to know more. I’m very grateful for having been invited to take part in this excellent event, and I’m still smiling at the baffled motorist in Surrey who felt he’d been ‘stitched up’ by the camera that mistook a jumper for his number plate.