Amid all the traditional pomp and ceremony for which we are known around the world, the recent coronation of King Charles III provided a glimpse into the future of policing, not just in this country but globally.
While an estimated global TV audience of around 300 million looked on, seeking out well-known faces in the crowd, the police's use of live facial recognition cameras made the event in London the largest public deployment of Artificial Intelligence (AI)-driven policing technology in British history, possibly even in world history. I am certainly not aware of any on a larger scale.
It is now widely known that it is the de facto policy of the Government to put this kind of AI-driven facial recognition at the very heart of British policing. Its use at the coronation may be a significant step along that road.
It is not my job as Biometrics and Surveillance Camera Commissioner to decide whether or not facial recognition should become a cornerstone of policing in this country, though I have certainly advocated for its accountable and proportionate deployment in appropriate circumstances. How the opportunities it offers should be defined and regulated are questions for parliament to decide on behalf of the citizen and for the courts where its use is challenged.
It is, however, part of my job to try to ensure that where the police do use facial recognition capability, its use complies with the very limited rules that already apply to public space surveillance by the police. It is also part of my role to draw attention to the fact that oversight and regulation in this increasingly important area of public life is incomplete, inconsistent and incoherent.
While the emergence of ChatGPT may have been the first time that many people really became aware of the power and potential of modern AI, the technology itself has actually been with us in many fields for some time now. But its use by the police has a particular significance – legally and societally – and live facial recognition of the kind deployed at the coronation is probably policing’s greatest technology challenge right now.
The AI systems in play here have made it possible to explore and exploit the vast pools of unstructured data generated by digital camera systems. Working at incredible speed, they can sift through previously overwhelming masses of source material, not only to pick out human faces and compare them against a reference image (such as a custody image taken by police when they arrested someone), but also to recognise events: a goal being scored in a football match, a car crash, a fight. Some systems can, with varying degrees of success, identify the gender, ethnicity, age and even emotional state of individuals caught on camera.
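At its core, this kind of matching typically reduces each detected face to a numeric vector (an "embedding") and compares it against a watchlist by similarity score. The sketch below illustrates the principle only; the function names, toy vectors and threshold are illustrative assumptions, not any real police or vendor system:

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the vector magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def match_face(probe, watchlist, threshold=0.8):
    # Return the best-scoring watchlist entry above the threshold, else None.
    best_name, best_score = None, threshold
    for name, reference in watchlist.items():
        score = cosine_similarity(probe, reference)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy embeddings stand in for the vectors a real face-recognition model would produce.
watchlist = {"custody_image_123": [0.9, 0.1, 0.4]}
print(match_face([0.88, 0.12, 0.41], watchlist))  # close match -> custody_image_123
print(match_face([0.1, 0.9, 0.2], watchlist))     # no match above threshold -> None
```

The threshold is where the policy questions discussed here become concrete: set it too low and the system produces false matches; set it too high and it misses genuine ones.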
It is undoubtedly powerful and newly intrusive technology. Some people are so worried about its impact on our human rights, and on our privacy in particular, that they want an outright ban on its use, not just by the police but also by commercial organisations. Realistically, that ship sailed some time ago; this AI technology is already here and already in use.
I am convinced that modern facial recognition, and other AI-driven biometric surveillance technologies in the pipeline, are potentially too useful an advance in the fight against crime and terrorism for us to turn our noses up at. And while the many legal issues have yet to be defined, let alone tested, some victims and their loved ones will not forgive us for eschewing legitimate tools that could have changed the outcome of events which devastated their worlds. Organised crime groups and terrorists already exploit this technology.
So, the question is no longer whether the police should use it, but how we can ensure that they use it well. How do we strike the right balance between the need to protect privacy and other rights and the imperative to fight crime effectively? What safeguards are needed so that the public can feel confident that, when the police do deploy facial recognition, they will do so in line with a set of sensible rules that parliament has agreed on behalf of the citizen?
The Government’s Surveillance Camera Code is the only legal instrument to address the police use of live facial recognition directly. Approved by Parliament last year, the amended Code’s purpose is “to enable operators of surveillance camera systems to make legitimate use of available technology in a way that the public would rightly expect and to a standard that maintains public trust and confidence.”
It is perhaps unsurprising, then, that the police, I and others are wondering why the Data Protection and Digital Information Bill is in the process of scrapping this enabling Code, which has public expectation and trust at its heart, without replacing it with anything.
The division of opinion over the police use of facial recognition in the UK possibly mirrors that of the coronation itself, but there is one aspect on which there is almost complete agreement: the urgent need for better oversight and regulation.
If the coronation heralds the start of a new era for our evolving relationship with the monarchy, maybe the deployment of AI-driven facial recognition at the event ought to mark the beginning of a fresh approach to our relationship with state surveillance that will increasingly rely on this powerful technology.
We may not be that far away from using an AI like ChatGPT to draft warrants or witness statements, yet we have only the scantiest regulatory framework under which those using the technology will be held to account. Personally, I favour a principles-based approach over a rigid, statutory one, so that it will be flexible enough to accommodate the new, related technologies that are likely to follow. Demonising the technology is irrational, and banning its use in every case invites operational disaster.
Some of our most challenging crime areas illustrate what a footrace between law and technology looks like, and the law is always hopelessly outrun. But whatever the final answer, the first steps must be to understand public attitudes and to settle the place of intrusive, AI-enabled surveillance by the state in the future.