UK Secretly Scans Passport/Immigration Data with Facial Recognition
The UK Home Office stands accused of a staggering lack of transparency after it was revealed that police forces have been secretly conducting facial recognition scans against vast databases containing passport and immigration photographs. This clandestine practice, which has seen a dramatic surge in recent years, has sparked fierce condemnation from privacy campaigners who brand it “astonishing,” “dangerous,” and “Orwellian.”
Investigations, primarily driven by Freedom of Information requests from groups like Big Brother Watch and Privacy International, have brought to light that the UK government has allowed images from its passport and immigration databases to be made available to facial recognition systems without public or parliamentary knowledge. These databases collectively hold an estimated 150 million photographs: approximately 58 million headshots from biometric passports and a further 92 million from immigration records and visa applications. These holdings far exceed those of the Police National Database (PND), which contains around 20 million images, predominantly of individuals who have been arrested or are of police interest.
The scale of this covert surveillance is escalating rapidly. Police searches against the passport database jumped from just two in 2020 to 417 in 2023, while scans using immigration database photos rose from 16 in 2023 to 102 in 2024, more than a sixfold increase. Both groups have written to the Home Office and the Metropolitan Police, urgently calling for a ban on the practice. They argue that converting millions of innocent citizens’ passport photos into a police facial recognition database without explicit consent or a clear legal basis constitutes an “historic breach of the right to privacy.”
Critics highlight the severe risks of misidentification and potential injustice inherent in such systems, particularly given the lack of robust safeguards. The absence of a dedicated legal framework for facial recognition in the UK has long been a point of contention, with existing deployments operating under general data protection and human rights principles alongside non-binding guidance. A landmark 2020 Court of Appeal ruling in Bridges v South Wales Police already found police use of live facial recognition unlawful due to “fundamental deficiencies” in the legal framework, underscoring the pressing need for defined legal parameters.
While the Home Office has indicated it is working towards a formal policy and the Home Secretary has expressed a desire for a “clear legal framework,” no bill has yet been published. This comes amidst broader debate about the pervasive spread of facial recognition technology, not just in policing but also in schools and retail. The Metropolitan Police, for instance, has recently announced plans to more than double its live facial recognition deployments, citing budget cuts and a need to tackle serious crime, a move that has further alarmed civil liberties groups.
Experts at the Ada Lovelace Institute have warned that the UK’s fragmented approach to biometric governance creates a dangerous legal grey area, eroding public trust and accountability. The European Union, by contrast, adopted the AI Act in 2024, which bans real-time remote biometric identification in publicly accessible spaces except in narrowly defined law enforcement circumstances, putting additional pressure on the UK to clarify its own stance.
This revelation about secret database scans intensifies the urgent call for comprehensive legislation. Without a clear, statutory framework and genuine parliamentary oversight, millions of individuals remain subject to a surveillance capability that operates largely in the shadows, raising profound questions about privacy, civil liberties, and the democratic accountability of state power.