Changing faces: The differing approaches to regulating facial recognition technology in the UK and the EU


Following recent UK government action on regulating artificial intelligence, there is one stark deviation from concurrent discussions in the European Union: the regulation of facial recognition and biometric technology.  

In June, the European Parliament adopted its position on the Artificial Intelligence (AI) Act, which contains a blanket ban on both real-time and retrospective facial recognition in public spaces, labelling the technology “intrusive and discriminatory”. The final text is now being negotiated with the Council of the EU, and despite some differences between the two institutions, it is likely to include some restrictions on the use of facial recognition technology in public areas.

By contrast, the AI White Paper released by the UK Government in March 2023 makes no mention of biometrics and explicitly states that the government will not be making “blanket new rules for specific technologies or applications of AI, like facial recognition”. It instead expects the use of biometric or facial recognition technology to adhere to its wider five principles for AI, including safety and transparency. Given that the two jurisdictions share similar legislative starting points, notably on data protection, where both apply the General Data Protection Regulation, the question is why their policy approaches diverge.

Across the European Union there are strong civil liberties interest groups, particularly within member states, that have been advocating against the use of facial recognition. This aligns with the prevailing view in Brussels that protecting civil liberties requires strong constraints on personal data collection. However, there is an equally vocal civil liberties contingent in the UK. Big Brother Watch has argued police use of facial recognition technology is “an enormous expansion of the surveillance state” and “dangerously authoritarian”. Likewise, the Open Rights Group described it as “our demise from democracy” and “a slippery slope”. The influence of these organisations was made clear in recent years when they, and others, secured significant amendments to online safety legislation.

Indeed, the fundamental difference from the EU is that in the UK civil liberties concerns are more associated with issues like freedom of expression and less with the protection of personal data, with the latter often trumped by matters of security. Despite guidance from the Information Commissioner’s Office that a strong policy framework is required, South Wales Police has deployed the technology at football matches, the London Metropolitan Police used it at the coronation celebrations, and just last month, Nottinghamshire Police launched a new trial with facial recognition technology attached to breathalyser devices to ensure offenders are staying sober and taking the test regularly. This is being driven by politicians such as the Minister for Crime, Policing and Fire, Chris Philp, formerly Minister for Tech, who has called for AI technology to be “embed[ded] in policing” and rolled out nationwide, and has even called for the integration of facial recognition technology with police body-worn video.

This policy approach is likely to be challenged in the coming months as part of debates around the Data Protection and Digital Information Bill, which looks to abolish the role of the Surveillance Camera Commissioner (and an associated code of practice). Concerns about this change will be voiced in the House of Lords, mostly by Liberal Democrat and crossbench peers. However, in the House of Commons, Philp’s view is characteristic of the broader sentiment within the Conservative Party, and with Labour prioritising a strong law and order message, data protection concerns similar to those in the Brussels policy discourse are less likely to alter the UK’s approach.


The views expressed in this note can be attributed to the named author(s) only.