22 April 2026

Good morning friends, colleagues, everyone else! It’s an absolutely gorgeous spring morning in London (for now at least).

The big news this week, in the UK at least, is the judgment in Big Brother Watch’s challenge to the Met police’s use of live facial recognition. Thompson and Carlo was handed down at 2pm yesterday, which is a roundabout way of saying I haven’t had a chance to read it in detail. I have, however, read it. And so, some initial thoughts.

The first, and most obvious, takeaway is that this is very clearly a huge win for the Met, at least from a PR perspective. I think there will be some very content people in the facial recognition department, and perhaps even a few sore heads today? 

The second takeaway, which is perhaps not so immediately obvious from the media coverage, is that this judgment is far from clear on the actual human rights compliance of live facial recognition. So while this will be presented as a ‘win’ by the Met (and fair enough), what it actually means vis-à-vis the lawfulness of LFR is completely unsettled, in my opinion.

Let me explain, if I can. With all appropriate caveats.

The key feature of the judgment is that it is incredibly narrow. It essentially focused on one component of the ‘in accordance with the law’ test, namely whether the Met police’s LFR policy was sufficiently ‘foreseeable’ and placed sufficient limits on officer discretion. The judgment did not address whether the use of LFR was ‘necessary in a democratic society’.

What this means, essentially, is that the court held that the Met’s use of LFR was foreseeable (i.e. not arbitrary), but it did not comment on whether it was ‘necessary’. And necessity is really the key issue with respect to human rights compliance. Necessity essentially asks: can the Met’s use of LFR be justified? This was not addressed. And so this leaves a massive unanswered question. Well, a few unanswered questions.

The Court looked only at a component of the ‘in accordance with the law’ test, foreseeability/discretion. It did not look at whether the law (in this case the Met’s policy) limited the use of LFR to what is ‘necessary in a democratic society’. This is quite unusual, at least from a European human rights law perspective. For instance, in Catt v UK (§107), the Court skipped an analysis of the underlying law (which is essentially similar to that in Thompson, and on which it expressed reservations) and jumped straight to necessity. In Glukhin (§78), which is the only case at the European Court of Human Rights to directly address facial recognition, the Court examined the ‘in accordance with the law’ and ‘necessity’ questions together. Addressing in accordance with the law and necessity together makes a huge amount of sense. The purpose of the law is, or at least should be, to limit police action to that which is necessary (thereby ensuring human rights compliance).

So what the judgment does say is that the Met’s policy is not arbitrary. What it does not address is whether it regulates human rights compliant use of LFR. This remains to be seen.

Some of the things not addressed include: what categories of persons (i.e. in relation to what crimes) can be placed on a watchlist (i.e. for what crimes can the use of LFR be considered necessary); what the level of suspicion should be (this is a nearly endless question, and brings in HUGE discretion issues); and whether there should be a likelihood that a person placed on a watchlist will be at the location. Indeed, this last point highlights the problems with not addressing necessity. The Court in effect said that, as long as one person on the watchlist is likely to be at an LFR deployment location, then the police can add as many other people to the watchlist as they want (i.e. around 19,000 more people). The – flawed – reasoning underpinning this is that the human rights impact on passersby is the same, irrespective of whether you are looking for one person or 19,000 (passersby get scanned either way). This is only one half of the equation, however. It does not address whether it is necessary to add a person to the watchlist in the first instance. This is really problematic. Necessity applies to those on and off the watchlist. And it highlights the problem with a lack of necessity analysis.

In effect, at the moment, the police are allowed to set the crime types by policy (so at their discretion, albeit not at the level of individual officers’ discretion) and then add to the watchlist anyone who is suspected of having committed, committing, or intending to commit one of those crimes. Irrespective of whether they have any connection whatsoever to the actual area of deployment. This is how you end up with watchlists that are now approaching 20,000 people.

The Court also failed to engage – in my opinion – with the changing nature of digital surveillance, and how facial recognition can be just as intrusive as other forms of identification – such as fingerprinting or DNA – but without the need for actual physical interaction. Indeed, facial recognition is arguably significantly more invasive, as it can be done remotely, and at mass scale. I’ve commented more on that in an article in the Modern Law Review (free, here). The courts are bringing an analogue solution to a digital problem.

The question is why the complainants constructed their case so narrowly. On this I have absolutely no idea. Perhaps it was because the Met’s initial iteration of their policy was so appalling (they updated it during the case) that they were hoping for an ‘easy’ win. The Court was, however, very dismissive of a lot of their arguments. And not addressing necessity as part of the in accordance with the law test just seems very bizarre.

Who knows.

Oh, have just seen BBW have a statement. And are planning to appeal. Given how narrow the initial complaint was, I wonder what will come of this. Interested in your thoughts.


What I do know is that I am over my word count (significantly) for a newsletter. So enough out of me!

This week I’ll leave you with ‘Without You’ performed by CMAT at the Hootenanny a few years ago. CMAT is incredible. This performance is absolutely unreal.

PS Mythos is obviously the other big news. See below.

Science Direct, Why states are failing to rein in the spyware market

Financial Times, Smartphones to be banned in schools in England

Science Delivers, Get the Data

Financial Times, Ukraine’s drone pilots hit Russian targets from 500km away

The Hill, As AI pushes students to reconsider majors, universities struggle to adapt 

Information Labs, Is a Social Media Ban a Plan? We Built a Repository to Find Out

The Hill, Tensions over AI reach new high after violent attacks

EDRI, Europe shouldn’t “move fast and break things” with fundamental rights

Rest of World News, The Mexican security company with a $1.27 billion surveillance empire

Privacy International, What is digital fingerprinting: Is my device ever truly anonymous?

Digital Rights, India scraps mandatory Aadhaar app plan for smartphones after industry pushback

EFF, Palantir Has a Human Rights Policy. Its ICE Work Tells a Different Story (am available to consult at extortionate fees 🙂 )

Rest of World News, Why AI alone cannot fix social problems (wait. Back up).

SMEX, Will national roaming actually keep Lebanon connected in wartime?

BBC, Met considering using AI to help online child sexual abuse cases 

Washington Post, ‘That wasn’t me’: How facial recognition led to a woman being jailed for 6 months

WIRED, Meta Is Warned That Facial Recognition Glasses Will Arm Sexual Predators

BBC, Biometric checks to be rolled out in prisons after mistaken releases 

BBC, Challenge over Met Police’s use of live facial recognition lost 

WIRED, The Shocking Secrets of Madison Square Garden’s Surveillance Machine 

BBC, Met police trials snoop tech platform in push to cuff more London shoplifters

Financial Times, The Mythos cyber scare signals the economics of AI scarcity 

Financial Times, Latest AI models could threaten world banking system, financial officials warn

The Register, Claude Opus wrote a Chrome exploit for $2,283

Financial Times, Anthropic chief Dario Amodei: ‘I don’t want AI turned on our own people’

Washington Post, Anthropic CEO visits White House amid hacking fears over new AI model 

Financial Times, The risks of Mythos are no myth

WIRED, It Takes 2 Minutes to Hack the EU’s New Age-Verification App

Washington Post, The therapist in your pocket: Chatty, leaky — and AI-powered 

The Register, Just like phishing for gullible humans, prompt injecting AIs is here to stay

WIRED, A Humanoid Robot Set a Half-Marathon Record in China (mad videos)

New York Times, Putin’s Army of Drones

The Guardian, Anthropic investigates report of rogue access to hack-enabling Mythos AI
