Greetings all, and happy absolutely gorgeous spring day in London!
Sorry for the radio silence the last while, I’ve been absolutely flat out working on a big report (more soon), while also swanning about to quite a few interesting events. Hi to everyone I met, and thanks for the invites!
In personal news, I am now part of the Centre for AI & Digital Policy’s Global Academic Network (with some very impressive people), so I’m quite looking forward to seeing what that involves over the next while. And speaking of AI – and smooth segues – Yvonne (at TRUE) and others have released a new (if depressing) report on how chatbots are reshaping violence against women and girls.
The (quite blatantly) illegal war on Iran has also been a focus of AI-related chat. I haven’t been able to find a huge amount of detail (if you have, please share), but the Washington Post has a piece on Claude’s role, essentially noting that it was used both to identify and to prioritize targets. The article reports that targets were developed for months prior to the attack (as you would expect). This is notable, as the infamous attack on the school in Minab occurred on the first day of the war (see HRW’s release). From an IHL perspective, what is really concerning is that if AI was involved in this attack, it demonstrates unbelievably serious flaws in the target verification process – particularly because day-one targets were presumably selected far in advance, with extensive time for review. These targeting ‘errors’ are therefore only likely to multiply as a conflict drags on and the time for review shrinks from months or weeks to, presumably, minutes or seconds (as was apparently the case in Gaza).
Also in the context of conflict is this piece in The Atlantic, on the role of AI-generated images and their impact on the information ecosystem (trust, essentially). See also this piece on how the companies involved aren’t AI firms, they’re defence contractors, with the line ‘Gaza was the laboratory. Minab is the market.’
The EHRC have released a summary of their intervention in The King (on the application of Thompson and another) v Commissioner of Police of the Metropolis, the case taken against the Met police over their use of facial recognition technology. It was really good to see so many aspects of their intervention, notably the focus on watchlist construction (the who and the why), the discussion of chilling effects, and the reference to the UN Model Protocol and the limitations it establishes around surveillance in the context of protests. Privacy International’s response is also here. I’m afraid I haven’t read it yet though (it is on the list, I have literally just seen it).
It is noteworthy that, while the judicial review is still pending and a government consultation is underway, the rollout of facial recognition continues apace. See the BBC on the use of operator-initiated facial recognition.
And that’s it from me, I’ve got to prepare for something that hopefully I’ll fill ye in on next week.
I’ll leave you with Long Lankin by Alasdair Roberts. Because someone described it to me as a palate cleanser (you know who you are) and we probably all need a palate cleanser every now and then.
Be well. Stay Lovely.
— —
True Project, New Research on Chatbots and Violence against Women and Girls
Computer Weekly, Revealed: How HMRC has been quietly building surveillance capabilities
New York Times, OpenAI Reaches A.I. Agreement With Defense Dept. After Anthropic Clash
404 Media, CBP Tapped Into the Online Advertising Ecosystem To Track Peoples’ Movements
INSS, AI Use in Operation Roaring Lion
ICO, ICO writes to Meta over ‘concerning’ AI smart glasses report
Washington Post, Anthropic’s AI tool Claude central to U.S. campaign in Iran, amid a bitter feud
The Atlantic, The Fake Images of a Real Strike on a School
The Guardian, These aren’t AI firms, they’re defense contractors. We can’t let them hide behind their models
The Hill, Anthropic clash with Pentagon fuels government surveillance fears
The Hill, OpenAI sued over Canada school shooting
BBC, Metropolitan Police to trial handheld facial recognition devices
SMEX, Digital Rights Amid a Regional War: Threats and Responses
Financial Times, UK high streets turn to facial recognition in fight against shoplifting
Open Rights Group, Home Office use of AI in asylum cases likely to be unlawful, legal opinion finds
BBC, Mistaken arrest victim from Southampton says police were laughing
BBC, Police make first facial recognition arrest in Bradford
BBC, Norfolk Police vows to be open about facial recognition use
BBC, Manchester sex offender caught by live facial recognition
The Conversation, Is someone watching you? Facial recognition tech is here and Canada offers little privacy protection
Financial Times, Why it’s hard for humans to have the final say over AI (The FT gets on to the problems with the human in the loop)
The Conversation, Iran war shows how AI speeds up military ‘kill chains’
Opinio Juris, (Ir-)Responsible by Design? Corporate Guardrails and the Governance of Military AI
Just Security, Human Rights at Risk in the Sprint Toward AI Sovereignty
Just Security, Iranian Attacks on the Amazon Data Centers: A Legal Analysis