Greetings, and em, I hope you all had a really nice Monday! And Tuesday. And that Wednesday was decent. And Thursday is going ok for you so far 🙂
So, this newsletter is obviously very late this week. Apologies, but it has been a mad week.
There’s quite a lot in this week’s newsletter, but since we’re all thinking about digital surveillance, I’ll begin by flagging this joint statement on freedom of assembly and the misuse of digital technologies, prepared by the regional experts on freedom of assembly. This week the Metropolitan Police Chief also made a statement extolling the virtues of facial recognition, in particular retrospective facial recognition (which is absolutely not getting sufficient attention, and is – in effect – being rolled out quietly, while attention focuses on live facial rec, in the UK at least).
Having started with the serious international experts, I shall flag, without any comment whatsoever, this report by a British judge about how ‘jolly useful‘ ChatGPT was when writing a part of their judgment. Since we’re talking about the criminal justice system, WIRED also have a piece on prisoners being used to train AI models. The use of exploitative labour in this context is something that has featured in the newsletter before: if anyone knows any initiatives actually trying to counter this, I’d love to know. While on the subject of WIRED (so many neat segues today), this piece highlights some of the complexities arising from governments’ do-we/don’t-we deliberations over using generative AI.
Really interesting story in the Washington Post about a chatbot being used to help people seek safe abortions. In light of the surveillance and criminalisation of abortion in the US, it seems like a potentially really useful but at the same time quite risky proposition. On that note, I’m doing research into the chilling effects of digital surveillance, and this is an area where research could hopefully help surface people’s stories (the research would be interview focused). It is, of course, an exceptionally tricky area to research, but if anyone on here has any contacts with abortion providers in the US who might be interested, please do reach out.
In a rare move, there is a story on AI being used to catch fraudsters that doesn’t – are you sitting down? – target the most vulnerable people on social welfare. The Register has a piece on the IRS using AI to detect fraud amongst corporations and the wealthy.
So much chat today. I’m obviously usually much more awake on Thursdays!
QMUL are on local strike this week – to protest management taking 113 days’ salary from some staff (and counting) – I know, totally shocking union-busting shenanigans. So, in solidarity, today’s tune is… There is Power in a Union by Billy Bragg.
Be well. And hopefully I’ll be in your inbox Monday morning!
Thanks as ever to Sarah Z.
MIT Technology Review, AI can help screen for cancer but there’s a catch
MIT Technology Review, Robots that learn as they fail could unlock a new era of AI
The Washington Post, The abortion bot will see you now
The Washington Post, Online safety measures, altered DeSantis video and other news literacy lessons
Financial Times, The global race to set the rules for AI
The Register, The IRS is using AI to catch tax-dodging rich folks
Financial Times, UK researchers start using AI for air traffic control
Business & Human Rights Resource Centre, Australia: New code will require AI-made child abuse & terrorist material be removed from search results
The Strategist, Counterproliferation in the age of AI
Algorithm Watch, The AI Act and General Purpose AI
*Disclaimer: The following articles, chapters, and books have not been evaluated for their methodology and do not necessarily reflect the views of the AI & Human Rights Blog.
M. Chung, W. Moon and S. M. Jones-Jang, AI as an Apolitical Referee: Using Alternative Sources to Decrease Partisan Biases in the Processing of Fact-Checking Messages
M. Can Sati, The Attributability of Combatant Status to Military AI Technologies under International Humanitarian Law