(eeeek) it's the 18 September 2023 newsletter. Finally.

Greetings, and em, I hope you all had a really nice Monday! And Tuesday. And that Wednesday was decent. And Thursday is going ok for you so far 🙂

So, this newsletter is obviously very late this week. Apologies, but it has been a mad week.

There’s quite a lot in this week’s newsletter, but since we’re all thinking about digital surveillance, I’ll begin by flagging this joint statement on freedom of assembly and the misuse of digital technologies, prepared by the regional experts on freedom of assembly. This week the Metropolitan Police Chief also made a statement extolling the virtues of facial recognition, in particular retrospective facial recognition (which is absolutely not getting sufficient attention, and is – in effect – being rolled out quietly, while attention focuses on live facial rec, in the UK at least).

Having started with the serious international experts, I shall flag, without any comment whatsoever, this report about a British judge saying how ‘jolly useful’ ChatGPT was when writing part of their judgment. Since we’re talking about the criminal justice system, WIRED also have a piece on prisoners being used to train AI models. The use of exploitative labour in this context is something that has featured in the newsletter before: if anyone knows any initiatives actually trying to counter this, I’d love to know. While on the subject of WIRED (so many neat segues today), this piece highlights some of the complexities arising from governments’ do we/don’t we dilemma over using generative AI.

Really interesting story in the Washington Post about a chatbot being used to help people seek safe abortions. In light of the surveillance and criminalisation of abortion in the US, it seems like a potentially really useful but at the same time quite risky proposition. On that note, I’m doing research into the chilling effects of digital surveillance, and this is an area where research could hopefully help raise people’s stories (the research would be interview focused). It is, of course, an exceptionally tricky area to research, but if anyone on here has any contacts with abortion providers in the US who might be interested, please do reach out.

In a rare move, there is a story on AI being used to catch fraudsters that doesn’t – are you sitting down? – target the most vulnerable people on social welfare. The Register has a piece on the IRS using AI to detect fraud amongst corporations and the wealthy.

So much chat today. I’m obviously usually much more awake on Thursdays!

QMUL are on local strike this week – to protest management taking 113 days’ salary from some staff (and counting) – I know, totally shocking union-busting shenanigans. So, in solidarity, today’s tune is… There is Power in a Union by Billy Bragg.

Be well. And hopefully I’ll be in your inbox Monday morning!

Thanks as ever to Sarah Z.

Telegraph, British judge uses ‘jolly useful’ ChatGPT to write ruling

WIRED, The Twisted Eye in the Sky Over Buenos Aires 

The Guardian, Self-publishers must declare if content sold on Amazon’s site is AI-generated 

Insider, AI has a big, dirty problem that is tarnishing Big Tech’s environmental image 

WIRED, AI Chatbots Are Invading Your Local Government—and Making Everyone Nervous 

AI News, White House secures safety commitments from eight more AI companies 

AP News, AI project imagines adult faces of children who disappeared during Argentina’s military dictatorship 

WIRED, These Prisoners Are Training AI  

MIT Technology Review, AI can help screen for cancer but there’s a catch 

MIT Technology Review, Robots that learn as they fail could unlock a new era of AI 

Council of Europe, European Parliamentary Association learns about the Council of Europe AI policy   

UNESCO, Designing Institutional Frameworks for the Ethical Governance of AI in the Netherlands

The Washington Post, The abortion bot will see you now

The Washington Post, Online safety measures, altered DeSantis video and other news literacy lessons  

WIRED, The AI Detection Arms Race Is On

Financial Times, The global race to set the rules for AI 

The Guardian, Paedophiles using open source AI to create child sexual abuse content, says watchdog

WIRED, AI-Powered ‘Thought Decoders’ Won’t Just Read Your Mind—They’ll Change It 

The Register, The IRS is using AI to catch tax-dodging rich folks 

Financial Times, UK researchers start using AI for air traffic control 

The Guardian, Facial recognition could transform policing in same way as DNA, says Met chief

EDRi, The Stop Scanning Me movement organised a mass protest in Berlin against dangerous surveillance law       

Business & Human Rights Resource Centre, Australia: New code will require AI-made child abuse & terrorist material be removed from search results  

Privacy International, Judgment says that UK cannot digitally spy on people outside its borders without accountability  

Defense One, How China could use generative AI to manipulate the globe on Taiwan 

The Strategist, Counterproliferation in the age of AI    

Reports

Algorithm Watch, The AI Act and General Purpose AI 

Academic Literature

*Disclaimer: The following articles, chapters, and books have not been evaluated for their methodology and do not necessarily reflect the views of the AI & Human Rights Blog

M. Chung, W. Moon and S. M. Jones-Jang, AI as an Apolitical Referee: Using Alternative Sources to Decrease Partisan Biases in the Processing of Fact-Checking Messages

M. Can Sati, The Attributability of Combatant Status to Military AI Technologies under International Humanitarian Law

Subscribe to our weekly newsletter