10 April 2024

Greetings all, and happy Wednesday. Although in an exceptionally (exceptionally!) rare move I am actually preparing this on Tuesday. Obviously out of keen enthusiasm and not a desire to procrastinate.

The big AI news of the week kind of slipped in just as I was publishing the newsletter last week, and that was the reporting on Israel’s use of AI to facilitate its conduct of hostilities in Gaza. You can read the underlying reporting (it’s worth your time) at +972 mag, and there are some summaries below, including this one from the Washington Post.

This reporting raises, and makes real, a number of (what were previously only) concerns regarding the military use of AI. These include: the compatibility of a known error rate with law of armed conflict rules on the principle of distinction and the presumption of civilian status in cases of doubt; inevitable issues with the accuracy of training data and therefore the reliability of predictions; the more pervasive surveillance/predictive tech component linked to the generation of pattern-of-life profiles (and, again, the problems with basing targeting decisions on this probabilistic analysis); issues with the speed of AI driving the speed of operations; and questions around the role of a ‘human in the loop’.

This revelation – coming as it does a few months after revelations regarding ‘Gospel’ (an AI system for generating military objectives (things, not people)) – has prompted a number of discussions vis-a-vis the importance of regulating military AI. Some of these can be found at the Opinio Juris symposium on military AI. To be honest, I’m skeptical. New rules may bring focus and clarity, and that may be a good thing. But – at least in terms of addressing the issues raised in +972’s reporting – existing law of armed conflict and international human rights law rules are applicable and should be enforced. There is always the risk that emphasizing the need for new rules downplays the applicability and relevance of, and the obligation to adhere to, existing rules.

One thing that this reporting has made dramatically clear, however, is that the near-exclusive focus on lethal autonomous weapons systems was misplaced – it is the wider use of AI in a military and intelligence context that should be of immediate concern.

For anyone interested, I have a piece on a human rights (and law of armed conflict) based approach to military AI coming out in a month or so.

OK. That was a bit more than intended. But I think it is a really important story as it makes real a lot of the concerns around AI. Obviously, no discussion of AI and related concerns should detract from the absolute horror of the reality in Gaza.

Kind of hard to run through the other stories after that, so I’ll leave you to peruse the list yourself. I will flag a couple of open source-related stories (one of which, of course, prompts this aural intervention), and an interesting ‘AI is not AI’ story which highlights a lot of the outsourcing, offshoring, and exploitation angles of AI.

Also want to flag this really interesting read on the use of tech to police the US-Mexico border. Really well done.

Hope you all have a wonderful week.

Today’s sign-off song is ‘Yoshimi Battles the Pink Robots’ by the Flaming Lips, because it seems on point, it’s a great song, and I saw them play the anniversary show for this album a couple of months ago. It’s a great album, brings me back to California in 2002. Which is nice.

Stay well.

Websites

Models All The Way Down (good piece, highlights problems with training data, including SAE)

WIRED, Yogurt Heist Reveals a Rampant Form of Online Fraud (and other stories, handy roundup)

Financial Times, Elon Musk predicts AI will overtake human intelligence next year (what do you think?)

Financial Times, OpenAI and Meta ready new AI models capable of ‘reasoning’

Randomly posting this paper on how LLMs are actually not as good as they claim: [2404.01261] FABLES: Evaluating faithfulness and content selection in book-length summarization

Gizmodo, Amazon Ditches ‘Just Walk Out’ Checkouts at Its Grocery Stores  (coincidental placement)

Reuters, Facial Recognition is Helping Putin Curb Dissent with the Aid of US Tech 

MIT Tech Review, The Download: How China plans to regulate AI

The Guardian, Chinese mourners turn to AI to remember and ‘revive’ loved ones

New York Times, Teen Girls Confront an Epidemic of Deepfake Nudes in Schools

New York Times, Are AI Mammograms Worth the Cost? 

Jacobin, The Grim High-Tech Dystopia on the US-Mexico Border 

The Register, Naver debuts HyperCLOVA X LLM (Asian language LLMs)

The Guardian, Digital trail identifying Israeli spy chief has been online for years 

Bellingcat, Kinahan Cartel: Wanted Narco Boss Exposes Whereabouts by Posting Google Reviews

Business Insider, I tried out the new smart mirrors in H&M’s fitting rooms in Soho. I liked the freedom it gave me, but I see room for even greater potential.

Open Rights Group, Home Office CCTV: free mass surveillance? 

Business Insider, US and UK jointly commit to safety testing AI models, without explicit mention of human rights

The Strategist, How the Australian Border Force can exploit AI 

Defense One, Lawmakers want answers from Pentagon on AI developments with Australia, UK

The Washington Post, Israel Offers a Glimpse into the Terrifying World of Military AI

UPI, Israel defends using AI database Lavender of alleged Hamas targets 

The Conversation, AI will not revolutionize business management but it could make it worse 

WIRED, Students Are Likely Writing Millions of Papers With AI 

The Washington Post, A $400 toothbrush is peak AI mania

The Conversation, AI may develop a huge carbon footprint, but it could also be a critical ally in the fight against climate change 

Towards Data Science, Chronos: The Rise of Foundation Models for Time Series Forecasting

MIT Tech Review, It’s easy to tamper with watermarks from AI-generated text

Blog Posts

Opinio Juris, Symposium on Military AI and the Law of Armed Conflict: Introduction (there are lots of interesting posts here)

JustSecurity, Is Generative AI the Answer for the Failures of Content Moderation? 

DLA Piper, The first Czech case on generative AI | Technology’s Legal Edge 

Academic Literature

*Disclaimer: The following articles, chapters, and books have not been evaluated for their methodology and do not necessarily reflect the views of the AI & Human Rights Blog

International Affairs, Global AI governance: barriers and pathways forward

Big Data & Society, Imaginaries of democratization and the value of open environmental data: Analysis of Microsoft’s planetary computer

International Review of Law, Computers & Technology, Generative AI and deepfakes: A human rights approach to tackling harmful content

International Review of Law, Computers & Technology, Facial recognition surveillance and public space: protecting protest movements

Scientific American, Can AI Replace Human Research Participants? These Scientists See Risks (no shit)
