Greetings! From a wet and wintry London. So glad I didn’t go to southern Spain for my week off.
After a couple of weeks of ‘what we all wanted’, whether we’re Nick Cage fans or teenage boys, this week we have the ‘wow, I’m not sure I want that’ in the form of a robot boxing some lad. It’s pretty impressive, and worth a watch, but also pretty freaky. I should note, this popped up on Insta and I made no effort to verify it. Then I thought, this blog has standards (it does). So I turned to Google, and it turns out it’s an AI tool to replace a human actor. So, generative AI, but not the end of the world. Still, it looks pretty cool.
I haven’t yet read this article on the digital traces we leave behind. But it looks really interesting, and is now on my January reading list.
I have read this article on retrospective facial recognition, and I highly recommend that you do too! It’s an article I wrote on RFR and the idea of compound human rights harms, where multiple human rights are impacted simultaneously (say, through a surveillance-induced chilling effect), and which – as it stands – poses a challenge for our traditional approach to human rights law and judicial review. It’s setting the ground for more theoretical work on the nature of human rights, and how they protect (or could protect) the social processes essential to the free development of personality and democratic functioning.
Oh, I can’t recall if we included this Lighthouse Reports piece on ‘France’s digital inquisition‘, about a massive fraud-detection algorithm deployed in France, based on profiling approximately half of the population. Obviously, it’s only used to target white-collar fraud.*
Huge thanks, as always, to Sarah Zarmsky. We’re here next week, then on a hiatus, but likely back on the 3rd.
This week’s tune is E.V.P. by Blood Orange, because it has mad unexpected Parliament vibes.
Be well.
Amnesty International, EU: Bloc’s decision to not ban public mass surveillance in AI Act sets a devastating global precedent
Brookings, Why mental health apps need to take privacy more seriously
Cryptopolitan, Exclusive: Can AI Help Find A Middle Road for Data Privacy?
Infosecurity Magazine, ICO Warns of Fines for “Nefarious” AI Use
Future of Privacy Forum, A Blueprint for the Future: White House and States Issue Guidelines on AI and Generative AI
Nature, Is AI leading to a reproducibility crisis in science?
AI News, Absci and AstraZeneca forge AI partnership to discover cancer treatments
MIT Technology Review, Google Deepmind’s new Gemini model looks amazing—but could signal peak AI hype
MIT Technology Review, Medical microrobots that can travel inside your body are (still) on their way
MIT Technology Review, AI’s carbon footprint is bigger than you think
MIT Technology Review, Meet the 15-year-old deepfake victim pushing Congress into action
UNESCO, Empowering Congolese Judicial Operators with AI and the Rule of Law Training in Brazzaville
UNESCO, The UNESCO Business Council for the Ethics of AI was officially launched
The Register, EU agrees on Act that bans some AIs
WIRED, The EU Just Passed Sweeping New Rules to Regulate AI
The New York Times, Should A.I. Accelerate? Decelerate? The Answer Is Both.
The New York Times, Artificial Intelligence Is an Unreliable Narrator
The Guardian, ChatGPT exploded into public life a year ago. Now we know what went on behind the scenes
The Register, What are you feeding your AI?
The New York Times, Silicon Valley Confronts a Grim New A.I. Metric
The Guardian, AI firms ‘should include members of public on boards to protect society’
The Register, Getty’s image-scraping sueball against Stability AI will go to trial in the UK
The New York Times, How Nations Are Losing a Global Race to Tackle A.I.’s Harms
The Register, Tech world forms AI Alliance to promote open, responsible AI
Financial Times, For true AI governance, we need to avoid a single point of failure
Financial Times, Legal experts step up to defend wave of AI lawsuits
The Register, AstraZeneca bets $247M AI can create a cancer-fighting antibody
The Washington Post, A Brazilian city passed a law about water meters. ChatGPT wrote it.
The Washington Post, Europe reaches a deal on the world’s first comprehensive AI rules
Open Rights Group, DPDI Bill: New ‘welfare surveillance’ proposals target vulnerable people
Algorithm Watch, AI Act deal: Key safeguards and dangerous loopholes
Business & Human Rights Resource Centre, United Nations High Commissioner for Human Rights highlights issues with AI Act in open letter to EU
Algorithm Watch, AI Act drama: Illegitimate deals, irresponsible negotiation hours, and unacceptable pressure games
The National Interest, Don’t Tell Hollywood: You Have Little to Fear from a Rogue AI
Time, We Don’t Have to Choose Between Ethical AI and Innovative AI
Blog Posts
The Conversation, Technologies like artificial intelligence are changing our understanding of war
The Conversation, Israel’s AI can produce 100 bombing targets a day in Gaza. Is this the future of war?
The Conversation, The OpenAI saga demonstrates how big corporations dominate the shaping of our technological future
Academic Literature
*Disclaimer: The following articles, chapters, and books have not been evaluated for their methodology and do not necessarily reflect the views of the AI & Human Rights Blog
R. de Silva de Alwis, A Rapidly Shifting Landscape: Why Digitized Violence is the Newest Category of Gender-Based Violence
MediaWired (video), Watch AI Horizons: Ethics, Risks, and the Road Ahead
*You didn’t really think that, did you?