AI & Human Rights Newsletter 5 December 2022

Greetings! Very chilly greetings if you are in or around London (or anywhere that's cold, I guess).

We have a bumper newsletter today. So. Much. Content.

As an experiment (thanks to Sarah Zarmsky), today’s post was written by ChatGPT, pasted below. It’s quite coherent, but very off topic. I suspect generation is difficult given the format; other examples of direct questions I’ve seen online have been much more effective. As I have said before, this type of text does bear a striking similarity to some essays received this year (this is actually covered by the Guardian, below, although I would question the use of the word ‘stuns’).

And so, without further ado, I give you our AI overlords…


‘We tested AI interview tools. Here’s what we found.’, MIT Technology Review

‘San Francisco will allow police to deploy robots that kill’, AP News

‘Argonne researchers win Gordon Bell Special Prize for adapting language models to track virus variants’, Argonne National Laboratory

‘What Does Meta AI’s Diplomacy-Winning Cicero Mean for AI?’, Communications of the ACM

‘Lawsuit Takes Aim at the Way AI is Built’, The New York Times

‘The Long Road to Driverless Trucks’, The New York Times

‘Now AI can outmaneuver you at both Stratego and Diplomacy’, TechCrunch

‘Monarch delivers its first robot tractor’, TechCrunch

‘Biotech labs are using AI inspired by DALL-E to invent new drugs’, MIT Technology Review

‘How to estimate and reduce the carbon footprint of machine learning models’, Towards Data Science

‘A Hacked Newsroom Brings a Spyware Maker to U.S. Court’, The New Yorker

‘A New Spin to Ethical AI: Trolley Problems with GPT-3’, Towards Data Science

‘Human creators stand to benefit as AI rewrites the rules of content creation’, MIT Technology Review

‘The AI myth Western lawmakers get wrong’, MIT Technology Review

‘AI systems: a power factor in the data protection balance of rights and interests’, RAILS

‘In defence of synthetic data: how synthetic data can be used as a privacy-enhancing technique during the early stages of an AI system’s lifecycle’, DLA Piper

‘AI bot ChatGPT stuns academics with essay-writing skills and usability’, The Guardian

‘OpenAI tweaks ChatGPT to avoid dangerous AI information’, The Register

‘Football’s VAR is a lesson in flawed technology’, Financial Times

‘How the Collapse of Sam Bankman-Fried’s Crypto Empire Has Disrupted A.I.’, The New York Times

‘Can police use robots to kill? San Francisco voted yes’, The Washington Post

‘San Francisco’s Killer Robots Threaten the City’s Most Vulnerable’, WIRED

‘Effective Altruism is Pushing a Dangerous Brand of ‘AI Safety’’, WIRED

‘Creative AI Is Generating Some Messy Problems’, The Washington Post

‘We built an algorithm that predicts the length of court sentences – could AI play a role in the justice system?’, The Conversation

‘US Justice Dept reportedly checking AI rent-pricing biz RealPage’, The Register

‘Spooky entanglement revealed between quantum AI and the BBC’, The Register

‘The TSA’s facial recognition technology, which is currently being used at 16 major domestic airports, may go nationwide next year’, Business Insider

‘How China’s Police Used Phones and Faces to Track Protesters’, The New York Times

‘TSA now wants to scan your face at security. Here are your rights.’, The Washington Post

‘Algorithmic elections: How automated systems quietly disenfranchise voters’, Algorithm Watch

Blog Posts

‘Three Individual Criminal Responsibility Gaps with Autonomous Weapon Systems’, Marta Bo, OpinioJuris

‘Is there a Right to be Protected from the Adverse Effects of Scientific Progress and its Applications?’, Andrew Mazibrada, EJIL: Talk!

‘Paris Peace Forum 2022 Selects UNESCO’s AI Capacity Building Project for the Judiciary’, UNESCO

‘Digital Services Act: Commission is setting up new European Centre for Algorithmic Transparency’, European Commission

‘The Guardian – Chinese state-owned surveillance company launches sinister ethnicity recognition tech while facing UK ban’, Big Brother Watch

‘The Future of Artificial Intelligence’, Human Rights Watch

Journal Articles

*Disclaimer: The selected articles and chapters were not evaluated for their research methods and do not necessarily reflect the views of the AI & Human Rights Blog.

‘Thirty years of Artificial Intelligence and Law: overviews’, Michał Araszkiewicz, Trevor Bench-Capon, Enrico Francesconi, Marc Lauritsen and Antonino Rotolo, Artificial Intelligence and Law

‘Should Using an AI Text Generator to Produce Academic Writing Be Plagiarism?’, Brian L Frye, Fordham Intellectual Property, Media & Entertainment Law Journal (forthcoming)

‘Alexa! Examine privacy perception and acceptance of voice-based artificial intelligence among digital natives’, Mehak Mittal and Sanjay Manocha, Journal of Information and Optimization Sciences

‘China’s Artificial Intelligence Ethics: Policy Development in an Emergent Community of Practice’, Guangyu Qiao-Franco and Rongsheng Zhu, Journal of Contemporary China

‘What would the matrix do?: a systematic review of K-12 AI learning contexts and learner-interface interactions’, Robert L Moore, Shiyan Jiang and Brian Abramowitz, Journal of Research on Technology in Education

‘How do people react to AI failure? Automation bias, algorithmic aversion, and perceived controllability’, S Mo Jones-Jang and Yong Jin Park, Journal of Computer-Mediated Communication
