27 March 2023

Can’t believe it’s 27 March. This year has gone way too quickly.

You may have seen some of the fake images generated over the last few weeks, including this really incredible one of the Pope. It seems like quite a big jump in quality over the last little while, something covered in The Washington Post. Worrying as this is, Bloomberg then has a piece on how video may be the next frontier.

The Guardian has another spin around the AI/bias roundabout, with a story on how AI hiring algorithms could lead to, or solve, bias in recruitment…

WIRED has an interesting piece on learning from public health to guide AI regulation, although a lot of the issues raised – propaganda warning – may be addressed through a human rights-based approach to AI.

Speaking of propaganda, I’ve co-authored a piece on the chilling effect of surveillance, linked below (open access). This is the first in a series looking at the chilling effect of surveillance and what it means for human rights protections. I think this is really exciting work, and something that will hopefully play a role in properly examining the ‘potential harm’ side of the equation vis-à-vis AI impact assessments.

Hope ye all have a great week, thanks as ever to Sarah Zarmsky, and since it’s Monday morning, I’ll leave ye with ‘On a Monday Morning’.

AI’s Powers of Political Persuasion, HAI at Stanford University

Fake images of Trump arrest show ‘giant step’ for AI’s disruptive power, The Washington Post

Chatbots, deepfakes, and voice clones: AI deception for sale, FTC 

What will AI regulation look like for businesses?, AI News

Generative AI’s Next Frontier Is Video, Bloomberg

Hoping for the Best as AI Evolves, Communications of the ACM

AI in the Public Interest: Education and Democracy, Communications of the ACM

These new tools let you see for yourself how biased AI image models are, MIT Technology Review

Art(ificial intelligence) imitates life: IP infringement risks presented by Generative AI, DLA Piper

Robot recruiters: can bias be banished from AI hiring?, The Guardian

AI expert Meredith Broussard: ‘Racism, sexism and ableism are systemic problems’, The Guardian

OpenAI rolls out ChatGPT plugins, granting iffy language model access to your apps, The Register

To Hold Tech Accountable, Look to Public Health, WIRED

AI isn’t magic or evil. Here’s how to spot AI myths., The Washington Post

A.I. Can’t Write My Cat Story Because It Hasn’t Felt What I Feel, The New York Times

Who Will Take Care of Italy’s Older People? Robots, Maybe., The New York Times

Tech guru Jaron Lanier: ‘The danger isn’t that AI destroys us. It’s that it drives us insane’, The Guardian

AI tools are generating convincing misinformation. Engaging with them means being on high alert, The Conversation

The A.I. Chatbots Have Arrived. Time to Talk to Your Kids., The New York Times

A US Agency Rejected Face Recognition—and Landed in Big Trouble, WIRED

TechScape: The AI tools that will write our emails, attend our meetings – and change our lives, The Guardian

Hospital to test AI ‘copilot’ for doctors that jots notes on patient care, The Register

Google and Microsoft are bringing AI to Word, Excel, Gmail and more. It could boost productivity for us – and cybercriminals, The Conversation

French parliament says oui to AI surveillance for 2024 Paris Olympics, The Register

Journal Articles 

*Disclaimer: The selected articles and chapters were not evaluated for their research methods and do not necessarily reflect the views of the AI & Human Rights Blog.

Can AI infringe moral rights of authors and should we do anything about it? An Australian perspective, Rita Matulionyte, Law, Innovation and Technology

‘I started seeing shadows everywhere’: The diverse chilling effects of surveillance in Zimbabwe, Amy Stevens, Pete Fussey, Daragh Murray, Kuda Hove and Otto Sake, Big Data & Society
