29 January 2025

Greetings all, and a happy Wednesday! I hope ye all had a lovely Data Privacy Day yesterday. I certainly did.

It seems the AI & Human Rights world has woken up after some new year's slumber; there is a lot of news this week.

The big (massive? seismic?) news this week is obviously the release of DeepSeek, an open-source AI model built using far fewer, and less fancy, chips than is the norm. The New York Times has a good explainer. The price comparisons I heard were approximately $5.6 million to train DeepSeek, versus $100 million to $1 billion for the more 'traditional' models. If anything concrete comes from this, there has to be at least some hope that AI development will have a much less horrendous environmental impact. Of course, disruption is never plain sailing, and in addition to the usual hallucinations, DeepSeek also appears to have a penchant for propaganda/censorship, along with some data safety concerns. (Shocked gasp.)

Speaking of propaganda (and seamless segues), UNIDIR are hosting their first conference on AI, Security and Ethics. More details, including on paper submissions, are at the links. Looks like it's shaping up to be a very interesting event.

Also, on data safety concerns (oh! He’s on fire!) this is a really nice explainer on some of the privacy risks associated with AI models. I found it super helpful.

We welcome (!?) the return of the naughty facial recognition corner this week. Ireland is apparently including live (and retrospective) facial recognition in its programme for government, despite LFR previously being ruled off the table. There are, of course, concerns. I know it's not facial recognition, but there are reports that police in the UK are trialling odour and gait tech. Is that a whiff of concern I sense? (You can thank ChatGPT for that one.) The naughty corner (we used to call it the bold step) is slightly crowded this week, with reports on how Microsoft deepened its ties with the Israeli military to provide tech support during the Gaza war, which raises (at a minimum) some business and human rights concerns. No bold step would be complete, however, without Mark Zuckerberg. Meta announced the end of fact-checking on Facebook, Instagram, and Threads, along with a side order of more hate speech in the name of free speech. WITNESS have a comment. It will be interesting to see how those oversight board lads get on at RightsCon. Oh, and Trump revoked a Biden-era executive order on addressing AI risks.

Does that sound like a lot of not-very-good policy? Fear not! If you're in the UK, an AI tool will now allow ministers to get a 'vibe check' on whether MPs will like policies. I have to say, calling it a vibe check is a vibe all of its own.

Well, that's it from me this week. I've a few more hours in the office, and then I'm away for a long weekend under blue skies. Yay.


Huge thanks to Ameneh Deshiri for help with the newsletter over the last while.

I’ll leave ye this week with ‘Five Hundred Miles’, not by the Proclaimers (sorry), but by Mamman Sani, the Synth King of Niger. It’s beautiful, I think. And so are you all. Have a lovely week.

The Guardian, Revealed: Microsoft deepened ties with Israeli military to provide tech support during Gaza war

The Guardian, AI tool can give ministers ‘vibe check’ on whether MPs will like policies 

UNIDIR, Global Conference on AI, Security and Ethics 2025

Irish Times, AI advisory group warns of potential for mass surveillance  

Big Issue, DWP’s benefit fraud crackdown blasted as ‘one of the biggest assaults on welfare in a generation’

Five things privacy experts know about AI – Ted is writing things 

Tech Policy Press, What Trump’s Return Means For Encryption

The Guardian, ‘Just the start’: X’s new AI software driving online racist abuse, experts warn

The Guardian, What does AI plan mean for NHS patient data and is there cause for concern?

Wired, Your Next AI Wearable Will Listen to Everything All the Time

TechCrunch, ‘Free Our Feeds’ campaign aims to billionaire-proof Bluesky’s tech

TechCrunch, OpenAI quietly revises policy doc to remove reference to ‘politically unbiased’ AI

Social Samosa, How India’s DPDP Act could change digital campaigns

Futurism, American Psychological Association Urges FTC to Investigate AI Chatbots Claiming to Offer Therapy

Privacy International, Prosecuted for Protesting

KPMG, AI and Privacy: A Look at Biometric Tech & Data

Politico, Zuckerberg urges Trump to stop the EU from fining US tech companies

Charleston Southern Insights, Exploring The World Of Undressed AI: A Comprehensive Guide

Witness, Meta: We need better content protections (not less) in the age of deepfakes & AI

The New York Times, She Is in Love With ChatGPT

Financial Times, Amazon races to transplant Alexa’s ‘brain’ with generative AI

Financial Times, Mistral signs AFP deal for fact-based chatbot in riposte to ‘free speech’ rivals

EFF, Mad at Meta? Don’t Let Them Collect and Monetize Your Personal Data

Washington Post, We need a Freedom of Information Act for Big Tech

The EurAsian Times, AI-Enabled Kamikaze Drones Start Killing Human Soldiers; Ukrainian, Russian Troops “Bear The Brunt” Of New Tech

Washington Post, Cheap, smart, deadly. The tech industry pitches a new way to wage war

The Guardian, ‘Serious concerns’ about DWP’s use of AI to read correspondence from benefit claimants

Forbes, Does DeepSeek Censor Its Answers? We Asked 5 Questions On Sensitive China Topics

BBC, Be careful with DeepSeek, Australia says – so is it safe to use?

Hong Kong Free Press, ‘Let’s talk about something else’: China’s AI chatbot DeepSeek answers questions on Hong Kong, Tiananmen crackdown

The Conversation, ‘Sorry, I didn’t get that’: AI misunderstands some people’s words more than others

The Guardian, AI-based automation of jobs could increase inequality in UK, report says

The Guardian, New AI tool counters health insurance denials decided by automated algorithms

Reuters, Italy regulator seeks information from DeepSeek on data protection

Voxeurop, Not-so-artificial intelligence: the human workers who power AI

Voxeurop, How AI is making it easier to exploit workers

ASPI Strategist, DeepSeek is a modern Sputnik moment for the West

iNews, How your walk and body odour could soon be used to track your every move

Rest of World, Vietnam e-commerce boom drives Viettel Post to build delivery robots

The Washington Post, Facial recognition is everywhere. What will Trump do with it?

The Register, Robots in schools, care homes next? This UK biz hopes to make that happen

The Register, French AI chatbot Lucie suspended after reality check (with a chapeau to whoever thought of that title)

Times Union, New Vatican document offers AI guidelines from warfare to healthcare

AI and Human Rights Policy News 

EDPB, Guidelines 01/2025 on Pseudonymisation

California Department of Justice, California Attorney General’s Legal Advisory on the Application of Existing California Laws to Artificial Intelligence

ASEAN, Expanded ASEAN Guide on AI Governance and Ethics – Generative AI

The Asahi Shimbun, Government to name and shame AI misusers; no criminal charges 

EDPB, AI: Complex Algorithms and Effective Data Protection Supervision, Effective Implementation of Data Subjects’ Rights

Gencat, Catalonia presents a pioneering model in Europe for developing AI solutions that respect fundamental rights

ICO, Debunking data protection myths about AI

Blogs

OpinioJuris, From Space to the Courtroom: AI Enhanced Satellite Imagery and the Future of Accountability

JustSecurity, What Just Happened: Trump’s Announcement of the Stargate AI Infrastructure Project

JustSecurity, Biden’s Cybersecurity Executive Order and What Comes Next Under Trump

SCL, The Drawbacks of International Law in Governing Artificial Intelligence

IEEE, AI Mistakes Are Very Different Than Human Mistakes, We need new security systems designed to deal with their weirdness

Edly, Artificial Intelligence in Education: Striking a Balance between Innovation & Privacy

Bloomberg Law, More State Data Laws Signal Companies to Act on AI and Privacy

Privacy Bee, How AI is Changing the Privacy Landscape (For Better or Worse)

MIT Technology Review, What’s next for our privacy?

OII, Pioneering new mathematical model could help protect privacy and ensure safer use of AI  

University of Cambridge, Cambridge leads governmental project to understand impact of smartphones and social media on young people

Iplocation, Balancing AI Advancements With User Data Protection and Privacy Concerns

The Berkeley Technology Law Journal, Towards Fair Employment – California’s Take on Regulating AI in Hiring

Human rights and justice must be at the heart of the upcoming Commission guidelines on the AI Act implementation

HR Reporter, The risks of using facial recognition and emotion sensing technology

Medium, AI vs. PII: Protecting Privacy in a World of Data Breaches

Computing, The AI-shaped headache for privacy professionals

Seattle’s Child, AI and Kids: A parent’s guide to ethical use

European Parliament, Understanding EU data protection policy

DEV, DeepSeek vs. ChatGPT and Gemini: Privacy Standards Compared

IAPP, A view from DC: The first few days of Trump’s AI and privacy agenda

Law KULeuven, Does the AI Act Adequately Allocate Responsibilities along the Value Chain for High-Risk Systems?

Podcast:

SCL Podcasts: Technology & Privacy Laws Around The World – Episode 1: Robot Judges

CommonWealth Beacon, Does AI interfere in our democracy? 

AI Ethics Now, AI and Consciousness: The Human Condition

Video:

LSE Event, Digital cities for humans or for profit? 

Technically U, Think Before You Share – AI and Privacy

Information Lab, Unveiling the Brussels Effect (from data protection to AI)

Big Brother Watch, They’re using FACIAL RECOGNITION

Reports

CEDPO AI and Data Working Group, Fundamental Rights Impact Assessments: What are they? How do they work? 

Accenture, AI: A Declaration of Autonomy

OID, A Global Synthesis of the State of Knowledge on News Media, AI and Data Governance
