5 June 2023

Jesus, I can’t believe it’s the 5th of June, but welcome back to the refreshed, healthy, and nearly back on form AI & Human Rights newsletter!

A bit of a military focus in this week’s newsletter, not at all related to the fact that I’m finishing up a paper on the decision making processes surrounding the military’s deployment of AI systems…

I think it’s worth flagging this story, which received quite a lot of attention, about an (apparent) virtual test run by the USAF in which an autonomous drone killed its operator for getting in the way of the mission, and then, when killing the operator was prohibited, destroyed the communications tower relaying the message to the drone. The story was subsequently denied – it was a thought experiment, apparently – but it’s worth a read, and here’s the original blog post the story emerged from (where it was not presented as a hypothetical). There is also a story on how the US military can now control drones with voice commands, although with all the chat about brain-machine interfaces, just talking to a drone may be the MiniDisc of our time. Palantir are also in the news, in relation to the use of generative AI in conflict (as part of their Ukraine-based path to redemption?). The National Interest has a piece on how the G7 Hiroshima Summit may be an opportunity to tackle AI regulation.

There are also some PhD opportunities in AI and Human Rights at Strathclyde; details below.

For some fun listening, the New York Times has a list of 6 podcasts to make sense of AI that I’ll be checking out. Totally tangential shout-out to my favourite podcast, Mother Country Radicals. But for some real fun listening, here’s ‘Music Makes Me High’ from the Avalanches.

Have a good week. As always, if you enjoy the newsletter, please share it, and if you have suggestions, do get in touch.

Articles

The Guardian, ‘I do not think that ethical surveillance can exist’ Rumman Chowdhury on accountability in AI

Algorithm Watch, Let the Games Begin: France’s controversial olympic law legitimises automated surveillance testing at sporting events

UNESCO mobilises education ministers from around the world for a co-ordinated response to ChatGPT

The Register, Phones’ facial recog tech fooled by low-res 2D photo

DLA Piper, AI Act – The European Way Forward

Financial Times, Edtech groups insist AI is friend not foe despite warnings (I know, you’re shocked right?)

AccessNow, What you need to know about generative AI and human rights

MIT Technology Review, Suddenly, everyone wants to talk about how to regulate AI

San Diego Union Tribune, China warns of risks from artificial intelligence, calls for beefed-up national security measures

Electronic Payments International, Forced verification and AI/deepfake cases multiply at alarming rates: Sumsub

Towards Data Science, Ten Years of AI in Review

Reuters, FTC gives businesses more reasons to worry about biometric privacy

Times of India, AI chatbots may be fun, but they have a drinking problem (good title; it’s about environmental cost)

American Academy of Arts and Sciences, The next level of AI is approaching. Our democracy isn’t ready.

The Conversation, Automation risks creating a two-tier workforce of haves and have-nots (this actually links to a piece in the FT this weekend, I think by Tim Harford)

The Conversation, How AI and other technologies are already disrupting the workplace

The New York Times, How the Shoggoth Has Come to Symbolize the State of A.I. (it’s an octopus, apparently)

Communications of the ACM, Who Should Make the Rules that Govern AI? (I have some grant funding, just putting it out there)

AI News, Meta’s open-source speech AI models support over 1,100 languages (but good luck in West Cork)

MIT Technology Review, How to talk about AI (even if you don’t know much about AI) (this is the one we all secretly want)

MIT Technology Review, A brain implant changed her life. Then it was removed against her will.

National Interest, The G7 Summit is a chance to tackle AI regulation

WIRED, Everyone Wants to Regulate AI. No One Can Agree How

WIRED, Humanoid Robots Are Coming of Age

DefenseOne, US Military Now Has Voice Controlled Bug Drones

Business & Human Rights Resource Centre, Palantir response to the use of generative AI in conflict contexts

Open Rights Group, Resist Pre-Crime

PhD Opportunities at Strathclyde

Rethinking Human Rights Implementation in the Era of Data-Intensive Technologies: Machine-Learning Based Decision Making and the Duties of Public Authorities

Datafied Policing Technologies, Racial Justice and Human Rights Compliance

Data-intensive Technologies in Corporate Human Rights Due Diligence: Current Regulatory Challenges and Future Prospects

Completed applications should be sent to the Centre for the Study of Human Rights Law: cshrl@strath.ac.uk. Closing date for all posts: 9 June 2023.

Subscribe to our weekly newsletter