Greetings, happy new year, and welcome back to the AI & Human Rights newsletter!
We have quite a big newsletter this week, given the length of the break, although we took a holiday too, so make no claims to this being an exhaustive (although potentially exhausting) overview of the holiday period.
The AI & human rights news over the break seems to have grouped itself into three broad categories: war, facial recognition, and ChatGPT.
The role of drones in the war in Ukraine has prompted renewed questions about the role of AI in targeting, and this week there are a couple of pieces in the AP and on the Lawfare blog. It's interesting how an active conflict can speed up the development and adoption of new technologies, perhaps making them a done deal (I was going to use a French term there, but who am I kidding) and negating years of regulatory-focused discussion. Interestingly, the AP reports (and I had missed this) that Turkish drones may have been used in the Libya conflict in fully autonomous mode.
For a look at life in the trenches, and the role that cheap commercial drones are playing, this New Yorker article is also well worth reading.
On the facial recognition front, there are more stories about facial recognition resulting in the arrest of the wrong person. This (inevitably?) seems to be on the rise, and there is another similar story in the Metro, although in this case, while the role of facial recognition is likely, it is not confirmed. Further underlining the impact of facial recognition on the right to freedom of assembly, there are also reports of Iran using facial recognition to identify protestors. At the project, together with Prof Pete Fussey, we are conducting extensive qualitative research into the chilling effect of surveillance, including its impact on the rights to freedom of expression and freedom of assembly, so watch this space…
On ChatGPT: read the stories yourself. It's kind of the Prince Harry of the AI world at the moment.
Andrea Pin’s article on AI and the right to be ignored in public spaces is also most definitely on my reading list.
I hope 2023 is off to a happy, healthy (and not too grey and grim) start. Thanks, as always, to Sarah Zarmsky.
‘Responsible AI: Looking back at 2022, and to the future’, The Keyword by Google
‘NYC education department blocks ChatGPT on school devices, networks’, ChalkBeat
‘The EU wants to regulate your favorite AI tools’, MIT Technology Review
‘Inside Japan’s long experiment in automating elder care’, MIT Technology Review
‘Mass-market military drones: 10 Breakthrough Technologies 2023’, MIT Technology Review
‘Berkeley Technology Law Journal Podcast: Automatic License Plate Readers with ACLU Attorney Matt Cagle’, Berkeley Technology Law Journal
‘Social media can be polarizing. A new type of algorithm aims to change that.’, The Washington Post
‘Next in AI’s ‘gold rush’: Military, regulations and endless chatbots’, The Washington Post
‘Does a simple algorithm help against domestic violence?’, Algorithm Watch
*Disclaimer: The selected articles and chapters were not evaluated for their research methods and do not necessarily reflect the views of the AI & Human Rights Blog
‘AI, the Public Space, and the Right to Be Ignored’, Andrea Pin, forthcoming in Artificial Intelligence and Human Rights (OUP), available on SSRN
‘Six Human-Centered Artificial Intelligence Grand Challenges’, Ozlem Ozmen Garibay et al., International Journal of Human-Computer Interaction
‘Can Artificial Intelligence Infringe Copyright? Some Reflections’, Enrico Bonadio, Plamen Dinev and Luke McDonagh, Research Handbook on Intellectual Property and Artificial Intelligence
‘Jus in bello Necessity, The Requirement of Minimal Force, and Autonomous Weapons Systems’, Alexander Blanchard and Mariarosaria Taddeo, Journal of Military Ethics