Greetings, and welcome to February! It is quite nice to have January out of the way and to see some blue skies over London – although it is still Baltic. This week the ChatGPT chat continues, as it likely will for a while. That said, I am quite keen on the idea of automated note taking (and maybe even to-do lists?!). There is also an interesting piece on AI oversight by Algorithm Watch.
The WIRED story on facial recognition technology in stadiums (and why we may need to accept it?) reminds me of a conversation I had last week. In the UK (and maybe where you live?), privately operated facial recognition is becoming increasingly common, both across really significant public spaces (or privately owned public spaces) such as King's Cross or the new Tottenham Court Road development, and in shops, including at the automated till as you pay. Yet no explanation is given as to why it is deemed necessary, or even useful. This really seems like something for the Information Commissioner: there may be some potential benefit to the company, but there is a significant privacy invasion, with no real possibility of withdrawing consent. It ties into something I'm working on around a human rights due diligence framework, where the claimed utility and potential harm of a tool should be evaluated, drawing on an appropriate evidence base, prior to deployment. It really raises the question of what the company hopes to gain. Do they engage with the UN Guiding Principles on Business and Human Rights? Is facial recognition in effect allowing surveillance capitalism to move offline?
I am unaware of any complaints to the Information Commissioner here in the UK, but would be really interested to hear of any. The only similar investigation I can recall concerned a shopping centre in Manchester a few years ago, where the police were also found to be utilising this surveillance capability, 'discreetly'.
Anyways, hope ye all have a lovely week. If ye are looking for some aural respite from the gloom, this album has been helping me recently, and is on constant rotation: E2-E4 (Mixed) by Manuel Göttsching.
Thanks to Sarah Zarmsky, who also has an interesting symposium on fairness, equality and diversity in open source research that you may be interested in, over at Opinio Juris.
‘Beyond ChatGPT: The very near future of machine learning and early childhood’, Centre of Excellence for the Digital Child
‘Inside a radical new project to democratize AI’, MIT Technology Review
‘How the Netherlands Is Taming Big Tech’, The New York Times
‘How AI is improving agriculture sustainability in India’, The Keyword by Google
‘AI models spit out photos of real people and copyrighted images’, MIT Technology Review
‘How Prejudice Creeps into AI Systems’, Towards Data Science
‘Does AI Have Political Opinions?’, Towards Data Science
‘How does AI see your country?’, Towards Data Science
‘Mass-market military drones have changed the way wars are fought’, MIT Technology Review
‘It Was Smart for an AI’, Lawfare
*Disclaimer: The selected articles and chapters were not evaluated for their research methods and do not necessarily reflect the views of the AI & Human Rights Blog
‘Fragmentation and the Future: Investigating Architectures for International AI Governance’, Peter Cihon, Matthijs M. Maas, and Luke Kemp, Global Policy
‘An interdisciplinary approach to artificial intelligence in agriculture’, Mark Ryan, Gohar Isakhanyan, and Bedir Tekinerdogan, NJAS
‘AI in education: learner choice and fundamental rights’, Bettina Berendt, Allison Littlejohn, and Mike Blakemore, Learning, Media and Technology
‘The securitization of the EU’s digital tech regulation’, Daniel Mügge, Journal of European Public Policy