Greetings all, and I greet you with cautious optimism that the issues plaguing delivery of this newsletter may (touch wood!) have been resolved. We will see. If you do manage to read this, just to flag that all previous editions are available online, so hopefully you won’t have missed out on too much.
We are also trying to diversify our sources, to bring you as broad a range of stories as possible, so if you have any suggestions for outlets we should monitor, please do get in touch. Also, we have a new ‘media’ (videos, podcasts) section, the brainchild of Sarah Zarmsky, and a welcome addition.
Now on to the newsletter, and I think it only right that we kick off with the big issues, and something I know has probably been keeping you all awake at night: what the $£&! to do if your smart fridge takes over? Well, worry no more, because Aberdeen is hosting a webinar on the legal issues involved. In fairness, it’s a really interesting topic that raises a lot of legal questions; the framing/blurb, however, is, I think, gold.
There is an interesting (read provocative?) piece in the Harvard Misinformation Review arguing that concerns linked to generative AI and misinformation may be overblown. This seems a little odd to me: such concerns may be more speculative than real at the moment, but – given the potential for massive worldwide harm – this still seems like a really important research agenda. The claim that we have well-established media institutions that people can trust also seems less certain than presented, given the, eh, ever-increasing reality of misinformation and distrust in ‘the mainstream media’.
The Guardian has a piece on the UK government’s ever-increasing use of ‘AI’ systems. The lack of transparency around such deployments is really problematic, not to mention the fact that the systems don’t appear to be working that well. I wonder when the legal challenges to social welfare systems (and retrospective facial recognition systems) are coming.
WIRED have a story on the human impact of deepfake porn, which is really quite upsetting, but is worth a read.
There’s also quite a lot of governance content, but worth flagging is the decision that the Information Commissioner’s Office has no jurisdiction over Clearview AI (the facial recognition tool). From memory – something I read on the ‘internet’, I think – this has also prompted a change in Clearview’s policies.
Now, I hope you’re all sitting down for this bit. It totally knocked me for six. And really just shattered my faith in what might be real, or even possible, in this world. MI5 and the FBI are warning that apparently ‘terrorists’ might try to misuse AI. What is the world coming to? Thank god we have intelligence agencies to flag future risks…
Hope you all have a good week, thanks as ever to Sarah Zarmsky, and I’ll leave you this week with ‘Everyday People’ by Sly & the Family Stone. ’Cos it’s just so uplifting!
Uyghur Human Rights Project, Hikvision and Dahua Facilitating Genocidal Crimes in East Turkistan
Harvard Kennedy School Misinformation Review, Misinformation reloaded? Fears about the impact of generative AI on misinformation are overblown
Robotics & AI Law Society, Why Generative AI Is Not Cyrano de Bergerac
Nature Machine Intelligence, AI Reality Check
Centre for the Governance of AI, The Case for Including the Global South in AI Governance Discussions
Inside Higher Ed, Q&A: Tackling the role of a university’s first AI officer
MIT Technology Review, How Meta and AI companies recruited striking actors to train AI
MIT Technology Review, China has a new plan for judging the safety of generative AI—and it’s packed with details
Tech Policy, We Need A Policy Agenda for Rural AI
Financial Times, South Korean Christians turn to AI for prayer
Financial Times, New tech is both a threat and a benefit for women’s access to work
Big Brother Watch, Big Brother Watch response to Met Police using facial recognition in shops
Business & Human Rights Resource Centre, Researchers find that generative AI tools allegedly have a ‘US’ bias
The National Interest, Technology and the End of the Russia-Ukrainian War
Just Security, The Tragedy of AI Governance
Just Security, DHS Must Evaluate and Overhaul its Flawed Automated Systems
*Disclaimer: The following articles, chapters, and books have not been evaluated for their methodology and do not necessarily reflect the views of the AI & Human Rights Blog
Lisa Hohensinn, Jurgen Willems, Meikel Soliman, Dieter Vanderelst and Jonathan Stoll, Who guards the guards with AI-driven robots? The ethicalness and cognitive neutralization of police violence following AI-robot advice
Jake Okechukwu Effoduh, Ugochukwu Ejike Akpudo and Jude Dzevela Kong, Towards an Inclusive Data Governance Policy for the Use of Artificial Intelligence in Africa
UNESCO, Technology Facilitated Gender-Based Violence in Times of Generative AI, 13 November 2023
University of Aberdeen School of Law, AI and International Contracts: What law applies if your “smart fridge” takes over…?
The New York Times (Video), How Israeli Civilians Are Using A.I. to Help Identify Victims
The Guardian (Podcast), Could AI help diagnose schizophrenia?