25 October 2023

Greetings all, and I greet you with cautious optimism that the issues plaguing delivery of this newsletter may (touch wood!) have been resolved. We will see. If you do manage to read this, a quick note that all previous weeks are available online, so hopefully you won’t have missed out on too much.

We are also trying to diversify our sources, to bring you as broad a range of stories as possible, so if you have any suggestions for outlets we should monitor, please do get in touch. Also, we have a new ‘media’ (videos, podcasts) section, the brainchild of Sarah Zarmsky, and a welcome addition.

Now on to the newsletter, and I think it only right that we kick off with the big issues, and something I know has probably been keeping you all awake at night: what the $£&! to do if your smart fridge takes over? Well, worry no more, because Aberdeen is hosting a webinar on the legal issues involved. In fairness, it’s a really interesting topic that raises a lot of legal questions; the framing/blurb, however, is, I think, gold.

There is an interesting (read: provocative?) piece in the Harvard Kennedy School Misinformation Review arguing that concerns linking generative AI to misinformation may be overblown. This seems a little odd to me: those concerns may be more speculative than real at the moment, but, given the potential for massive worldwide harm, this looks like a really important research agenda. The claim that we have well-established media institutions that people can trust also seems less certain than presented, given the, eh, ever increasing reality of misinformation and distrust in ‘the mainstream media’.

The Guardian has a piece on the UK government’s ever increasing use of ‘AI’ systems. The lack of transparency around such deployments is really problematic. Not to mention the fact that the systems don’t appear to be working that well. I wonder when the legal challenges to social welfare systems (and retrospective facial recognition systems) are coming.

WIRED have a story on the human impact of deepfake porn, which is really quite upsetting, but is worth a read.

There’s also quite a lot of governance content, but worth flagging is the decision that the Information Commissioner’s Office has no jurisdiction over Clearview AI (the facial recognition tool). From memory – something I read on the ‘internet’, I think – this has also prompted a change in Clearview’s policies.

Now, I hope you’re all sitting down for this bit. It totally knocked me for six. And really just shattered my faith in what might be real, or even possible, in this world. MI5 and the FBI are warning that apparently ‘terrorists’ might try to misuse AI. What is the world coming to? Thank god we have intelligence agencies to flag future risks…

Hope you all have a good week, thanks as ever to Sarah Zarmsky, and I’ll leave you this week with ‘Everyday People’ by Sly & the Family Stone. ’Cos it’s just so uplifting!

Uyghur Human Rights Project, Hikvision and Dahua Facilitating Genocidal Crimes in East Turkistan

Harvard Kennedy School Misinformation Review, Misinformation reloaded? Fears about the impact of generative AI on misinformation are overblown

Axios, ChatGPT and Midjourney bring back the dead with generative AI

Tech Policy, Regulating Transparency in Audiovisual Generative AI: How Legislators Can Center Human Rights 

Robotics & AI Law Society, Why Generative AI Is Not Cyrano de Bergerac 

Nature Machine Intelligence, AI Reality Check

The Guardian, UK officials use AI to decide on issues from benefits to marriage licences 

Centre for the Governance of AI, The Case for Including the Global South in AI Governance Discussions 

Stanford News, New dog, old tricks: New AI approach yields ‘athletically intelligent’ robotic dog 

Inside Higher Ed, Q&A: Tackling the role of a university’s first AI officer 

Deepmind, Evaluating social and ethical risks from generative AI 

AI News, Enterprises struggle to address generative AI’s security implications

News Wise, Study finds that AI benefits workers more than bosses

MIT Technology Review, How Meta and AI companies recruited striking actors to train AI

MIT Technology Review, China has a new plan for judging the safety of generative AI—and it’s packed with details

ASH Center, Who’s accountable for AI usage in digital campaign ads? Right now, no one.

Tech Policy, We Need A Policy Agenda for Rural AI

Business Day, This is how AI recruitment systems keep discrimination alive

WIRED, Britain’s Big AI Summit Is a Doom-Obsessed Mess

Financial Times, South Korean Christians turn to AI for prayer

WIRED, Putting a Real Face on Deepfake Porn   

The Guardian, Terrorists could try to exploit artificial intelligence, MI5 and FBI chiefs warn

WIRED, DeepMind Wants to Use AI to Solve the Climate Crisis

The Guardian, AI chatbots could help plan bioweapon attacks, report finds

Financial Times, New tech is both a threat and a benefit for women’s access to work

Big Brother Watch, Big Brother Watch response to Met Police using facial recognition in shops

Business & Human Rights Resource Centre, Researchers find that generative AI tools allegedly have a ‘US’ bias

The National Interest, Technology and the End of the Russia-Ukrainian War   

Smithsonian Magazine, This 21-Year-Old Used A.I. to Decipher Text From a Scroll That Hasn’t Been Read in 2,000 Years

WIRED, A Chatbot Encouraged Him to Kill the Queen. It’s Just the Beginning

The Register, UK tribunal agrees with Clearview AI – Brit data regulator has no jurisdiction

The Guardian, ‘The potential to undermine democracy’: European publishing trade bodies call for action on generative AI        

Blog Posts

Just Security, The Tragedy of AI Governance 

Just Security, The Path to War is Paved with Obscure Intentions: Signaling and Perception in the Era of AI

Just Security, DHS Must Evaluate and Overhaul its Flawed Automated Systems  

The Conversation, Rancid food smells and tastes gross − AI tools may help scientists prevent that spoilage 

Academic Literature

*Disclaimer: The following articles, chapters, and books have not been evaluated for their methodology and do not necessarily reflect the views of the AI & Human Rights Blog

Lisa Hohensinn, Jurgen Willems, Meikel Soliman, Dieter Vanderelst and Jonathan Stoll, Who guards the guards with AI-driven robots? The ethicalness and cognitive neutralization of police violence following AI-robot advice

Jake Okechukwu Effoduh, Ugochukwu Ejike Akpudo and Jude Dzevela Kong, Towards an Inclusive Data Governance Policy for the Use of Artificial Intelligence in Africa 

Moses Jolaoso, Balancing Innovation and Environmental Sustainability: The Significance of Artificial Intelligence (AI) In Addressing Climate Change

Events

UNESCO, Technology Facilitated Gender-Based Violence in Times of Generative AI, 13 November 2023

University of Aberdeen School of Law, AI and International Contracts: What law applies if your “smart fridge” takes over…? 

Media

The New York Times (Video), How Israeli Civilians Are Using A.I. to Help Identify Victims 

The Guardian (Podcast), Could AI help diagnose schizophrenia?

ENDS

Subscribe to our weekly newsletter