Friday Apr 10, 2026

Deepfakes: the media’s new challenge

Recorded at the Battle of Ideas festival 2025 on Saturday 18 October at Church House and the Abbey Centre, Westminster.

ORIGINAL INTRODUCTION

The proliferation of AI-generated imagery now means our social-media feeds are filled with the likes of popes in padded jackets, Studio Ghibli memes and lots of fat cats. While these are likely to make you chuckle, there is a darker side to this kind of imagery, which includes deepfakes. Defined as any image or recording that has been ‘convincingly altered and manipulated to misrepresent someone’ – often maliciously – some have predicted that eight million deepfakes will be shared by the end of 2025, up from 500,000 in 2023.

Recent elections around the globe pointed to the power of these creations to move the dial. There was an AI-voiced Biden apparently telling Democrats not to vote, and the digital resurrection of long-time dictator Suharto in Indonesia, essentially endorsing his son-in-law, who went on to win. An AI-generated image of a massive explosion and smoke plume near the Pentagon went viral a couple of years ago: it was shared by verified accounts and financial news outlets, and sparked panic in the markets.

More recently, AI-generated or manipulated visuals have proliferated during the Israel-Hamas conflict since October 2023, often amplifying emotional narratives to fuel propaganda. These fakes have been shared millions of times on social media, exploiting the war’s real horrors to inflame tensions, manipulate public opinion and, in turn, erode trust in real evidence.

With many news organisations increasingly looking to social-media platforms as a newsgathering tool, deepfakes represent a novel challenge for verification, especially when trying to beat rivals to publish in an always-on ecosystem. With fictitious AI-generated ‘facts’ and ‘quotes’ also in circulation, are journalists likely to share manipulated misinformation inadvertently?

So what can be done? Part of the solution may lie with journalists’ ability to parse what is real and what is not. But as the technology improves and becomes more effective, digital tools – including AI – will be needed. While we wait for more sophisticated technical solutions, some worry the threat of deepfakes will be yet another excuse for online censorship. And since journalists are professionally trained to check sources, do they simply need to maintain those standards – to be less credulous and more sceptical?

But more broadly, with trust levels in the news media for many already at rock bottom, is the threat that deepfakes pose really a novel one, especially when more old-fashioned, staged-for-the-camera reactions to Israeli attacks in Gaza are commonplace online? Are deepfakes just the latest problem plaguing an industry struggling with confirmation bias, tribalism and mistrust?

SPEAKERS
Liam Deacon
communications and campaigns consultant, Pagefield Communications; former journalist; former head of press, Brexit Party

Jenny Holland
writer and critic; former assistant, New York Times; author, Saving Culture (from itself) Substack

Jacob Mchangama
executive director, The Future of Free Speech

CHAIR
Max Sanderson
assistant managing editor, Guardian
