Harry Styles and “The Liar’s Dividend”

The advent of AI deepfakes means we’ll waste endless hours arguing about what’s real

Clive Thompson

--

[Image: a young Harry Styles performing live, in profile, microphone in his right hand and left arm outstretched — “Harry Styles — One Direction” by Ianthebush (CC 2.0 license, unmodified)]

Back in 2019, the legal scholars Danielle Citron and Robert Chesney defined a useful concept: the “liar’s dividend”.

They were worried about how deepfakes could create political chaos. Tools for producing realistic-but-fake video and audio were becoming cheaper and cheaper. Anyone who wanted to throw an election by making fake media about a politician could now do it, with decent realism.

Thankfully, we have not yet seen this happen in a major North American political season. Four years after Citron and Chesney wrote their article, making faked video and audio has become simpler than ever. But the most common uses of deepfakes aren’t political. They’re mostly created a) for porn — such as attempts to humiliate and demean female celebrities — and b) for scams, as when scammers deepfake a family member’s voice.

Why haven’t political deepfakes taken off? Possibly because it still takes a bit of work to do a really high-quality political fake. More importantly, as the fake-media scholar Hany Farid told me a few years ago, one doesn’t need to create deepfakes when “shallow fakes” will work just fine. Want to make Joe Biden seem less electable? There’s no need to…

--



Written by Clive Thompson

I write 2X a week on tech, science, culture — and how those collide. Writer at NYT mag/Wired; author, “Coders”. @clive@saturation.social clive@clivethompson.net