Philosopher: Deepfakes less of a threat than feared
University of Vienna philosopher Keith Raymond Harris offers some thoughts at Psyche about the issues around cracking down on deepfakes:
… a video broadcast by a reputable news organisation is powerful evidence for the real occurrence of the events it purports to depict. Video footage shared by an anonymous or pseudonymous account on social media, without the backing of more reputable sources, provides limited evidence – even if it appears realistic. In this sense, the evidential force of video footage is like that of testimony. Testimony from a source who is known to be both honest and competent on the topic is a powerful form of evidence. Testimony from one who is either dishonest or incompetent, or indeed one whose reliability is unknown, amounts to relatively weak evidence.
“Why deepfakes pose less of a threat than many predict,” October 24, 2024

Misinformation, Content Moderation, and Epistemology (Routledge 2024)
Sadly, it’s not quite that simple
Recently, CBS, a “reputable news organisation,” appears to have stealth-edited US presidential candidate Kamala Harris’s answer to a question, and Fox, also a “reputable news organisation,” apparently edited Trump. Avoiding deception isn’t — and never has been, or could have been — as simple as trusting reputable sources.
About deepfakes, Harris is surely on firmer ground when he notes that video evidence was once a novelty and naturally came to seem more “real.” But it is not, in principle, any more or less likely to be manipulated than photographic evidence (photoshopping) or written accounts (distortion):
… while video recordings once conferred perceptual knowledge, they can now confer only something like testimonial knowledge. In less technical terms, watching video footage of an event was once a way of witnessing that event unfold, but is now more akin to being told that the event occurred. But this is an oversimplification. Deepfakes offer new ways of manipulating videos, but videos have always been manipulable. The term ‘cheap fakes’ was coined by Britt Paris and Joan Donovan to describe a range of relatively low-tech techniques for manipulating videos, including altering the speed, using lookalikes, and misrepresenting the surrounding context. The possibility of manipulating video footage in these ways is inevitable and ever-present, and thus, even before the emergence of deepfakes, video footage never spoke entirely for itself. Instead, assessing the evidential import of video footage has always required attention to the surrounding social context.
Less of a threat
We do not see panic legislation promoted around photoshopping or fictionalized accounts these days, perhaps because we are used to the risks.
But panic over deepfakes may be self-limiting
When California governor Gavin Newsom outlawed political deepfakes, he ran into the same problem as a politician who tried that strategy against editorial cartoons over a century ago: it blew up in his face, inviting targeted mockery. But no one was deceived. As we become familiar with deepfakes, the usual rule will kick in: If it sounds unbelievable, don’t believe it.
Do you believe he authorized this?