
If Information Is Wealth, Are Deepfakes a Form of Counterfeiting?

Tech media are currently overdosing on panic over deepfakes. The noise could be drowning out practical ways of fighting back

Deepfakes, we are told, are set to overwhelm us. Faked-up images of real people doing scandalous things are, it is true, increasingly hard to detect. And, of course, robocall deepfakes of Joe Biden’s or Donald Trump’s voices are a source of concern in a U.S. election year.

And then there’s the panic. Wired magazine, for one, has been an especially prolific source of deepfake panic: “Deepfake porn is prolific. The explicit, AI-generated images of Taylor Swift that circulated on X this week are taking the issue to new heights.” Another Wired headline warned, “The Biden Deepfake Robocall Is Only the Beginning,” because “[t]argeted, AI-generated political misinformation is already out there—and humans are falling for it.”

Wired writers thus add to the chorus of voices demanding that the government do something about deepfakes. The trouble is, government didn’t create deepfakes. It probably doesn’t know what to do about them. Legislation could help but — if poorly thought out and created in a panic — it could just make things worse.

So let’s step back from the panic for a moment.

First, is there a historical precedent for deepfakes?


Yes, there is: counterfeit money. From time immemorial, people have tried to make fake currency look like the real thing, with varying degrees of success. But their efforts never overwhelmed genuine currency, nor did they prevent the worldwide adoption of currency as a convenient medium of exchange. If information is wealth, as we are often told, deepfakes are a form of counterfeiting: they steal value from its rightful owners.

The history of counterfeiting forms of wealth is fascinating, and people have usually found ways of distinguishing the real from the fake. Isaac Newton (1642–1727), for one, was a famous foe of counterfeiters during his years running the Royal Mint. Over the millennia, issuers of legal tender (lawful currency) have kept building in features that make their products harder to fake.

Could ways to detect deepfakes also be built in?

Artists are already fighting back against generative AI’s wholesale grab of copyrighted images by using data poisoning. A tool like Nightshade, embedded in an online work, “enables artists to change pixels in a way that the human eye cannot see but the AI can, with monstrous results. The underlying assumption is that manipulators of pirated images will lose interest if, say, a ‘high school sweetheart’ prompt keeps generating images of a dire wolf — and it takes the programmers hours to even figure out just where the corrupted data is.” (Mind Matters News)
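For readers curious about the mechanics, here is a toy Python sketch of the data-poisoning idea. It is not Nightshade’s actual algorithm, which optimizes its perturbations against specific image-generation models; this version only illustrates the core constraint, namely pixel changes too small for the human eye to notice. The epsilon budget and the seeded noise pattern are illustrative assumptions.

```python
# Toy illustration of imperceptible image perturbation (NOT Nightshade's
# real method, which optimizes perturbations against a target model).
# Assumes Pillow and NumPy are installed.
import numpy as np
from PIL import Image

def poison_image(path_in: str, path_out: str, epsilon: int = 3, seed: int = 0) -> None:
    """Add a fixed pseudo-random perturbation of at most +/- epsilon per channel."""
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.int16)
    rng = np.random.default_rng(seed)
    # A real attack would craft this perturbation to mislead a specific model;
    # seeded noise here just demonstrates the invisibility budget.
    perturbation = rng.integers(-epsilon, epsilon + 1, size=img.shape, dtype=np.int16)
    poisoned = np.clip(img + perturbation, 0, 255).astype(np.uint8)
    Image.fromarray(poisoned).save(path_out)

# Example (hypothetical filenames):
# poison_image("artwork.png", "artwork_poisoned.png")
```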

Will the generative AI programmers figure out a way around that one? Probably. But then the artists will develop another poison pill, just as with counterfeit money, where methods on both sides have constantly evolved over the millennia.

Intel, for example, has developed (and will, of course, need to keep updating) a deepfake detection tool:

Instead of looking for artifacts of fakery, the technology focuses on detecting what’s real, such as heart rate. Using a technique called photoplethysmography — the detection system analyzes color changes in the veins due to oxygen content, which is computationally visible — the technology can detect if a personality is a real human or synthetic.

Dan Patterson, “Real-time deepfake detection: How Intel Labs uses AI to fight misinformation,” ZDNet, July 16, 2023
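Intel has not published its detector’s internals, but the underlying remote-photoplethysmography (rPPG) idea can be sketched simply: in genuine video, skin pixels flush subtly at heart-rate frequencies as blood circulates, while many synthetic faces lack that periodic signal. Below is a minimal, illustrative Python sketch, assuming NumPy and SciPy; the frame format and any decision threshold are assumptions, not Intel’s parameters.

```python
# Simplified rPPG pulse check (illustrative, not Intel's implementation).
# `frames` is a list of RGB arrays cropped to a face, a few seconds long,
# sampled at `fps` frames per second (fps must exceed 8 for this band).
import numpy as np
from scipy.signal import butter, filtfilt

def rppg_pulse_strength(frames: list[np.ndarray], fps: float) -> float:
    """Return the fraction of signal power in the human heart-rate band."""
    # Mean green-channel intensity per frame; green carries the strongest pulse signal.
    signal = np.array([f[..., 1].mean() for f in frames], dtype=float)
    signal -= signal.mean()
    # Band-pass to plausible heart rates: 0.7-4.0 Hz (42-240 beats per minute).
    b, a = butter(3, [0.7, 4.0], btype="band", fs=fps)
    pulse = filtfilt(b, a, signal)
    total = float(np.sum(signal ** 2)) or 1.0
    return float(np.sum(pulse ** 2)) / total
```

A production system would also need face tracking, per-region analysis, and a trained classifier; a weak pulse score here merely suggests, rather than proves, that a video is synthetic.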

There have also been recent advances in watermarking text using Unicode, as discussed in a (much more solution-oriented) article at Wired.
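One simple Unicode-based scheme, sketched here as an illustrative assumption rather than the specific method from the Wired piece, hides a bit string in zero-width characters that most text renderers do not display:

```python
# Minimal zero-width Unicode watermark (illustrative scheme, not a standard).
ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / zero-width non-joiner

def embed(text: str, bits: str) -> str:
    """Append one invisible character per bit after the first word."""
    payload = "".join(ZW1 if b == "1" else ZW0 for b in bits)
    head, _, tail = text.partition(" ")
    return head + payload + (" " + tail if tail else "")

def extract(text: str) -> str:
    """Recover the bit string from any zero-width characters present."""
    return "".join("1" if ch == ZW1 else "0" for ch in text if ch in (ZW0, ZW1))

marked = embed("This text looks unmodified.", "1011")
assert extract(marked) == "1011"
print(marked == "This text looks unmodified.")  # False: the mark is there, just invisible
```

The obvious weakness is that anyone who knows to filter zero-width code points can strip the mark, which is why text watermarking, like currency design, remains an arms race.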

As things unfold, developers are likely to find a reliably ongoing market for deepfake detection.

What about deepfakes from Sora, the powerful new text-to-video creator?

The newly released Sora can create realistic video from text instructions.

Anticipating that the world won’t be grateful for yet more faked-up flimflam, OpenAI, which launched Sora, is also offering detection tools:

OpenAI is “building tools to help detect misleading content such as a detection classifier that can tell when a video was generated by Sora,” the company said. “We plan to include C2PA [tamper-evident] metadata in the future if we deploy the model in an OpenAI product.” …

The disclosure comes a day after OpenAI and Microsoft released a report warning that cybercriminals are in fact using GenAI tools powered by large language models (LLMs), including OpenAI’s hugely popular ChatGPT, to boost their attacks.

Kyle Alspach, “OpenAI: We’ll Help You Detect Videos Made With Sora GenAI Tool,” CRN, February 15, 2024
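Because C2PA provenance data is embedded in the media file itself, its presence can at least be checked programmatically. Real verification requires a full C2PA library to parse and cryptographically validate the signed manifest; the crude sketch below merely scans for the ASCII “c2pa” label bytes that appear in embedded manifests, a presence heuristic offered as an assumption, not a proper check:

```python
# Crude presence check for embedded C2PA metadata (heuristic only).
# Finding the bytes suggests a manifest exists; it proves nothing about
# whether the manifest is valid, signed, or untampered.
def has_c2pa_marker(path: str) -> bool:
    with open(path, "rb") as f:
        return b"c2pa" in f.read()

# Example (hypothetical filename):
# print(has_c2pa_marker("downloaded_video.mp4"))
```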

And if those tools don’t work, new ones will need to be developed.

Anyway, life is complex. Some entrepreneurs, we learn, really like deepfakes: “Edo Segal, founder and CEO of Touchcast, believes that digital avatars could be a new way of presenting and interacting with content. His company has developed a software platform called Genything that will allow anyone to create their own digital twin.” (Wired, February 15, 2024) Well, digital twins might work for some people, just as Monopoly money works in a game — as long as we can all agree that the fake stuff is just a game.

You may also wish to read: Artists strike back!: New tool “poisons” images pirated by AI. Nightshade, developed by University of Chicago computer science prof Ben Zhao, makes the AI generator keep giving you a cat when you ask for a dog. Overall, Nightshade may prove more useful than lawsuits to artists. It is embedded in pixels, visible only to AI, not to humans.


Denyse O'Leary

Denyse O'Leary is a freelance journalist based in Victoria, Canada. Specializing in faith and science issues, she is co-author, with neuroscientist Mario Beauregard, of The Spiritual Brain: A Neuroscientist's Case for the Existence of the Soul; and with neurosurgeon Michael Egnor of the forthcoming The Immortal Mind: A Neurosurgeon’s Case for the Existence of the Soul (Worthy, 2025). She received her degree in honors English language and literature.
