Mind Matters Natural and Artificial Intelligence News and Analysis
This illustration aims to highlight the emerging threat of deepfakes in political misinformation and the need for vigilance in the face of advanced disinformation technology.
Image licensed via Adobe Stock

Will Deepfakes Be Used to “Show” Us That Computers Can Now Think?

As the deepfake technology advances, William Dembski wonders whether some AI zealots might try to “fake it till they make it,” Theranos-style

On Friday at Evolution News, design theorist William Dembski offered some thoughts on the growing problem of deepfakes. He acknowledged something we don’t often hear, namely that there are harmless and even constructive uses of deepfakes: “On the positive side, they can be used in the entertainment industry to de-age actors, dub languages with lip-sync accuracy, or bring historical figures to life in educational settings.”

But he also raised a worry that is not often heard:

It would be one thing if artificial intelligence develops over time so powerfully that eventually it turns into artificial general intelligence (though this prospect is a pipe dream if my previous arguments hold). But what if instead AI is used to make it seem that AGI has been achieved — or is on the cusp of being achieved? This would be like a deceptive research scientist who claims to have made experimental discoveries worthy of a Nobel prize, only to be shown later to have fabricated the results, with the discoveries being bogus and all the research papers touting them needing to be retracted. We’ve witnessed this sort of thing in the academic world (see the case of J. Hendrik Schön in my piece “Academic Steroids: Plagiarism and Data Falsification as Performance Enhancers in Higher Education”).

William A. Dembski, “Deep Fakes and Propaganda for Artificial General Intelligence,” Evolution News, February 9, 2024

People who are absolutely convinced that artificial general intelligence is inevitable might not even see themselves as deceiving anyone if they anticipate it a bit via credible deepfakes.

But deepfakes create a whole new layer of complexity. For example, if we haven’t met him, how do we know that Dembski himself even exists? He, his supposed career, and his views could be an elaborate deepfake. Late last year, Sports Illustrated published AI-written articles complete with entirely fabricated author photos and bios.

The future of Sports Illustrated is now in doubt but the future of deepfakes is not.

The Taylor Swift amplification effect

The whole topic started to receive much more public attention after pornographic deepfake images of celebrity singer–songwriter Taylor Swift surfaced on social media late last month.

Just to show how complex it can all become: in a 2021 Pennsylvania case, a woman was arrested and tried for allegedly creating compromising deepfakes to discredit one of her daughter’s cheerleading rivals. But, unexpectedly, experts testified that they were not deepfakes at all. Yet six thousand media outlets reported the story before it collapsed. As Gizmodo put it, “multiple deepfake experts said the vape video was far too intricate and nuanced of a composition for even the most advanced AI models in Silicon Valley to create, let alone any tools available to a mom in suburban Pennsylvania.”

It’s reassuring to learn that experts could still distinguish deepfakes from reality. Back then anyway.

Techmag Gizmodo predicts that the technology for deepfakes is advancing so quickly that most service providers are not really prepared for what’s to come:

OpenAI introduced watermarks to Dall-E’s images this week, both visually and embedded in a photo’s metadata. However, the company simultaneously acknowledged that this can be easily avoided by taking a screenshot. This felt less like a solution, and more like the company saying, “Oh well, at least we tried!” …

These solutions are simply not enough. The issue is that deepfake detection technology is new, and it’s not catching on as quickly as generative AI. Platforms like Meta, X, and even your phone company need to embrace deepfake detection. These companies are making headlines about all their new AI features, but what about their AI-detecting features?

Maxwell Zeff, “The AI Deepfakes Problem Is Going to Get Unstoppably Worse,” Gizmodo, February 9, 2024
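The screenshot loophole Zeff describes is easy to demonstrate. The sketch below is a simplified stand-in, not OpenAI’s actual scheme (which uses signed C2PA manifests): it builds a minimal PNG by hand and stores a “provenance” watermark in a metadata text chunk. Re-encoding only the pixels, as a screenshot does, silently drops that chunk.

```python
import struct
import zlib

def chunk(ctype: bytes, data: bytes) -> bytes:
    """Assemble one PNG chunk: length, type, data, CRC."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def make_png(raw: bytes, width: int, height: int, text: bytes = None) -> bytes:
    """Build a minimal 8-bit RGB PNG; optionally embed a tEXt metadata chunk."""
    out = b"\x89PNG\r\n\x1a\n"  # PNG signature
    out += chunk(b"IHDR", struct.pack(">IIBBBBB", width, height, 8, 2, 0, 0, 0))
    if text:
        # Metadata lives in its own chunk, separate from the pixel data.
        out += chunk(b"tEXt", b"provenance\x00" + text)
    out += chunk(b"IDAT", zlib.compress(raw))
    out += chunk(b"IEND", b"")
    return out

# Raw scanlines: each row is a filter byte (0) plus RGB triples.
width = height = 2
raw = b"".join(b"\x00" + b"\xff\xff\xff" * width for _ in range(height))

original = make_png(raw, width, height, text=b"generated-by-ai")
print(b"tEXt" in original)    # True: the watermark rides along in metadata

# A "screenshot" re-encodes only the pixels; metadata chunks vanish.
screenshot = make_png(raw, width, height)
print(b"tEXt" in screenshot)  # False: same image, watermark gone
```

The pixels in both files are identical, which is exactly the problem: any watermark that lives beside the image data rather than inside it survives only as long as nobody re-encodes the image.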

Presumably, that’s what multi-million-dollar lawsuits are for. But who wants to be the plaintiff? Something more like data poisoning is needed. Stay tuned.

You may also wish to read: Artists strike back!: New tool “poisons” images pirated by AI. Nightshade, developed by University of Chicago computer science prof Ben Zhao, makes the AI generator keep giving you a cat when you ask for a dog. Overall, Nightshade may prove more useful than lawsuits to artists. It is embedded in pixels, visible only to AI, not to humans.


Mind Matters News

Breaking and noteworthy news from the exciting world of natural and artificial intelligence at MindMatters.ai.
