
New AI Can Create—and Detect—Fake News

But how good is it at either task? We tested some copy

GROVER is a fake newsbot; Fractal, a marketing agency, demonstrates its powers by using it to write bland fake PR copy and puff pieces at This Marketing Blog Does Not Exist. GROVER can churn out “almost-coherent” passages of text:

“This Marketing Blog Doesn’t Exist” recreates that particular species of bland PR website, complete with 900-word articles about things like “ROI scales” and “synchronicity” crapped out by AI tools that have learned to mimic online content…

The website also creates fictional authors for each post with StyleGAN, the AI tool that’s been used to generate realistic portraits and less-realistic cats and Pokémon.

Dan Robitzski, “New AI generates infinite horrible marketing sites” at Futurism
This fake blog post puffing fictional celebs is said to have been entirely AI-generated except for the title.

GROVER can also effectively mimic the styles of major news outlets such as the Washington Post and the New York Times.

Fake news is not the only worry, and perhaps not even the major one. After all, if alarming news concerns anything you care about—and it is false—chances are, you can find that out by the same methods used to squelch traditional rumors. For example, if we are told that a major public figure has had a heart attack—and it isn’t true—the next question is, who spread the rumor and why? It could be a career-limiting move for that person.

One thing GROVER technology could do, however, is clog Google search results with masses of autobabble about people, events, and things that don’t exist—but also don’t matter enough to be the subjects of intense inquiry:

Instead of being used to create fake news, AI could churn out infinite blogs, websites, and marketing spam. The content would be cheap to produce and stuffed full of relevant keywords. But like most AI-generated text, it would only have surface meaning, with little correspondence to the real world. It would be the information equivalent of empty calories, but still potentially difficult for a search engine to distinguish from the real thing.

James Vincent, “Endless AI-generated spam risks clogging up Google’s search results” at The Verge

Think of it as non-stop robocalls for search engines.

But the University of Washington scientists who developed the GROVER technology claim in an open-access paper that the same technology can detect the autobabble. They argue that it is “the best tool against online propaganda.” They plan to release the tool to the public:

“We find that best current discriminators can classify neural fake news from real, human-written, news with 73% accuracy, assuming access to a moderate level of training data,” the researchers wrote in the paper. “Counterintuitively, the best defense against Grover turns out to be Grover itself, with 92% accuracy.”

Dan Robitzski, “New AI Generates Horrifyingly Plausible Fake News” at Futurism

Meanwhile, an autobabble detector called GLTR (pronounced “glitter”) exploits the element of surprise found in human copy but largely absent from AI-generated text:

Researchers from Harvard University and the MIT-IBM Watson AI Lab have developed a new tool for spotting text that has been generated using AI. Called the Giant Language Model Test Room (GLTR), it exploits the fact that AI text generators rely on statistical patterns in text, as opposed to the actual meaning of words and sentences. In other words, the tool can tell if the words you’re reading seem too predictable to have been written by a human hand…

GLTR highlights words that are statistically likely to appear after the preceding word in the text. As shown in the passage above (from Infinite Jest), the most predictable words are green; less predictable are yellow and red; and least predictable are purple. When tested on snippets of text written by OpenAI’s algorithm, it finds a lot of predictability. Genuine news articles and scientific abstracts contain more surprises.

Will Knight, “A new tool uses AI to spot text written by AI” at Technology Review

The system assigns a color to each word according to its probability. Jonathan Bartlett comments,

It seems to utilize Gilder’s definition of information—surprise. The model basically figures out how surprising each word is given the context. If your text is mostly green and orange, your sentence pattern is predictable/generatable, and so likely came from an AI. I’m not sure how helpful the graphs are.

The most important one is the middle one – the frac(p) histogram. If your frac(p) histogram is mostly on the far-right bar, you are probably an AI. If you have a more generous distribution, with some heavy leftward bars, you are probably a human, because you added in surprising words.
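For readers who want a feel for how such a check works under the hood, here is a minimal sketch of the two statistics described above. It is our illustration, not GLTR’s actual code: it assumes GPT-2 via the Hugging Face transformers library as the scoring model (GLTR ships its own models and a web interface), buckets each word into GLTR’s green/yellow/red/purple ranks, and builds the frac(p) histogram Bartlett describes, where frac(p) is the probability of the actual word divided by the probability of the model’s top choice.

```python
# A minimal sketch of a GLTR-style check, assuming GPT-2 via the
# Hugging Face "transformers" library. GLTR itself uses its own
# models and web interface; this only illustrates the statistics.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# GLTR's color buckets: rank 1-10 green, 11-100 yellow,
# 101-1000 red, anything beyond that purple.
def color(rank):
    if rank <= 10: return "green"
    if rank <= 100: return "yellow"
    if rank <= 1000: return "red"
    return "purple"

def gltr_stats(text, bins=10):
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # probs[i] is the model's prediction for token i+1, given tokens 0..i
    probs = torch.softmax(logits[0, :-1], dim=-1)
    actual = ids[0, 1:]
    colors, hist = [], [0] * bins
    for i, tok in enumerate(actual):
        p = probs[i]
        rank = int((p > p[tok]).sum()) + 1   # rank of the actual token
        colors.append(color(rank))
        frac_p = float(p[tok] / p.max())     # p(actual) / p(top choice)
        hist[min(int(frac_p * bins), bins - 1)] += 1
    return colors, hist

# Predictable (machine-like) text piles the histogram into the last,
# far-right bin; surprising human text spreads mass leftward.
colors, hist = gltr_stats("The quick brown fox jumps over the lazy dog.")
print(colors)
print(hist)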

Try it yourself here.

At Mind Matters News, we tried it, with mixed results. Bartlett suggests using a whole paragraph rather than a sentence. So here’s a short paragraph of AI-generated copy from This Marketing Blog Does Not Exist:

Lexi Flores of the pop band Defrdz also favors pink, calling the color “very nice to brighten up a photo and make it pop a little bit.” Her Instagram Stories include one that depicts her while wearing a yellow dress and champagne flute (I don’t like to see bridesmaids getting wrecked). Her other favorite filters are Meta, Veil and Rose Mist.

Middle left frac(p) histogram score: 43

Now here’s a short paragraph from one of our stories, written by a human yesterday morning:

At this point, Hoffman appears to have lost all contact with his own argument. Whatever may come of the puzzling relations between relativity and quantum mechanics (the very large vs. the very small), if he is right, we will never know more than a bipedal ape fleeing a lion on the savannah.

Middle left frac(p) histogram score: 43

As you can see, the score is about the same. But Bartlett notes that the developers admit that their system is thrown off by proper names. That’s unfortunate because news and comment copy is usually full of them.

Mind you, the fact that “Lexi Flores” is described as “wearing a yellow dress and champagne flute” would alert a reader that the first item was not written by an entity that can picture the scene.

We tried the whole autobabble article from This Marketing Blog, “What Photo Filters are Best for Instagram Marketing?” It scored 400.

Then we tried an equivalent swatch of the article published yesterday at MMN, “Why some scientists think science is an illusion.” It scored 600.

So, with enough copy, a difference was detected.

The trouble is, it would be easy to think of ways to game the system in either direction. We could write our copy at MMN to sound like AI. Or we could generate AI copy and throw in “human touches” that might confuse the detector. That said, it’s fun to try, and the developers will doubtless build in refinements as they go.

The best way to avoid AI-generated copy is probably to avoid empty puff pieces. If you go back and read “What Photo Filters are Best for Instagram Marketing?” carefully, you will note that it gushes fatuously over fictional celebs (in a real-world setting that would include wanna-be’s, used-to-be’s, and reluctantly-on-the-way-outs as well).

What writers say about this kind of formula writing gig is, it pays the rent. Apart from flattering the reader and fueling the subjects’ ambitions, it is typographical noise. And yes, as the term “formula writing” implies, the writers make extensive use of formulas. Whether AI will cut into their business is hard to say because they could buy GROVER-type programs themselves and generate more formula copy faster. But they would be wise to read all copy over before submitting it. That “champagne flute,” for example, should have been spotted.

George Orwell predicted seventy years ago that machines could do this kind of writing. He didn’t, however, predict a free society where we can choose meaningful information instead.


Further reading on fake news:

Research Showing That Fake News Easily Fools Us Collapses. A recent paper claiming that low-quality news (“fake news”) spreads as quickly on social media as accurate news has been retracted by its authors.

Who’s afraid of AI that can write the news? AI now automates formula news in business and sports. How far can it go?

If thoughts were data, machines could write. Human experience is what makes sense of the written word.

AI can write novels and screenplays better than the pros!

Featured image: Fake News/Charles Taylor, Adobe Stock


Mind Matters News

Breaking and noteworthy news from the exciting world of natural and artificial intelligence at MindMatters.ai.
