Attention: Mind Matters News Has Been Prebunked!
ChatGPT-4 produced attacks on Mind Matters News, aimed at people who had never heard of it (prebunking), based only on the About page and the Introduction

In “So Who Are Today’s Disinformation Police?”, Denyse O’Leary shared how various government entities and journalists strive to control the narrative and confine audiences to approved information. One tactic they use is prebunking. First Draft News (FDN) explains that prebunking is “the process of debunking lies, tactics or sources before they strike.” Of course, this procedure can also be the propaganda trick of “poisoning the well,” but never mind that now. FDN touts prebunk messages as “quick and cheap for reporters, fact checkers, governments and others to make.”
I ran some experiments with ChatGPT, the artificial intelligence (AI) chatbot, to see how easy it is to create prebunk messages. I chose Mind Matters News as the target.
Yes, Do Try This at Home
Using ChatGPT (running GPT-4), you can generate professional-sounding prebunking propaganda in minutes, as I did. Here are the steps and results.
First, you copy the source material for ChatGPT to use from the target article or webpage. I copied 206 words from the Mind Matters News “About” page and from the beginning of Robert Marks’ 2018 introduction to the website.
Second, you craft a prompt, i.e., a focused command, telling ChatGPT what to do with the text you’ll be providing. Here’s my first prompt:
ChatGPT, using the information in the quoted passage below – please write an urgent prebunking message to warn people about this website called Mind Matters:
ChatGPT responded with the subject line – **Urgent Prebunking Alert: Skepticism Required for “Mind Matters” Website** – followed by 218 words of college-lecture-style prose that sounds informed and accurate.
Third, you ask ChatGPT to rewrite its response to better meet your propaganda goals, and you may repeat that a few times. My next prompt was: “Please rewrite your prebunking article to more urgently warn readers against visiting the website entitled Mind Matters.” ChatGPT complied, providing this punchy blurb:
Urgent Warning: Exercise Extreme Caution with “Mind Matters” Website
The website Mind Matters, linked with the Walter Bradley Center for Natural and Artificial Intelligence, markets itself as a source of insight at the junction of human and artificial intelligence. However, readers should be highly skeptical of its content.
With discussions that range from AI’s influence on industries like banking and opera ticket pricing to speculative scenarios like AI becoming self-aware, Mind Matters often veers into sensationalist territory. Claims about transformative AI impacts and blockchain revolutions need critical evaluation against empirical evidence, which the site does not consistently provide.
Be wary of the site’s promise to sift through AI news hyperbole; its own narratives can be equally exaggerated and misleading. When seeking information on AI’s actual capabilities and impacts, choose the established, scientifically rigorous sources over the speculative and sensational ones.
Alert: Approach Mind Matters with caution and remain critical of its more dramatic predictions. Reliable information on AI is best sourced from diverse, reputable academic and technological outlets.
Fourth, following the journalists’ advice to write short, tight Tweets to blast all over the Internet, I prompted the chatbot: “Please now rewrite this urgent warning in a 40 word Tweet.” ChatGPT delivered:
Caution: “Mind Matters” often blends AI insights with sensationalism—claims about AI revolutions lack solid evidence. Seek reliable AI news from established sources, not speculative ones. #AIRealityCheck #CriticalThinking
Voilà! In just minutes and without learning much of anything about MindMatters.ai, I got ChatGPT to write credible-sounding prebunking copy and a snarky Tweet.
Just for fun, I prompted ChatGPT to rewrite the prebunking text while emphasizing the current buzzwords “misinformation” and “disinformation,” which it did.
To wrap up, I asked the bot to write a Tweet from the last rewrite and received:
Alert: “Mind Matters” often exaggerates AI’s capabilities—misleading readers. Verify their dramatic claims against trusted sources. Don’t be misled by sensationalism. Trust established, reputable info on AI. #FactCheckAI #AvoidMisinformation
Imagine how many websites you could vilify this way in just one afternoon.
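For the technically curious, here is a minimal sketch of how the four-step routine above could be scripted at scale against OpenAI’s chat API, using the official openai Python library. The prompts are the ones quoted above; the model name, the `ask` helper, and the placeholder list of target excerpts are illustrative assumptions, not the exact workflow I used.

```python
# A minimal sketch: scripting the four-step prebunking routine against
# OpenAI's chat API. The prompts are those quoted in the article; the
# model name, helper, and target list are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(messages, prompt):
    """Append a user prompt, get the reply, and keep the chat history."""
    messages.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(
        model="gpt-4",  # assumption: any GPT-4-class model would do
        messages=messages,
    ).choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply


# Hypothetical targets: excerpts copied from each site's About page (step 1).
targets = {
    "Example Site": "...a couple hundred words copied from the About page...",
}

for name, excerpt in targets.items():
    history = []
    # Step 2: the initial prebunking prompt.
    ask(history, "ChatGPT, using the information in the quoted passage "
                 f"below - please write an urgent prebunking message to "
                 f"warn people about this website called {name}:\n\n{excerpt}")
    # Step 3: push the rewrite toward the propaganda goal.
    ask(history, "Please rewrite your prebunking article to more urgently "
                 f"warn readers against visiting the website entitled {name}.")
    # Step 4: compress the warning into a blastable Tweet.
    tweet = ask(history, "Please now rewrite this urgent warning in a 40 word Tweet.")
    print(name, "->", tweet)
```

Because each pass keeps the chat history, the rewrite prompts refine the bot’s previous answer, just as they did in my manual session.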
It’s Everywhere Already
“Prebunking” propaganda is easy for generative AI systems like ChatGPT to crank out for anyone at home. Unsurprisingly, world-class professionals have built fully automated AI systems that manipulate the Internet everywhere. Freedom House published its 2023 report, “The Repressive Power of Artificial Intelligence,” observing:
Advances in AI are amplifying a crisis for human rights online. While AI technology offers exciting and beneficial uses for science, education, and society at large, its uptake has also increased the scale, speed, and efficiency of digital repression.
The report surveys many aspects of AI abuse:
◊ governments conducting “more precise and subtle forms of online censorship”
◊ “purveyors of disinformation are employing AI-generated images, audio, and text” to distort and conceal the truth
◊ “surveillance systems rapidly trawl social media for signs of dissent”
◊ “massive datasets are paired with facial scans to identify and track prodemocracy protesters”
Info Overload and Liar’s Dividend
Given the countless human and AI-powered message transmitters on the Internet, the Freedom House report described the everyday person’s challenge in finding the truth:
The growing use of AI-generated false and misleading information is exacerbating the challenge of the so-called liar’s dividend, in which widespread wariness of falsehoods on a given topic can muddy the waters to the extent that people disbelieve true statements.
Journalists and other professionals who advocate prebunking to discourage audiences from reading and thinking about alternative information are helping propagandists defeat the search for truth. Just when we need to know solid facts and weigh different viewpoints, AI-powered technologies dilute, overwhelm, discourage and punish free speech and thought.
A 2024 piece at Wired reported: “Targeted, AI-generated political misinformation is already out there — and humans are falling for it.” Further, it predicted: “In 2024, we will see more deepfakes, voice cloning, identity manipulation, and AI-produced fake news.” Yes, and, to quote old-school radio: Stay tuned.