Panic Propaganda Pushes Surrender to AI-Enhanced Power

The hype over AI's significance makes us more vulnerable to it
Can you believe it? USA Today, the national news outlet, on May 4, 2023, declared (italics added):
It’s the end of the world as we know it: ‘Godfather of AI’ warns nation of trouble ahead.
Before digging out and playing your 1987 R.E.M. album, ask yourself: Is this headline true – and what do we do now?
The USA Today article softens the doom timeframe from imminent to someday in its first paragraph (italics added):
One of the world’s foremost architects of artificial intelligence warned Wednesday that unexpectedly rapid advances in AI – including its ability to learn simple reasoning – suggest it could someday take over the world and push humanity toward extinction.
Within a day, the Arizona Republic ran an opinion piece headlined “‘Godfather of AI’ is right: If humans don’t get on top of technology, it will destroy us,” injecting more fear (italics added):
If we don’t listen to him, we could lose so much of our ability to exercise free will that it’s hard to fathom. Imagine a world where robots control everything from the money we spend and food we eat to where we can travel or worship.
Read deeper in the USA Today piece to discover a very different viewpoint from the very same person, “Godfather of AI” Geoffrey Hinton, who “acknowledged that he was dealing, to some degree, with hypotheticals.” Reportedly, Hinton said (italics added):
It’s possible that I am totally wrong about digital intelligence overtaking us. Nobody really knows, which is why we should worry now.
Hinton’s backtrack didn’t appear in the headlines. Neither did the pushback from other experts briefly mentioned in the USA Today article:
Other computer security experts downplayed Hinton’s concerns, saying that artificial intelligence is basically an extremely sophisticated programming platform that can only go so far as humans allow it.
One quoted expert said:
I think everybody needs to take a step back here and get away from the hyperbole, … Everybody’s saying AI is going to become sentient or whatever. No, it’s not. It’s a computer. It does what you tell it to do.
The Arizona Republic piece doesn’t mention the pushback. Instead, its author poses a string of rhetorical “what if” questions designed to stoke fear, urging readers: “Don’t ignore Geoffrey Hinton’s warning”:
There’s no end to the number of ways an unchecked digital learning environment could alter our lives. And Hinton’s warning is a call for government and tech ethics societies to get on top of AI before we’re crushed under it. Just imagine a world without free will. It’s more than possible ….
Sell Fear – Leave the Solutions to Experts
Old-school journalists knew that “headlines sell newspapers.” The same applies to clickbait online. Articles like these two fit a standard definition of propaganda: “the spreading of ideas, information, or rumor for the purpose of helping or injuring an institution, a cause, or a person.”
Propaganda can supply truth, falsehoods, or a mixture. For these two articles, the bigger question is: What is their purpose? What institution or cause are they trying to help or hurt?
Neither article even hints at blocking the development of AI systems. Neither suggests ChatGPT or other bots should be dismantled. When asked about solutions, Hinton reportedly said, “It’s very important that people get together and think hard about it and see whether there is a solution. It’s not clear there is a solution.” The Arizona Republic piece urged “government and tech ethics societies to get on top of AI before we’re crushed under it.”
Both articles urge that experts in ethics committees and government agencies solve the problems that AI systems create. Predictably, the Biden White House is developing a “plan” for “ethical AI” and money is flowing. The end-of-the-world propaganda evidently encourages building institutions of designated experts and centralized power.
Connect the dots. The same people most worried about AI systems becoming globally interconnected, ultra-powerful, and even sentient are the people calling for governments to impose ethics and restrain AI abuses. Meanwhile, current trends in governing theory seek more centralized power to direct businesses and individuals toward specific ideas, policies, and economic outcomes. Placing AI control with the society-and-economy controllers can only concentrate greater power in the hands of the same institutions.
The UK-based EY consultants described how AI enables government to tax, regulate, dispense benefits, impose mandates, and pursue political objectives more efficiently than ever before:
1. [AI] allows [governments] to structure and analyze the huge amount of data they hold on citizens – and use it for social good. This means they can quantify and reduce inequalities in outcomes as well as opportunities. They can also share the data with third parties, who can create apps or services that improve life for citizens, while making sure those parties keep the data private.
2. [AI] gives [governments] a unique chance to drive how citizens use and benefit from these technologies. That’s because governments are also responsible for role-modeling the ethical use of AI, regulating how companies apply it and educating citizens to be ready for its challenges.
Running the world is just getting easier and more exciting for the people and institutions in power. The “existential threat” fears of AI press in on us from all directions, so everyday citizens naturally throw up their hands. Who can possibly understand all of that computer and robot stuff? Let the experts take care of us.
To quote Darth Vader: “The circle is now complete.” The scary, over-the-top propaganda bewilders people into surrendering their decision-making and wisdom, turning the AI concerns over to the very individuals and institutions whose worldview aims to control people more efficiently using AI. Because AI taking control of people is so terrifying, governments and experts must take control of AI.
The AI Ministry of Truth
Unburdening bureaucrats from a tedious chore, AI can easily write its own propaganda, too. I asked ChatGPT (running GPT-4) to write a piece persuading people to accept AI-enhanced decisions. Smoothly, the bot wrote (in part):
By integrating AI, we access vast pools of data and sophisticated algorithms, allowing us to make better-informed, unbiased choices. AI eliminates human error and cognitive biases, ensuring optimal decision-making in areas like healthcare, finance, and climate change. As a society, we should welcome this technology as it empowers us to tackle global challenges more effectively. Trust in AI-enhanced decisions means embracing progress, safeguarding our environment, and improving our quality of life.
The propaganda message is: “AI systems are smarter than humans and will make human life better for everyone, so long as the experts in power use AI properly – and those experts know how, so everything is looking rosy.” Do we spot any major flaw in that view?