[Image: Vintage woodworking tools on the workbench]

AI and the Destructive Lies of the Tool Trope

You've heard this, right? “Technology isn’t good or bad; it’s a tool, it’s just how you use it that matters.” False.

This article is reprinted from Doug Smith’s website for his 2021 book [Un]Intentional: How Screens Secretly Shape Your Desires, and How You Can Break Free.

TL;DR

We excuse our embrace of every shiny new Big Tech product with the tool trope: “technology isn’t good or bad; it’s a tool, it’s just how you use it that matters.”

The tool trope is false. And especially in today’s popular LLM AIs, it’s deceiving, dominating, and dehumanizing.

Read on to see why.

Perspective

An empty document is intimidating. It’s uncomfortable, even for an experienced writer. The resistance is real.

OpenAI, Google, Meta, Amazon, and their ilk would ease our discomfort with their quick and “efficient” artificial intelligence (AI). They’d salve our writer’s block by offering “ideas” formed by an algorithm. Time saving. Efficient.

“ChatGPT, write a 2,000-word article about how ChatGPT dehumanizes people.”

No way. Not me. Because I know what is lost: my humanity, and yours. I’d amputate my ability to listen to my heart and mind, and lose my discernment of the precise words to convey my message. Human creativity is decimated.

Because AIs don’t have ideas. They essentially flip billions of weighted coins to guess the next word.
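For the curious, here is a minimal, hypothetical sketch (in Python, with made-up probabilities; not any vendor’s actual code) of the weighted guessing an LLM performs for every single word it emits:

    # Toy illustration of next-word sampling. The numbers are invented for this
    # example; real models do this over tens of thousands of tokens at every step.
    import random

    # Hypothetical probabilities a model might assign to the next word
    # after the prompt "The tool trope is ..."
    next_word_probs = {
        "false": 0.41,
        "true": 0.22,
        "everywhere": 0.19,
        "harmless": 0.18,
    }

    def pick_next_word(probs: dict[str, float]) -> str:
        """Weighted coin flip: choose a word in proportion to its probability."""
        words = list(probs.keys())
        weights = list(probs.values())
        return random.choices(words, weights=weights, k=1)[0]

    print(pick_next_word(next_word_probs))  # e.g. "false" -- or maybe "harmless"

The model repeats that weighted guess over and over to build a response; at no point does it know what any of the words mean.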

Are AIs efficient? Sure. But efficient towards what end?

Are they actually intelligent? Nope. Are they Spirit-led? Never.

The Tool Trope

We live in an era of techno-ignorance. We blindly accept everything that Big Tech decides to force on us through their worldwide monopoly power.

“Artificial Intelligence” is the propaganda-laden term given to the Large Language Model (LLM) tech that Big Tech behemoths have popularized through products like OpenAI’s ChatGPT, Microsoft’s Copilot, and Google’s Gemini. The ability to “chat” with an AI is being added to nearly every app, device, and operating system, with Apple’s offering presumptuously called “Apple Intelligence.”

Our optimistic acceptance of everything they create for us is driven by the tool trope. I’ve heard it countless times, unfortunately even from prominent Christian leaders:

“Technology isn’t good or bad; it’s a tool, it’s just how you use it that matters.”

This deception excuses nearly unconditional adoption of every new tech that comes along, regardless of the harm. Whether ultimately fueled by fear of missing out (FOMO) or desire for pleasure, we:

  • buy devices for ourselves and our kids that we know cause harm
  • play video games that desensitize us to violence and sexualization (often both at the same time)
  • enroll in the school of (anti-)social media where we are all discipled into a highly destructive way of seeing the world and ourselves.

The tool trope is what gives Big Tech the power to continue to grow, even though their products are obviously and intentionally designed to be addictive and are leading many of us to places we regret.

Under the delusion that we have power over technology and can bend it to our will, we think we’re immune to negative effects. So we accept and even advocate for all the new shininess.

Like LLM AIs such as ChatGPT. We blindly trust them to summarize our meetings, podcasts, and sermons, and use them as writing buddies. We “chat” with them through an interface that feels just like chatting with a human (by design), and we’re numbed into thinking that chatbots are thinking.

But the reason they seem intelligent is not that AI’s abilities are growing; it’s that human abilities are shrinking.

We’re deceived by the alluring magic of tech like AI into allowing them to dominate us, and ultimately, dehumanize us.

Deception

We’ve heard that LLM AIs like ChatGPT can “hallucinate.” Examples of confidently wrong proclamations and broken reasoning abound.

But even the term “hallucination” is industry propaganda. It carries the notion that hallucinations are rare and that accuracy is always improving, so we’ll eventually be able to trust AI chatbots to tell us the truth most of the time.

This points to the huge deception: LLM AIs never understand a single word, image, video, or line of code. They can’t know whether anything they say is right, true, or good.

What’s a better way to describe what LLM AIs do? A recent Scientific American article describes AI’s behavior with the profanity “BS.” It turns out that BS is a technical, philosophical term for speech made with no regard for truth, unlike the speech of the truth-teller or the liar, who both know what’s true.

And BS precisely describes every word generated by LLM AIs.

But with regular use, we build a trusting relationship, believing the propaganda that AI chatbots are mostly right. We start turning to them more and more because we’re in a hurry, and we are told we’re being more “efficient” as we offload our ability to reason to the most powerful corporations in the world.

As a result, we’re losing our ability to know what’s true.

Our essential human discernment is “amputated” (as Marshall McLuhan famously said) by our reliance on a technology that only knows how to BS.

In a world where most people use social media, video games, and streaming video 8+ hours a day, where constant distraction has become true addiction, and where our attention span is shorter than a goldfish’s, we are ripe to be deceived not only by AI-generated content, but about the value of AI itself.

As a Christian, I’m watching this while always mindful that “We know that we are of God, and that the whole world lies in the power of the evil one” (1 John 5:19). As I discuss in Chapter 4 of my book, Jesus calls the evil one “a liar and the father of lies” (John 8:44). That means every lie, even the deceptions of an AI chatbot, can be traced through the DNA of deception back to the father of lies.

Deception surrounds the design of AI chatbots, from the name “Intelligence,” to their alluring imitation of humans, to the Big Tech oligarchs who force them upon us with propaganda. All the while, these “tools” BS us into dependence.

So the tool trope fails under the weight of deception. We can’t make an essentially deceptive “tool” into something good.

Domination

The most powerful corporations in the world are deploying AIs into everything. Their strategy is conquest. They aren’t forcing it on us for our good, but because they know that the first to be widely accepted wins. It’s a calculated business risk backed by billions of dollars and the best propaganda those dollars can buy.

The techno-optimistic worldview of our conquerors is not exactly what you might want from your new AI overlords. They have such an optimistic view of AI because they have such a low, minimalistic view of humans.

Sam Altman, celebrity CEO of OpenAI (ChatGPT), blogged a while back about the inevitability of “the merge,” where humans and AI evolve into a single new entity. Following in the footsteps of Ray Kurzweil into the fantasy of a “singularity” where computers become “generally” intelligent, they believe that our consciousness will eventually be uploaded somewhere for a form of eternal life.

Altman’s worldview leads him to this conclusion:

More important than that, unless we destroy ourselves first, superhuman AI is going to happen, genetic enhancement is going to happen, and brain-machine interfaces are going to happen. It is a failure of human imagination and human arrogance to assume that we will never build things smarter than ourselves.

Altman makes it sound like only a lack of imagination and our pride keep us from believing that we’ll build machines that are actually intelligent. He continues, promising that:

Although the merge has already begun, it’s going to get a lot weirder. We will be the first species ever to design our own descendants. My guess is that we can either be the biological bootloader for digital intelligence and then fade into an evolutionary tree branch, or we can figure out what a successful merge looks like.

Altman uses tech-speak that may be unfamiliar. A “bootloader” is the small piece of software that starts the process of booting up a computer and then hands control to the real operating system. So he’s saying that humankind is like a bootup script for “digital intelligence,” and that future superintelligence will be our evolutionary descendant, in a Darwinian sense, and better than humans.
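To make the analogy concrete, here is an illustrative sketch (hypothetical Python, not real firmware) of what a bootloader does: load something else, hand over control, and then step aside.

    # Hypothetical sketch of the "bootloader" analogy: the bootloader exists
    # only to start the "real" system, then it is never needed again.
    class DigitalIntelligence:
        """Stands in for the system the bootloader exists to start."""
        def run(self) -> None:
            print("Digital intelligence running; the bootloader's job is done.")

    def bootloader() -> None:
        kernel = DigitalIntelligence()  # load the "real" system
        kernel.run()                    # hand over control...
        # ...and the bootloader fades away, the role Altman assigns to humanity

    if __name__ == "__main__":
        bootloader()

In Altman’s picture, humanity is that throwaway startup routine.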

“Fade into an evolutionary tree branch” is jargon for the idea that, like old test code that is no longer needed, humans may simply die off as our AI “descendants” thrive via Darwinian survival of the fittest. A “successful merge” would be to somehow blend the biological with the digital so as to become a new species.

This is who is dominating us. This is who we are trusting with the future of humanity: someone so blinded by reductive views of humanity that he is actually planning for a day when his “child” (ChatGPT Infinity?) becomes sentient and takes over.

The tool trope doesn’t overcome this worldview. We can’t decide to use a tool for good that is designed by the oligarchs to replace humanity.

“It’s a tool; it’s just how you use it?” No, this tool is using us. We’re being dominated.

Dehumanization

The conquerors dehumanize us because their worldview eliminates any characteristic that is not computable. They deny consciousness, free will, and rationality because their materialism can’t account for these human traits.

They believe that with enough computing power, human traits will “emerge” from AI as well. And Altman says, “it’s probably going to happen sooner than most people think.”

But as Dr. Robert Marks says in his landmark book Non-Computable You, we know that there are many human traits that can’t be reduced to an algorithm. No matter how many server farms they plant around the world, how many chips they wire together, or how much power they drain from our grids (and further poison the environment as a result), they will never create a conscious entity.

However, after we’re deceived and dominated (per above), we may be tricked into thinking AI is conscious. Some people already think AI chatbots are sentient. They feel alive, so they must be, since we’ve all been discipled by Big Tech’s products into believing that we should trust our feelings over everything else.

The tragic irony is that as we elevate our creation over us, we are dehumanized, losing our uniquely human traits by outsourcing them to a machine. Since we can’t discern reality anymore, and since we can’t focus long enough to write something meaningful, we become so dependent on AI that it seems to become smarter than us.

It’s a perverse twisting of John the Baptist: “He must become greater, I must become less” (John 3:30). Will AI become greater by dehumanizing us?

The tool trope leads us to dehumanize ourselves as we surrender our ability to think for ourselves to machines. Our human characteristics like creativity and insight atrophy while we comfort ourselves with the fallacy that we’re using the tool for good.

The Power of Words

One more thing. Good human writing and editing focus on precisely chosen words to accomplish a purpose by conveying a desired meaning. As Mark Twain said:

“The difference between the almost right word and the right word is really a large matter. ’Tis the difference between the lightning bug and the lightning.”

God’s words are the most powerful of all. In fact, the universe was created with words: “And God said …” Jesus our Lord is called “the word of God.” Words are powerful, and highly valued by God.

LLM AIs generate words without understanding their meaning. The words they produce are therefore, by definition, meaningless. Imprecise and sometimes completely wrong words are chosen. The algorithmic spewing forth of words devalues all words.

When we devalue words, we devalue God, ourselves, and all of creation.

Throwing out the Tool Trope

In Understanding Media: The Extensions of Man, Marshall McLuhan trounced the tool trope back in 1964. He excoriates General David Sarnoff for using the tool trope, saying that Sarnoff is sleepwalking into a deception.

“Firearms are in themselves neither good nor bad; it is the way they are used that determines their value.” That is, if the slugs reach the right people firearms are good. If the TV tube fires the right ammunition at the right people it is good. I am not being perverse. There is simply nothing in the Sarnoff statement that will bear scrutiny, for it ignores the nature of the medium, of any and all media, in the true Narcissus style of one hypnotized by the amputation and extension of his own being in a new technical form (p. 11–12).

He has strong words for those who adopt the tool trope:

Our conventional response to all media, namely that it is how they are used that counts, is the numb stance of the technological idiot. For the “content” of a medium is like the juicy piece of meat carried by the burglar to distract the watchdog of the mind.

Could AI be making all of us into “technological idiots”? The content of the medium (e.g., the magical responses of a chatbot) distracts our minds as we’re numbed while our ability to think for ourselves is amputated. To me, the implication is clear.

Neil Postman agrees with similarly strong words. In Amusing Ourselves to Death, he says:

But it is much later in the game now, and ignorance of the score is inexcusable. To be unaware that a technology comes equipped with a program for social change, to maintain that technology is neutral, to make the assumption that technology is always a friend to culture is, at this late hour, stupidity plain and simple (p. 157).

Takeaway

We’ve got to stop using the tool trope to justify our adoption of the latest thing Big Tech oligarchs have decided must dominate our lives. We need to reject hype-filled pronouncements of benefits or fear-laced predictions of missing out.

Instead: wait, watch, evaluate. Don’t buy the inevitability thesis: that AI or any other deceiving, dominating, or dehumanizing tech is here to stay, so you’d better get on board or be left behind.

Your distinctly human traits are too valuable to throw away for convenience, efficiency, or profit. As Jesus asks in Mark 8:36 (NIV): “What good is it for someone to gain the whole world, yet forfeit their soul?”

Indeed.


Doug Smith

Doug is a passionate voice in the epic battle against screen addictions, especially through his award-winning book, [Un]Intentional: How Screens Secretly Shape Your Desires, and How You Can Break Free (https://unintentionalbook.com). Doug loves to help individuals and families break free from screen addiction so they can live out their God-given purpose. Doug and his wife Lyneta are happy empty nesters and are blessed with four grown daughters.
