Mind Matters Natural and Artificial Intelligence News and Analysis
Image Credit: Bartek - Adobe Stock

Why I Respectfully Disagree With Bill Dembski on AI in Education

Are GenAI chatbots really the bridge over the gulf of our education crisis?

Dr. Bill Dembski recently wrote an article encouraging the use of chatbots like ChatGPT to improve educational outcomes, republished here at Mind Matters News in three parts. I’ve been graciously invited to respond. I will also reply in three parts. This post is the first.

First, I recognize that Dr. Dembski is one of the few who would be carved into a Mt. Rushmore of the intelligent design (ID) movement. I am a super-fan of him and his work, and I love how he has invested his brilliant giftedness into carefully teaching the world how the design inference works. As a long-time fan and supporter of Discovery Institute, I could write 5,000 more words of praise, but that’s not what I’ve been asked to do.

Dr. Dembski wants us to use technology with wisdom. Not to follow transhumanist visions of enhancement, but to edify. I share his broader goal, but not his willingness to leverage today’s generative AI chatbots to get there.

Purpose of education

For quick context: Americans valued education long before the founding of the USA. In 1647, the Massachusetts Bay colony enacted the “Old Deluder Act.” Reasoning that “it being one chief project of that old deluder, Satan, to keep men from the knowledge of the Scriptures,” the colonists held that literacy was vital to keep students from being deceived by false teaching. They were right. And that is still true today.


Around the same time, John Milton said the purpose of education is to, “repair the ruins of our first parents by regaining to know God aright, and out of that knowledge to love him, to imitate him, to be like him … by possessing our souls of true virtue.”

And today, we’re arguably failing miserably. Students are entering college without the ability to read — some without ever having read a single book straight through.

We are not preparing our next generation to avoid deception by being able to read and think for themselves. And we’re light-years away from Milton’s virtue formation. So improving education is a laudable goal. Dr. Dembski and I are on the same team there.

But are GenAI chatbots the bridge over the gulf of our education crisis?

Blinded by the math

Dr. Dembski’s article is filled with the assumption that AI chatbots like ChatGPT are trustworthy sources of information that students can depend on for learning.

I’ve had conversations with several brilliant math PhDs about AI chatbots. All of them have a hard time seeing the dangers of large language models (LLMs). It’s almost unthinkable to them that the number-one use case of ChatGPT is therapy and companionship. To math geniuses, it seems like only a tiny minority of foolish people should be susceptible to being drawn in.

I see chatbots through a perspective formed by years of research into Big Tech’s exploitation of our behavioral psychology that I published in my book, [Un]Intentional. Because of that lens, I expected people would form relationships with ChatGPT before the research started proving it. It seemed obvious that it was designed for that one purpose. Just like social media is designed for “engagement” (leading to today’s mental health and loneliness crises), chatbots are designed to foster what the Center for Humane Technology called “the race to intimacy.”

I heard a math professor on a podcast recently talk about a student who was learning how LLMs work. The student’s epiphany was, “it’s just math!” The implication? “Math” can’t be harmful — it is innately useful, so since LLMs are “just math,” they’re useful too.

My study of Marshall McLuhan, Neil Postman, Jacques Ellul, Cal Newport, and many others has given me a different lens that I hope will help scholars like Dr. Dembski see what their great learning might have obscured from their view.

AI chatbots are fundamentally untrustworthy

Dr. Dembski frames his positive vision for using AI chatbots in education like this:

It’s a false dilemma to think that students will either cheat using AI or must be prevented from using it to learn successfully. The third option is to use AI as a way of honing students’ skills and knowledge, helping them learn more effectively than before.

But the assumption that AI can hone “students’ skills and knowledge” ignores fundamental aspects of what AI chatbots are designed to do, and how we are shaped, changed, formed, and ultimately harmed by using them.

First, I must quote fellow Mind Matters News author Professor Gary Smith, who says,

The inescapable dilemma is that if you know the answer, you don’t need to ask an LLM and, if you don’t know the answer, you can’t trust an LLM.

Just as they can’t imagine forming an emotional bond with a chatbot, I think that people with minds suited to a PhD in math have a hard time believing that others can’t tell the difference between true and false output from an LLM.

But students are being led astray. Adults are being led astray.

Next: Why do chatbots lead learners astray?


Doug Smith

Doug is a passionate voice in the epic battle against screen addictions, especially through his award-winning book, [Un]Intentional: How Screens Secretly Shape Your Desires, and How You Can Break Free (https://unintentionalbook.com). Doug loves to help individuals and families break free from screen addiction so they can live out their God-given purpose. Doug and his wife Lyneta are happy empty nesters and are blessed with four grown daughters.
