
Why We Should Not Trust Chatbots As Sources of Information

A linguist and an information theorist say that chatbots lack any awareness of the information they provide — and that matters

Linguist Emily M. Bender and information theorist Chirag Shah, both of the University of Washington, have a message for those who think that the chatbot they are talking to is morphing into a real person:

No.

Not only that, but there are good reasons to be very cautious about trusting chatbots as sources of information, all the more so because they sound so natural and friendly. First, the authors point out, decades of science fiction have taught us to expect computer scientists to develop a machine like that:

However, we must not mistake a convenient plot device — a means to ensure that characters always have the information the writer needs them to have — for a roadmap to how technology could and should be created in the real world. In fact, large language models like Galactica, ChatGPT and LaMDA are not fit for purpose as information access systems, in two fundamental and independent ways.

Emily M. Bender and Chirag Shah, “All-knowing machines are a fantasy” at IAInews (December 13, 2022). A subscription is required to read the whole article.

The first of the two reasons for caution that Bender and Shah identify is that the machine is designed to create text that sounds as though it makes sense: it assimilates huge amounts of training data and models the ways words appear together in certain configurations within that mass. But they warn,

The result is systems that can produce text that is very compelling when we as humans make sense of it. But the systems do not have any understanding of what they are producing, any communicative intent, any model of the world, or any ability to be accountable for the truth of what they are saying.

Emily M. Bender and Chirag Shah, “All-knowing machines are a fantasy” at IAInews (December 13, 2022).

In other words, there is nobody in there. The information was originally intelligently created. But the extracts the chatbot grabs from vast troves and arranges according to rules of syntax are not the result of an intelligent process. No one is in charge of it — unless, of course, there is a human in the loop, listening in. But then the session with the chatbot is no longer an interaction with a supercomputer alone.
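To see what modeling the ways words appear together means in practice, here is a minimal, hypothetical Python sketch (not code from Bender and Shah): a toy bigram model that picks each next word purely from co-occurrence counts in a small training text. Its output can read fluently, yet nothing in it represents meaning, intent, or truth; a false sentence in the training data is modeled just as readily as a true one.

import random
from collections import defaultdict, Counter

# Toy training text; the last sentence is false but is modeled just as readily.
training_text = (
    "the canada goose is a large wild goose "
    "the canada goose is found across north america "
    "the moon is made of green cheese"
)

# Count which word follows which (bigram co-occurrence statistics).
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def generate(start, length=8):
    # Emit statistically plausible text one word at a time; there is no model of truth.
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        candidates, weights = zip(*options.items())
        out.append(random.choices(candidates, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # might print "the canada goose is made of green cheese"

Scaled up by many orders of magnitude and with far more sophisticated statistics, the basic bargain is the same: plausible continuation, not verified assertion.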


In one open-access study, Bender and colleagues called the chatbots “stochastic parrots” on that account.

Second, on a deeper and more philosophical note, the pursuit of Absolutely Certain Correct Information suffers from a fundamental flaw:

There will never be an all-inclusive fully correct set of information that represents everything we could need to know. And even if you might hope that could come to pass, it should be very clear that today’s World Wide Web isn’t it. When people seek information, we might think we have a question and we are looking for the answer, but more often than not, we benefit more from engaging in sense-making: refining our question, looking at possible answers, understanding the sources those answers come from and what perspectives they represent, etc.

Emily M. Bender and Chirag Shah, “All-knowing machines are a fantasy” at IAInews (December 13, 2022).

Of course, the authors note, some questions can be answered simply and definitively: What’s the formal species name of the Canada goose? Branta canadensis. But then that isn’t the sort of question anyone needs to puzzle over for long.

The impossibility of finding fully Correct information in answer to the questions we do puzzle over also scotches the many efforts we hear about to combat misinformation, disinformation, and malinformation on the internet.

The Centers for Disease Control, for example, long treated information about myocarditis as an outcome of the COVID-19 vaccine as “misinformation.” But the agency now admits that the concerns are legitimate. Thus, if an information seeker is in a risk category for myocarditis, even government agencies may not have been providing a picture that allows for risk assessment. We must often take the risk of searching in a variety of places for answers that meet a need.

In their open-access paper, Situating Search, Bender and Shah stress that “there is tremendous value in information seekers exploring, stumbling, and learning through the process of querying and discovery through these systems,” rather than having an apparent voice of authority hand us assured truths. And, in their post at IAInews, they warn, “It is urgent that we recognize that an overlay of apparent fluency does not, despite appearances, entail accuracy, informational value, or trustworthiness.”

Life was never that simple and never will be.

Meanwhile, here’s a reflection on computers “taking over” from Science Uprising:

You may also wish to read: Did the GPT-3 chatbot pass the Lovelace Test? The Lovelace Test determines whether a computer can think creatively. We found out… GPT-3 was not trained to look at meaning. It does not understand its training data. Otherwise it would have learned from the instructional material it saw. (Robert J. Marks)


Mind Matters News

Breaking and noteworthy news from the exciting world of natural and artificial intelligence at MindMatters.ai.
