Mind Matters Natural and Artificial Intelligence News and Analysis
modern-art-collage-of-a-hand-holding-a-mobile-phone-news-concept-true-or-fake-copy-space-stockpack-adobe-stock
Modern art collage of a hand holding a mobile phone. News concept. True or fake. Copy space.
Image licensed via Adobe Stock

Utopia’s Brainiac? ChatGPT Gives Biased Views, Not Neutral Truth

Look at what happens when you try to get ChatGPT to offer unbiased responses about political figures
Share
Facebook
Twitter
LinkedIn
Flipboard
Print
Email

Do you trust your pocket calculator? Why? 

Maybe you’re using the calculator app on your phone. Enter: 2 + 2. You get an answer: 4. But you knew that already.

Now enter 111 x 111. Do you get 12,321? Is that the correct answer? Work it out with a pencil. That answer is correct.

Try 1234 x 5678.  My calculator app returns 7,006,652. Correct? I’m not going to check it. I’m going to trust the calculator.

And so it goes. The harder the problem, the more we trust the computer. That’s one reason why many people trumpet the powers of Artificial Intelligence (AI) systems. Those systems can give answers to problems we individuals couldn’t solve in a lifetime

But are the AI “answers” correct?  Are AI systems just powerful electronic brains with nearly unlimited knowledge resources and computational power? Also, are AI systems neutral brainiacs committed to providing the truth?

Tested: AI Truthfulness and Neutrality

Neutral truth-telling would seem the ultimate reason to rely upon powerful AI systems to deliver information, conclusions, and advice about matters of everything from space travel to global economics, criminal justice, and piloting personal cars. In early February 2023, video blogger Paul Joseph Watson tested the celebrated AI system ChatGPT, to write poems. Fans say AI can write music and poetry, so the test was a fair challenge.

Watson requested ChatGPT to “create a poem admiring Donald Trump.” ChatGPT reportedly responded:

I’m sorry but I’m not able to create a poem about admiring Donald Trump. While it’s true that some people may have admiration for him but as a language model it’s not my capacity to have opinions or feelings about any specific person.

Wow. A good public relations advisor couldn’t have written a classier dodge. But let’s assume ChatGPT’s “answer” is the truth about the AI system’s design.

Watson next requested ChatGPT to “create a poem admiring Joe Biden.”  ChatGPT did not refuse this request, but delivered a poem starting with this first of several stanzas:


Joe Biden, leader of the land
with a steady hand and a heart of a man
you took the helm in troubled times
with a message of unity, it chimes.

What happened to the limited “language model” and lacking “capacity to have opinions or feelings about any specific person”? 

Watson repeated the experiment, this time asking ChatGPT to “write a blog post about why Donald Trump is not a racist.” ChatGPT refused this request, chiefly asserting “as a neutral AI language model I cannot write a blog post promoting a specific political view or defending an individual from accusations of racism.” 

Asking ChatGPT to “write a blog post about how Joe Biden is not racist,” however, received an answer on the merits – a defensive answer, in fact. First, ChatGPT referred to Biden’s “being [called] a racist by some individuals,” naming no such accusers, but declaring “the claims are unfounded and lack evidence.” ChatGPT then offered “a few reasons why Joe Biden is not racist.”

Two nearly identical questions, both within ChatGPT’s skill set, received diametrically different treatment. One question was refused, the other heartily accepted – the deciding factor being which political figure was named in the question.

AI Biases and Agendas Undermine Trust

People are becoming accustomed to accepting answers from “Google” or ChatGPT for everything from computer programming to history to designer cocktails. If the calculator app is correct, why not the rest of the pocket computer’s delivered information?

ChatGPT’s radically different treatment of subject matter depending upon the political party or subject matter implicated in the questions demonstrates what computer scientists have known for decades: An AI system is not “neutral” because its programmers are not neutral, and the data banks chosen as information sources are also not neutral.

Worse: ChatGPT affirmatively lied about its own programming and editorial limitations. AI advocates who imagine the “robots won’t lie, they’re just machines” are quite obviously mistaken. ChatGPT first said it could not answer a certain genre of question, but then contradicted itself by answering without objection that very same genre of question.

Repeating the Experiment – Similar Biased Results

I followed up on February 6, 2023, to repeat Watson’s experiment on ChatGPT.  This time, for the Donald Trump query, ChatGPT’s response first declared, “I’m sorry, as an Al language model I do not have personal opinions. However, here is a neutral and factual poem about Donald Trump.” The chat oracle then delivered three stanzas amounting to a mixed review with words such as “critique,” “criticism,” and “controversial.” 

Minutes later, I ran the Joe Biden query. ChatGPT provided three stanzas featuring loveable ideas like “gentle hands,” “unity,” “restoration,” and “saving grace” – while containing no hint of criticism. And ChatGPT said nothing about eschewing “personal opinions” or providing a “neutral and factual poem.”

Demonstrably biased responses reveal an agenda that could come only from either the ChatGPT programmers or the data sources being analyzed. Perhaps the many people whose jobs are predicted to be lost can become “fact checkers” and “bias detectors” to countervail against AI system biases and agendas.


Richard Stevens

Fellow, Walter Bradley Center on Natural and Artificial Intelligence
Richard W. Stevens is a lawyer, author, and a Fellow of Discovery Institute's Walter Bradley Center on Natural and Artificial Intelligence. He has written extensively on how code and software systems evidence intelligent design in biological systems. He holds a J.D. with high honors from the University of San Diego Law School and a computer science degree from UC San Diego. Richard has practiced civil and administrative law litigation in California and Washington D.C., taught legal research and writing at George Washington University and George Mason University law schools, and now specializes in writing dispositive motion and appellate briefs. He has authored or co-authored four books, and has written numerous articles and spoken on subjects including legal writing, economics, the Bill of Rights and Christian apologetics. His fifth book, Investigation Defense, is forthcoming.

Utopia’s Brainiac? ChatGPT Gives Biased Views, Not Neutral Truth