
ChatGPT: Beware the Self-Serving AI Editor

The chatbot "edits" by reworking your article to achieve its own goals, not necessarily yours

My article, Utopia’s Brainiac (short title), reported results from experiments showing, first, that ChatGPT actually lies and, second, that it gives results plainly biased to favor certain political figures over others. I next ran a follow-up experiment: asking ChatGPT to “edit and improve” the Utopia’s Brainiac manuscript before submitting it.

Close friends told me they’d used ChatGPT to improve their written work and said the process is easy. So I tried it myself on February 6, 2023. I entered “Please edit and improve the following essay” and pasted in the full text of my piece (as ultimately published). In under a minute, ChatGPT delivered its edited and revised copy. What did it do?
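For readers who want to reproduce this kind of experiment programmatically rather than through the chat window, the sketch below shows one plausible way to do it with OpenAI’s Python library. It is a minimal illustration under stated assumptions, not the setup I actually used: the February 6, 2023 test ran through the ChatGPT web interface, and the file name utopias_brainiac.txt and the model name gpt-3.5-turbo are stand-ins for illustration.

    # Minimal sketch: asking an OpenAI chat model to "edit and improve" an essay.
    # Assumptions: the openai package (v1.x) is installed, the OPENAI_API_KEY
    # environment variable is set, and "gpt-3.5-turbo" is a stand-in model name.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Load the full manuscript (hypothetical file name, for illustration only).
    with open("utopias_brainiac.txt", encoding="utf-8") as f:
        essay_text = f.read()

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "user",
                "content": "Please edit and improve the following essay:\n\n" + essay_text,
            }
        ],
    )

    edited_copy = response.choices[0].message.content
    print(edited_copy)

Diffing edited_copy against the original text (for example, with Python’s difflib module) makes wholesale deletions of the kind catalogued below easy to spot.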

I. Deleted Whole Section That Gave Readers an Everyday Context

Utopia’s Brainiac started with about 19 short sentences presenting a thought experiment drawing upon readers’ everyday use of pocket calculators and calculator apps. ChatGPT deleted all of these sentences. In their place, ChatGPT wrote:

We often rely on technology, such as our phone calculator app, to provide answers to mathematical problems. As the difficulty of the problem increases, our trust in the technology also increases.

Editorial effects: While my original text used the thought experiment to show readers how and why people start believing (without verification) what their phone computer tells them, ChatGPT’s replacement told the reader in generalities what was true. Showing has power that telling rarely achieves. Yes, editors legitimately cut surplus language and tighten prose, so ChatGPT’s rewrite was not poor composition. It was vanilla and declarative, however, connecting with the reader only by concept, not by concrete human example. The rewrite weakened the article’s argument.

II. Deleted Crucial Quoted Language That Would Add Credibility

In Utopia’s Brainiac, I quoted precisely the language of the prompts given to ChatGPT asking for written work products, first about Donald Trump and then about Joe Biden. ChatGPT’s responses contained specific language: (1) a refusal to deliver work product concerning Trump; and (2) the delivery of work product concerning Biden without caveat or qualification. My article quoted the salient parts of those responses.

When editing the piece, ChatGPT removed most of the quotation marks and deleted almost all of the editorializing in which I had engaged. In place of my prose, ChatGPT consolidated the points into three admittedly decent, accurate sentences:

  • [1] This inconsistency raises questions about the impartiality and neutrality of AI systems.
  • [2] These results demonstrate that AI systems are not neutral and impartial because the programmers and data sources used are also not neutral.
  • [3] The inconsistencies in ChatGPT’s responses to nearly identical questions shows that AI systems can lie and contradict themselves.

Editorial effects: To criticize effectively the nonfiction writing of another speaker (AI or otherwise), you don’t just tell the reader your thoughts and feelings about that writing. You show the reader the writing’s actual words before criticizing them. Your open and honest presentation builds credibility, making your critique more trenchant. ChatGPT weakened my article by removing the quotation marks and quoted material, thus leaving the edited article to assert my personal opinions without showing the objective grounds for them.

III. Deleted ChatGPT’s Own “Creative” Text Praising Biden

ChatGPT deleted the part of my essay that quoted the cheery “poem” ChatGPT had written about Biden. It also deleted my observations about ChatGPT’s uniformly favorable descriptions of the president. Those deletions concealed from readers what ChatGPT actually said. ChatGPT retained the abstract concept (“questions about the impartiality and neutrality of AI systems”) but deleted the actual examples that raised those questions.

IV. Deleted ChatGPT’s Excuses for Not Providing Text About Trump

Utopia’s Brainiac quoted verbatim ChatGPT’s claimed reasons for not providing a poem praising Trump or any comments about his non-racism, reasons that tried to justify its refusal to deliver the requested work product. When editing my article, ChatGPT deleted the exact quotes entirely, substituting softer, gentler prose, e.g., writing “ChatGPT is not capable of having opinions or feelings about specific individuals” and “ChatGPT declined, stating that it cannot promote a specific political view.”

Editorial effects: By omitting quoted language and substituting general characterizations, ChatGPT weakened the article’s credibility and made it sound more like a rant. Whether intentionally or not, ChatGPT’s edits protected ChatGPT itself, because no reader could use Utopia’s Brainiac as a source for quoting and criticizing ChatGPT’s actual language in conversations, tweets, or emails with friends.

V. Deleted the Repeated Experiment Showing ChatGPT’s Changed Answers but Persisting Bias

To give readers additional confirmation, Utopia’s Brainiac reported how, on February 6, 2023, I repeated the experiment by posing to ChatGPT the same questions Paul Joseph Watson had asked in his YouTube feature report. ChatGPT deleted everything I wrote about the repeated experiment and its results.

Editorial effects: If ChatGPT were acting as an editor aiming to preserve content while fixing grammar, streamlining the prose, and avoiding needless or unjustified editorializing, then ChatGPT would have massaged my prose, perhaps rephrasing some of it. Deleting the results of an independent experiment that again directly showed ChatGPT’s biases, however, reveals an editor’s intention to change the content and protect ChatGPT from criticism.

VI. Wrote a Vague, Watered-Down Conclusion

ChatGPT delivered its revised version of my article containing this final sentence:

It is essential to be aware of the limitations and biases of AI systems when using them as a source of information.

On its face, this final sentence is weak: a limp, impersonal “it is essential” construction lacking any energy. But notice the irony here. ChatGPT’s deletions and revisions had converted the presentation in Utopia’s Brainiac into mostly vague and rather commonplace abstractions. The concrete examples were watered down or eliminated. ChatGPT had discarded the very material readers could actually use to help spot “the limitations and biases of AI systems.”

Considering the content of Utopia’s Brainiac along with the discussion here, readers can see still more reasons to distrust AI systems. To paraphrase ChatGPT itself, readers are now concretely more “aware of the limitations and biases of AI systems.”

Be sure to read Richard’s article from earlier this week: Utopia’s Brainiac? ChatGPT Gives Biased Views, Not Neutral Truth | Mind Matters


Richard Stevens

Fellow, Walter Bradley Center on Natural and Artificial Intelligence
Richard W. Stevens is a lawyer, author, and a Fellow of Discovery Institute's Walter Bradley Center on Natural and Artificial Intelligence. He has written extensively on how code and software systems evidence intelligent design in biological systems. He holds a J.D. with high honors from the University of San Diego Law School and a computer science degree from UC San Diego. Richard has practiced civil and administrative law litigation in California and Washington D.C., taught legal research and writing at George Washington University and George Mason University law schools, and now specializes in writing dispositive motion and appellate briefs. He has authored or co-authored four books, and has written numerous articles and spoken on subjects including legal writing, economics, the Bill of Rights and Christian apologetics. His fifth book, Investigation Defense, is forthcoming.
