Mind Matters Natural and Artificial Intelligence News and Analysis

When should writers who use chatbots, gen AI, etc., give credit?


This topic came up in a Wired advice column written by Reece Rogers:

“If I use an AI tool for research or to help me create something, should I cite it in my completed work as a source? How do you properly give attribution to AI tools when you use them?” —Citation Seeker

“Generative AI Is My Research and Writing Partner. Should I Disclose It?,” December 11, 2024

Good question. The natural instinct of a writer who doesn’t use gen AI is: For heaven’s sake, source whatever you use. If someone else has to do that for you, it could be under embarrassing circumstances. Why take a chance?

Of course, complexities arise. Rogers distinguishes between using a chatbot for research and using it for composition of, say, an article. The latter is more problematic:

Let’s say you decide to use a chatbot to sketch out a first draft, or have it come up with writing/images/audio/video to blend with yours. In this case, I think erring on the side of disclosure is smart. Even the Domino’s cheese sticks in the Uber Eats app now include a disclaimer that the food description was generated by AI and may list inaccurate ingredients.

Every time you use AI for creation, and in some cases for research, you should be honing in on the second question. Essentially, ask yourself if the reader or viewer would feel tricked by learning later on that portions of what they experienced were generated by AI. If so, you totally should use proper attribution by explaining how you used the tool, out of respect for your audience. Not only would generating parts of this column without disclosure go against WIRED’s policy, it would also just be a dry and unfun experience for the both of us.

Wired’s own policy reads, in part:

We do not publish stories with text generated by AI, except when the fact that it’s AI-generated is the whole point of the story. (In such cases we’ll disclose the use and flag any errors.) This applies not just to whole stories but also to snippets—for example, ordering up a few sentences of boilerplate on how Crispr works or what quantum computing is. It also applies to editorial text on other platforms, such as email newsletters. (If we use it for non-editorial purposes like marketing emails, which are already automated, we will disclose that.) …

We do not publish text edited by AI either. While using AI to, say, shrink an existing 1,200-word story to 900 words might seem less problematic than writing a story from scratch, we think it still has pitfalls. Aside from the risk that the AI tool will introduce factual errors or changes in meaning, editing is also a matter of judgment about what is most relevant, original, or entertaining about the piece. This judgment depends on understanding both the subject and the readership, neither of which AI can do.

It’s refreshing to see a tech mag that understands the risks, especially after the venerable Sports Illustrated got caught in 2023 publishing AI-generated news stories, complete with fake writer profiles.

There is no way to reliably automate good, creative newswriting any more than one can automate authentic relationships.
