ChatGPT was listed as a contributing author for at least four scientific articles, according to a report from Nature.
The news arrives amid a flurry of debate over the place of AI in journalism and artistic and academic disciplines, and now the issue has spread to the scientific community. People are pushing back against the idea of ChatGPT “authoring” text, claiming that because AI cannot take responsibility for what it produces, only humans should be listed as authors. The article notes,
The editors-in-chief of Nature and Science told Nature’s news team that ChatGPT doesn’t meet the standard for authorship. “An attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs,” says Magdalena Skipper, editor-in-chief of Nature in London. Authors using LLMs in any way while developing a paper should document their use in the methods or acknowledgements sections, if appropriate, she says. “We would not allow AI to be listed as an author on a paper we published, and use of AI-generated text without proper citation could be considered plagiarism,” says Holden Thorp, editor-in-chief of the Science family of journals in Washington DC.

Chris Stokel-Walker, “ChatGPT listed as author on research papers: many scientists disapprove” (nature.com)
If any large language models (LLMs) are used in producing such papers, they should be credited in the acknowledgements but not listed as authors, according to Sabina Alam, director of publishing ethics and integrity at Taylor & Francis in London.
Ethical considerations aside, it remains doubtful at best that LLMs can accurately emulate technical scientific writing.