
Can AI Help Stem the Tide of Fake Science Papers?

One problem is that science journals don’t do a very good job of establishing author identities. Chatbots are bound to make things worse

Fake studies are exacting a toll on science publishing. This from the Wall Street Journal on Tuesday:

Fake studies have flooded the publishers of top scientific journals, leading to thousands of retractions and millions of dollars in lost revenue. The biggest hit has come to Wiley, a 217-year-old publisher based in Hoboken, N.J., which Tuesday will announce that it is closing 19 journals, some of which were infected by large-scale research fraud.

Emil Lendof, “Flood of Fake Science Forces Multiple Journal Closures,” Wall Street Journal, May 14, 2024

Wiley had retracted over 11,300 apparently compromised papers and had already closed four journals. Other publishers have retracted hundreds of papers.

The problem is traced to “paper mills,” which, for a price, will fabricate a paper in whole or in part for a scientist. As Lex Bouter, Professor of Methodology and Integrity at Vrije Universiteit Amsterdam, explains,

Paper mills rely on the desperation of researchers — often young, often overworked, often on the peripheries of academia struggling to overcome the high obstacles to entry — to fuel their business model.

They are frighteningly successful. The website of one such company based in Latvia advertises the publication of more than 12,650 articles since its launch in 2012. In an analysis of just two journals jointly conducted by the Committee on Publication Ethics and the International Association of Scientific, Technical and Medical Publishers, more than half of the 3,440 article submissions over a two-year period were found to be fake.

Lex Bouter, “Fake academic papers are on the rise: why they’re a danger and how to stop them,” The Conversation, March 6, 2024

Wiley had unusually bad luck with fake papers

Wiley in particular, which publishes over 2,000 journals, had the misfortune to acquire Hindawi, a smaller, Egypt-based house that published 250 journals. Hindawi got badly infected, as science consultant Elisabeth Bik reported last August at Science Integrity Digest:

Hindawi retracted its first paper in 2009. From 2009 to 2019, Hindawi retracted an average of 26 papers per year. But in 2022, the publisher published 351 retractions, and in 2023 that number skyrocketed to 3,936 – and it’s only August!

Elisabeth Bik, “Hindawi’s mass retraction of “Special Issues” papers,” Science Integrity Digest, August 10, 2023

Bik notes that special issues are a red flag for fake papers because “They are not managed by the regular editors of these journals but by guest editors, who might be inexperienced and naive, but eager to pad their resume and fill up the issues with insignificant contributions from friends, paper mills, and AI chatbots.” She also offers a hands-on analysis of what makes some papers look suspicious.

Apart from editorial handwringing, what can be done?

One fixable problem, identified by particle physicist Ari Joury at Builtin, is that science journals don’t do a very good job of establishing author identities: “Nothing formal is currently in place, and that’s a shame. In a day and age where every bank can verify your ID online and match it with the face on your webcam, science can’t even protect its most valuable contributors from scammers.”
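Some identity infrastructure does already exist. ORCID, the researcher-identifier registry, is one example (it is offered here as an illustration, not something Joury specifically proposes). As a minimal sketch, a submission system could at least confirm that a claimed ORCID iD resolves to a real public record via ORCID’s public API:

```python
# Illustrative sketch only: check that a claimed ORCID iD resolves to a
# public record via ORCID's public API. This confirms the identifier
# exists; it does not prove the submitter actually controls it.
import requests

def orcid_record_exists(orcid_id: str) -> bool:
    url = f"https://pub.orcid.org/v3.0/{orcid_id}/record"
    resp = requests.get(url, headers={"Accept": "application/json"}, timeout=10)
    return resp.status_code == 200  # a 404 means no such public record

# "0000-0002-1825-0097" is ORCID's own documented example iD
print(orcid_record_exists("0000-0002-1825-0097"))
```

That is a low bar, roughly the equivalent of checking that a bank customer’s account number exists, but even that much is rarely enforced at submission time.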

Another problem is the help that new AI tools like chatbots give to fraudsters. One team decided to test that:

A recent study investigated the use of AI to generate high-quality fraudulent medical articles. The authors used ChatGPT to generate a fraudulent article about neurosurgery. Here are some of the prompts they used to create an article in under an hour…

The authors then used AI detection software to check their article. They used the AI detector software “Content at Scale,” which states it has a 98% accuracy rate of telling whether the text is human or AI-generated. It rated the probability of AI content of this article at 48%, far from a convincing result. Another software tool, AI Text Classifier by Open AI, rated the AI generation of the article as “unclear.”

Susan Goldhaber, MPH, “Scientific Fraud In The Age Of AI,” American Council on Science and Health, March 12, 2024
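The detectors named above are proprietary, so how they work internally is not public. Many such tools appear to lean on statistical signals such as how predictable the text is to a language model. The sketch below is only a generic illustration of that kind of signal, using GPT-2 perplexity via the Hugging Face transformers library; it is not the method of “Content at Scale” or of OpenAI’s classifier, and it also shows why such signals are easy to read ambiguously:

```python
# A rough, generic sketch of one common AI-text heuristic: text that a
# language model finds very predictable (low perplexity) is often read,
# imperfectly, as a sign of machine generation.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return float(torch.exp(loss))

sample = "The results of this study demonstrate significant improvements in patient outcomes."
print(f"perplexity: {perplexity(sample):.1f}")
```

Polished human scientific prose also tends to be highly predictable, which is one reason a fabricated article can land at an unhelpful “48% AI” or “unclear” verdict.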

Goldhaber makes a case for independent audits of papers that are federally funded.

Can AI generally help detect fake papers?

To some extent, yes, and the results can be startling, as Carly Cassella reports for Science Alert:

Using artificial intelligence, researchers trained a computer to look for several red flags commonly seen in fake papers submitted to scientific journals.

When the tool could pick out red flags with 90 percent accuracy, it was used to comb through roughly 5,000 neuroscience and medical papers published in 2020.

The tool marked 28 percent as probably made-up or plagiarized.

Carly Cassella, “A Shocking Amount of Neuroscience Papers May Be Faked – And That’s Before ChatGPT,” ScienceAlert, May 27, 2023

A second, hands-on check was of course required, and it found that two-thirds of the computer-flagged papers were indeed fake. So the computer can flag candidates for investigation, but it can’t do the whole job.
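For perspective, a back-of-the-envelope calculation from the figures quoted above suggests the scale of the problem:

```python
# Back-of-the-envelope numbers implied by the figures quoted above.
papers_screened = 5_000        # neuroscience and medical papers from 2020
flag_rate = 0.28               # share the tool marked as suspect
confirmed = 2 / 3              # share of flagged papers the manual check upheld

flagged = papers_screened * flag_rate      # about 1,400 papers
likely_fake = flagged * confirmed          # about 933 papers
print(f"flagged: {flagged:.0f}, likely fake: {likely_fake:.0f} "
      f"({likely_fake / papers_screened:.0%} of papers screened)")
```

On those numbers, nearly one in five of the screened papers would be suspect.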

AI is not going to save researchers from a partly AI-driven dystopia. Stay tuned.

