Last week, Jean-François Bonnefon, a behavioral scientist at the French Centre National de la Recherche Scientifique, tweeted that a scientific manuscript he submitted to a journal had been rejected by a bot. The program had flagged his paper for plagiarism, highlighting the methods, references, and authors’ affiliations. “It would have taken 2 [minutes] for a human to realize the bot was acting up,” Bonnefon wrote in one of his tweets. “But there is obviously no human in the loop here.”
In a massive Twitter thread that followed, several other academics noted having similar experiences. — Diana Kwon, “Journals’ Plagiarism Detectors May Flag Papers in Error” at The Scientist
In the ensuing discussion, it became obvious that relying on bots alone, without input from experts, simply does not work for science publications.
An underlying issue is that, while some plagiarism involves straightforward misappropriation, other cases are more complex. Consider, for example, the appropriation via rephrasing of a new underlying idea.
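To see why the straightforward cases are easy and the rephrased ones are hard, consider how a typical text-overlap detector works. The sketch below is purely illustrative, not the algorithm of any actual plagiarism checker: it compares word n-grams (“shingles”) between two passages, which catches verbatim copying but scores a paraphrase of the same idea at zero.

```python
# Illustrative sketch of n-gram overlap detection. This is a toy model,
# not the method used by any real plagiarism-detection product.

def shingles(text, n=3):
    """Return the set of word n-grams ('shingles') in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets, from 0.0 to 1.0."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

original  = "we measured response times across three experimental conditions"
verbatim  = "we measured response times across three experimental conditions"
rephrased = "reaction latencies were recorded under each of the three setups"

print(jaccard(original, verbatim))   # 1.0 — verbatim copy is flagged
print(jaccard(original, rephrased))  # 0.0 — same idea, no shared wording
```

The same mechanism explains the false positives Bonnefon described: methods sections, references, and affiliations legitimately repeat across papers and so score high on overlap, while an appropriated idea in fresh words scores low. Both failure modes need a human to adjudicate.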
Art forgery poses similar problems. AI can detect straightforward forgery — but so can other methods. Historic artworks often have messy histories involving multiple artists. In that case, AI can shed light on the extent of each person’s contribution. But that will not necessarily settle the controversy over who should be credited. In a painting from a Renaissance workshop, should an artist who painted leaves get the same credit as an artist who painted faces? Don’t look to an algorithm for an answer.
Similarly, with plagiarism, a consensus is emerging that the system will always need a human in the loop.
See also: AI is not a simple fix for plagiarism. The internet sped up a perennial problem without changing it.