Did the Economist Really “Interview” an AI?

Perhaps they have a private definition of what an interview “is”…
Yesterday, I drew attention to a piece in Scientific American by data analyst Eric Siegel exposing the misleading headlines (and content) of many articles on AI. Siegel expertly unmasks the squishy way some reporters and researchers define “accuracy,” enabling them to make dramatic but misleading claims about what AI can predict.
I never thought that The Economist would stretch the truth even more. But the annual “looking forward” edition, The World in 2020, did just that.
In the “Science & Technology” section, writer Tom Standage published an “interview” with OpenAI’s GPT-2. It begins with a predictable AI blurb:
This publication draws on a wide range of expertise to illuminate the year ahead. Even so, all our contributors have one thing in common: they are human. But advances in technology mean it is now possible to ask an artificial intelligence (AI) for its views on the coming year. We asked an AI called GPT-2, created by OpenAI, a research outfit. GPT-2 is an ‘unsupervised language model’ trained using 40 gigabytes of text from the internet. Given a prompt, such as a question or the first sentence of a story, it generates what might plausibly come next. Here are some of its (unedited) answers to our questions on the big themes of 2020.

Tom Standage, “An artificial intelligence predicts the future” at The Economist
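The description in the blurb — given a prompt, generate what might plausibly come next — is the core of any language model. As a toy illustration only (this is not GPT-2, which is a neural network trained on 40 GB of text), a tiny bigram model trained on a few made-up sentences shows the idea of continuing a prompt with statistically plausible next words:

```python
import random
from collections import defaultdict

# A tiny made-up corpus standing in for GPT-2's 40 GB of internet text.
corpus = (
    "the economy will grow next year . "
    "the economy will slow next year . "
    "technology will change the world . "
).split()

# Count bigrams: record which words follow which.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def continue_prompt(prompt, length=5, seed=0):
    """Given a prompt, generate what might plausibly come next."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(rng.choice(candidates))
    return " ".join(words)

print(continue_prompt("the economy"))
```

The output is fluent-looking text assembled from patterns in the training data — which is also why, as Standage himself concedes below, the model “doesn’t actually understand anything.”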
Note the claim in parentheses above: “unedited.” The Economist thus leads the reader to believe that the interview is an unedited, straightforward chat with OpenAI’s GPT-2 AI. It isn’t. The implication is false.
At the very bottom of the interview, Standage tells us, as a minor item of information for the curious, “Details about the methodology used are available here.” The link leads to a post he published at Medium in early December 2019.
I took the advice I gave readers yesterday, and followed the links. What a revelation. The Economist story was more dishonest than the examples that Siegel discussed in Scientific American.
I encourage you to read both the “interview” with the AI and the background at Medium. Both are easy to follow and clearly show that much reporting on AI cannot be trusted.
Here are some highlights from Standage’s post at Medium:
Of course, GPT-2 cannot really predict the future. For a start, it doesn’t actually understand anything: it just sometimes appears to…
As a predictive tool, then, GPT-2 is no better than a Magic 8-Ball.
It turns out that simply feeding it questions as prompts does not produce very relevant answers…
So to generate my ‘interview’, I selected the most coherent, interesting or amusing of the five responses in each case.

Tom Standage, “How I (sort of) interviewed an artificial intelligence” at Medium (December 9, 2019)
Somehow, despite these admissions, Standage felt safe calling the interview “unedited” because he did not “tinker with the text of the resulting answer.” He did, however, cherry-pick the answers that happened to make sense. That is not what the term “unedited” is usually taken to mean in the context of an interview. Standage is using the term “unedited” in the same questionable way that some AI researchers use the term “accurate” with respect to AI predictions.
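The procedure Standage describes — five raw responses per question, with a human picking the best — can be sketched as follows. Here `generate` and `coherence_score` are hypothetical stand-ins (the real generator was GPT-2 and the real judge was Standage himself); the point is that the selection step is where the editing happens, even if no individual answer’s text is touched:

```python
# Sketch of the 'interview' methodology as Standage describes it.
# generate() and coherence_score() are hypothetical stand-ins: the real
# generator was GPT-2 and the real judge was Standage's own taste.

def generate(prompt, n=5):
    """Stand-in for GPT-2: return n candidate answers to a prompt."""
    # Real answers would be sampled from the model; these canned strings
    # merely illustrate the varying quality of raw output.
    canned = [
        "The economy will something the the of.",          # incoherent
        "I believe the biggest risks are political.",      # coherent
        "Banana banana 2020 banana.",                      # incoherent
        "Technology will continue to reshape our lives.",  # coherent
        "xkcd xkcd xkcd.",                                 # incoherent
    ]
    return canned[:n]

def coherence_score(answer):
    """Stand-in for the human judge: crude repeated-word penalty."""
    words = answer.lower().split()
    return len(set(words)) / len(words)  # repetition lowers the score

def interview(questions, n=5):
    """Cherry-pick the best of n raw answers per question:
    'unedited' text, but heavily edited selection."""
    return {q: max(generate(q, n), key=coherence_score) for q in questions}

answers = interview(["What are the big risks in 2020?"])
```

Run on the canned answers above, the filter discards the three incoherent responses and surfaces a fluent one — which is exactly why the published “interview” reads so much better than raw model output would.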
While I appreciate Standage’s forthrightness in explaining how he created the “interview,” I am appalled at the dishonest representation of AI offered in The Economist’s widely read annual issue.
It is worth repeating: If you read claims for AI that sound far-fetched, try to read other sources, background material, or the original research paper (if applicable). You will nearly always come away much less impressed with the AI and much more impressed by the human mind, even when it is engaged in creating a false impression.
Here’s Brendan Dixon’s post from yesterday on shady tricks with statistics: “Can The Machine TELL If You Are Psychotic or Gay?” No, and the hype around what machine learning can do is enough to make old-fashioned tabloids sound dull and respectable. An ingenious “creative” approach to accuracy enables the misrepresentation, says data analyst Eric Siegel.