Mind Matters News and Analysis on Natural and Artificial Intelligence
[Image: Newspaper clippings used in umbrella design. Photo by Aromal M S on Unsplash]

Who’s Afraid of AI that Can Write the News?

AI now automates formula news in business and sports. How far can it go?

Some researchers have recently claimed that they are frightened to release a new AI writing tool they have developed:

One concern they have is that the technology would be used to turbo-charge fake news operations. The Guardian published a fake news article written by the software along with its coverage of the research. The article is readable and contains fake quotes that are on topic and realistic. The grammar is better than a lot of what you’d see from fake news content mills. And according to The Guardian’s Alex Hern, it only took 15 seconds for the bot to write the article. Rhett Jones, “Elon Musk-backed AI Company Claims It Made a Text Generator That’s Too Dangerous to Release” at Gizmodo

The researchers work for the non-profit AI research group OpenAI, backed by entrepreneur-inventor Elon Musk. They have released only a small model to work with and a paper, citing fear of how “bad actors” might use a bigger model.

Their program works by developing templates from reams of generic articles online, adapting their style and content. Indeed, the sample article about an evolutionary biologist who finds Ovid’s unicorn sounded like it could easily fill slots at many pop science tabloids. The program can be used for translation and summarization but also, of course, for sophisticated impersonation and harassment.

Computer-generated copy, Dr. Marks notes, is old news among techies: “Using sites like SCIgen, lots of computer-generated fake papers have been published by IEEE and Springer.”

It was news—back in 2005 when three students introduced the technology:

At the time the stunt may have seemed like nothing more than a silly “gotcha” moment in the tradition of the “Sokal affair,” in which an NYU physicist wrote a nonsense paper that was accepted by a journal of postmodern cultural studies. But SCIgen has actually had a surprisingly substantial impact, with many researchers using it to expose conferences with low submission standards. The team’s antics spurred the world’s largest organization of technical professionals, the Institute of Electrical and Electronics Engineers (IEEE), to pull its sponsorship of WMSCI; in 2013 IEEE and Springer Publishing removed more than 120 papers from their sites after a French researcher’s analysis determined that they were generated via SCIgen. (Just a few weeks ago Springer announced the release of “SciDetect,” an open-source tool that can automatically detect SCIgen papers.) …

But even a decade later, the team’s creation improbably lives on. Stribling says the generator still gets 600,000 annual pageviews that manage to crash their CSAIL research site every few months. The creators continue to get regular emails from computer science students proudly linking to papers they’ve snuck into conferences, as well as notes from researchers urging them to make versions for other disciplines. Adam Conner-Simons, “How three MIT students fooled the world of scientific journals” at MIT News (2015)

A few nonsense papers did not derail serious computer research, of course. However, Dr. Marks cautions, “For engineering, scientific and math type papers, detecting hype is fairly straightforward. For statements about politics, there is a problem. There are those today advocating policy that sure sounds made up. It’s easy to fake something when there is no foundation to rest on.”

The flurry around OpenAI obscures the fact that AI has already been used a good deal in the last decade to automate formula news in business and sports:

Here is an excerpt of an American football report by Narrative Science published on the Big Ten Network website this year: ‘Wisconsin jumped out to an early lead and never looked back in a 51-17 win over UNLV on Thursday at Camp Randall Stadium. The Badgers scored 20 points in the first quarter on a Russell Wilson touchdown pass and a James White touchdown run. Wisconsin’s offence dominated the Rebels’ defence …’

Not something to impress the judges of the Pulitzer Prizes for journalism, but it is perfectly readable. Few can tell the difference between this and many other match reports. Furthermore, Narrative Science finished the report within seconds of the final whistle – even before the players had left the field. “Rise of the robot journalist” at South China Morning Post (2012)

The templates for such stories are generally built by journalists who develop fixed overall structures, often repeated. They build in a formula for expected surprise elements like sudden deviations in performance or share price, mimicking an expected writing style. Low-paid journalists in the Philippines have been used to produce copy for American news media, a model that combines easily with AI templates to reduce costs.
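A minimal sketch can show how such a template might work. The structure, team names, thresholds, and wording rules below are illustrative only, not Narrative Science's actual system; the "surprise element" is modeled as a simple rule that swaps in stronger language when the margin of victory is large:

```python
# Minimal sketch of a template-driven sports recap generator.
# All names, stats, and thresholds are illustrative assumptions.

def recap(game):
    margin = game["home_score"] - game["away_score"]
    # "Surprise element" rule: a lopsided score triggers stronger language,
    # mimicking the stylistic choices a human sportswriter would make.
    if margin >= 20:
        verb = "routed"
    elif margin > 0:
        verb = "edged"
    elif margin < 0:
        verb = "fell to"
    else:
        verb = "tied"
    # Fixed overall sentence structure, filled in from the box score.
    return (f"{game['home']} {verb} {game['away']} "
            f"{game['home_score']}-{game['away_score']} on "
            f"{game['day']} at {game['venue']}.")

print(recap({
    "home": "Wisconsin", "away": "UNLV",
    "home_score": 51, "away_score": 17,
    "day": "Thursday", "venue": "Camp Randall Stadium",
}))
# → Wisconsin routed UNLV 51-17 on Thursday at Camp Randall Stadium.
```

Real systems layer many such rules and sentence templates, but the principle is the same: the prose is fixed in advance and the data fills in the blanks, which is why the copy can be ready seconds after the final whistle.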

Remember, all this was happening back in 2012. If you did not pay much attention to traditional mass media in the intervening years, you probably didn’t notice the mass layoffs or the scandals over sloppy coverage either.

Fast forward to 2019 and from the Toronto Star (via the New York Times), we learn that AI can also be used to undermine newsgathering:

For an earnings report article, for instance, software systems may meet their match in companies that cleverly choose figures in an effort to garner a more favourable portrayal than the numbers warrant. At Bloomberg, reporters and editors try to prepare Cyborg so that it will not be spun by such tactics. Jaclyn Peiser, “The rise of the robot reporter” at Toronto Star

Software developers hope to help reporters spot frauds, fakes, and scams and also to detect deepfakes, that is, apparently realistic but fabricated photos and videos, which proliferate using similarly sophisticated technology.

So, is it really true that AI can write the news? As a news veteran of forty-five years, I can only consider the question in the light of two background facts:

First, there is nothing new about formula writing. Automating it was as inevitable as automating typing. In the 1940s, Orwell thought formula novels could be written by a machine because fill-in-the-blanks stories require little creativity. If needed, creativity could be added later by a human writer. There are ongoing efforts to automate formula film-making as well. Background noise music and "well, it’s-in-a-frame" art should likewise be easy to automate. So why not conventional formula newsgathering?

[Image: an international newsstand. Photo by Florian Plag (CC BY 2.0)]

Second, major media no longer play the role they used to play in informing us. That’s mainly because we can get multiple streams of information so much more easily today than fifty years ago. As I wrote a while back, “Would you like to go to the library in a snowstorm to check the spelling of the name of a North African city—only to discover that the library is shut due to weather conditions? Kid, welcome to the 1970s.” Yes, if you are under thirty, please know that it was really like that.

One outcome of the digital revolution is that mass media have tended to become characters in, not narrators of, public stories. Think of the recent controversy over the Covington high school boys returning from the March for Life in Washington and a Native American veteran. As Michael Egnor observed here, the internet promotes anonymous mob thinking. But major media outlets, instead of mediating the rage, indulged in it themselves, to such an extent that they now face legal action on behalf of a student they exposed to social media fury. Cultural institutions that once prided themselves on getting to the bottom of the real story showed as little respect for the fact base, which largely exonerated the students, as social media trolls would.

As Eric Holloway notes, mass media simply no longer play the role they used to play in shaping culture:

The Gosnell case demonstrates just how horrific the practice of abortion is, with infants being born and having their spines snipped with a pair of scissors. The trial itself was famously underreported by the news media, with a highly tweeted photo of the empty media benches in the courtroom. We might expect the same to happen to the movie documenting the trial, the “Gosnell Movie”. And it has faced a lot of opposition from the media on many fronts. First, its original crowdfunding campaign was canceled by the platform Kickstarter, forcing them to use the Indiegogo platform. The campaign was a rousing success, raising more than a million dollars for production. Then, it took over a year to find a producer willing to release the movie in a wide release. Finally, once the movie hit the theatres, there was almost no reporting and barely any critics reviewed the movie.

Despite the negative censorship of the established media outlets, the Gosnell Movie managed to be the #1 indie release in the US and #10 out of all movies released. Once it came to DVD the movie became the #1 best selling DVD on Amazon. Altogether, quite an amazing feat for a movie that is square in the blind spot of establishment journalists. Eric Holloway, “The Gosnell Movie Indicates the Death of Old Media” at Eidos

Despite media blackouts, if you want to find out about Gosnell: The Trial of America’s Biggest Serial Killer, you are only a click away. Back in the 1970s, that kind of access to information was difficult even for veteran reporters.

The decline of legacy mass media as a unified community news source has been captured in a statistic:

[In 1963] “All of the top thirty-one shows had ratings of at least 20…led by The Beverly Hillbillies with a rating of 34.9, meaning that 34.9 percent of all American homes with a television set were watching it.”

As a point of comparison, the top-rated primetime TV programs of 2018 (This Is Us and Roseanne) scored a 5.4 and 5.0, respectively. John Miltimore, “6 stats that show how much American culture has changed in a half century” at MercatorNet

So, to return to the question, can robots really write the news? If all you want is performance at the level of the current media’s Covington debacle without all the sound, fury, and litigation, yes. AI will be faster and cheaper. And that’s all it will be.

Software programs will not have more or better ideas than the people who designed them. As the audience for news, we must decide whether that level of information is all we need to know.

It’s the same with painting. Machine painting could flood the market with “it’s-in-a-frame” art. If art matters to you, you’d be wiser to patronize a local artist who expresses something you think is worth saying. And when it comes to news, it’s more important than ever to search for outlets you have reason to believe you can trust.

See also: If a robot read the news, would you notice a difference?

Could AI write novels?

and

Does AI art spell the end of the artist’s way of life?


Denyse O'Leary

Denyse O'Leary is a freelance journalist based in Ottawa, Canada. Specializing in faith and science issues, she has published two books on the topic: Faith@Science and By Design or by Chance? She has written for publications such as The Toronto Star, The Globe & Mail, and Canadian Living. She is co-author, with neuroscientist Mario Beauregard, of The Spiritual Brain: A Neuroscientist's Case for the Existence of the Soul. She received her degree in honors English language and literature.