As chatbots become more sophisticated, some writers predict that they will take over run-of-the-mill writing jobs that neither need nor attract much creativity. As copyright and plagiarism consultant Jonathan Bailey puts it, “AI doesn’t have to be great, just good enough.”
In 2017, the Washington Post revealed that an AI bot named Heliograf had produced some 850 articles for the paper. Those stories, however, covered things such as local high school football games and financial reports. They were all short, formulaic pieces not worth sending a human reporter to cover.
At the time, the stories had generated more than 500,000 clicks. In short, they were remarkably successful, delivering information that was important and sought after but did not require human input.
So, even if we agree that current AIs are dumb, they are still adequate for many of the tasks we would normally give to human authors. – Jonathan Bailey, “AI and the Danger of Good Enough” at Plagiarism Today (December 19, 2022)
He qualifies that somewhat:
An AI can interpret a box score or an election result, but it won’t be writing feature pieces on athletes or analyzing the impacts of an election. It may try to, and even create prose that seems to fulfill that need, but it won’t hold up to much scrutiny. – Jonathan Bailey, “AI and the Danger of Good Enough” at Plagiarism Today (December 19, 2022)
But his point is, “Much of the reading we do and images we look at are not intended to be art, analysis or anything deeper than exchanging basic information.”
Years ago, the term “satisficing” was in vogue to describe that situation: the result is not very good, but it is good enough. “Satisficing is a decision-making strategy that aims for a satisfactory or adequate result, rather than the optimal solution. Instead of putting maximum exertion toward attaining the ideal outcome, satisficing focuses on pragmatic effort when confronted with tasks. This is because aiming for the optimal solution may necessitate a needless expenditure of time, energy, and resources.” – Investopedia
Some sources seem glad that this is happening:
Even in its current, relatively primitive form ChatGPT portends both huge productivity increases and major disruptions in any enterprise in which writing matters. Instead of writing boilerplate corporate memos, managers will soon assign them to bots. The run-of-the-mill college grads who get paid to flood my mailbox with press releases and promotional emails should start thinking about careers as nail techs or phlebotomists—something in the physical world. Insight and beauty are still rare, but serviceable prose isn’t.
With the right prompts, ChatGPT can already mimic routine political journalism and quotidian high school and college essays. “What GPT can produce right now is better than the large majority of writing seen by your average teacher or professor,” writes Daniel Herman, a humanities teacher at Maybeck High School, a small independent school in Berkeley, in The Atlantic. – Virginia Postrel, “Routine Writing Is About to Be Free” at Virginia’s Newsletter (Substack) (December 20, 2022)
Writer Virginia Postrel is not telling writers to give up and embrace the machine. Rather, her message is this: “While crashing the value of mediocrity, ChatGPT could increase the returns to excellence. (‘Average is over,’ as Tyler Cowen put it.)”
In other words, to be a writer today, you need to be unique and creative. If an idea is original with you, you can be pretty sure that a bot did not say it first.
You may also wish to read: Large Language Models are an unfortunate detour in AI. Even though LLMs have no way of assessing the truth or falsity of the text they generate, their responses sound convincing. Michael Black, director at the Max Planck Institute for Intelligent Systems, thinks LLMs are dangerous because they relay nonsense with an air of authority. (Gary Smith)