
#6 A Conversation Bot Is Cool — If You Really Lower Your Standards

A system that supposedly generates conversation—but have you noticed what it says?

Our Walter Bradley Center director Robert J. Marks has been interviewing fellow computer nerds (our Brain Trust) Jonathan Bartlett and Eric Holloway about 12 overhyped AI concepts of the year, from AI Dirty Dozen 2020 Part II. Now here's #6, a means of generating copy from AI: "GPT-3 Is 'Mindblowing' If You Don't Question It Too Closely." So what about the bot that replaces conversation?

Our story begins at 09:03. Here's a partial transcript. Show Notes and Additional Resources follow, along with a link to the complete transcript.

Robert J. Marks: GPT-3. Those are four alphanumeric characters that rhyme. GPT-3. And there was a headline that said there's a subreddit populated entirely by AI personifications of other subreddits. First of all, what's Reddit, for those of us who are not social media savvy?

Jonathan Bartlett: Reddit is a site where people post links and comment, but it develops a social-gathering type of feel. And so basically there were some posters who were posting within some of these subreddits, these subcategories, and it took a while before anyone noticed that these were actually bots that were posting.

Note: Bots were posting? Well, MIT's Technology Review is a pretty reliable source and here's what they say: "A GPT-3 bot posted comments on Reddit for a week and no one noticed." There's a piece on, literally, "How to prevent bots from spamming your sign up forms." Also: "The Social Impact Of Bad Bots And What To Do About Them." Meanwhile, back to our story.

Robert J. Marks: Wasn’t this, when it came out, the developer said this might be too dangerous to release because of all the fake headlines that it would generate?

Jonathan Bartlett: They've made lots of different claims about GPT-3. And it's impressive as a demo. It really does do some impressive text generation. In fact, I think someone actually built a code generation system based on it. So you could describe in plain words what you wanted the code to do, and it would actually generate functional code to do what you asked. It's got quite a bit of wow sizzle to it, but it turns out that it's not… Once you try to get it to do anything serious, it loses its luster.

Robert J. Marks: Yeah. GPT-3 was trained with billions and billions of articles, including all of Wikipedia and a bunch more. And I think one of the big claims from GPT-2 to GPT-3 was this great, massive increase in the amount of training data it used. You could just take a few words and prompt it and, boop, it generates a paragraph corresponding to those words. A review in Wired said GPT-3 was provoking chills across Silicon Valley. But, like you said, it was one of these real quick things where you didn't get into too much depth. And I think it was you who said, in the article you wrote for Mind Matters News, that it's very impressive if you don't look too closely. Is that right?
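Note: For readers who want to see what "take a few words and prompt it" looks like in practice, here is a minimal Python sketch. GPT-3 itself is only reachable through OpenAI's hosted API, so the sketch uses the much smaller, openly available GPT-2 model through the Hugging Face transformers library as a stand-in; the prompt and generation settings are illustrative assumptions, not anything used in the interview.

from transformers import pipeline

# Load an open text-generation model. GPT-2 stands in for GPT-3 here,
# because GPT-3's weights are not publicly released.
generator = pipeline("text-generation", model="gpt2")

# A few words of prompt...
prompt = "Artificial intelligence will change journalism because"

# ...and the model writes a paragraph-length continuation of them.
result = generator(prompt, max_new_tokens=60, do_sample=True, temperature=0.9)
print(result[0]["generated_text"])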

Jonathan Bartlett: Exactly. It's one of those things where, when people see some of these results, they start expecting things that they really shouldn't be expecting from these sorts of systems. For example, one thing that was really impressive is that this is a text-processing engine, but it turns out that it can do math. It can do basic arithmetic. But once you get past three digits, it doesn't do basic arithmetic at all.

Yeah. So, if you ask it what the number before a hundred is, it will tell you it's 99. If you ask it what the number before 100,000 is, it will say 99,009, which is not the number before 100,000 …

But, at the end of the day, these AIs usually wind up saying something that's completely nonsensical. One of the things somebody found while poking at GPT-3 a bit: if you ask it basic questions about the United States, it can tell you who was the President of the United States at different times. But you could also ask it, "Who was the President of the United States in 1600?" and it would give you an answer, not recognizing that the United States didn't exist in 1600. And you could ask it, "How many eyes does a blade of grass have?" and it would give you an answer, as though a blade of grass had one or two eyes.
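Note: The poking Bartlett describes is easy to try at home. Here is a hedged sketch in the same spirit, again with the open GPT-2 model standing in for GPT-3; the probe questions are taken from the conversation above, while the model choice and settings are assumptions for illustration. The point is that the model completes the text either way; it has no mechanism for saying that a question makes no sense.

from transformers import pipeline

# Same open stand-in model as above; GPT-3 itself is API-only.
generator = pipeline("text-generation", model="gpt2")

# Questions a careful human would refuse or qualify. The model answers
# anyway, because all it ever does is continue the text it is given.
probes = [
    "Q: What number comes before 100? A:",
    "Q: What number comes before 100,000? A:",
    "Q: Who was the President of the United States in 1600? A:",
    "Q: How many eyes does a blade of grass have? A:",
]

for probe in probes:
    out = generator(probe, max_new_tokens=20, do_sample=False)
    print(out[0]["generated_text"])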

Note: Many people noticed the claim that AI could replace writers—until they saw the results: would-be celeb Lexi Flores was wearing a "champagne flute"? Hardly. For fun, look at Scott Aaronson's takedown of a similar pretense organized for a popular chatbot, Eugene Goostman:

Eugene: The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.

Scott: Do you think Alan Turing, brilliant though he was, had trouble imagining that the judges of his “imitation game” wouldn’t think to ask commonsense questions like the ones above—or that, if they did, they’d actually accept evasion or irrelevant banter as answers?

Spoiler: It did not turn out well for the bot’s creators.


Well, here’s the rest of the countdown to date. Read it and whistle:

7: AI Can Create Great New Video Games All by Itself! In our 2020 "Dirty Dozen" AI myths: it's actually just remixing previous games. Eric Holloway describes it as being like a bad dream of Pac-Man. Well, see if it is fun.

8 in our AI Hype Countdown: AI is better than doctors! Sick of paying for health insurance? Guess what? AI is better! Or maybe, wait… Only 2 of the 81 studies favoring AI used randomized trials. Non-randomized trials mean that researchers might choose data that make their algorithm work.

9: Erica the Robot stars in a film. But really, does she? This is just going to be a fancier Muppets movie, Eric Holloway predicts, with a bit more electronics. Often, making the robot sound like a real person is just an underpaid engineer in the back, running the algorithm a couple of times on new data sets. Also: Jonathan Bartlett wrote in to comment, "Erica, robot film star, is pretty typical modern-day puppeteering — fun, for sure, but not a big breakthrough."

10: Big AI claims fail to work outside the lab. A recent article in Scientific American makes clear that grand claims are often not followed up with great achievements. This problem in artificial intelligence research goes back to the 1950s and is based on a refusal to grapple with built-in fundamental limits.

11: A lot of AI is as transparent as your fridge. A great deal of high tech today is owned by corporations. Lack of transparency means that people trained in computer science are often not in a position to evaluate what the technology is and isn't doing.

12! AI is going to solve all our problems soon! While the AI industry is making real progress, so, inevitably, is hype. For example, machines that work in the lab often flunk in real-world settings.

We’ve talked about this before, of course.

Show Notes

Additional Resources

Podcast Transcript Download


