[Image: woman giving a thumbs-up with a skeptical, sarcastic expression]

Can the Machine Know You Are Just Being Sarcastic?

Researchers claim to have come up with an artificial intelligence program that can detect sarcasm on social media platforms

There’s an old joke about the bored engineering student slouched in the back of the Remedial English Grammar and Composition class. The instructor was lecturing on the use of negatives. In some languages, she explained, negatives can be piled on top of one another without changing the overall negative (“no”) meaning. But in English, adding two negatives together creates a positive.

For example, “I am never going there again” means just what it says (“never”).

but

“I am not ‘never going there again’” means that maybe you are going there again (“yes, if some changes are made”). The first negative negates the second.

The teacher went on to say, “But there is no language in the world in which two positives make a negative.”

Suddenly, the engineer muttered audibly from the back: “Yeah. Right.”

Would the machine get the point?

When evaluating social media posts, how does the machine know that “Oh, sure” can sometimes mean the opposite of what it sounds like?

That’s where sentiment analysis comes in. The term refers to the automated process of identifying the emotion — either positive, negative or neutral — associated with text. While artificial intelligence refers to logical data analysis and response, sentiment analysis is akin to correctly identifying emotional communication. A UCF team developed a technique that accurately detects sarcasm in social media text…

Effectively the team taught the computer model to find patterns that often indicate sarcasm and combined that with teaching the program to correctly pick out cue words in sequences that were more likely to indicate sarcasm. They taught the model to do this by feeding it large data sets and then checked its accuracy.

University of Central Florida, “Researchers Develop Artificial Intelligence That Can Detect Sarcasm in Social Media” at Neuroscience News. The paper is open access.

So, presumably, if enough people said “Yeah. Right!” or “Oh, sure.” in certain contexts in social media, the machine might report sarcasm.
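To make that concrete, here is a toy sketch, not the UCF model: it simply flags a post that contains a hard-coded cue phrase and then checks its accuracy against a few hand-labeled examples. The cue phrases, example posts, and labels are invented for illustration; the point of the researchers’ deep learning approach is precisely that real sarcasm depends on context that this kind of keyword spotting misses.

```python
# Toy sketch only (not the UCF model): keyword spotting plus an accuracy
# check on a tiny hand-labeled "data set". Cue phrases and posts are invented.
CUE_PHRASES = ["yeah. right", "yeah, right", "oh, sure", "great, just great"]

def naive_sarcasm_flag(post: str) -> bool:
    """Flag a post as sarcastic if it contains a known cue phrase."""
    text = post.lower()
    return any(cue in text for cue in CUE_PHRASES)

# Hand-labeled examples: (post, is_sarcastic)
labeled_posts = [
    ("Oh, sure, another Monday. Can't wait.", True),
    ("Yeah. Right. That plan will definitely work.", True),
    ("Right, because that went so well last time.", True),   # missed: no cue phrase
    ("Sure, I can meet at noon tomorrow.", False),
    ("The weather is great today.", False),
]

correct = sum(naive_sarcasm_flag(p) == label for p, label in labeled_posts)
print(f"accuracy: {correct / len(labeled_posts):.2f}")  # 0.80 -- context is the hard part
```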

We are also told,

“Sarcasm isn’t always easy to identify in conversation, so you can imagine it’s pretty challenging for a computer program to do it and do it well. We developed an interpretable deep learning model using multi-head self-attention and gated recurrent units. The multi-head self-attention module aids in identifying crucial sarcastic cue-words from the input, and the recurrent units learn long-range dependencies between these cue-words to better classify the input text.”

University of Central Florida, “Researchers Develop Artificial Intelligence That Can Detect Sarcasm in Social Media” at Neuroscience News. The paper is open access.
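For readers curious what “multi-head self-attention and gated recurrent units” looks like in practice, here is a minimal, hypothetical PyTorch sketch of that general kind of architecture. It is not the authors’ code; the layer sizes, vocabulary size, and variable names are assumptions chosen only so the sketch runs end to end.

```python
# Illustrative sketch of an attention-plus-recurrent sarcasm classifier.
# All dimensions and names are assumptions, not the UCF implementation.
import torch
import torch.nn as nn

class SarcasmClassifier(nn.Module):
    def __init__(self, vocab_size=30000, embed_dim=128, heads=4, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Multi-head self-attention highlights candidate cue words in the post.
        self.attn = nn.MultiheadAttention(embed_dim, heads, batch_first=True)
        # A bidirectional GRU models longer-range dependencies between those cues.
        self.gru = nn.GRU(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, 1)  # one logit: sarcastic vs. not

    def forward(self, token_ids):
        x = self.embed(token_ids)               # (batch, seq_len, embed_dim)
        attended, weights = self.attn(x, x, x)  # self-attention over the tokens
        _, h = self.gru(attended)               # final hidden states, both directions
        h = torch.cat([h[-2], h[-1]], dim=-1)
        return self.out(h).squeeze(-1), weights # logit plus attention map

# Shape check on a batch of two 12-token posts (random ids, just for illustration).
model = SarcasmClassifier()
logits, attn = model(torch.randint(0, 30000, (2, 12)))
print(logits.shape, attn.shape)  # torch.Size([2]) torch.Size([2, 12, 12])
```

The returned attention weights are, roughly, what makes such a model “interpretable” in the press release’s sense: they indicate which words the classifier weighted most heavily when deciding a post was sarcastic.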

Hmm. Sarcasm is actually quite easy to identify, both in voice and in written communications; the difficulty is what to do about it if the goal is a constructive discussion.

Walter Bradley Center director Robert J. Marks, a computer engineer, is a bit skeptical of the claim that an AI program can detect sarcasm. He says,

Humor has different strata. Satire and sarcasm are near the top while puns are near the bottom. The lowest on the totem pole is vulgarity. I have doubts that natural language processing will be able to identify humor in general.

Identifying humor, in some cases, is easy. If we see formulas like “Knock knock/Who’s there?” or the beginning of the often tasteless “Yo mama…” jokes (recently parodied by the Babylon Bee as “Yo birthing person…” jokes), we know there’s a joke coming. This paper implies that there might be similar though subtle common elements of certain types of sarcasm accessible to natural language processing.

However, as I have noted before, the ambiguity in funny flubbed headlines, like “General Who Ran Vietnam Briefly Dies at 86” and “Iraqi Head Seeks Arms,” is inaccessible to AI. Humans can not only tease apart the ambiguity but can identify the correct and incorrect interpretations. Doing so looks beyond the capability of AI. We’ll see.

Analyzing humor is one thing. Synthesizing is another. There is AI like GPT-3 capable of writing short bursts of impressive prose. I will be impressed when AI writes effective comedy that is funny on purpose, not because it sounds randomly generated. I have some ideas how to do this if there’s anyone out there who wants to fund me and a grad student. 😉

The group researching AI sarcasm detection is Garibay’s Complex Adaptive Systems Lab (CASL), an interdisciplinary research group dedicated to the study of complex phenomena such as the global economy, the global information environment, innovation ecosystems, sustainability, and social and cultural dynamics and evolution.


You may also wish to read:

Flubbed headlines: New challenge for AI common sense. The late Paul Allen thought teaching computers common sense was a key AI goal. To help, I propose the Flubbed Headline Challenge: teach computers to correctly understand the headline “Students Cook and Serve Grandparents.”

and

Researchers disappointed by efforts to teach AI common sense. When it comes to common sense, can the researchers really dispense with the importance of life experience? Yuchen Lin and research colleagues found that AI performs much more poorly on intuitive knowledge/common sense questions than many might expect.

