Mind Matters Natural and Artificial Intelligence News and Analysis

Can a New AI Debating Program Win All the Debates?

While billed as an autonomous debating system, Project Debater features very little autonomy

From ancient days, reason has been considered the hallmark of what sets humans apart from animals. Aristotle defined humans as the rational animal and this definition has stuck through the history of Western philosophy.

Human reason is best demonstrated in debate. Thus, if we can create programs that argue a point effectively, then computers will have conquered an important frontier of what it means to be intelligent. Recently, we learned from Nature that one research team claims to have developed such a program:

A fully autonomous computer system has been developed that can take part in live debates with people. The findings hint at a future in which artificial intelligence can help humans to formulate and make sense of complex arguments…

Developing computer systems that can recognize arguments in natural human language is one of the most demanding challenges in the field of artificial intelligence (AI). Writing in Nature, Slonim et al. report an impressive development in this field: Project Debater, an AI system that can engage with humans in debating competitions. The findings showcase how far research in this area has come, and emphasize the importance of robust engineering that combines different components, each of which handles a particular task, in the development of technology that can recognize, generate and critique arguments in debates.

Chris Reed, “Argument technology for debating with humans” at Nature (the paper is closed access).

Let’s take a step back and look at this: if logic is objective, then once a good argument has been established, it is straightforward to codify that argument’s logic in a program. An argument runs on deductive logic, and deductive logic is one thing at which computers excel.

Now here’s the gotcha! While computers are good at cranking through the rules of deductive logic, they are abysmal at establishing the argument in the first place. When it comes to selecting the premises and the logical steps of an argument, a human is always necessary.
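The division of labor above can be illustrated with a minimal sketch (purely hypothetical, and not connected to Project Debater): once a human has supplied the premises and the if–then rules, a few lines of forward-chaining modus ponens will mechanically grind out every conclusion. The grinding is trivial; choosing the premises is not.

```python
# Minimal forward-chaining deduction: the computer's easy half.
# The premises and rules are supplied by a human; that is the hard half.

def deduce(facts, rules):
    """Repeatedly apply modus ponens (if A then B; A; therefore B)
    until no new conclusions can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in facts and consequent not in facts:
                facts.add(consequent)
                changed = True
    return facts

# A human selects the premises and the logical steps:
premises = {"Socrates is a man"}
rules = [
    ("Socrates is a man", "Socrates is mortal"),
    ("Socrates is mortal", "Socrates will die"),
]

print(deduce(premises, rules))
```

Nothing in `deduce` knows anything about Socrates; it simply closes the fact set under the human-authored rules. Swap in bad premises and it will crank out bad conclusions just as faithfully.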


This is why Project Debater, “an AI system that can engage with humans in debating competitions,” is not as impressive as it seems. The program is billed as an autonomous debating system. But once we get into the details, there is actually very little autonomy. The system, trained on an enormous amount of existing text, makes use of many prepared scripts and knowledge bases and the allowed topics are very constrained:

“These components of the debater system are combined with information that was pre-prepared by humans, grouped around key themes, to provide knowledge, arguments and counterarguments about a wide range of topics. This knowledge base is supplemented with ‘canned’ text — fragments of sentences, pre-authored by humans — that can be used to introduce and structure a presentation during a debate…

Project Debater has addressed this obstacle using a dual-pronged approach: it has narrowed its focus to 100 or so debate topics; and it harvests its raw material from data sets that are large, even by the standards of modern language-processing systems.

Chris Reed, “Argument technology for debating with humans” at Nature (the paper is closed access).
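To make the architecture Reed describes concrete, here is an illustrative sketch (my own invented names and data, not Project Debater’s actual code) of how pre-authored “canned” fragments plus a theme-keyed knowledge base could be assembled into an opening statement. The point is how little autonomy the assembly step requires:

```python
# Illustrative sketch only: hypothetical names and data, not Project Debater.
# "Canned" sentence fragments, pre-authored by humans:
CANNED_INTROS = ["Greetings. Today I will argue that {topic}."]

# A knowledge base of human-prepared arguments, grouped by theme:
KNOWLEDGE_BASE = {
    "we should subsidize preschool": [
        "Early education improves later outcomes.",
        "Preschool access reduces inequality.",
    ],
}

def opening_statement(topic):
    """Stitch a canned intro and pre-prepared arguments into a speech."""
    intro = CANNED_INTROS[0].format(topic=topic)
    arguments = KNOWLEDGE_BASE.get(topic, [])
    return " ".join([intro] + arguments)

print(opening_statement("we should subsidize preschool"))
```

Every substantive sentence in the output was written by a human in advance; the program only selects and concatenates, which is why restricting the system to a hundred or so prepared topics matters so much.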

In addition, the evaluation is very subjective. I’m reminded of the “Eugene Goostman” chatbot that supposedly “passed” the Turing test via a few simple conversational tricks. By dodging questions and putting the human judges on the defensive, the chatbot’s programmer manipulated the questioners into thinking the bot was human, which completely defeats the Turing test’s purpose of detecting true intelligence. In the same way, the Project Debater engine is judged by an untrained audience on whether it is “exemplifying a decent performance,” a hopelessly vague criterion that, in today’s world of Twitter wars and drive-by YouTube commenters, is a very low bar to clear compared with, say, historic exchanges like the Lincoln–Douglas debates over slavery (1858).

The great hope behind this project is to counter “wildfires of fake news, the polarization of public opinion and the ubiquity of lazy reasoning.” But will Project Debater help? A major cause of the lack of trust in institutional authorities today is precisely mechanisms like Project Debater, where canned scripts are plastered in front of our faces by an apathetic and disengaged, yet agenda-driven, media. Far from solving the problem of mindless elites trying to program the masses, Project Debater is a sardonic image of what our intellectual leadership has become: a mechanical farce trying to make others believe things they themselves no longer even understand.

You may also wish to read:

Can AI write the great American novel? Or compose sports news? It’s a split decision, say Rensselaer prof Selmer Bringsjord and Baylor computer engineering prof Robert J. Marks. Computers win games where the rules are strictly defined. Great novels require creativity in the face of situations that are only partly definable.


A conversation bot is cool — if you really lower your standards — A system that supposedly generates conversation — but have you noticed what it says? Bartlett: you could also ask “Who was President in 1600?” and it would give you an answer, not recognizing that the United States didn’t exist in 1600.

Eric Holloway

Senior Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Eric Holloway is a Senior Fellow with the Walter Bradley Center for Natural & Artificial Intelligence, and holds a PhD in Electrical & Computer Engineering from Baylor University. A Captain in the United States Air Force, he served in the US and Afghanistan. He is the co-editor of Naturalism and Its Alternatives in Scientific Methodologies.