Mind Matters Natural and Artificial Intelligence News and Analysis

Tag: Turing Test (and chatbots)


Marks: Artificial Intelligence Is No More Creative Than a Pencil

You can use a pencil — but the creativity comes from you. With AI, clever programmers can conceal that fact for a while

(Non-Computable You: What You Do That Artificial Intelligence Never Will (Discovery Institute Press, 2022) by Robert J. Marks is available here.)

Some have claimed AI is creative. But "creativity" is a fuzzy term. To talk fruitfully about creativity, the term must be defined so that everyone is talking about the same thing and no one is bending the meaning to fit their own purpose. In this and subsequent chapters we will explore what creativity is, and in the end it will become clear that, properly defined, AI is no more creative than a pencil.

Creativity: Originating Something New

Lady Ada Lovelace (1815–1852), daughter of the poet George Gordon, Lord Byron, was the first computer programmer, writing algorithms for a machine that…


AI Companies Are Massively Faking the Loophole in the Turing Test

I propose the Turing Test be further strengthened by presuming a chatbot is human until proven otherwise

Computer pioneer Alan Turing was posed the question: how do we know whether an AI has human-like intelligence? He offered his famous Turing test: if human judges cannot distinguish the AI from a human, then it has human-like intelligence. His test has spawned a number of competitions in which participants try to fool judges into thinking that a chatbot is really a human. One of the best-known chatbots was Eugene Goostman, which fooled the judges into thinking it was a 13-year-old boy, mostly by using indirection and other distraction techniques to avoid the sort of in-depth questioning that exposes a chatbot's lack of understanding. However, there is a loophole in this test. Can you spot the loophole? What better…