“The questions asked were those that an elementary school student should be easily able to understand and respond to. As such, if these voice assistants were in school, they’d all get a failing grade.” – Ron Schmelzer, “Tests Show That Voice Assistants Still Lack Critical Intelligence” at Forbes
The test uses 120 questions spanning 12 topics of varying difficulty. The researchers ranked each response from Category 0 (no answer at all) to Category 3 (a clear, straightforward answer). Because Categories 0 through 2 all describe inadequate responses, the best any assistant achieved was just shy of 35% adequate responses.
As their report says, “Voice assistants have a long way to go before even half of the responses are acceptable.” (Emphasis in the original)
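The scoring arithmetic above can be sketched with made-up numbers. The category counts below are hypothetical, chosen only to land near the reported "just shy of 35%" figure; they are not the study's actual data.

```python
# Hypothetical illustration of the scoring scheme described above:
# 120 questions, each response ranked Category 0 (no answer at all)
# through Category 3 (a clear, straightforward answer). Only
# Category 3 counts as an adequate response.

responses = {0: 22, 1: 25, 2: 32, 3: 41}  # made-up category counts

total = sum(responses.values())  # 120 questions in all
adequate = responses[3]          # only Category 3 is adequate
adequate_rate = adequate / total

print(f"{adequate_rate:.1%} adequate")  # 34.2% -- just shy of 35%
```

Even under these generous made-up counts, roughly two out of every three responses fall short.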
Creating an effective voice assistant is hard work that must solve a range of problems:
● The machine must distinguish between noise and the voice.
● It must convert the voice into words.
● It must then interpret the question.
● It must determine how best to answer the question.
● It must retrieve or compute an effective answer.
● And it must do all this before the user grows impatient waiting.
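The steps above can be sketched as a simple pipeline. Every function here is a hypothetical placeholder (including the name `handle_request` and the five-second patience budget); a real assistant would use signal processing, speech recognition, and language-understanding models at each stage.

```python
# A minimal sketch, assuming a linear pipeline and placeholder stages.
import time

TIMEOUT_SECONDS = 5.0  # assumed patience budget, not a real spec

def isolate_voice(audio: bytes) -> bytes:
    """Separate the speaker's voice from background noise (placeholder)."""
    return audio

def transcribe(voice: bytes) -> str:
    """Convert the voice into words (placeholder)."""
    return "what is the capital of France"

def interpret(text: str) -> dict:
    """Interpret the words as a structured question (placeholder)."""
    return {"intent": "lookup", "topic": "capital", "subject": "France"}

def answer(question: dict) -> str:
    """Decide how best to answer and retrieve the answer (placeholder)."""
    return "The capital of France is Paris."

def handle_request(audio: bytes) -> str:
    start = time.monotonic()
    reply = answer(interpret(transcribe(isolate_voice(audio))))
    # The whole chain must finish before the user loses patience.
    assert time.monotonic() - start < TIMEOUT_SECONDS
    return reply

print(handle_request(b"..."))
```

An error at any stage propagates to every stage after it, which is part of why adequate responses are hard to achieve.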
This process is not easy and it draws from many subfields in both AI and computer science. I am impressed that voice assistants work as well as they do.
But creating an effective voice assistant is many times easier than solving other AI problems such as, for example, creating a self-driving car. Consider the complexities:
● Questions tend to cluster and are asked repeatedly by multiple users. No situation on the road is identical to a previous one.
● The digital assistant need only decipher a single data stream (the incoming voice). A self-driving car must simultaneously interpret multiple data streams from a variety of sensors, each of which has its own strengths and weaknesses.
● Self-driving car problems are open-ended; each situation and its response may be unique. A voice assistant draws from a comparatively smaller data set: a familiar world of known questions and answers.
● Finally, compared to the speed at which situations change on the road, voice assistants can take their time (several seconds, maybe). Self-driving cars must always be able to respond in less than a second.
These test results can critically inform our trust in AI: we should not trust AI without clear test data showing that it performs reliably at the level required. We ask that much from people. We must ask it from the machines we build.
So, the next time Alexa (or Siri, or Cortana, or Google) gives you a goofy answer, when you’re done laughing, remember what it’s telling you: AI, while useful, falls far short of what many situations require. And we should not let the techno-glitterati fool us into believing otherwise.
You might also enjoy: If you think common sense is easy to acquire, try teaching it to a state-of-the-art self-driving car. Start with snowmen. (Brendan Dixon)