Excellent article.
Amazon’s Echo and Google’s Home are the two most compelling products in the new smart-speaker market. It’s a fascinating space to watch, for it is of substantial strategic importance to both companies as well as several more that will enter the fray soon. Why is this? Whatever device you outfit your home with will influence many downstream purchasing decisions, from automation hardware to digital media and even to where you order dog food. Because of this strategic importance, the leading players are investing vast amounts of money to make their product the market leader.
These devices have a broad range of functionality, most of which is not discussed in this article. As such, it is a review not of the devices overall, but rather simply of their function as answer engines. You can, on a whim, ask them almost any question and they will try to answer it. I have both devices on my desk, and almost immediately I noticed something very puzzling: They often give different answers to the same questions. Not opinion questions, you understand, but factual questions, the kinds of things you would expect them to be in full agreement on, such as the number of seconds in a year.
How can this be? Assuming they correctly understand the words in the question, how can they give different answers to the same straightforward questions? Upon inspection, it turns out there are ten reasons, each of which reveals an inherent limitation of artificial intelligence as we currently know it…
Addendum to the Article:
As someone who has worked with Artificial Intelligence in some shape or form for the last 20 years, I’d like to throw in my commentary on the article.
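To make the article’s puzzle concrete before the commentary: even its example of a ‘factual’ question, the number of seconds in a year, has several defensible answers depending on which definition of ‘year’ an engine assumes. A minimal sketch in Python (my illustration, not from the article):

```python
# Illustration: the "right" answer shifts with the assumed definition of "year".
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

definitions = {
    "common year (365 days)": 365 * SECONDS_PER_DAY,             # 31,536,000
    "leap year (366 days)": 366 * SECONDS_PER_DAY,               # 31,622,400
    "Julian year (365.25 days)": int(365.25 * SECONDS_PER_DAY),  # 31,557,600
}

for definition, seconds in definitions.items():
    print(f"{definition}: {seconds:,} seconds")
```

Two answer engines giving two of these numbers are each ‘right’ under their own assumptions.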
- Human Utterances and their Correlation to Goal / Intent Recognition. There are innumerable ways to ask for something you want. The ‘ask’ is a ‘human utterance’ that should trigger the ‘goal / intent’ of the knowledge the person is requesting. AI chat bots, or digital agents, maintain a table of these utterances, all of which roll up to a single goal; hundreds of utterances may be supplied per goal (see the first sketch after this list). In fact, Amazon has a service, Mechanical Turk, the ‘Artificial Artificial Intelligence’, through which you may “Ask workers to complete HITs – Human Intelligence Tasks – and get results using Mechanical Turk”. Amazon boasts access to a global, on-demand, 24 x 7 workforce that can complete thousands of HITs in minutes. There are also ways in which the AI digital agent may ‘rephrase’ utterances the AI considers closely related. Companies like IBM treat human-level recognition as comprehending roughly 95% of the words in a given conversation; on March 7, IBM announced it had become the first to home in on that benchmark, having achieved a 5.5% word error rate.
- Algorithmic ‘weighted’ Selection versus Curated Content. It makes sense, based on how these two companies ‘grew up’, that Amazon relies on curated content from acquisitions such as Evi, a technology company which specialises in knowledge base and semantic search engine software. Evi’s first product was an answer engine that aimed to directly answer questions on any subject posed in plain English text, which it accomplished using a database of discrete facts. “Google, on the other hand, pulls many of its answers straight from the web. In fact, you know how sometimes you do a search in Google and the answer comes up in snippet form at the top of the results? Well, often Google Assistant simply reads those answers.” Truncated answers can equate to incorrect answers; the second sketch after this list contrasts the two strategies.
- Instead of a direct Q&A style approach, where a human utterance (question) triggers an intent/goal [answer], the AI digital agent may ask ‘clarifying questions’. A dialog workflow can disambiguate the goal by narrowing down what the user is looking for. This disambiguation process is a common technique in human interaction, and it is represented in a workflow diagram with logic decision paths (see the third sketch after this list). It seems this technique may require human guidance, and be prone to bias, error, and additional overhead for content curation.
- Who are the content curators for this knowledge, providing ‘factual’ answers and/or opinions? Are the curators ‘self-proclaimed’ Subject Matter Experts (SMEs), people holding degrees in history, or IT / business analysts making the content decisions?
- Answers to questions requesting opinionated information may vary greatly between AI platforms, and even between questions within the same AI knowledge base. Opinions may offend, be intentionally biased, and sour the AI / human experience.
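As referenced in the first bullet, here is a minimal sketch of an utterance-to-intent table. It assumes a naive exact-match lookup after normalization (production agents layer statistical language models on top of such tables); the intent names and utterances are invented for illustration:

```python
# Hypothetical utterance table: many phrasings roll up to a single goal/intent.
INTENT_TABLE = {
    "how many seconds are in a year": "seconds_per_year",
    "number of seconds in a year": "seconds_per_year",
    "what is a year in seconds": "seconds_per_year",
    # ... hundreds of utterances may be supplied per goal
    "order dog food": "reorder_item",
    "buy more dog food": "reorder_item",
}

def recognize_intent(utterance: str) -> str | None:
    """Map a raw human utterance to its goal/intent, or None if unrecognized."""
    normalized = utterance.lower().strip().rstrip("?!.")
    return INTENT_TABLE.get(normalized)

print(recognize_intent("How many seconds are in a year?"))  # seconds_per_year
print(recognize_intent("Sing me a song"))                   # None
```

Services like Mechanical Turk are one way to crowdsource the hundreds of utterance variants each goal needs.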
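The second bullet’s contrast, curated discrete facts versus algorithmically weighted selection from the web, can be sketched as follows. Everything here (the fact store, the candidate snippets, the weights) is an assumption for illustration, not either company’s actual pipeline:

```python
# Curated approach (Evi-style): a vetted database of discrete facts.
CURATED_FACTS = {
    "seconds_per_year": "There are 31,536,000 seconds in a common 365-day year.",
}

def curated_answer(intent: str) -> str | None:
    """Return a vetted fact, or nothing at all if no curator has supplied one."""
    return CURATED_FACTS.get(intent)

# Weighted approach (snippet-style): pick the highest-scoring web candidate.
def weighted_answer(candidates: list[tuple[str, float]]) -> str:
    """Return whichever snippet scored highest, even if truncated or wrong."""
    return max(candidates, key=lambda c: c[1])[0]

snippets = [
    ("31,536,000 seconds", 0.71),
    ("About 31.5 million, although if you count a leap year...", 0.83),  # truncated
]
print(curated_answer("seconds_per_year"))  # precise, but limited in coverage
print(weighted_answer(snippets))           # broad coverage; the truncated snippet wins
```

The trade-off falls out directly: curation buys precision at the cost of coverage, while weighted web selection buys coverage at the cost of occasionally reading out a truncated, and therefore incorrect, answer.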
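And for the third bullet, here is a minimal sketch of a clarifying-question dialog workflow as a decision path. The questions and branches are invented for illustration and would, in practice, be authored by human curators, which is exactly where the bias and content-curation overhead mentioned above creep in:

```python
# Hypothetical disambiguation workflow: ask clarifying questions until the
# user's goal is narrowed to a single answer (a decision path, not direct Q&A).
DIALOG_TREE = {
    "question": "Do you mean a calendar year or an astronomical year?",
    "branches": {
        "calendar": {
            "question": "A common year or a leap year?",
            "branches": {
                "common": "31,536,000 seconds",
                "leap": "31,622,400 seconds",
            },
        },
        "astronomical": "About 31,557,600 seconds (a Julian year).",
    },
}

def run_dialog(node, answers):
    """Walk the decision path, consuming one user answer per clarifying question."""
    while isinstance(node, dict):
        print("Agent:", node["question"])
        node = node["branches"][answers.pop(0)]
    return node

print(run_dialog(DIALOG_TREE, ["calendar", "leap"]))  # 31,622,400 seconds
```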