
Amazon and Microsoft Drinking their own AI Chatbot Champagne?

A relatively new support medium for businesses, from small shops to global conglomerates, is becoming available, based on the exciting yet embryonic Chatbot / Digital Agent services.  Amazon and Microsoft, among others, are diving into this transforming space.  The coat of paint is still wet on Amazon Lex and Microsoft Cortana Skills.  The MSFT Cortana Skills Kit is not yet available to any/all developers, but it has been opened to a select set of partners, enabling them to expand Cortana's core knowledge set.  Microsoft's Bot Framework is in a 'Preview' phase.  However, the possibilities are extensive; for example, both companies could add another tier of support by turning on their own knowledge repositories through their respective Digital Agent (Chatbot) platforms.

Approach from Inception to Deployment

  • The curation and creation of knowledge content may begin with the definition of 'Goals/Intents' and their correlated human utterances, which trigger the Goal Question and Answer (Q&A) dialog format.  This is the classic use case.  The answer may include text, images, and video.
  • Taking Goals/Intents and Utterances to 'the next level' involves creating / implementing Process Workflows (PW).  A workflow may contain many possible paths for the user to reach their goal from a single triggered utterance.  Workflows look very similar to what you might see in a Visio diagram, with multiple logical paths.  Instead of presenting users with the answer based upon the single human utterance, the question, the workflow navigates the user through a narrative to:
    • disambiguate the initial human utterance, and get a better understanding of the specific user goal/intention.  The user's question to the Digital Agent may have a degree of ambiguity, and workflows enable the AI Digital Agent to determine the goal through an interactive dialog/inspection.  The larger the volume of knowledge, and the closer together the goals/intentions, the more the implementation will require disambiguation.
    • hold an interactive conversation / dialog with the AI Digital Agent, walking through a process step by step, including text, images, and video inline with the conversation.  The AI chat agent may pause the 'directions' while waiting for the human counterpart to proceed.  A minimal sketch of this intent model appears below.
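Below is a minimal sketch of the Goal/Intent model described above: intents with sample utterances, a direct answer when a single intent matches, and a disambiguation question when several do.  The schema and the crude word-overlap matching are illustrative assumptions, not any specific vendor's API.

    # Hypothetical intent repository: each Goal/Intent carries sample
    # utterances and the answer content it should trigger.
    INTENTS = {
        "StoreHours": {
            "utterances": ["what are your hours", "when are you open"],
            "answer": "We are open 9am-9pm, Monday through Saturday.",
        },
        "ReturnPolicy": {
            "utterances": ["can I return an item", "what is your return policy"],
            "answer": "Items may be returned within 30 days with a receipt.",
        },
    }

    def match_intents(utterance):
        """Return every intent whose sample utterances overlap the input."""
        words = set(utterance.lower().split())
        hits = []
        for name, spec in INTENTS.items():
            if any(words & set(sample.split()) for sample in spec["utterances"]):
                hits.append(name)
        return hits

    def respond(utterance):
        hits = match_intents(utterance)
        if len(hits) == 1:                 # unambiguous: answer directly
            return INTENTS[hits[0]]["answer"]
        if len(hits) > 1:                  # ambiguous: disambiguate via dialog
            return "Did you mean: " + " or ".join(hits) + "?"
        return "Sorry, I don't have an answer for that yet."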

Future Opportunities:

  • Amazon to provide billing and implementation / technical support for AWS services through a customized version of their own AWS Lex service?  All the code used to provide this Digital Agent / Chatbot may be 'open source' for those looking to implement similar [enterprise] services.  A minimal fulfillment sketch follows this list.
  • A Digital Agent may allow the user to share their screen, OCR the current section of code from an IDE, and perform a code review on the functions / methods.
  • Microsoft has an 'Online Chat' capability for MSDN.  Not sure how extensive the capability is, and if it's a true 1:1 chat, which they claim is a 24/7 service.  Microsoft has libraries of content from Microsoft Docs, MSDN, and TechNet.  If the MSFT Bot Framework has the capability to ingest their own articles, users may be able to trigger these goals/intents from utterances, similar to searching for knowledge base articles today.
  • Abstraction, Abstraction, Abstraction.  These AI Chatbot / Digital Agents must float toward wizards to build and deploy, and stay away from coding, elevating this technology to be configurable by a business user.  Solutions have significant possibilities for small companies, and this technology needs to reach their hands.  It seems that Amazon Lex is well on its way to achieving wizard-driven creation / distribution, but it has a way to go.  I'm not sure if the back-end process execution, e.g. Amazon Lambda, will be abstracted any time soon.
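If Amazon were to 'drink its own champagne' here, the back end would look something like the following: a minimal sketch of an AWS Lambda fulfillment function behind Amazon Lex, using the classic (V1) event and response shape.  The GetAwsBill intent and the billing lookup are hypothetical stand-ins.

    def lookup_month_to_date_charges(account_id):
        # Hypothetical back-end call; a real agent would query a billing API.
        return "142.37"

    def lambda_handler(event, context):
        # Lex (V1) passes the resolved intent and its slots to the Lambda.
        intent = event["currentIntent"]["name"]
        slots = event["currentIntent"]["slots"]

        if intent == "GetAwsBill":
            charges = lookup_month_to_date_charges(slots.get("AccountId"))
            content = "Your month-to-date AWS charges are $" + charges + "."
        else:
            content = "I can't help with that yet."

        # Lex (V1) expects a dialogAction envelope in the response.
        return {
            "dialogAction": {
                "type": "Close",
                "fulfillmentState": "Fulfilled",
                "message": {"contentType": "PlainText", "content": content},
            }
        }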

Microsoft to Release AI Digital Agent SDK Integration with Visio and Deploy to Bing Search

Build and deploy a business AI Digital Assistant with the ease of building Visio diagrams, or 'Business Process Workflows'.  In addition, advanced Visio workflows offer external integration, enabling the workflow to retrieve information from external data sources, e.g. SAP CRM or Salesforce.

For a business subscribing to the Digital Agent service, Microsoft Bing search results will contain the business's AI Digital Assistant created using Visio.  The 'Chat' link will invoke the business's custom Digital Agent.  The Agent has the ability to answer business questions, or lead the user through 'complex' workflows.  For example, the user may ask if a particular store has an item in stock, and then place the order from the search results, with a 'small' transaction fee charged to the business.  The Digital Assistant may be hosted by MSFT / Bing or on an external server.  Applying the Digital Assistant to search results pushes the transaction to the surface of the stack.

[Image: Bing Digital Chat Agent]

Leveraging its existing technologies, Microsoft will leap into the custom AI digital assistant business, using Visio to design business process workflows and Bing for promotional placement and visibility.  Microsoft can charge the business for the Digital Agent implementation and/or usage licensing.

  • The SDK for Visio that empowers the business user to build business process workflows with ease may have low- to no-cost monthly licensing as part of MSFT's cloud pricing model.
  • Microsoft may charge the business a 'per chat interaction' fee, either per chat or in bundles with discounts based on volume.
  • In addition, any revenue generated from the AI Digital Assistant may be subject to transactional fees by Microsoft.

Why not use Microsoft's Cortana or Google's AI Assistant?  Using a 'white label' version of an AI Assistant enables the user to interact with an agent of the search-listed business, and that agent has business-specific knowledge.  The 'white label' AI digital agent is also empowered to perform any automation processes integrated into the user-defined business workflows.  Examples include:

  • basic knowledge such as store hours of operation
  • more complex assistance, such as walking a [prospective] client through a process such as "How to Sweat Copper Pipes".  Many "how to" articles and videos already exist on the Internet through blogs or YouTube.  The AI digital assistant, as 'curator of knowledge', may 'recommend' existing content, or provide its own content.
  • Proprietary information can be disclosed in a narrative using the AI digital agent, e.g. "My order number is 123456B.  What is the status of my order?"  (See the sketch after this list.)
  • Actions, such as employee referrals, e.g. "I spoke with Kate Smith in the store, and she was a huge help finding what I needed.  I would like to recommend her."  Or, "I would like to re-order my 'favorite' shampoo with my details on file."  Frequent patrons may reorder a 'named' shopping cart.
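As a minimal sketch of the order-status narrative above: if the utterance carries an order number, answer directly; otherwise, elicit it.  The order-number pattern and the get_order_status lookup are hypothetical.

    import re

    ORDER_PATTERN = re.compile(r"\b(\d{6}[A-Z]?)\b")    # e.g. "123456B"

    def get_order_status(order_number):
        return "shipped"                                # hypothetical back-end lookup

    def handle_order_status(utterance):
        match = ORDER_PATTERN.search(utterance)
        if match is None:
            # The 'slot' is missing: ask for it instead of guessing.
            return "Sure - what is your order number?"
        order_number = match.group(1)
        return "Order " + order_number + " is currently " + get_order_status(order_number) + "."

    print(handle_order_status("My order number is 123456B. What is the status of my order?"))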

Escalation to a human agent is also a feature.  When the business process workflow dictates, the user may escalate to a human in ‘real-time’, e.g. to a person’s smartphone.

Note: As of yet, Microsoft representatives have made no comment relating to this article.

AI Digital Assistants versus Search Engines

Aren't AI Digital Assistants just like Search Engines?  They both try to recognize your question or human utterance as best as possible to serve up your requested content, e.g. a classic FAQ.  The difference in the FAQ use case is that the proprietary information from the company hosting the digital assistant may not be available on the internet.

Another difference between the Digital Assistant and a Search Engine is the ability of the Digital Assistant to 'guide' a person through a series of questions, enabling elaboration, to provide the user with a more precise answer.

The Digital Assistant may use an interactive dialog to guide the user through a process, and not just supply the 'most correct' responses.  Many people have flocked to YouTube as an instructional, interactive medium.  When multiple workflow paths can be followed, the Digital Assistant has the upper hand.
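A minimal sketch of such a guided, multi-path workflow, modeled as a decision tree the agent walks one question at a time (the nodes and prompts are illustrative):

    WORKFLOW = {
        "start": {
            "prompt": "Is the faucet leaking from the spout or the handle?",
            "choices": {"spout": "spout_fix", "handle": "handle_fix"},
        },
        "spout_fix": {
            "prompt": "Replace the cartridge. Would you like the video?",
            "choices": {},
        },
        "handle_fix": {
            "prompt": "Tighten the packing nut first. Would you like the steps?",
            "choices": {},
        },
    }

    def run(workflow):
        node = "start"
        while True:
            step = workflow[node]
            print(step["prompt"])
            if not step["choices"]:
                break                                    # leaf: goal reached
            answer = input("> ").strip().lower()
            node = step["choices"].get(answer, node)     # re-prompt on no match

    run(WORKFLOW)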

The Digital Assistant has the capability of interfacing with 3rd parties (e.g. data stores with API access).  For example, there may be a Digital Assistant hosted by a Medical Insurance Co that has the ability to not only check the status of a claim, but also send correspondence to a medical practitioner on your behalf.  It is a huge pain to call the insurance company, then the doctor's office, then the insurance company again.  Even the HIPAA release could be authenticated in real time, inline during the chat.  A digital assistant may be able to create a chat session with multiple participants.

One capability where Digital Assistants overrule Search Engines is the ability to 'escalate' at any time during the Digital Assistant interaction.  People are then queued for the next available human agent.

There have been attempts in the past.  Ask.com (originally known as Ask Jeeves) is a question-answering-focused e-business.  Google Questions and Answers (Google Otvety, Google Ответы) was a free knowledge market offered by Google that allowed users to collaboratively find good answers to their questions through the web (also referred to as Google Knowledge Search).

My opinions are my own, and do not reflect my employer’s viewpoint.

AI Personal Assistants Need Remedial Guidance for Their Users

Providing Intelligent ‘Code’ Completion

At this stage in the growth and maturity of the AI Personal Assistant platform, there are many commands and options that common users cannot formulate due to a lack of knowledge and experience.  Using natural language to formulate questions has gotten better over the years, but assistance / guidance in formulating the requests would maximize intent / goal accuracy.

A key usability feature of many integrated development environments (IDEs) is their capability to use "Intelligent Code Completion" to guide their programmers to produce correct, functional syntax.  This feature also unburdens the programmer from the need to look up syntax for each command reference, saving significant time.  As the usage of the AI Personal Assistant grows, and its capabilities along with it, the number of commands and parameters required to use the AI Personal Assistant will also increase.

AI Leveraging Intelligent Command Completion

For each command parameter [level / tree], a drop-down list may appear giving users a set of options to select for the next parameter.  A delimiter such as a period (.) indicates to the AI parser that another set of command options must be presented to the person entering the command.  These options are typically in the form of drop-down lists concatenated to the right of the formulated commands.  Vocally, parent / child commands and parameters may be supplied in a similar fashion.
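A minimal sketch of this period-delimited completion, assuming a hypothetical command tree: each period (.) triggers the next 'drop-down' of options.

    COMMAND_TREE = {
        "Order": {"Food": {"Focacceria": {}, "FavoriteItalianRestaurant": {}}},
        "Play": {"Song": {}, "Artist": {}},
    }

    def complete(partial):
        """Return the option list for the level after the last period."""
        node = COMMAND_TREE
        for part in [p for p in partial.split(".") if p]:
            if part not in node:
                return []                 # unknown prefix: nothing to suggest
            node = node[part]
        return sorted(node)

    print(complete("Order."))             # -> ['Food']
    print(complete("Order.Food."))        # -> ['FavoriteItalianRestaurant', 'Focacceria']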

AI Personal Assistant Language Syntax

Adding another AI parser on top of the existing syntax parser may allow commands like these to be executed:

  • Abstraction (e.g. no application specified)
    • Order.Food.Focacceria.List123
    • Order.Food.FavoriteItalianRestaurant.FavoriteLunchSpecial
  • Application Parser
    • Seamless.Order.Food.Focacceria.Large Pizza

These AI command examples use a hierarchy of commands and parameters to perform the function.  One of the above commands leverages one of my contacts, and a 'List123' object.  The 'List123' parameter may be a 'note' on my Smartphone that contains a list of food we would like to order.  The command may place the order either through my contact's email address, fax number, or by calling the business main number and using AI text-to-speech functionality.

All personal data, such as Favorite Italian Restaurant and Favorite Lunch Special, could be placed in the AI Personal Assistant 'Settings'.  A group of settings may be listed as Key-Value pairs that may be considered shorthand for conversations involving the AI Assistant.
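A minimal sketch of resolving that Key-Value shorthand inside a dotted command (the setting names and values are illustrative):

    SETTINGS = {
        "FavoriteItalianRestaurant": "Focacceria",
        "FavoriteLunchSpecial": "Chicken Parm Hero",
    }

    def resolve(command):
        """Expand any settings aliases found in a dotted command."""
        return [SETTINGS.get(part, part) for part in command.split(".")]

    print(resolve("Order.Food.FavoriteItalianRestaurant.FavoriteLunchSpecial"))
    # -> ['Order', 'Food', 'Focacceria', 'Chicken Parm Hero']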

A majority of users are most likely unaware of many of the options available within the AI Personal Assistant command structure.  Intelligent command [code] completion empowers users with visibility into the available commands and parameters.

For those without a programming background, Intelligent "Command" Completion is loosely similar to the autocomplete in Google's Search text box, predicting possible choices as the user types.  In the case of the guidance provided by an AI Personal Assistant, the user is guided to their desired command; the Google autocomplete, however, requires some sense of the end-result command.  Intelligent code completion typically displays all possible commands in a drop-down list next to the constructor period (.), so the user may have no knowledge of the next parameter without the drop-down choice list.  An additional feature enables the user to hover over one of the commands / parameters to show a brief 'help text' popup.

Note: Microsoft's Cortana AI assistant provides a text box in addition to speech input.  Adding another syntax parser could be enabled through the existing user interface.  However, Siri seems to have only voice recognition input, and no text input.

Is Siri handling the iOS 'Global Search' requests 'behind the scenes'?  If so, the textual parsing, i.e. the period (.) separator, would work.  Siri does provide some cursory guidance on what information the AI may be able to provide: "Some things you can ask me:"

With only voice recognition input, use the Voice Driven Menu Navigation & Selection approach as described below.

Voice Driven Menu Navigation and Selection

The current AI personal assistant abstraction layer may be too abstract for some users.  Consider the difference between these two commands:

  • Play The Rolling Stones song Sympathy for the Devil.
    • Has the benefit of natural language, and can handle simple tasks, like “Call Mom”
    • However, there may be many commands that can be performed by a multitude of installed platform applications.

Versus

  • Spotify.Song.Sympathy for the Devil
    • Enables the user to select the specific application they would like a task to be performed by.
  • Spotify Help
    • A voice-driven menu will enable users to understand the capabilities of the AI Assistant.  Through a voice-interactive menu, users may 'drill down' to the action they want performed, e.g. "Press # or say XYZ" (see the sketch below).
    • Optionally, the voice menu, depending upon the application, may have a customer service feature, and forward the interaction to the proper [calling or chat] queue.
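A minimal sketch of that voice-driven drill-down, with the menu content as an illustrative assumption (a real assistant would substitute speech recognition for the input() prompt):

    MENU = {
        "Spotify": {
            "Play a song": {},
            "Play a playlist": {},
            "Customer service": {},       # could forward to a live queue
        }
    }

    def drill_down(menu):
        node = menu
        while node:
            options = list(node)
            for i, name in enumerate(options, 1):
                print("Press " + str(i) + " or say '" + name + "'")
            said = input("> ").strip()
            if said.isdigit() and 1 <= int(said) <= len(options):
                choice = options[int(said) - 1]
            elif said in node:
                choice = said
            else:
                continue                  # unrecognized: repeat the menu
            print("Selected: " + choice)
            node = node[choice]

    drill_down(MENU)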

Update – 9/11/16

  • I just installed Microsoft Cortana for iOS, and at a glance, the application has a leg up on the competition.
    • The Help menu gives a fair number of examples by category.  Much better guidance than iOS / Siri.
    • The ability to enter/type or speak commands provides the needed flexibility for user input.
      • Some people are uncomfortable 'talking' to their Smartphones; it can feel awkward talking to a machine.
      • The ability to type in commands may alleviate voice command entry errors in speech-to-text translation.
      • There is an opportunity to expand the AI syntax parser to include 'programmatic' type commands, allowing the user a more granular command set, e.g. "Intelligent Command Completion".  As the capabilities of the platform grow, it will be a challenge to interface with and maximize AI Personal Assistant capabilities.

Building AI Is Hard—So Facebook Is Building AI That Builds AI

“…companies like Google and Facebook pay top dollar for some really smart people. Only a few hundred souls on Earth have the talent and the training needed to really push the state-of-the-art [AI] forward, and paying for these top minds is a lot like paying for an NFL quarterback. That’s a bottleneck in the continued progress of artificial intelligence. And it’s not the only one. Even the top researchers can’t build these services without trial and error on an enormous scale. To build a deep neural network that cracks the next big AI problem, researchers must first try countless options that don’t work, running each one across dozens and potentially hundreds of machines.”


This article represents a true picture of where we are today for the average consumer and producer of information, and the companies that repurpose information, e.g. in the form of advertisements.  
The advancement and current progress of Artificial Intelligence and Machine Learning paints a picture akin to the 1970s, with computers that filled rooms and accepted punch cards as input.
Today's consumers have mobile computing power on par with those whole rooms of the 1970s; however, "more compute power" in a tinier package may not be the path to AI sentience.  How AI algorithm models are computed might need to take an alternate approach.
In a classical computation system, a bit would have to be in one state or the other. However quantum mechanics allows the qubit to be in a superposition of both states at the same time, a property which is fundamental to quantum computing.
The construction, and validation of Artificial Intelligence, Machine Learning, algorithm models should be engineered on a Quantum Computing framework.
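For reference, the standard formalism: a qubit's state is a superposition of the two basis states, collapsing to 0 or 1 only upon measurement.

    \lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
    \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1

Here \alpha and \beta are complex amplitudes, and \lvert \alpha \rvert^{2} and \lvert \beta \rvert^{2} are the probabilities of measuring 0 and 1, respectively.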

AI Personal Assistants are “Life Partners”

Artificial Intelligence (AI) "Assistants", or "Bots", are taken to the 'next level' when the assistant becomes a proactive entity, based on input from intelligent human experts, that grows with machine learning.

Even the distinction between an 'Assistant' and a 'Life Partner' implies a greater degree of dynamic, proactive interaction.  The crossover to becoming a 'Life Partner' occurs when we go 'above and beyond' to help our partners succeed, or even survive the day-to-day.

Once we experience our current [digital, mobile] ‘assistants’ positively influencing our lives in a more intelligent, proactive manner, an emotional bond ‘grows’, and the investment in this technology will also expand.

Practical Applications:

  • Alcoholics Anonymous Coach / Mentor – enabling the human partner to overcome temporary weakness.  Knowledge and "triggers" need to be incorporated into the AI 'Partner', e.g. a "Location / Proximity" reminder if the person enters a shopping area that has a liquor store, with the [AI] "Partner" helping to 'talk them down'.
  • Understanding 'data points' from multiple sources, such as alarms and calendar events, to derive 'knowledge' and create an actionable trigger.  (See the sketch after this list.)
    • e.g. unprompted: "Did you remember to take your medicine?"; "There is a new article in N periodical that pertains to your medicine.  Would you like to read it?"
    • e.g. 2, unprompted: "Weather calls for N inches of snow.  Did you remember to service your snow blower this season?"
  • FinTech – while in department store XYZ looking to purchase item Y over a certain amount, unprompted: "Your credit score indicates you are 'most likely' eligible to 'sign up' for a store credit card, and get N percent off your first purchase."  Multiple input sources are used to surface a potential sales opportunity.
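A minimal sketch of correlating 'data points' from multiple sources into proactive triggers; the context fields and rules are illustrative assumptions.

    context = {
        "forecast_snow_inches": 8,
        "snow_blower_serviced_this_season": False,
        "location": "Maple Shopping Plaza",
        "plaza_has_liquor_store": True,
    }

    RULES = [
        (lambda c: c["forecast_snow_inches"] >= 6
                   and not c["snow_blower_serviced_this_season"],
         "Weather calls for heavy snow. Did you remember to service your snow blower?"),
        (lambda c: c["plaza_has_liquor_store"],
         "You're near a liquor store. Want me to call your AA coach?"),
    ]

    def proactive_prompts(ctx):
        """Fire every rule whose multi-source condition evaluates to true."""
        return [message for condition, message in RULES if condition(ctx)]

    for prompt in proactive_prompts(context):
        print(prompt)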

IBM has a cognitive cloud of AI solutions leveraging IBM’s Watson.  Most/All of the 18 web applications they have hosted (with source) are driven by human interactive triggers, as with the “Natural Language Classifier”, which helps build a question-and-answer repository.

There are four things that need to occur to accelerate adoption of the 'AI Life Partner':

  1. Knowledge Experts, or Subject Matter Experts (SME) need to be able to “pass on” their knowledge to build repositories.   IBM Watson Natural Language Classifier may be used.
  2. The integration of this knowledge into an AI medium, such as a ‘Digital Assistant’ needs to occur with corresponding ‘triggers’ 
  3. Our current AI 'Assistants' need to become [more] proactive as they integrate into our 'digital' lives, going beyond setting an alarm clock, hands-free calling, or checking the sports score.  Our [AI] "Life Partner" needs to 'act' like a buddy and a fan of 'our' sports team: without prompting, proactively serve up knowledge [based on correlated, multiple sources], and/or take [acceptable] actions.
    1. E.g. FinTech – “Our schedule is open tonight, and there are great seats available, Section N, Seat A for ABC dollars on Stubhub.  Shall I make the purchase?”
      1. Partner with vendors to drive FinTech business rules.
  4. Take ‘advantage’ of more knowledge sources, such as the applications we use that collect our data.  Use multiple knowledge sources in concert, enabling the AI to correlate data and propose ‘complex’ rules of interaction.

Our AI 'Life Partners' may grow in knowledge, and mature the relationship between man and machine.  Incorporating derived rules leveraging machine learning, without the input of a human expert, will come with both risk and reward.