Tag Archives: AI

Uncommon Opportunity? R&D Conversational AI Engineer

I had to share this opportunity.  The Conversational AI Engineer role will continue to be in demand for some time.


Title: R&D Conversational AI Engineer
Location: Englewood Cliffs, NJ
Duration: 6+ months contract (with possible extension)

Responsibilities:

  • Create Alexa Skills, Google Home Actions, and chatbots for various direct Client’s brands and initiatives.
  • Work with the Digital Enterprises group to create production-ready conversational agents to help Client emerge in the connected life space.
  • Create additional add-ons to the conversational agents
  • Work with new technologies that may not be fully documented yet
  • Work with startups and their technology emerging in the connected life space.

Quals–
Client is looking for a developer in conversational AI and bot development.

What is Media Labs?  Media Labs is dedicated to driving a collaborative culture of innovation across all of the Client's brands. We serve as an internal incubator and accelerator for emerging technology and are leading the way with fresh ideas to ignite the future of media and storytelling. We are committed to partnering with another telecom giant, startups, research and academic groups, content creators, and brands to further innovation at the Client. One of our main themes is connected life, and we are looking for an engineer to lead this development.

Requirements for R&D Engineer: –

  • Bachelor's degree in Computer Science, Engineering, or another related field
  • Experience working with new technologies that may not be fully documented yet
  • Experience communicating technology to non-technical people
  • Experience with AWS (Lambda, CloudWatch, S3, API Gateway, etc)
  • Experience with JavaScript, Node.js
  • Some experience creating Alexa Skills, Google Home Actions, or chatbots

Optional Requirements:

  • Experience creating iOS or Android applications (native or non-native)
  •  Experience with API.AI or another NLP engine (Lex, Watson Conversation)

Amazon’s Alexa vs. Google’s Assistant: Same Questions, Different Answers

Excellent article by  .

Amazon’s Echo and Google’s Home are the two most compelling products in the new smart-speaker market. It’s a fascinating space to watch, for it is of substantial strategic importance to both companies as well as several more that will enter the fray soon. Why is this? Whatever device you outfit your home with will influence many downstream purchasing decisions, from automation hardware to digital media and even to where you order dog food. Because of this strategic importance, the leading players are investing vast amounts of money to make their product the market leader.

These devices have a broad range of functionality, most of which is not discussed in this article. As such, it is a review not of the devices overall, but rather simply their function as answer engines. You can, on a whim, ask them almost any question and they will try to answer it. I have both devices on my desk, and almost immediately I noticed something very puzzling: They often give different answers to the same questions. Not opinion questions, you understand, but factual questions, the kinds of things you would expect them to be in full agreement on, such as the number of seconds in a year.

How can this be? Assuming they correctly understand the words in the question, how can they give different answers to the same straightforward questions? Upon inspection, it turns out there are ten reasons, each of which reveals an inherent limitation of artificial intelligence as we currently know it…


Addendum to the Article:

As someone who has worked with Artificial Intelligence in some shape or form for the last 20 years, I’d like to throw in my commentary on the article.

  1. Human Utterances and Their Correlation to Goal / Intent Recognition.  There are innumerable ways to ask for something you want.  The 'ask' is a 'human utterance' which should trigger the 'goal / intent' of the knowledge the person is requesting.  AI chatbots, or digital agents, maintain a table of these utterances which all roll up to a single goal, and hundreds of utterances may be supplied per goal (see the sketch after this list).  In fact, Amazon has a service, Mechanical Turk, the "Artificial Artificial Intelligence," through which you may "Ask workers to complete HITs – Human Intelligence Tasks – and get results using Mechanical Turk."  They boast access to a global, on-demand, 24 x 7 workforce that can get thousands of HITs completed in minutes.  There are also ways in which the AI digital agent may 'rephrase' utterances that the AI considers closely related.  Companies like IBM benchmark against human recognition, where accuracy of comprehension is roughly 95% of the words in a given conversation.  On March 7, IBM announced it had become the first to home in on that benchmark, having achieved a 5.5% error rate.
  2. Algorithmic 'weighted' Selection versus Curated Content.  It makes sense, based on how these two companies 'grew up', that Amazon relies on its curated content acquisitions such as Evi, a technology company which specialises in knowledge base and semantic search engine software. Its first product was an answer engine that aimed to directly answer questions on any subject posed in plain English text, which is accomplished using a database of discrete facts.  "Google, on the other hand, pulls many of its answers straight from the web. In fact, you know how sometimes you do a search in Google and the answer comes up in snippet form at the top of the results? Well, often Google Assistant simply reads those answers."  Truncated answers equate to incorrect answers.
  3. Instead of a direct Q&A style approach, where a human utterance (a question) triggers an intent/goal, the AI digital agent may ask 'clarifying questions.'  A dialog workflow may disambiguate the goal by narrowing down what the user is looking for.  This disambiguation process is a common technique in human interaction, and is represented in a workflow diagram with logic decision paths.  It seems this technique may require human guidance, and may be prone to bias, error, and additional overhead for content curation.
  4. Who are the content curators for knowledge, providing 'factual' answers and/or opinions?  Are curators 'self-proclaimed' Subject Matter Experts (SMEs), people with degrees in history, or IT / business analysts making the content decisions?
  5. Questions requesting opinionated information may vary greatly between AI platforms, and between questions within the same AI knowledge base.  Opinions may offend, be intentionally biased, and sour the AI / human experience.
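
To make the utterance-to-goal table in point 1 concrete, here is a minimal, hypothetical sketch in Node.js.  The intent names, the utterance lists, and the naive exact-match lookup are illustrative assumptions only; production platforms use statistical matching rather than a literal table scan.

```javascript
// Minimal, hypothetical utterance-to-intent table (illustrative only).
// Real platforms (Lex, API.AI, Watson Conversation) use ML-based matching,
// not an exact-match lookup like this.
const intentTable = {
  GetStoreHours: [
    "what time do you open",
    "when are you open",
    "store hours please"
  ],
  OrderDogFood: [
    "order dog food",
    "i need more dog food",
    "reorder my usual dog food"
  ]
};

// Naive matcher: normalize the utterance and scan the table.
function matchIntent(utterance) {
  const normalized = utterance.trim().toLowerCase().replace(/[^a-z0-9 ]/g, "");
  for (const [intent, utterances] of Object.entries(intentTable)) {
    if (utterances.includes(normalized)) {
      return intent;
    }
  }
  return null; // no match -- fall back to a clarifying question (see point 3)
}

console.log(matchIntent("When are you open?"));    // -> "GetStoreHours"
console.log(matchIntent("Do you sell cat food?")); // -> null
```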

Evaluating fobi.io Chatbot Powered By Google Forms: AI Digital Agent?

Interesting approach to an AI Chatbot implementation.  The business process owner creates one or more Google Forms containing questions and answers, and converts/deploys to a chatbot using fobi.io.  All the questions for [potential] customers/users are captured in a multitude of forms.  Without any code, and within minutes, an interactive chatbot can be produced and deployed for client use.

The trade-off for rapid, code-free deployment is a rigid approach to triggering the user's desired "goals/intents."  It seems a single goal/intent is mapped to a single Google Form, as opposed to a digital agent, which leverages utterances to trigger the user's intended goal/intent.  Before starting the chat, the user must select the appropriate Google Form, with the guidance of the content curator.

Another trade-off is that, it seems, there is no backend integration to execute a business process, which is essential to many chatbot workflows.  For example, given an Invoice ID, the chatbot might search a transactional database, then retrieve and display the full invoice.  Actually, I may be incorrect: on the Google Forms side there is a Script Editor, which seems powerful and scary all at the same time.
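
As a rough sketch of what that Script Editor might allow (assuming a form whose first question collects an Invoice ID, and a purely made-up internal lookup URL), a Google Apps Script trigger on form submission could hand the answer off to a backend:

```javascript
// Hypothetical sketch: a Google Apps Script handler bound to the form, run by an
// installable "On form submit" trigger. It forwards a submitted Invoice ID to a
// made-up internal endpoint; the URL and the question order are assumptions.
function onInvoiceFormSubmit(e) {
  var responses = e.response.getItemResponses();
  var invoiceId = responses[0].getResponse(); // assumes question 1 asks for the Invoice ID

  // Example-only URL; replace with a real service if you build this out.
  var url = 'https://internal.example.com/invoices/' + encodeURIComponent(invoiceId);
  var result = UrlFetchApp.fetch(url, { muteHttpExceptions: true });

  Logger.log('Invoice lookup for %s returned HTTP %s',
             invoiceId, result.getResponseCode());
}
```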

Another trade-off, more on the Google Forms side, is that you are building not just a form with a list of questions, but ideally a Consumer Process Workflow that allows the business to provide an interactive dialog based on the answers users provide.  For example, a Yes/No or multiple-choice answer may lead to alternate sets of questions [and actions].  It doesn't appear there is any workflow tool provided to structure the Google Forms / fobi.io chatbot Q&A.

However, there are still many business cases for the product, especially for small to mid size organizations.

* Business Estimates – although there is no logic workflow to guide the Q&A sessions with [prospective] customers, the business may still derive the initial information it requires to make an initial assessment.  A web form and this fobi.io / Google Forms solution seem very comparable in capability; it's just a change in the medium through which the user interacts to provide the information.

One additional note: Google Forms is not a free product.  It looks like it's part of the G Suite: a free two-week trial, then the basic plan is $5 per month, which comes with other products as well.  Click here for pricing details.

Although this "chatbot" tries to quickly provide a mechanism to turn a form into a chatbot, it seems it's still just a form at the end of the day.  I'm interested to see more products from Zoi.ai soon.

Evaluating Amazon Lex – AI Digital Agent / Assistant Implementation

Evaluating AI chatbot solutions for:

  • Simple to Configure – e.g. Wizard Walkthrough
  • Flexible and Mature Platform – e.g. executing backend processes
  • Cost Effective and Competitive Solutions
  • Rapid Deployment to XYZ platforms

The idea is that almost anyone can build and deploy a chatbot for their business, from small to midsize organizations.

Amazon Lex

Going through the Amazon Lex chatbot build process and configuring the Digital Assistant was a breeze.  AWS employs a 'wizard' style interface to help the user build the chatbot / digital agent.  The wizard guides you through defining Intents, Utterances, Slots, and Fulfillment.

  • Intents – A particular goal that the user wants to achieve (e.g. book an airline reservation)
  •  Utterances – Spoken or typed phrases that invoke your intent
  • Slots – Data the user must provide to fulfill the intent
  • Prompts – Questions that ask the user to input data
  • Fulfillment – The business logic required to fulfill the user's intent (i.e. a backend call to another system, e.g. SAP); a minimal fulfillment sketch follows below
Amazon Lex Chatbot
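
For the Fulfillment piece, the business logic typically lives in an AWS Lambda function.  The following is a minimal, hypothetical sketch of a Node.js handler in the Lex (V1-style) code-hook format; the BookTrip intent name and City slot are assumptions for illustration, not part of any particular bot.

```javascript
// Hypothetical Lex fulfillment Lambda (Node.js), sketching the V1 code-hook shape.
// The intent name "BookTrip" and slot "City" are made up for illustration.
exports.handler = async (event) => {
  const intentName = event.currentIntent.name;
  const slots = event.currentIntent.slots || {};

  let message = `Sorry, I don't know how to fulfill "${intentName}" yet.`;

  if (intentName === 'BookTrip') {
    // In a real bot, this is where the backend call (e.g. to SAP or a booking API) would go.
    const city = slots.City || 'your destination';
    message = `OK, I've started booking your trip to ${city}.`;
  }

  // Close the conversation with a fulfilled response.
  return {
    sessionAttributes: event.sessionAttributes || {},
    dialogAction: {
      type: 'Close',
      fulfillmentState: 'Fulfilled',
      message: {
        contentType: 'PlainText',
        content: message
      }
    }
  };
};
```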

The Amazon Lex chatbot editor is also extremely easy to use, and any changes can be updated and republished quickly.

Amazon Chatbot Editor

The challenge with Amazon Lex appears to be its very limited options for chatbot distribution / deployment.  Your Amazon Lex chatbot is required to use one of three deployment channels: Facebook, Slack, or Twilio SMS.  Facebook is limiting if you do not want to engage your customers on that platform.  Slack is a 'closed' framework, whereby the user of the chatbot must belong to a Slack team in order to communicate.  Finally, Twilio SMS implies use of your chatbot through mobile phone SMS.

Amazon Chatbot Channels

I’ve reached out to AWS Support regarding any other options for Amazon Lex chatbot deployment.  Just in case I missed something.

Amazon Chatbot Support

There is a "Test Bot" panel in the lower right corner of the Amazon Lex Intents menu.  The author of the business process can, in real time, make changes to the bot and test them all on the same page.

Amazon Chatbot, Test Bot

Key Followups

  • Is there a way to leverage the "Test Bot" as a "no frills" chatbot UI and embed it in an existing web page?  Question posed to AWS Support.
  • One concern is handling large volumes of utterances, intents, and slots.  An ideal enhancement would allow the user to bulk upload them, for example through an Excel spreadsheet; a hypothetical scripting sketch follows this list.
  • I've not yet been able to utilize AWS Lambda to trigger server-side processing.
  • Note: there seem to be several 'quirky' bugs in the Amazon Lex UI, so it may take one or two tries to work around a UI issue.
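
On the bulk-upload point above, there is no spreadsheet import in the Lex console, but the model-building API can be scripted.  The following is a hypothetical Node.js sketch using the AWS SDK's LexModelBuildingService; the file name, intent name, and the merge-and-re-put approach are all assumptions for illustration.

```javascript
// Hypothetical sketch: bulk-load sample utterances into an existing Lex intent
// using the AWS SDK for JavaScript (LexModelBuildingService). The file name and
// intent name are made up; error handling is minimal on purpose.
const fs = require('fs');
const AWS = require('aws-sdk');

const lexModels = new AWS.LexModelBuildingService({ region: 'us-east-1' });

async function bulkLoadUtterances(intentName, filePath) {
  // One utterance per line in the text/CSV file.
  const newUtterances = fs.readFileSync(filePath, 'utf8')
    .split('\n')
    .map((line) => line.trim())
    .filter(Boolean);

  // Fetch the current intent so we can merge utterances and reuse its checksum.
  const current = await lexModels
    .getIntent({ name: intentName, version: '$LATEST' })
    .promise();

  const merged = Array.from(
    new Set([...(current.sampleUtterances || []), ...newUtterances])
  );

  // putIntent replaces the definition, so pass the existing fields back along
  // with the merged utterance list and the checksum returned by getIntent.
  await lexModels
    .putIntent({
      name: intentName,
      checksum: current.checksum,
      sampleUtterances: merged,
      slots: current.slots,
      fulfillmentActivity: current.fulfillmentActivity
    })
    .promise();

  console.log(`Uploaded ${newUtterances.length} utterances to ${intentName}`);
}

bulkLoadUtterances('BookTrip', './utterances.csv').catch(console.error);
```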

IBM Watson Conversation also contends in this Digital Agent / Assistant space, and has a very interesting offering, including dialog / workflow definition.

Both Amazon Lex and IBM Watson Conversation are FREE to try, and in minutes, you could have your bots created and deployed. Please see sites for pricing details.

Beyond Google Search of Personal Data – Proactive, AI Digital Assistant 

As per a previous post, "Google Searches Your Personal Data (Calendar, Gmail, Photos), and Produces Consolidated Results," why can't the Google Assistant take advantage of the same data sources?

Google may attempt to leapfrog their Digital Assistant competition by taking advantage of their ability to search against all Google products.  The more personal data a Digital Assistant may access, the greater the potential for increased value per conversation.

As a first step,  Google’s “Personal”  Search tab in their Search UI has access to Google Calendar, Photos, and your Gmail data.  No doubt other Google products are coming soon.

Big benefits come not just from the consumer searching through their personal Google data, but from providing that consolidated view to the AI Assistant.  Does the Google [Digital] Assistant already have access to Google Keep data, for example?  Is providing Google's "Personal" search results a dependency for broadening the Digital Assistant's access and usage?  If so, these interactions are most likely based on a reactive model, rather than proactive dialogs, i.e. the Assistant initiating the conversation with the human.

Note: The “Google App” for mobile platforms does:

“What you need, before you ask. Stay a step ahead with Now cards about traffic for your commute, news, birthdays, scores and more.”

I'm not sure how proactive the Google AI is built to be, but most likely it's barely scratching the surface of what's possible.

Modeling Personal, AI + Human Interactions

Starting from N accessible data sources, the Assistant searches for actionable data points, correlates these data points with others, and then escalates to the human through a dynamic or predefined Assistant Consumer Workflow (ACW).  A proactive AI Digital Assistant initiates human contact to engage in commerce without otherwise being triggered by the consumer.

Actionable data point correlations can trigger multiple goals in parallel.  However, the execution of goal-based rules would need to be managed.  The consumer doesn't want to be bombarded with AI Assistant suggestions, but at the same time, "choice" opportunities may be appropriate, as the Google [mobile] App has implemented with 'Cards' of bite-size data, consumable from the UI at the user's discretion.

As an ongoing 'background' AI / ML process, the Digital Assistant's 'server side' agent may derive correlations between one or more data source records to get a deeper perspective of the person's life, and potentially be proactive about providing input to the consumer's decision-making process.

Bass Fishing Trip

For example (a minimal sketch of this kind of correlation follows the list below),

  • The proactive Google Assistant may suggest to book your annual fishing trip soon.  Elevated Interaction to Consumer / User.
  • The Assistant may search Gmail records referring to an annual fishing trip ‘last year’ in August. AI background server side parameter / profile search.   Predefined Assistant Consumer Workflow (ACW) – “Annual Events” Category.  Building workflows that are ‘predefined’ for a core set of goals/rules.
  • The AI Assistant may search the user's photo archive on the server side.  Any photo metadata could be garnered from the search, including date/time stamps, abstracted to include the 'season' of the year, and other synonym tags.
  • Photos from around ‘August’ may be earmarked for Assistant use
  • Photos may be geo tagged,  e.g. Lake Champlain, which is known for its fishing.
  • All objects in the image may be stored as image metadata.  Using image object recognition against all photos in the consumer's repository, goal / rule execution may occur against pictures from last August; the Assistant may identify the "fishing buddies" posing with a huge "Bass fish".
  • In addition to the Assistant making the suggestion re: booking the trip, Google’s Assistant may bring up ‘highlighted’ photos from last fishing trip to ‘encourage’ the person to take the trip.
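
As a minimal sketch of the kind of background correlation described above (everything here is hypothetical: the stubbed data sources, the metadata fields, and the card-style suggestion format), the server-side agent might look something like this in Node.js:

```javascript
// Hypothetical background correlation sketch for the fishing-trip example above.
// The stubbed data sources stand in for real Gmail / Photos access, which would
// require the corresponding Google APIs and explicit user consent.

// Stubbed data sources (illustrative data only).
async function fetchEmailEvents(userId) {
  return [{ subject: 'Annual fishing trip itinerary', month: 'August', year: 2016 }];
}

async function fetchPhotoMetadata(userId) {
  return [
    { takenMonth: 'August', geoTag: 'Lake Champlain', labels: ['bass fish', 'boat'] },
    { takenMonth: 'December', geoTag: 'home', labels: ['snow'] }
  ];
}

async function suggestAnnualTrip(userId) {
  const emails = await fetchEmailEvents(userId);
  const photos = await fetchPhotoMetadata(userId);

  // 1. Find an "annual event" signal in email, e.g. last year's fishing trip in August.
  const tripEmail = emails.find(
    (m) => /fishing trip/i.test(m.subject) && m.month === 'August'
  );
  if (!tripEmail) return null;

  // 2. Correlate with geo-tagged photos from the same month that look like the same event.
  const tripPhotos = photos.filter(
    (p) => p.takenMonth === 'August' &&
           p.labels.some((label) => /fish|lake/i.test(label))
  );

  // 3. Escalate to the consumer as a proactive, card-style suggestion.
  return {
    type: 'suggestion-card',
    message: 'Your annual fishing trip usually happens in August. Book it again this year?',
    highlights: tripPhotos.slice(0, 3) // a few photos to 'encourage' the trip
  };
}

suggestAnnualTrip('demo-user').then((card) => console.log(card));
```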

In this type of interaction, the Assistant has the ability to proactively 'coerce' and influence the human decision-making process.  Building these interactive models of communication, and the 'management' process to govern the AI Assistant, is within reach.

Predefined Assistant Consumer / User Workflows (ACW) may be created by third parties, such as travel agencies, or by industry groups, such as food: "low hanging fruit" that is easy to implement, e.g. the "time to get more milk" reminder.  Or food may not be the best place to start, i.e. Amazon Dash.

Kosher ‘Like’ Certifications and Oversight of Autonomous Vehicle Implementations

Do AI rules engines "deliberate" any differently between rules with moral weight and rules with none at all?  Rhetorical, perhaps.

The ethics that will explicitly and implicitly be built into implementations of autonomous vehicles involves a full stack of technology, and “business” input. In addition, implementations may vary between manufacturers and countries.

In the world of Kosher certification, there are several authorities that provide oversight into the process of food preparation and delivery, and these authorities have their own seals of approval.  In lieu of Kosher authorities, who will play the morality, seal-of-approval role for autonomous vehicles?  Vehicle insurance companies?  Car insurance will be rewritten when it comes to autonomous cars: some cars may have a higher deductible, or the cost of the policy may rise, based upon the autonomous implementation.

Conditions Under Consideration:

1. If the autonomous vehicle is in a position where it must choose between saving a single life inside the vehicle and killing one or more people outside the vehicle, what will the autonomous vehicle do?

1.1 What happens if the passenger in the autonomous vehicle is a child/minor? Does the rule execution change?

1.2 What if the outside party is a procession, a condensed population of people? Will the decision change?

The more sensors, the more input to the decision process.

Microsoft to Release AI Digital Agent SDK Integration with Visio and Deploy to Bing Search

Build and deploy a business AI Digital Assistant with the ease of building Visio diagrams, or 'Business Process Workflows.'  In addition, advanced Visio workflows offer external integration, enabling the workflow to retrieve information from external data sources, e.g. SAP CRM or Salesforce.

As a business Digital Agent subscriber, Microsoft Bing search results will contain the business' AI Digital Assistant created using Visio.  The 'Chat' link will invoke the business' custom Digital Agent.  The Agent has the ability to answer business questions, or to lead the user through "complex" workflows.  For example, the user may ask if a particular store has an item in stock, and then place the order from the search results, with a 'small' transaction fee charged to the business.  The Digital Assistant may be hosted with MSFT / Bing or on an external server.  Applying the Digital Assistant to search results pushes the transaction to the surface of the stack.

Bing Digital Chat Agent

Leveraging their existing technologies, Microsoft will leap into the custom AI digital assistant business using Visio to design business process workflows, and Bing for promotion placement, and visibility.  Microsoft can charge the business for the Digital Agent implementation and/or usage licensing.

  • The SDK for Visio that empowers the business user to build business process workflows with ease may have a low to no cost monthly licensing as a part of MSFT’s cloud pricing model.
  • Microsoft may charge the business a “per chat interaction”  fee model, either per chat, or bundles with discounts based on volume.
  • In addition, any revenue generated from the AI Digital Assistant, may be subject to transactional fees by Microsoft.

Why not use Microsoft's Cortana or Google's AI Assistant?  Using a 'white label' version of an AI Assistant enables the user to interact with an agent of the business listed in the search results, and that agent has business-specific knowledge.  The 'white label' AI digital agent is also empowered to perform any automation processes integrated into the user-defined business workflows. Examples include:

  • basic knowledge such as store hours of operation
  • more complex assistance, such as walking a [prospective] client through a process such as "How to Sweat Copper Pipes".  Many "how to" articles and videos already exist on the Internet through blogs or YouTube.  The AI digital assistant, as "curator of knowledge," may 'recommend' existing content or provide its own content.
  • Proprietary information can be disclosed in a narrative using the AI digital agent, e.g.  My order number is 123456B.  What is the status of my order?
  • Actions, such as employee referrals, e.g. I spoke with Kate Smith in the store, and she was a huge help finding what I needed.  I would like to recommend her.  E.g.2. I would like to re-order my ‘favorite’ shampoo with my details on file.  Frequent patrons may reorder a ‘named’ shopping cart.

Escalation to a human agent is also a feature.  When the business process workflow dictates, the user may escalate to a human in ‘real-time’, e.g. to a person’s smartphone.

Note: As of yet, Microsoft representatives have made no comment relating to this article.

Intent Recognition: AI Digital Agents’ Best Ways to Interpret User Goals

Goal / intent recognition may be the most difficult aspect of the AI Digital Agent's workload, more so than natural language processing (NLP) or voice recognition.

Challenges of the Digital Agent
  • Many goals with very similar human utterances / syntax exist.
  • Just like with humans trying to interpret human utterances, many possibilities exist, and misinterpretation occurs.
  • Meeting someone for the first time, without historical context places additional burden on the interpreter of the intent.
  • There are innumerable ways to ask the same question or to request information, all achieving a similar, or the same, goal.
Opportunities for Goal / Intent Accuracy
  • Business Process Workflows  may enable a very broad ‘category’ of subject matter to be disambiguated as the user traverses the workflow.  The intended goal may be derived from asking ‘narrowing’ questions, until the ‘goal’ is reached, or the user ‘falls out’ of the workflow.
  • Methodologies such as leveraging regex to interpret utterances are difficult to create and maintain (see the brittleness sketch after this list).
  • Utterances – their structure and their correlation to Business Process Workflows – are still a necessity.  However, as the knowledge base grows, so does the complexity of curating the content.  A librarian, or Content Curator, may be required to integrate new information, deprecate stale content, and update workflows.
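
To illustrate why regex-based interpretation is hard to maintain, here is a deliberately naive, hypothetical sketch: each new phrasing forces another hand-edited pattern, and near-miss utterances silently fall through.

```javascript
// Deliberately naive, hypothetical regex intent matcher, showing the maintenance
// problem: every new phrasing means editing patterns by hand, and unanticipated
// wordings simply fail to match.
const intentPatterns = [
  { intent: 'StoreHours', pattern: /\b(what time|when) (do you|are you) open\b/i },
  { intent: 'OrderStatus', pattern: /\bstatus of (my )?order\b/i }
];

function interpret(utterance) {
  const hit = intentPatterns.find(({ pattern }) => pattern.test(utterance));
  return hit ? hit.intent : 'Unknown';
}

console.log(interpret('What time do you open on Sunday?')); // StoreHours
console.log(interpret('Are you open right now?'));          // Unknown -- a near miss
console.log(interpret("What's the status of my order?"));   // OrderStatus
```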
Ongoing Partnership between Digital Agent and Human
  • Business Process Workflows may be initially designed and implemented by Subject Matter Experts (SMEs).  However, the SMEs might not have predicted all possible valid variations of the workflow, each of which may achieve a different outcome for the triggered goal.
  • As the user traverses a workflow, they may encounter a limiting boundary, such as a Boolean question that should have more than two options.  Some digital assistants may enable the user to walk an alternate path by leveraging 'human assisted' goal achievement, such as escalation of a chat.  The 'human assisted' path may now have a third option, and this new option may be added to the Business Process Workflow for future use.

AI Whispering Digital Co-Counsel for Any Litigation

Are you adequately prepared for your next litigation?  Would going into court with an army of co-counsel make you feel more confident, more prepared?  Make sure you bring along the AI Whispering Digital Co-Counsel: co-counsel that doesn't break a sweat, doesn't get nervous, and is always prepared.  It even takes the opportunity to learn while on the job, via machine learning.

The whispering digital agent advises litigators with "just-in-time" rebuttals, citing historical precedent, for example.  The Digital Co-Counsel analyzes the dialog within the courtroom to identify 'goals', the intent of the conversation(s).  The Digital Co-Counsel also identifies the current workflow, which may be Cross or Direct Examination, an Opening Statement, or a Closing Argument.

Real-time observation of a court case, with advice based on:
  • Observed dialog interactions between all parties involved in the case, such as opposing counsel, witnesses, and subject matter experts, may trigger "guidance" from the Digital Co-Counsel based on a combination of utterances and the identified workflow.
  • Court case evidence submitted may be digitized and analyzed based on a [predetermined] combination of identified attributes of the submitted evidence.  This evidence, in turn, may be rebutted by counterarguments or alternate 'perspectives', or "evidence" may be presented to rebut it.
  • The introduction of 'bias' toward the opposing counsel.

Implementation of the Digital Co-Counsel may be through a smartphone application, with a Bluetooth earpiece used throughout the case.

My opinions are my own, and do not necessarily reflect my employer’s viewpoint.

AI Email Workflows Eliminate Need for Manual Email Responses

When I read the article "How to use Gmail templates to answer emails faster," I thought: wow, what a 1990s throwback!

Microsoft Outlook has had an AI Email Rules Engine for years and years, from a simple wizard to an advanced rule-construction user interface.  Oh, the things you can do.  Based on a wide array of 'out of the box' identifiers and highly customizable conditions, MS Outlook may take action on the client side of the email transaction or on the server side.  What types of actions?  All kinds, ranging from 'out of the box' to a high degree of customization.  And yes, Outlook (in conjunction with MS Exchange) may be identified as a digital asset management (DAM) tool.

An email comes into an inbox, and based on "from", "subject", the contents of the email, and a long list of other attributes, MS Outlook [optionally with MS Exchange] may, for example, push the email and any attached content to a server folder, perhaps to Amazon AWS S3, or to something as simple as an MS Exchange folder.

Then, optionally, a 'backend' workflow may be triggered, for example with the use of Microsoft Flow.  Where you go from there has almost infinite potential.
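
As a rough sketch of the S3 branch of that idea (assuming attachments land in a bucket under an 'inbound-email/' prefix, which is an assumption for illustration, not something Outlook sets up for you), an AWS Lambda in Node.js could react to each new object and hand it off to further processing:

```javascript
// Hypothetical sketch: an AWS Lambda (Node.js) triggered by S3 "ObjectCreated"
// events for email content dropped under an assumed "inbound-email/" prefix.
// What happens next is up to the backend workflow (Microsoft Flow or otherwise);
// here we just log the object and tag it for a downstream process.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async (event) => {
  for (const record of event.Records || []) {
    const bucket = record.s3.bucket.name;
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));

    console.log(`New email artifact: s3://${bucket}/${key}`);

    // Example follow-up step: tag the object so a downstream workflow can pick it up.
    await s3.putObjectTagging({
      Bucket: bucket,
      Key: key,
      Tagging: { TagSet: [{ Key: 'workflow', Value: 'pending' }] }
    }).promise();
  }
  return { processed: (event.Records || []).length };
};
```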

Analogously, Gmail's new Inbox UI categorizes email based on 'some set' of rules.  Categorization is not something new to the industry, but now Google has the ability as well.  For example, "Group By" through Google's new Inbox could be a huge timesaver: enabling the user to perform actions across predefined email categories, such as deleting all "promotional" emails, could be extremely successful.  However, I've not yet seen the AI rules that identify particular emails as "promotional" versus "financial".  Google is implying that these 'out of the box' email categories, and the ways users interact and take action, are extremely similar per category.

Google may continue to follow in the footsteps of Microsoft, possibly adding the initiation of workflows based on predetermined criteria. Maybe Google will expose its AI (Email) Rules Engine for users to customize their workflows, just as Microsoft did so many years ago.

Although Microsoft's Outlook (and Exchange) may have been seen as a Digital Asset Management (DAM) tool in the past, the user's email inbox folder size could have been identified as one of the few inhibitors.  The workaround, of course, is using service accounts with vastly higher folder quotas / sizes.

My opinions do not reflect that of my employer.