Beyond Google Search of Personal Data – Proactive, AI Digital Assistant 

As noted in a previous post, Google searches your personal data (Calendar, Gmail, Photos) and produces consolidated results. Why can’t the Google Assistant take advantage of the same data sources?

Google may attempt to leapfrog its Digital Assistant competition by taking advantage of its ability to search across all Google products.  The more personal data a Digital Assistant may access, the greater the potential for increased value per conversation.

As a first step, Google’s “Personal” search tab in its Search UI has access to Google Calendar, Photos, and Gmail data.  No doubt other Google products are coming soon.

Big benefits come not just from letting the consumer search through their personal Google data, but from providing that consolidated view to the AI Assistant.  Does the Google [Digital] Assistant already have access to Google Keep data, for example?  Is providing Google’s “Personal” search results a dependency for broadening the Digital Assistant’s access and usage?  If so, these interactions are most likely based on a reactive model, rather than proactive dialogs, i.e. the Assistant initiating the conversation with the human.

Note: The “Google App” for mobile platforms does:

“What you need, before you ask. Stay a step ahead with Now cards about traffic for your commute, news, birthdays, scores and more.”

I’m not sure how much proactivity the Google AI is built to provide, but most likely it’s barely scratching the surface of what’s possible.

Modeling Personal, AI + Human Interactions

Start from N accessible data sources; search for actionable data points; correlate these data points with others; and then escalate to the human via a dynamic or predefined Assistant Consumer Workflow (ACW).  The proactive AI Digital Assistant initiates human contact to engage in commerce without otherwise being triggered by the consumer.

Actionable data point correlations can trigger multiple goals in parallel.  However, the execution of goal-based rules would need to be managed: the consumer doesn’t want to be bombarded with AI Assistant suggestions, but at the same time, “choice” opportunities may be appropriate, as the Google [mobile] App has implemented with ‘Cards’ of bite-size data, consumable from the UI at the user’s discretion.
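As a minimal sketch of how that rule-execution management might look, the snippet below queues rule-triggered suggestions by priority and surfaces only a capped number of ‘cards’ at a time. All class names, priorities, and thresholds are illustrative assumptions, not a known Google design.

```python
import heapq

class SuggestionManager:
    """Illustrative throttle: collect rule-triggered suggestions,
    surface only the top-N per cycle as 'cards'."""

    def __init__(self, daily_limit=3):
        self.daily_limit = daily_limit
        self._queue = []  # min-heap; priorities stored negated

    def trigger(self, priority, text):
        # Higher priority surfaces first, so negate for the min-heap.
        heapq.heappush(self._queue, (-priority, text))

    def surface_cards(self):
        # Pop at most daily_limit suggestions; the rest wait in the queue.
        cards = []
        while self._queue and len(cards) < self.daily_limit:
            _, text = heapq.heappop(self._queue)
            cards.append(text)
        return cards

mgr = SuggestionManager(daily_limit=2)
mgr.trigger(5, "Book your annual fishing trip")
mgr.trigger(1, "Reorder milk")
mgr.trigger(3, "Birthday reminder: call Kate")
print(mgr.surface_cards())  # the two highest-priority cards
```

Lower-priority suggestions stay queued for a later cycle rather than bombarding the user all at once.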

As an ongoing ‘background’ AI / ML process, the Digital Assistant’s ‘server side’ agent may derive correlations between one or more data source records to get a deeper perspective of the person’s life, and potentially be proactive about providing input to the consumer’s decision-making process.

Bass Fishing Trip

For example,

  • The proactive Google Assistant may suggest to book your annual fishing trip soon.  Elevated Interaction to Consumer / User.
  • The Assistant may search Gmail records referring to an annual fishing trip ‘last year’ in August. AI background server side parameter / profile search.   Predefined Assistant Consumer Workflow (ACW) – “Annual Events” Category.  Building workflows that are ‘predefined’ for a core set of goals/rules.
  • The AI Assistant may search the user’s photo archive on the server side.   Any photo metadata could be gleaned from search, including date/time stamps, abstracted to include ‘Season’ of year, and other synonym tags.
  • Photos from around ‘August’ may be earmarked for Assistant use.
  • Photos may be geo-tagged, e.g. Lake Champlain, which is known for its fishing.
  • All objects in the image may be stored as image metadata. Using image object recognition against all photos in the consumer’s repository, goal / rule execution may occur against pictures from last August; the Assistant may identify the “fishing buddies” posing with a huge bass.
  • In addition to the Assistant making the suggestion re: booking the trip, Google’s Assistant may bring up ‘highlighted’ photos from last fishing trip to ‘encourage’ the person to take the trip.
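The photo-related steps above can be sketched as a simple metadata filter. The record fields, season mapping, and object labels below are illustrative assumptions, not the Google Photos data model.

```python
from datetime import date

# Hypothetical photo metadata records (field names are invented for
# illustration, not Google Photos API shapes).
photos = [
    {"taken": date(2016, 8, 14), "geo": "Lake Champlain",
     "objects": ["person", "bass", "boat"]},
    {"taken": date(2016, 12, 25), "geo": "Home",
     "objects": ["tree", "person"]},
    {"taken": date(2016, 8, 15), "geo": "Lake Champlain",
     "objects": ["person", "bass"]},
]

def season(d):
    # Coarse Northern Hemisphere mapping: month -> abstracted 'Season' tag.
    return {12: "winter", 1: "winter", 2: "winter",
            3: "spring", 4: "spring", 5: "spring",
            6: "summer", 7: "summer", 8: "summer",
            9: "fall", 10: "fall", 11: "fall"}[d.month]

def earmark(photos, month, required_object):
    """Earmark photos from a given month whose recognized objects
    include a target label (e.g. the prize bass)."""
    return [p for p in photos
            if p["taken"].month == month and required_object in p["objects"]]

trip_photos = earmark(photos, month=8, required_object="bass")
print(len(trip_photos))  # -> 2, both geo-tagged at Lake Champlain
```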

In this type of interaction, the Assistant has the ability to proactively ‘coerce’ and influence the human decision-making process.  Building these interactive models of communication, and the ‘management’ process to govern the AI Assistant, is within reach.

Predefined Assistant Consumer / User Workflows (ACWs) may be created by third parties, such as travel agencies, or by industry groups, such as food: the “low hanging fruit” of easy-to-implement workflows like “time to get more milk”.  Then again, food may not be the best place to start, e.g. Amazon Dash.

 

Using Google to Search Personal Data: Calendar, Gmail, Photos, and …

On June 16th, 2017, this post was reviewed for relevant updates.

As reported by The Verge on May 26th, Google adds a new Personal tab to search results to show Gmail and Photos content.

Google seems to be rolling out a new feature in search results that adds a “Personal” tab to show content from [personal] private sources, like your Gmail account and Google Photos library. The addition of the tab was first reported by Search Engine Roundtable, which spotted the change earlier today.

I’ve been very vocal about a Google federated search, specifically across the user’s data sources, such as Gmail, Calendar, and Keep. Although it doesn’t seem that Google has implemented federated search across all of the user’s Google data sources yet, they’ve picked a few data sources and started up the mountain.

It seems Google is rolling out this capability iteratively, and, as with Agile/Scrum, the aim is to get user feedback and deliver in slices.

Search Engine Roundtable’s coverage didn’t seem to indicate Google has publicly announced this effort; perhaps Google is waiting for more substance, and more stick time.

As initially reported by Search Engine Roundtable, the output of Gmail results appears as single-column text with links to the content, in this case email.

Google Personal Search Results –  Gmail

It appears the sequence of the “Personal Search” output is:

  • Agenda (Calendar)
  • Photos
  • Gmail

Each of the three app data sources displayed on the “Personal” search tab enables the user to drill down into the records displayed, e.g. a specific email.

Google Personal Search Results –  Calendar

Group Permissions – Searching

Providing users the ability to search across varied Google repositories (shared calendars, photos, etc.) will enable both business teams and families (e.g. Apple’s family iCloud share) to collaborate and share more seamlessly.  At present, Cloud Search, part of G Suite by Google Cloud, offers search across team/org digital assets:

Use the power of Google to search across your company’s content in G Suite. From Gmail and Drive to Docs, Sheets, Slides, Calendar, and more, Google Cloud Search answers your questions and delivers relevant suggestions to help you throughout the day.

 

Learn More? Google Help

Click here to learn more on “Search results from your Google products”.  At this time, according to this Google post:

You can search for information from other Google products like Gmail, Google Calendar, and Google+.


Dear Google [Search]  Product Owner,

I request that Google Docs and Google Keep be among the next data sources enabled for the Personal search tab.

Best Regards,

Ian

 

Kosher ‘Like’ Certifications and Oversight of Autonomous Vehicle Implementations

Do AI rules engines “deliberate” any differently between rules with moral weight and rules with none at all? Rhetorical…?

The ethics that will explicitly and implicitly be built into implementations of autonomous vehicles involve a full stack of technology, and “business” input. In addition, implementations may vary between manufacturers and countries.

In the world of Kosher certification, there are several authorities that provide oversight into the process of food preparation and delivery. These authorities have their own seal of approval. In lieu of Kosher authorities, who will play the morality, seal-of-approval role?  Vehicle insurance companies?  Car insurance will be rewritten when it comes to autonomous cars.  Some cars may have a higher deductible, or the cost of the policy may rise, based upon the autonomous implementation.

Conditions Under Consideration:

1. If the autonomous vehicle is in a position of saving a single life in the vehicle, and killing one or more people outside the vehicle, what will the autonomous vehicle do?

1.1 What happens if the passenger in the autonomous vehicle is a child/minor. Does the rule execution change?

1.2 What if the outside party is a procession, a condensed population of people? Will the decision change?

The more sensors, the more input to the decision process.
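To make the “more sensors, more input” point concrete, here is a deliberately toy sketch of how extra scenario inputs could change which rule branch fires. It illustrates rule dispatch only; it does not propose how any vendor should resolve these ethical dilemmas, and every field and branch name is invented.

```python
# Toy rules sketch only: shows how added sensor inputs feed a decision
# process, not how any manufacturer resolves (or should resolve) these
# ethically fraught scenarios.
def evaluate(scenario):
    """Return which condition branch fires for a dilemma scenario.
    Keys and branch names are invented for illustration."""
    if scenario.get("passenger_is_minor"):
        return "condition_1_1"   # rule execution may change for a child passenger
    if scenario.get("outside_group_size", 0) > 1:
        return "condition_1_2"   # condensed population outside the vehicle
    return "condition_1"         # base single-life-vs-others condition

print(evaluate({"passenger_is_minor": True}))   # -> condition_1_1
print(evaluate({"outside_group_size": 12}))     # -> condition_1_2
print(evaluate({}))                             # -> condition_1
```

Each new sensor adds another key to the scenario, and with it another branch the rules engine must deliberate over.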

Seven Interview Screening Questions for an Agile Project Manager

It seems like only yesterday I was on the other side of the table, asking interview screening questions of prospective project manager candidates.  Here are seven interview screening questions I was asked earlier this week for an Agile PM role, and my answers.

Background:

I’d consider myself an Agile Project Manager rather than a Scrum Master.  The differentiation?  I see the Scrum Master role as a coach / facilitator who helps the team function using the Agile / Scrum methodologies.  The Agile PM role, in my mind, does the coaching/facilitation as well as filling the traditional role of the PM.

Questions:

1.  What is the duration of the Sprint Cycle?

On scrum teams I’ve led and been a part of in other capacities, it’s ranged from 1 to 2 weeks, but mostly two-week sprints. In one instance, we had two-week sprints, and then just after our major release to our client, we set the sprint to one-week duration so we could incorporate client feedback ASAP.

2.  What are the various Agile ceremonies you conduct from day one to the last day of the sprint?

Project Kickoff – not necessarily limited to Agile, but a project ceremony to get the team acquainted with roles and responsibilities, understanding scope at a high level, and the overall project duration expectations.

Initial combing of the Backlog with the Product Owner and Tech Lead(s) to identify priority backlog stories and technical dependencies for the initial sprint(s), potentially looking ahead 1+ sprints.

Sprint Open #1 (all matrixed team members partake) In this meeting there are a number of activities that may occur:

  • Reviewing the Backlog with the team in business priority sequence.  Fleshing out the user stories’ definitions, where required, enough to score each story
  • For each User Story in the Backlog prioritized for the current sprint, the team may perform an efforting exercise to derive the ‘story points’. Playing Planning Poker is one way to derive story point estimates
  • Each of the story point estimates adds up to determine the potential velocity for the sprint, or team output potential
  • User stories assigned to the current Sprint are ‘Accepted’ by the team for implementation in the first sprint, and are assigned to team members, e.g. for coding, doc, infra, or additional vetting, such as Architectural Spike stories.
  • Product Owner, Project + Technical Lead(s) decide beforehand how long sprints will take, and roughly the potential velocity of the team based on all story points in the Sprint.
  • Sprint Open will commence, and any tool used, e.g. JIRA Agile, will enable the Agile PM / Scrum Master to initiate the Sprint in the SCRUM / Kanban board.  All user stories are set to an initial state, e.g. “To Do”.
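The story-point arithmetic above can be sketched in a few lines; the stories and point values are invented for illustration.

```python
# Minimal sketch: a sprint's potential velocity is the sum of the story
# points accepted into the sprint (stories and points invented).
sprint_backlog = {
    "Login page": 5,
    "Password reset": 3,
    "Audit logging spike": 8,
    "Docs update": 2,
}

potential_velocity = sum(sprint_backlog.values())
print(potential_velocity)  # -> 18
```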

Agile Ceremonies Continued…

DSUs, Daily Standups, or Scrum sessions.  Traditionally, 15-minute sessions primarily to uncover BLOCKERS, and to help each of the team members remove their blockers.  Also discussed: work since the prior DSU, and planned work until the next DSU.

(Optional) At the end of each sprint, a day before Sprint Close, a Retrospective meeting is held, i.e. what did the team do well, and what can they do better?

Combing of the backlog for the next Sprint with the Product Owner and Team Lead(s), e.g. re-evaluating priorities, or capturing newly uncovered Stories / Tasks required for Sprint #2.

Sprint Close #1 / Sprint Open #2 – Many times Sprint Close and Sprint Open are combined, or they may be separated depending upon the scope of the sprints.  I’ve sat through 4-5 hour Sprint Close sessions.  The Sprint Close may have each of the stories marked as status ‘Done’ reviewed by the team, including the Business Product Owner.  A demonstration of the User Story, if applicable, may be performed, e.g. a new button function.  The demo may be given by anyone on the project team.  The product owner may be required to move the status of the user story to ‘Accepted’ as a final status.  Additionally, burn down charts and other visual aids may be provided to the team to compare the team’s projected velocity with actual results, and lead to projected effort adjustments.

Sprint Open #2 has similar activities to Sprint Open #1.  The team will see which stories they planned to complete, but did not.  Should the team push these stories to the next sprint, or to the backlog for future implementation?

In the strictest sense, the content delivered in each sprint should be ‘deployable’: a commitment to release work into target environments (e.g. Staging, Prod).

3.  When a project starts, how do you figure out the project scope?

Some projects with ‘external’ clients have a clear definition of project scope in the statement of work (SOW).  Other times a Product Owner may have a list of items solicited from product stakeholders.   These are two possible inputs to the ‘Product Backlog’ maintained in any Agile/Scrum facilitation tool, such as JIRA Agile, or Microsoft’s Team Foundation Server (TFS).

Combing the Backlog with the product owner, and tech leads may enable the team to add more details / definition to each of the User Stories in the Backlog.  In some cases, team leads may assign user stories to an Architect or Developer for the purpose of refining scope, and adding ‘sub-tasks’ to the user story.    In addition, some project scope needs to be defined and refined through ‘Architectural Spike Sessions’.

4.  If a Scrum Master is [managing] multiple projects, do they follow the same process for each project?

It helps if a consistent process is followed across scrum projects to eliminate confusion and duplicated effort across projects.  However, following a consistent process is not required, and there may be business or technical reasons to alter the process.

5.  What kind of reports do you create in your Agile projects? Explain the reports.

Burn down chart – a line chart representing work left to do vs. time.  It helps to show whether the team will achieve its projected work goals by plotting the actual and estimated amount of work remaining.

Velocity chart – a bar chart (per sprint) showing two grouped bars: one for commitment, and the second for completed work.
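As a small illustration of the data behind a burn down chart, the sketch below computes the ideal and actual points-remaining lines for a hypothetical 10-day, 20-point sprint (all figures invented).

```python
# Sketch of burn down chart data: ideal vs. actual points remaining per day
# (all numbers invented for illustration).
total_points = 20
sprint_days = 10

# Ideal line: linear burn from total_points down to 0.
ideal = [total_points - total_points * d / sprint_days
         for d in range(sprint_days + 1)]

# Actual line: points completed each day, cumulatively subtracted.
completed_per_day = [0, 2, 2, 3, 1, 4, 0, 3, 2, 3]
actual, remaining = [total_points], total_points
for done in completed_per_day:
    remaining -= done
    actual.append(remaining)

print(actual[-1])  # -> 0: the team hit its projected goal this sprint
```

Plotting `ideal` against `actual` is exactly the comparison a Sprint Close review uses to adjust future effort projections.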

6. If you have a team resistant to Agile, saying there are too many meetings and that the process is micromanaging the effort, how will you resolve this and convince them to use Agile?

Be on “their” side: “I agree, our daily standups should be all about blockers; how can we remove the blockers inhibiting your work?”  “Sprint Open” is a vehicle for clarity on the work to be done, and “Sprint Close” offers a quick turnaround on the question: are we delivering what the product owner is looking to achieve?  It keeps us focused on what the team has committed to.

7.  How do you figure out the capacity of a project?

“Capacity of a project” is an ambiguous statement.  If you want to understand what the team can achieve within a given period of time, you establish (sometimes through trial and error) and verify the velocity of the team: how many points they can roughly achieve per sprint.  Create buckets, or sprints, from the backlog work, effort the user stories into sprints, and an estimate is derived.  With each sprint, those estimates will be refined with a better understanding of scope and velocity.
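A rough sketch of that capacity arithmetic, with invented figures: divide the backlog’s total story points by the team’s observed velocity to estimate the number of sprints required.

```python
import math

# Sketch: estimate sprints needed from backlog size and observed velocity
# (both figures invented for illustration).
backlog_points = 120
observed_velocity = 18   # points per sprint, verified over past sprints

sprints_needed = math.ceil(backlog_points / observed_velocity)
print(sprints_needed)  # -> 7
```

As the answer notes, both numbers get refined each sprint, so the estimate is a moving target rather than a commitment.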

Content from this post provided by Ian Roseman, PMP, CSM

Microsoft to Release AI Digital Agent SDK Integration with Visio and Deploy to Bing Search

Build and deploy a business AI Digital Assistant with the ease of building Visio diagrams, or ‘Business Process Workflows’.  In addition, advanced Visio workflows offer external integration, enabling the workflow to retrieve information from external data sources, e.g. SAP CRM or Salesforce.

As a business, Digital Agent subscriber,  Microsoft Bing  search results will contain the business’ AI Digital Assistant created using Visio.  The ‘Chat’ link will invoke the business’ custom Digital Agent.  The Agent has the ability to answer business questions, or lead the user through “complex”, workflows.  For example, the user may ask if a particular store has an item in stock, and then place the order from the search results, with a ‘small’ transaction fee to the business. The Digital Assistant may be hosted with MSFT / Bing or an external server.  Applying the Digital Assistant to search results pushes the transaction to the surface of the stack.

Bing Digital Chat Agent

Leveraging their existing technologies, Microsoft will leap into the custom AI digital assistant business using Visio to design business process workflows, and Bing for promotion placement, and visibility.  Microsoft can charge the business for the Digital Agent implementation and/or usage licensing.

  • The SDK for Visio that empowers the business user to build business process workflows with ease may have a low to no cost monthly licensing as a part of MSFT’s cloud pricing model.
  • Microsoft may charge the business a “per chat interaction”  fee model, either per chat, or bundles with discounts based on volume.
  • In addition, any revenue generated from the AI Digital Assistant, may be subject to transactional fees by Microsoft.

Why not use Microsoft’s Cortana, or Google’s AI Assistant?  Using a ‘white label’ version of an AI Assistant enables the user to interact with an agent of the search listed business, and that agent has business specific knowledge.  The ‘white label’ AI digital agent is also empowered to perform any automation processes integrated into the user defined, business workflows. Examples include:

  • basic knowledge such as store hours of operation
  • more complex assistance, such as walking a [prospective] client through a process such as “How to Sweat Copper Pipes”.  Many “how to” articles and videos already exist on the Internet through blogs or YouTube.  The AI digital assistant, as a “curator of knowledge,” may ‘recommend’ existing content, or provide its own content.
  • Proprietary information can be disclosed in a narrative using the AI digital agent, e.g.  My order number is 123456B.  What is the status of my order?
  • Actions, such as employee referrals, e.g. I spoke with Kate Smith in the store, and she was a huge help finding what I needed.  I would like to recommend her.  E.g.2. I would like to re-order my ‘favorite’ shampoo with my details on file.  Frequent patrons may reorder a ‘named’ shopping cart.

Escalation to a human agent is also a feature.  When the business process workflow dictates, the user may escalate to a human in ‘real-time’, e.g. to a person’s smartphone.
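Since the product described here is speculative, the following is a purely hypothetical sketch of such a business process workflow: nodes prompt the user, answers route to the next node, and unmatched answers escalate to a human. No real Microsoft SDK or API is implied, and all node names are invented.

```python
# Purely hypothetical sketch of a 'business process workflow' like the one
# this article imagines being drawn in Visio: each node prompts the user,
# answers route to the next node, and unmatched answers escalate to a human.
workflow = {
    "start":  {"prompt": "In-stock check or order status?",
               "routes": {"stock": "stock", "status": "status"}},
    "stock":  {"prompt": "Which item?", "routes": {}},    # leaf: inventory lookup
    "status": {"prompt": "Order number?", "routes": {}},  # leaf: order lookup
}

def step(node, answer):
    """Route to the next node; unmatched answers escalate to a human agent."""
    routes = workflow[node]["routes"]
    return routes.get(answer, "escalate_to_human")

print(step("start", "stock"))      # -> stock
print(step("start", "complaint"))  # -> escalate_to_human
```

The `escalate_to_human` fallback is the ‘real-time’ escalation path the paragraph above describes, e.g. routing the chat to a person’s smartphone.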

Note: As of yet, Microsoft representatives have made no comment relating to this article.

Intent Recognition: AI Digital Agents’ Best Ways to Interpret User Goals

Goal / intent recognition may be the most difficult aspect of the AI Digital Agent’s workload, more so than natural language processing (NLP) or voice recognition.

Challenges of the Digital Agent
  • Many goals with very similar human utterance / syntax exist.
  • Just like with humans trying to interpret human utterances, many possibilities exist, and misinterpretation occurs.
  • Meeting someone for the first time, without historical context places additional burden on the interpreter of the intent.
  • There are innumerable ways to ask the same question or request the same information, all achieving a similar, or the same, goal.
Opportunities for Goal / Intent Accuracy
  • Business Process Workflows  may enable a very broad ‘category’ of subject matter to be disambiguated as the user traverses the workflow.  The intended goal may be derived from asking ‘narrowing’ questions, until the ‘goal’ is reached, or the user ‘falls out’ of the workflow.
  • Methodologies such as leveraging Regex to interpret utterances are difficult to create and maintain.
  • Utterances, their structure, and their correlation to Business Process Workflows are still a necessity.  However, as the knowledge base grows, so does the complexity of curating the content.  A librarian, or Content Curator, may be required to integrate new information, deprecate stale content, and update workflows.
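A tiny sketch of the regex approach, and why it is brittle: every new phrasing variant needs another pattern, and unmatched utterances must fall out to a narrowing question or a human. All patterns and intent names are invented.

```python
import re

# Illustrative regex-based intent matching. Maintaining this gets harder
# with every phrasing variant, which is the maintenance burden noted above.
INTENTS = [
    ("store_hours", re.compile(r"\b(hours?|open|close)\b", re.I)),
    ("order_status", re.compile(r"\b(order|status|track)\b", re.I)),
]

def classify(utterance):
    for intent, pattern in INTENTS:
        if pattern.search(utterance):
            return intent
    return "unknown"  # fall out: ask a narrowing question or escalate

print(classify("What time do you open on Sunday?"))  # -> store_hours
print(classify("Where is my order 123456B?"))        # -> order_status
print(classify("Can you sweat copper pipes?"))       # -> unknown
```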
Ongoing, Partnership between Digital Agent and Human
  • Business Process Workflows may be initially designed and implemented by Subject Matter Experts (SMEs).  However, the SMEs might not have predicted all possible valid variations of the workflow that achieve a different outcome for the triggered goal.
  • As the user traverses a workflow, they may encounter a limiting boundary, such as a Boolean question which should have more than two options.  Some digital assistants may enable a user to take an alternate path by leveraging ‘human assisted’ goal achievement, such as escalation of a chat.  The ‘human assisted’ path may now have a third option, and this new option may be added to the Business Process Workflow for future use.