Google seems to be rolling out a new feature in search results that adds a "Personal" tab, showing content from private, personal sources such as your Gmail account and Google Photos library. The addition of the tab was first reported by Search Engine Roundtable, which spotted the change earlier today.
I've been very vocal about the need for a Google federated search across the user's own data sources, such as Gmail, Calendar, and Keep. Google doesn't seem to have implemented federated search across all user data sources yet; it has picked a few sources and started up the mountain.
It seems Google is rolling out this capability iteratively and, as with Agile/Scrum, delivering in slices to gather user feedback along the way.
Search Engine Roundtable didn't indicate that Google has publicly announced this effort; perhaps Google is waiting for more substance, and more stick time.
As initially reported by Search Engine Roundtable, Gmail results appear in a single-column text output with links to the content, in this case email.
The "Personal" search output appears in the following sequence:
Each of the three app data sources displayed on the "Personal" tab lets the user drill down into the records displayed, e.g. a specific email.
Group Permissions – Searching
Providing users the ability to search across varied Google repositories (shared calendars, photos, etc.) will enable both business teams and families (e.g. Apple's iCloud Family Sharing) to collaborate and share more seamlessly. At present, Cloud Search, part of G Suite by Google Cloud, offers search across a team's or organization's digital assets:
Use the power of Google to search across your company’s content in G Suite. From Gmail and Drive to Docs, Sheets, Slides, Calendar, and more, Google Cloud Search answers your questions and delivers relevant suggestions to help you throughout the day.
Build and deploy a business AI digital assistant with the ease of building Visio diagrams, or "business process workflows". In addition, advanced Visio workflows offer external integration, enabling the workflow to retrieve information from external data sources, e.g. SAP CRM or Salesforce.
For a business subscribed to the Digital Agent, Microsoft Bing search results would contain the business's AI digital assistant created using Visio. The "Chat" link would invoke the business's custom digital agent, which can answer business questions or lead the user through "complex" workflows. For example, the user may ask whether a particular store has an item in stock, then place the order directly from the search results, with a "small" transaction fee to the business. The digital assistant may be hosted with MSFT/Bing or on an external server. Surfacing the digital assistant in search results pushes the transaction to the top of the stack.
Leveraging its existing technologies, Microsoft could leap into the custom AI digital assistant business, using Visio to design business process workflows and Bing for promotional placement and visibility. Microsoft can charge the business for the digital agent implementation and/or usage licensing.
The Visio SDK that empowers the business user to build business process workflows with ease may carry low- to no-cost monthly licensing as part of MSFT's cloud pricing model.
Microsoft may charge the business a "per chat interaction" fee, either per chat or in bundles with volume discounts.
In addition, any revenue generated from the AI digital assistant may be subject to transaction fees by Microsoft.
Why not use Microsoft's Cortana or Google's Assistant? A "white label" version of an AI assistant lets the user interact with an agent of the listed business, and that agent has business-specific knowledge. The "white label" AI digital agent is also empowered to perform any automation processes integrated into the user-defined business workflows. Examples include:
basic knowledge such as store hours of operation
more complex assistance, such as walking a [prospective] client through a process like "How to Sweat Copper Pipes". Many "how to" articles and videos already exist on the Internet through blogs or YouTube; the AI digital assistant, as "curator of knowledge", may recommend existing content or provide the business's own.
disclosure of proprietary information in a narrative with the AI digital agent, e.g. "My order number is 123456B. What is the status of my order?"
actions, such as employee referrals, e.g. "I spoke with Kate Smith in the store, and she was a huge help finding what I needed. I would like to recommend her." Or reordering, e.g. "I would like to re-order my 'favorite' shampoo with my details on file." Frequent patrons may reorder a "named" shopping cart.
Escalation to a human agent is also a feature. When the business process workflow dictates, the user may escalate to a human in ‘real-time’, e.g. to a person’s smartphone.
Note: As of yet, Microsoft representatives have made no comment relating to this article.
Microsoft Outlook has had an email rules engine for years, from a simple wizard to an advanced rule-construction user interface. Oh, the things you can do. Based on a wide array of "out of the box" identifiers and highly customizable conditions, MS Outlook may take action on the client side of the email transaction or on the server side. What types of actions? Everything from "out of the box" transactions to highly customized ones. And yes, Outlook (in conjunction with MS Exchange) may be regarded as a digital asset management (DAM) tool.
When email arrives in an inbox, then based on "from", "subject", message contents, and a long list of other attributes, MS Outlook [optionally with MS Exchange] may, for example, push the email and any attached content to a server folder, perhaps to Amazon AWS S3, or simply to an MS Exchange folder.
Then, optionally, a 'backend' workflow may be triggered, for example with Microsoft Flow. Where you go from there has almost infinite potential.
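To make the pattern concrete, here is a minimal, hedged Python sketch of a client-side version of such a rule: poll an inbox over IMAP, match on subject, and push attachments to an S3 bucket. The mail host, account, subject filter, and bucket name are hypothetical placeholders, not any product's actual configuration.

```python
import email
import imaplib

import boto3  # AWS SDK for Python

# Hypothetical settings -- substitute your own mail host, account, and bucket.
IMAP_HOST = "imap.example.com"
USER, PASSWORD = "user@example.com", "app-password"
BUCKET = "incoming-email-attachments"

s3 = boto3.client("s3")

# Connect and look for unread messages matching a simple "rule".
mail = imaplib.IMAP4_SSL(IMAP_HOST)
mail.login(USER, PASSWORD)
mail.select("INBOX")
_, msg_ids = mail.search(None, '(UNSEEN SUBJECT "invoice")')

for msg_id in msg_ids[0].split():
    _, data = mail.fetch(msg_id, "(RFC822)")
    msg = email.message_from_bytes(data[0][1])
    # Push every attachment to the S3 "server folder".
    for part in msg.walk():
        filename = part.get_filename()
        if filename:
            s3.put_object(
                Bucket=BUCKET,
                Key=f"email-attachments/{filename}",
                Body=part.get_payload(decode=True),
            )

mail.logout()
```

From here, an object landing in the bucket could trigger the 'backend' workflow, which is exactly the hand-off Flow automates.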
Analogously, Gmail's new Inbox UI categorizes email based on 'some set' of rules. That is not new to the industry, but now Google has the capability. For example, "group by" through Google's new Inbox could be a huge timesaver: enabling the user to act across predefined email categories, such as deleting all "promotional" emails, could be extremely successful. However, I've not yet seen the AI rules that identify particular emails as "promotional" versus "financial". Google is implying these 'out of the box' categories, and the way users interact with and act on each category, are extremely similar.
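As a hedged illustration of acting across a whole category, here is a minimal Python sketch against the Gmail API; the token file name and the 30-day window are assumptions for the example. It trashes (reversibly) everything Gmail has already labeled promotional.

```python
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

# Assumes an OAuth token with the gmail.modify scope was previously saved
# to token.json (e.g. via google-auth-oauthlib's flow helpers).
creds = Credentials.from_authorized_user_file(
    "token.json", ["https://www.googleapis.com/auth/gmail.modify"]
)
service = build("gmail", "v1", credentials=creds)

# Gmail already categorizes mail; act on the whole "promotions" group.
resp = (
    service.users()
    .messages()
    .list(userId="me", q="category:promotions older_than:30d")
    .execute()
)

for msg in resp.get("messages", []):
    # Trash rather than hard-delete, so the bulk action is reversible.
    service.users().messages().trash(userId="me", id=msg["id"]).execute()
```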
Google may continue to follow in the footsteps of Microsoft, possibly adding the initiation of workflows based on predetermined criteria. Maybe Google will expose its AI (Email) Rules Engine for users to customize their workflows, just as Microsoft did so many years ago.
Although Microsoft's Outlook (and Exchange) may have served as a digital asset management (DAM) tool in the past, the user's inbox folder size quota could be identified as one of the few inhibitors. The workaround, of course: service accounts with vastly higher folder quotas.
The AI personal assistant with the most usage, spanning connectivity across all smart devices, will be the anchor users gravitate to in order to control their 'automated' lives. An Amazon commercial just aired depicting a dad with his daughter; the daughter was crying about her boyfriend, who happened to be in the front yard yelling for her. The dad says to Amazon's Alexa, "Sprinklers on," and yes, the boyfriend got soaked.
What is so special about the top spot for the AI personal assistant? Controlling the 'funnel' through which all information is accessed and actions are taken means the intelligent ability to:
Serve up content / information, which could then be mixed in with advertisements, or ‘intelligent suggestions’ based on historical data, i.e. machine learning.
Proactive, suggestive actions may lead to sales of goods and services. e.g. AI Personal Assistant flags potential ‘buys’ from eBay based on user profiles.
Three main sources of AI Personal Assistant value add:
A portal to the "outside" world. If I need information, I wouldn't "surf the web"; I would ask Cortana to go "research" XYZ. In the business intelligence / data warehousing space, a business analyst may need to run a few queries to get the information they want; in the same vein, Microsoft Cortana may come back to you several times to ask "for your guidance".
An abstraction layer between the user and their apps. The user need not 'lift a finger' in any app outside the personal assistant, with noted exceptions like playing a game for you.
User profiles derived from the first two points, i.e. data collection on everything from spending habits to other day-to-day rituals.
Proactive and chatty assistants may win "Assistant of Choice" on all platforms. Being proactive means collecting data more often than when it's just you asking questions ad hoc. Proactive AI personal assistants that are geo-aware may make "timely, appropriate interruptions" (notifications) based on time and location. E.g. "Don't forget milk," says Siri as you're passing the grocery store. Around the time I leave work, Google Maps tells me if I have traffic and my ETA.
It’s possible for the [non-native] AI Personal Assistant to become the ‘abstract’ layer on top of ANY mobile OS (iOS, Android), and is the funnel by which all actions / requests are triggered.
Microsoft Cortana has an iOS app and widget, which wrap around the OS. Tighter integration may be possible but is not allowed by iOS, the iPhone, and Apple. Note: Google's Allo does not provide an iOS widget at the time of this writing.
Antitrust violation by smartphone maker Apple: iOS must allow for the 'substitution' of a competitive AI personal assistant, triggered in the same manner as the "press and hold Home button" capability that launches Siri, the default packaged iOS assistant.
Reminiscent of the Microsoft IE Browser / OS antitrust violations in the past.
Holding the iPhone Home button brings up Siri. There should be an OS setting to swap out which assistant is used as the mobile OS default. Today, iPhone/iPad iOS only supports Siri under the Settings menu.
ANY AI personal assistant should be allowed to replace the default OS personal assistant, from Amazon's Alexa and Microsoft's Cortana to any startup with the expertise and resources needed to build and deploy a personal assistant solution. Has Apple taken steps to tightly couple Siri with its iOS?
AI Personal Assistant "Wish" List:
Interactive, voice-menu-driven dialog. The AI personal assistant should know which [mobile] apps are installed, as well as their actionable, hierarchical taxonomy of features/functions. The assistant should, for example, ask which application the user wants to use, and if the user doesn't know, list the apps verbally/visually. After the user selects the app, the assistant should provide a list of function choices for that application, e.g. "Press 1 for Play Song."
The interactive voice menu should also provide a level of abstraction when available; e.g. the user need not select the app, and may simply say "Create Reminder." There may be several applications on the smartphone that do the same thing, such as note taking and reminders. In the OS Settings, under a new "AI Personal Assistant" menu, installed applications compatible with this assistant service layer should be listed, grouped into categories defined by the mobile OS (see the sketch after this list).
Capability to interact with IoT using user defined workflows. Hardware and software may exist in the Cloud.
Ever tighter integration with native as well as 3rd party apps, e.g. Google Allo and Google Keep.
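As promised above, a toy Python sketch of the app/function taxonomy and the two interaction modes, menu-driven and abstracted. The registry contents and intent identifiers are invented for the example; a real OS would populate this from app manifests.

```python
# Hypothetical registry: each installed app advertises a hierarchical
# taxonomy of the functions the assistant may invoke on the user's behalf.
APP_TAXONOMY = {
    "Music": {"Play Song": "music.play", "Create Playlist": "music.playlist.create"},
    "Notes": {"Create Note": "notes.create", "Create Reminder": "notes.reminder.create"},
    "Calendar": {"Create Event": "calendar.event.create", "Create Reminder": "calendar.reminder.create"},
}

def menu_for(app: str) -> str:
    """Render the numbered 'Press 1 for ...' prompt for one app."""
    options = list(APP_TAXONOMY[app])
    return " ".join(f'Press {i + 1} for "{name}".' for i, name in enumerate(options))

def apps_offering(function: str) -> list[str]:
    """Abstraction layer: find every app that can handle e.g. 'Create Reminder'."""
    return [app for app, funcs in APP_TAXONOMY.items() if function in funcs]

print(menu_for("Music"))                 # Press 1 for "Play Song". ...
print(apps_offering("Create Reminder"))  # ['Notes', 'Calendar'] -> disambiguate or use a default
```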
Apple could already be making the changes as a natural course of their product evolution. Even if the ‘big boys’ don’t want to stir up a hornet’s nest, all you need is VC and a few good programmers to pick a fight with Apple.
It looks like Microsoft has created a generic, product-independent workflow platform.
Microsoft already has workflow-like solutions: MS Outlook has an [email] rules engine built in, and SharePoint has a workflow engine within its platform, typically governing the content flowing through its system.
Microsoft Flow is a different animal. It seems like Microsoft has built a ‘generic’ rules engine for processing almost any event. The Flow product:
You start using the product from one of two areas: a) "My Flows", where you may view existing and create new [work]flows; b) "Activity", which shows "Notifications" and "Failures".
Select "My Flows" and the user may "Create [a workflow] from Blank" or "Browse Templates". The existing set of templates was created by Microsoft and by third parties, implying a marketplace.
Select "Create from Blank" and the user gets a single drop-down list of events, a culmination of events across Internet products. The implication is that any product and event could be 'made compatible' with MSFT Flow.
The drop-down list of events has the format "Product – Event". As the lists of products and events grow, we should see at least two separate drop-down lists: one for products, and a sub-list of that product's events.
Several Example Events Include:
“Dropbox – When a file is created”
“Facebook – When there is a new post to my timeline”
“Project Online – When a new task is created”
“RSS – When a feed item is published”
“Salesforce – When an object is created”
The list of products, as well as their events, may need a business analyst to rationalize the use cases.
Once an event is selected, event-specific details may be required, e.g. Twitter account details, or the OneDrive "watch" folder.
Next, a condition may be added to the [work]flow, which may be specific to the event type, e.g. OneDrive file-type properties [contain] the value XYZ. There is also an "advanced mode" using a conditional scripting language.
There is "IF YES" and "IF NO" logic, which then allows the user to select one [or more] actions to perform (a minimal sketch of this shape follows the action examples below).
Several Action Examples Include:
“Excel – Insert Rows”
“FTP – Create File”
“Google Drive – List files in folder”
“Mail – Send email”
“Push Notification – Send a push notification”
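Here is the promised sketch, in Python, of the trigger → condition → action shape described above. The connector names and payload fields are illustrative stand-ins; real Flow connectors are hosted services, not local code.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Flow:
    """Minimal model of one [work]flow: trigger -> condition -> branch of actions."""
    trigger: str                       # e.g. "OneDrive - When a file is created"
    condition: Callable[[dict], bool]  # the "IF" test over the event payload
    if_yes: list = field(default_factory=list)
    if_no: list = field(default_factory=list)

    def handle(self, event: dict) -> None:
        branch = self.if_yes if self.condition(event) else self.if_no
        for action in branch:
            action(event)

# "OneDrive file-type property contains XYZ" as the condition:
flow = Flow(
    trigger="OneDrive - When a file is created",
    condition=lambda e: "XYZ" in e.get("file_type", ""),
    if_yes=[lambda e: print(f"Mail - Send email about {e['name']}")],
    if_no=[lambda e: print(f"Ignoring {e['name']}")],
)

flow.handle({"name": "report.xlsx", "file_type": "XYZ-spreadsheet"})
```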
Again, it seems like an eclectic bunch of products, events, and actions strung together to stand up a proof of concept.
The Templates list is a predefined set of workflows for anyone who does not want to start from scratch. The UI provides several ways to filter, list, and search through templates.
It's applicable to everyday life, from the individual home user to the small business to the enterprise. At this stage the product seems Beta at best, or more accurately, just past clickable prototype; I ran into several errors trying to go through basic use cases, i.e. adding rules.
Despite the "Preview" label, Microsoft has shown us the power of [work]flow processing regardless of the service platform provider, e.g. Box, Dropbox, Facebook, GitHub, Instagram, Salesforce, Twitter, Google, MailChimp, …
Microsoft may be the glue combining service providers who expose their services to MSFT Flow functionality, e.g. language translation, or visual recognition.
Example workflow: "WordPress – Create a Post"
Event: a new text file is dropped into a specific folder on Box, Dropbox, etc. that is 'monitored' by MSFT Flow [? additional code may be required by the user for 'polling' capabilities], OR a new text file is attached and emailed to a specific email account folder 'watched' by MSFT Flow.
The event triggers an automatic read of the new text file; styling may carry through if HTML coding is used.
Action: post to a blog.
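As a hedged sketch of that workflow, with a local polling loop standing in for Flow's hosted connectors: watch a synced folder for new text files and post each to a WordPress blog through its public REST API. The folder path, blog URL, and credentials are placeholders.

```python
import time
from pathlib import Path

import requests

WATCH_DIR = Path("~/Dropbox/blog-inbox").expanduser()  # the 'monitored' folder
WP_API = "https://example.com/wp-json/wp/v2/posts"     # hypothetical blog URL
AUTH = ("author", "application-password")              # WordPress app password

seen = set(WATCH_DIR.glob("*.txt"))
while True:  # runs as a small daemon; Flow would do this server-side
    for path in WATCH_DIR.glob("*.txt"):
        if path in seen:
            continue
        seen.add(path)
        body = path.read_text()  # HTML in the file carries through as styling
        requests.post(
            WP_API,
            auth=AUTH,
            json={"title": path.stem, "content": body, "status": "draft"},
        )
    time.sleep(30)  # crude polling interval
```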
When 'ANY' event occurs, a custom message is sent via Skype to a single Skype account or a group;
on several 'eligible' events, such as "File Creation" in Box, the file (or a shared file URL) may be sent to the Skype account.
When 'ANY' event occurs, a custom mobile text message is sent to a single phone number or a group.
When a "File Creation" event occurs, e.g. in Box, and passes a "Condition", actions follow:
The IBM Watson cognitive API for Text to Speech runs, and the product of the action is placed in the same Box folder (sketched below).
Action: using Microsoft Edge (powered by MSN), in the "My news feed" tab, enable an action to publish "Cards", such as app notifications.
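The promised sketch of the Watson Text to Speech action above, in Python, against Watson's documented /v1/synthesize endpoint. The service instance URL, API key, and Box-synced folder path are placeholders.

```python
from pathlib import Path

import requests

# Hypothetical instance URL and IAM key; Watson TTS authenticates with
# basic auth where the user is the literal string "apikey".
TTS_URL = "https://api.us-south.text-to-speech.watson.cloud.ibm.com/instances/INSTANCE_ID"
API_KEY = "your-iam-api-key"

def synthesize_to_box(text_file: Path) -> Path:
    """Turn a newly created text file into speech, saved beside the source."""
    resp = requests.post(
        f"{TTS_URL}/v1/synthesize",
        auth=("apikey", API_KEY),
        headers={"Accept": "audio/mp3"},
        json={"text": text_file.read_text()},
    )
    resp.raise_for_status()
    out = text_file.with_suffix(".mp3")  # lands in the same (Box-synced) folder
    out.write_bytes(resp.content)
    return out

synthesize_to_box(Path("~/Box/inbox/announcement.txt").expanduser())
```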
Challenges \ Opportunities \ Unknowns
Third-party companies' existing, published [cloud; web service] APIs may not need any modification to integrate with Microsoft Flow; however, business approval may be required to use an API in this manner.
It is unclear whether Flow templates need to be created by the product owner, e.g. Telestream, or may come from a knowledgeable third party, following the Android, iOS, and/or MSFT mobile-app marketplace model.
It is unclear whether the MSFT Flow app will be licensed individually in the cloud, within the Office 365 suite, or offered in Home and/or Business editions.
A review of the Microsoft OneDrive cloud repository. It may be an easy tool and service for saving files; if you roughly know what you want to find, most cloud repositories are easy and straightforward to use. Over time, though, if not managed appropriately, the cloud repository becomes burdensome to manage, e.g. to access and find files. If we stay stuck in the "file folder" mentality of organizing our content, our cloud storage will quickly become unyielding. Habits like tagging your content help you reach files beyond the "folder borders". On the flip side, there are huge opportunities to leverage and grow existing platforms, specifically around the process of [file] ingestion:
Bulk file loading, e.g. photos from our smartphones, maybe the entire family uploads to the same storage repository
If performed by the “Ingestion Service”, manual user “tagging” of a group of photos, or individual images may be available.
Geotagging may be available either at the time of image capture, or upon the start of the "Ingestion Service"
Facial recognition, on par with the likes of Facebook, is, in my experience, not readily available in personal cloud storage repositories.
Auto-tagging pictures upon ingestion, if performed, may leverage "extracted text" from images, making images searchable with little human intervention (a sketch follows below).
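The promised sketch: a toy "ingestion service" in Python that extracts text from images with Tesseract OCR and makes them searchable. The in-memory index stands in for whatever a real repository would persist.

```python
from pathlib import Path

import pytesseract  # OCR wrapper around the Tesseract engine
from PIL import Image

# Toy "ingestion service": every image that lands in the repository gets an
# extracted-text record, so search works with no human tagging.
index: dict[str, str] = {}

def ingest(path: Path) -> None:
    if path.suffix.lower() in {".png", ".jpg", ".jpeg"}:
        index[path.name] = pytesseract.image_to_string(Image.open(path))

def search(term: str) -> list[str]:
    return [name for name, text in index.items() if term.lower() in text.lower()]

ingest(Path("MSWordTags.PNG"))
print(search("CV"))  # finds the screenshot via its extracted text
```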
Cloud File Repository: Storing Content
I modified the "tags" of existing Microsoft Office files; in this test, MS Word and PowerPoint file types were used. I opened the Word file, selected the "File" menu, then "Save As", then "More Options" under the list of file types. I was presented with the classic "Save As" form. Just below the "Save as type" list box were three "metadata" fields describing the file:
The first two fields are semicolon (;) delimited, and multiple values are allowed. In this test case, I added "CV;resume;career" to the "Tags" field. I then used the MS Windows Snipping Tool that comes with the OS to document the step, named the screen capture MSWordTags.PNG, and saved it to my OneDrive. Then I saved the document itself to my OneDrive.
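For what it's worth, the "Tags" field in that dialog maps to the Office core property "keywords" inside the document package (docProps/core.xml). A short python-docx sketch shows where it lives; the file name is an assumption.

```python
from docx import Document  # python-docx

# Write the same "CV;resume;career" tags programmatically, then read them back.
doc = Document("resume.docx")  # hypothetical file name
doc.core_properties.keywords = "CV;resume;career"
doc.save("resume.docx")

print(Document("resume.docx").core_properties.keywords)  # -> CV;resume;career
```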
Cloud File Repository: Finding Content
I then started Internet Explorer, went to https://onedrive.live.com to access my cloud content, and typed "CV" into the "Search Everything" field in the top left corner of the screen.
The search results included ONLY the screenshot image file containing the letters "CV", not the MS Word file that explicitly had "CV" in its Tags field.
Looking at the file properties as defined by OneDrive, there was ALSO a field called "Tags", with no values populated. In other words, the cloud "ingestion" service did not read the file's metadata and abstract it up to the cloud level; there are just two separate sets of metadata describing the same file. (To view the cloud file data, select the file and click the circled "i".) Too many ways to store the same data may lead to inconsistent data.
In the cloud file information/properties, the image file had a field called "Extracted Text", and this is how the search picked up the "CV" value when I searched my files for the "CV" tag.
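For readers who prefer the API view, the same OneDrive search can be reproduced against the Microsoft Graph drive-search endpoint. A hedged sketch, assuming an OAuth access token with the Files.Read scope is already in hand:

```python
import requests

TOKEN = "eyJ..."  # placeholder OAuth access token

resp = requests.get(
    "https://graph.microsoft.com/v1.0/me/drive/root/search(q='CV')",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()

for item in resp.json().get("value", []):
    # The PNG surfaces here via its extracted text; the Word doc's
    # embedded "Tags" property does not.
    print(item["name"])
```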
Oddly, the MS Word file attributes in OneDrive did not offer "Tags" as a field for storing metadata in the cloud; the "Tags" field was available when looking at the PNG file. The user may, however, add a "Description" in a multiline text field. Tags metadata on images but not on MS Word files? Odd.
Future state (?): if the cloud ingestion process can perform "extracted text" processing, it may also offer other "ingestion services", such as facial recognition seeded from "known good" faces already tagged; e.g. I tag a face in the OneDrive browser UI, and as other images are ingested, they can be correlated with it.
As a business model, will we add a tier just after cloud file ingestion, perhaps exercising a third-party suite of cognitive APIs such as facial recognition? For example, Microsoft OneDrive ingests a file and, if it's an image, routes it to the appropriate IBM Watson API, which processes the file and returns [updated] metadata and a modified file? Maybe.
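A hedged sketch of what that post-ingestion tier might look like. The recognition endpoint URL and its response shape are invented placeholders, not any vendor's actual API.

```python
import mimetypes
from pathlib import Path

import requests

# Hypothetical post-ingestion tier: image files are routed to a third-party
# cognitive endpoint (stand-in URL below); returned labels become metadata.
RECOGNIZE_URL = "https://cognitive.example.com/v1/recognize"

def post_ingest(path: Path, metadata: dict) -> dict:
    mime, _ = mimetypes.guess_type(path.name)
    if mime and mime.startswith("image/"):
        with path.open("rb") as f:
            labels = requests.post(RECOGNIZE_URL, files={"image": f}).json()
        # Merge the service's labels into the repository's own tag set.
        metadata["tags"] = metadata.get("tags", []) + labels.get("classes", [])
    return metadata

print(post_ingest(Path("beach_day.jpg"), {"owner": "me"}))
```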
Update: Auto Tagging Objects Upon Ingestion
On an image with no tags, I selected the "Edit tags" menu from the Properties pane on the right side of the screen. In a scrolling menu, the option to "Add existing tag" appeared. There were dozens of tags already created, each with a word, a thumbnail image, and the number of times used. Wow. Awesome. The current implementation seems to automatically identify objects in an image upon ingestion and tag the image with those objects, e.g. Building, Beach, Horse, etc.
Presumption that Microsoft OneDrive performs object recognition on images upon file ingestion into the cloud (as opposed to in the Photos app).
"Extracted Text" Metadata Field from within Microsoft OneDrive Image PNG File Properties:
Presumption that Microsoft OneDrive performs OCR on images upon file ingestion into the cloud (as opposed to the Photos app).
“Just give us the cliff notes.”
“Please give me the bird’s eye view.”
AI Email Thread Abstraction and Summarization
A daunting, and highly public, email has landed in your lap(top) to respond to. The thread spans more than a dozen people across the globe. All of the people on the TO list, and some on the CC list, have expressed their points about … something. There are junior technical staff and very senior business staff on the email. You'll need to understand the thread's content from the perspective of each person who replied, which may involve sifting through every email in the thread. Even though everyone on the thread is fluent in English, response styles may differ by culture or seniority of staff (e.g. abstractly written), and the technical folks might want to keep their part of the conversation granular and succinct.
Let’s throw a bit of [AI] automation at this problem.
Another step in our AI personal assistant evolution: email thread aggregation and summarization utilizing cognitive APIs/tools such as IBM Watson's Language APIs. Based on the documentation for those APIs, the above challenges can be addressed for the reader. A suggestion for an IBM Watson cognitive cloud partner: build an "email plug-in", provided the email product exposes its solution to customization.
A plugin built on top of an email application, flexible enough to allow customization, may be a candidate for Email Thread aggregation and summarization. Email clients may include IBM Notes, Gmail, (Apple) Mail, Microsoft Outlook, Yahoo! Mail, and OpenText FirstClass.
Add this capability to the job description of AI assistants such as Cortana, Echo, Siri, and Google Now. In fact, this plug-in may not even need an AI assistant; just the email plug-in interacting with a suite of cognitive cloud API calls would do.
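A toy Python sketch of the per-participant digest idea. The one-line summarize() is a deliberate stand-in: a real plug-in would hand each participant's text to a cognitive Language API instead.

```python
from collections import defaultdict

def summarize(text: str) -> str:
    """Naive stand-in for a cognitive summarization call: first sentence only."""
    return text.strip().split(".")[0] + "."

def thread_digest(messages: list[dict]) -> dict[str, str]:
    """One summary per participant, built from everything they wrote."""
    by_sender = defaultdict(list)
    for m in messages:
        by_sender[m["from"]].append(m["body"])
    return {sender: summarize(" ".join(bodies)) for sender, bodies in by_sender.items()}

thread = [
    {"from": "cto@example.com", "body": "We should defer the migration. Budget is tight."},
    {"from": "dev@example.com", "body": "The schema change is two days of work. Low risk."},
]
print(thread_digest(thread))
```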
AI Document Abstraction and Summarization
A plug-in may also be created for word processors such as Microsoft Word. Once activated within a document, a summary page may be created and prefixed to the existing document. There are several use cases, such as a synopsis of the document.
With minimal human input marking up the content, we would still be able to derive contextual metadata and leverage it to create new sentences, and paragraphs of sentences.
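A minimal python-docx sketch of the "prefix a synopsis page" use case, with a naive first-sentence summary standing in for a cognitive summarization call; the file names are assumptions.

```python
from docx import Document  # python-docx
from docx.enum.text import WD_BREAK

doc = Document("whitepaper.docx")  # hypothetical input document

# Naive synopsis: the first sentence of each of the first five paragraphs.
body = [p.text for p in doc.paragraphs if p.text.strip()]
summary = " ".join(p.split(".")[0] + "." for p in body[:5])

# Prefix the synopsis and push the original content to the next page.
first = doc.paragraphs[0]
lead = first.insert_paragraph_before("Synopsis: " + summary)
lead.add_run().add_break(WD_BREAK.PAGE)
doc.save("whitepaper_with_synopsis.docx")
```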
I've not yet seen an AI integration in the list of MS Outlook add-ins that would bring this functionality to users.
“…companies like Google and Facebook pay top dollar for some really smart people. Only a few hundred souls on Earth have the talent and the training needed to really push the state-of-the-art [AI] forward, and paying for these top minds is a lot like paying for an NFL quarterback. That’s a bottleneck in the continued progress of artificial intelligence. And it’s not the only one. Even the top researchers can’t build these services without trial and error on an enormous scale. To build a deep neural network that cracks the next big AI problem, researchers must first try countless options that don’t work, running each one across dozens and potentially hundreds of machines.”
This article represents a true picture of where we are today for the average consumer and producer of information, and the companies that repurpose information, e.g. in the form of advertisements.
The current progress of artificial intelligence and machine learning paints a picture akin to the 1970s: computers that filled rooms and accepted punch cards as input.
Today's consumers carry mobile computing power on par with those whole rooms of the 1970s; however, "more compute power" in a tinier package may not be the path to AI sentience. How AI algorithm models are computed might need to take an alternate approach.
In a classical computation system, a bit would have to be in one state or the other. However quantum mechanics allows the qubit to be in a superposition of both states at the same time, a property which is fundamental to quantum computing.
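In the standard notation, that superposition is written as follows (a textbook formula, not specific to any vendor's hardware):

```latex
% A qubit's state as a superposition of the two classical basis states,
% with the probability amplitudes constrained to sum to one.
\[
  \lvert\psi\rangle \;=\; \alpha\lvert 0\rangle + \beta\lvert 1\rangle,
  \qquad \lvert\alpha\rvert^{2} + \lvert\beta\rvert^{2} = 1
\]
```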
The construction and validation of artificial intelligence and machine learning algorithm models should be engineered on a quantum computing framework.
Is there value in giving users the ability to apply "time lock access" to files in cloud storage? Files are securely uploaded by their owner; after upload, no one, including the owner, may access or open the file(s). Only after the date and time specified in the time lock passes do the files become available for access, and action may be taken, e.g. automatically emailing a link to the files. More complex actions may be attached to the time-lock release, such as script execution governed by a simple set of rules defined by the file owner (a minimal sketch below).
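A minimal sketch of how such a time-lock rule might be represented and enforced; the metadata field names and the post-release action format are invented for illustration, not any cloud provider's schema.

```python
from datetime import datetime, timezone

# Hypothetical metadata record attached to an uploaded file.
record = {
    "key": "will-and-testament.pdf",
    "unlock_at": datetime(2026, 1, 1, tzinfo=timezone.utc),
    "on_unlock": ["email_link:family@example.com"],  # owner-defined actions
}

def can_access(rec: dict, now: datetime = None) -> bool:
    """No one, including the owner, may open the file before unlock_at."""
    now = now or datetime.now(timezone.utc)
    return now >= rec["unlock_at"]

if can_access(record):
    for action in record["on_unlock"]:
        print("run:", action)  # e.g. mail out the file link
else:
    print("locked until", record["unlock_at"].isoformat())
```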
Does a solution already exist? Please send me a link to the cloud integration product/plug-in.