Microsoft Outlook has had an email rules engine for years, ranging from a simple wizard to an advanced rule-construction user interface. Oh, the things you can do. Based on a wide array of ‘out of the box’ identifiers and highly customizable conditions, MS Outlook may take action on the client side of the email transaction or on the server side. What types of actions? All kinds, ranging from ‘out of the box’ behaviors to a high degree of customization. And yes, Outlook (in conjunction with MS Exchange) may be identified as a digital asset management (DAM) tool.
Email comes into an inbox and, based on “from”, “subject”, the contents of the email, and a long list of other attributes, MS Outlook [optionally with MS Exchange] may, for example, push the email and any attached content to a server folder, perhaps an Amazon AWS S3 bucket, or something as simple as an MS Exchange folder.
Then, optionally, a ‘backend’ workflow may be triggered, for example with Microsoft Flow. Where you go from there has almost infinite potential.
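The pattern described above, conditions over message attributes driving client- or server-side actions, can be sketched in a few lines. Everything here (class names, the folder path, the sample address) is illustrative, not Outlook's actual object model:

```python
# Minimal sketch of an email rules engine: match an incoming message on
# "from"/"subject" attributes, then fire an action such as filing it to
# a folder. Names are illustrative, not Outlook's API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Email:
    sender: str
    subject: str
    body: str = ""

@dataclass
class Rule:
    condition: Callable[[Email], bool]
    action: Callable[[Email], str]

def run_rules(email: Email, rules: list[Rule]) -> list[str]:
    """Apply every matching rule; return descriptions of the actions taken."""
    return [rule.action(email) for rule in rules if rule.condition(email)]

# Example rule: invoices get filed to an archive folder.
rules = [
    Rule(
        condition=lambda e: "invoice" in e.subject.lower(),
        action=lambda e: f"moved '{e.subject}' to /archive/invoices",
    )
]

print(run_rules(Email("billing@example.com", "Invoice #42"), rules))
```

A real deployment would swap the action lambda for an S3 upload or an Exchange folder move, which is exactly the hand-off point to a backend workflow.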
Analogously, the categorization in Google Gmail’s new Inbox UI, based on ‘some set’ of rules, is not something new to the industry, but now Google has the capability too. For example, “Group By” through Google’s new Inbox could be a huge timesaver. Enabling the user to perform actions across predefined email categories, such as deleting all “promotional” emails, could be extremely successful. However, I’ve not yet seen the rules that identify particular emails as “promotional” versus “financial”. Google is implying that these ‘out of the box’ email categories, and the ways users interact with and take action on them, are extremely similar per category.
Google may continue to follow in the footsteps of Microsoft, possibly adding the initiation of workflows based on predetermined criteria. Maybe Google will expose its AI (Email) Rules Engine for users to customize their workflows, just as Microsoft did so many years ago.
Although Microsoft’s Outlook (and Exchange) may have been seen as a Digital Asset Management (DAM) tool in the past, the user’s email Inbox folder size could have been identified as one of the few inhibitors. The workaround, of course: service accounts with vastly higher folder quotas / sizes.
The AI personal assistant with the “most usage” spanning connectivity across all smart devices will be the anchor toward which users gravitate to control their ‘automated’ lives. An Amazon commercial just aired depicting a dad with his daughter; the daughter was crying about her boyfriend, who happened to be in the front yard yelling for her. The dad says to Amazon’s Alexa, “sprinklers on,” and yes, the boyfriend got soaked.
What is so special about the top spot for the AI personal assistant? Controlling the ‘funnel’ through which all information is accessed and actions are taken means the intelligent ability to:
Serve up content / information, which could then be mixed in with advertisements, or ‘intelligent suggestions’ based on historical data, i.e. machine learning.
Proactive, suggestive actions may lead to sales of goods and services. e.g. AI Personal Assistant flags potential ‘buys’ from eBay based on user profiles.
Three main sources of AI Personal Assistant value add:
A portal to the “outside” world; e.g. if I need information, I wouldn’t “surf the web”, I would ask Cortana to go “research” XYZ. In the Business Intelligence / data warehousing space, a business analyst may need to run a few queries in order to get the information they want. By the same token, Microsoft Cortana may come back to you several times to ask “for your guidance”.
An abstraction layer between the user and their apps; The user need not ‘lift a finger’ to any app outside the Personal Assistant with noted exceptions like playing a game for you.
User Profiles derived from the first two points; I.e. data collection on everything from spending habits, or other day to day rituals.
Proactive and chatty assistants may win “Assistant of Choice” on all platforms. Being proactive means collecting data more often than when it’s just you asking questions ad hoc. Proactive AI personal assistants that are geo-aware may make “timely, appropriate interruptions” (notifications) based on time and location. E.g. “Don’t forget milk,” says Siri, as you’re passing the grocery store. Around the time I leave work, Google Maps tells me if I have traffic and my ETA.
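A geo-aware “timely interruption” ultimately reduces to a distance check between the user’s position and a location tagged on the reminder. A minimal sketch, with made-up coordinates and an assumed half-kilometre radius:

```python
# Sketch of a geofenced reminder check: fire any reminder whose tagged
# location is within radius_km of the user's current position.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def due_reminders(user_pos, reminders, radius_km=0.5):
    """Return reminder texts whose tagged location is within radius_km."""
    return [
        text for text, (lat, lon) in reminders
        if haversine_km(user_pos[0], user_pos[1], lat, lon) <= radius_km
    ]

reminders = [("Don't forget milk", (40.7128, -74.0060))]
print(due_reminders((40.7130, -74.0062), reminders))  # user is near the store
```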
It’s possible for the [non-native] AI Personal Assistant to become the ‘abstract’ layer on top of ANY mobile OS (iOS, Android), and is the funnel by which all actions / requests are triggered.
Microsoft Cortana has an iOS app and widget, which is wrapped around the OS. Tighter integration may be possible but is not allowed by iOS and Apple. Note: Google’s Allo does not provide an iOS widget at the time of this writing.
A potential antitrust violation by mobile smartphone maker Apple: iOS should allow for the ‘substitution’ of a competitive AI personal assistant, triggered in the same manner as the native Siri, i.e. the “press and hold home button” capability that launches the default packaged iOS assistant.
Reminiscent of the Microsoft IE Browser / OS antitrust violations in the past.
Holding the iPhone Home button brings up Siri. There should be an OS setting to swap out which Assistant is to be used with the mobile OS as the default. Today, the iPhone / iPad iOS only supports “Siri” under the Settings menu.
ANY AI personal assistant should be allowed to replace the default OS personal assistant, from Amazon’s Alexa and Microsoft’s Cortana to any startup company with the expertise and resources needed to build and deploy a personal assistant solution. Has Apple taken steps to tightly couple Siri with its iOS?
AI Personal Assistant ‘Wish’ List:
Interactive, Voice Menu Driven Dialog; The AI personal assistant should know what installed [mobile] apps exist, as well as their actionable, hierarchical taxonomy of features / functions. The assistant should, for example, ask which application the user wants to use, and if not known by the user, the assistant should verbally / visually list the apps. After the user selects the app, the assistant should then provide a list of function choices for that application, e.g. “Press 1 for ‘Play Song’”.
The interactive voice menu should also provide a level of abstraction when available; e.g. the user need not select the app, and may just say “Create Reminder”. There may be several applications on the smartphone that do the same thing, such as note taking and reminders. In the OS Settings, under a soon-to-be-new ‘AI Personal Assistant’ menu, the installed system applications compatible with this “AI Personal Assistant” service layer should be listed, grouped by sets of categories defined by the mobile OS.
Capability to interact with IoT using user defined workflows. Hardware and software may exist in the Cloud.
Ever tighter integration with native as well as 3rd party apps, e.g. Google Allo and Google Keep.
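The abstraction layer in this wish list, resolving a spoken command to whichever installed apps can handle it, grouped by OS-defined categories, might look like the sketch below. The app names, categories, and action strings are all hypothetical:

```python
# Sketch of an app/action registry for a voice-menu assistant: apps
# declare the actions they support, and a spoken command resolves to
# candidate apps without the user naming one.
from collections import defaultdict

registry = {
    "Keep":     {"category": "Notes",      "actions": ["Create Note", "Create Reminder"]},
    "Calendar": {"category": "Scheduling", "actions": ["Create Event", "Create Reminder"]},
    "Music":    {"category": "Media",      "actions": ["Play Song"]},
}

def apps_for(command: str) -> list[str]:
    """All installed apps that can handle a spoken command."""
    return sorted(app for app, meta in registry.items() if command in meta["actions"])

def apps_by_category() -> dict[str, list[str]]:
    """The grouping an OS settings menu could display, per the text above."""
    groups = defaultdict(list)
    for app, meta in registry.items():
        groups[meta["category"]].append(app)
    return dict(groups)

print(apps_for("Create Reminder"))  # ['Calendar', 'Keep']
```

When more than one app matches, the assistant would fall back to the interactive menu (“Press 1 for …”) described above.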
Apple could already be making these changes as a natural course of its product evolution. Even if the ‘big boys’ don’t want to stir up a hornet’s nest, all you need is VC funding and a few good programmers to pick a fight with Apple.
It looks like Microsoft has created a generic, product-independent workflow platform.
Microsoft has software solutions, like MS Outlook, with an [email] rules engine built in. SharePoint has a workflow solution within the SharePoint platform, typically governing the content flowing through its system.
Microsoft Flow is a different animal. It seems like Microsoft has built a ‘generic’ rules engine for processing almost any event. The Flow product:
Start using the product from one of two areas: a) “My Flows” where I may view existing and create new [work]flows. b) “Activity”, that shows “Notifications” and “Failures”
Select “My Flows”, and the user may “Create [a workflow] from Blank”, or “Browse Templates”. The existing set of templates was created by Microsoft and also by third parties, implying a marketplace.
Select “Create from Blank” and the user has a single drop-down list of events, a culmination of events across Internet products. The implication is that any product and event could be “made compatible” with MSFT Flow.
The drop-down list of events has a format of “Product – Event”. As the list of products and events grows, we should see at least two separate drop-down lists: one for products, and a sub-list for the product-specific events.
Several Example Events Include:
“Dropbox – When a file is created”
“Facebook – When there is a new post to my timeline”
“Project Online – When a new task is created”
“RSS – When a feed item is published”
“Salesforce – When an object is created”
The list of products, as well as their events, may need a business analyst to rationalize the use cases.
Once an Event is selected, event specific details may be required, e.g. Twitter account details, or OneDrive “watch” folder
Next, a Condition may be added to this [work]flow, and may be specific to the Event type, e.g. OneDrive File Type properties [contains] XYZ value. There is also an “advanced mode” using a conditional scripting language.
There is “IF YES” and “IF NO” logic, which then allows the user to select one [or more] actions to perform
Several Action Examples Include:
“Excel – Insert Rows”
“FTP – Create File”
“Google Drive – List files in folder”
“Mail – Send email”
“Push Notification – Send a push notification”
Again, it seems like an eclectic bunch of Products, Actions, and Events strung together to have a system to POC.
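The event → condition → action shape described above can be mimicked in a toy engine. The product/event strings mirror the examples earlier in this post, but the engine itself is an illustrative sketch, not Microsoft's implementation:

```python
# Toy event -> condition -> action pipeline in the shape of a Flow.
from typing import Callable

class Flow:
    """One workflow: a trigger event, a condition, and IF YES / IF NO actions."""
    def __init__(self, event: str,
                 condition: Callable[[dict], bool],
                 if_yes: Callable[[dict], str],
                 if_no: Callable[[dict], str] = lambda e: "no action"):
        self.event, self.condition = event, condition
        self.if_yes, self.if_no = if_yes, if_no

    def handle(self, event_name: str, payload: dict):
        if event_name != self.event:
            return None  # not this flow's trigger
        return self.if_yes(payload) if self.condition(payload) else self.if_no(payload)

flow = Flow(
    event="OneDrive - When a file is created",            # trigger, "Product - Event"
    condition=lambda e: e.get("type") == "docx",          # the Condition step
    if_yes=lambda e: f"Mail - Send email about {e['name']}",  # IF YES action
)

print(flow.handle("OneDrive - When a file is created",
                  {"name": "spec.docx", "type": "docx"}))
```

The “advanced mode” conditional scripting mentioned above would replace the condition lambda with a parsed expression over the event payload.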
The Templates list is a predefined set of workflows that may be of interest to anyone who does not want to start from scratch. The UI provides several ways to filter, list, and search through templates.
Applicable to everyday life, from an individual home user to a small business to the enterprise. At this stage the product seems in beta at best, or more accurately just past clickable prototype. I ran into several errors trying to go through basic use cases, e.g. adding rules.
Despite the “Preview” launch, Microsoft has shown us the power of [work]flow processing regardless of the service platform provider, e.g. Box, DropBox, Facebook, GitHub, Instagram, Salesforce, Twitter, Google, MailChimp, …
Microsoft may be the glue combining service providers who expose their services to MSFT Flow functionality.
e.g. Language:Translation; E.g.2. Visual Recognition;
WordPress – Create a Post
A new text file is dropped in a specific folder on Box, DropBox, etc. being ‘monitored’ by MSFT Flow. [?] Additional code may be required by the user for ‘polling’ capabilities.
OR new text file attached, and emailed to specific email account folder ‘watched’ by MSFT Flow.
Event triggers – Automatic read of new text file
stylizing may occur if HTML coding is used
Action – Post to a Blog
‘ANY’ Event occurs, a custom message is sent using Skype for a single or group of Skype accounts;
On several ‘eligible’ events, such as “File Creation” into Box, the file (or file shared URL) may be sent to the Skype account.
‘ANY’ Event occurs, a custom mobile text message is sent to a single or group of phone numbers.
Event occurs for “File Creation” e.g. into Box; after passing a “Condition”, actions occur:
IBM Watson Cognitive API, Text to Speech, occurs, and the product of the action is placed in the same Box folder.
Action: Using Microsoft Edge (powered by MSN), in the “My news feed” tab, enable action to publish “Cards”, such as app notifications
Challenges \ Opportunities \ Unknowns
3rd party companies’ existing, published [cloud; web service] APIs may not even need any modification to integrate with Microsoft Flow; however, business approval may be required to use the API in this manner.
It is unclear whether Flow templates need to be created by the product owner, e.g. Telestream, or by a knowledgeable third party, following the Android, iOS, and/or MSFT mobile apps model.
It is unclear whether the MSFT Flow app may be licensed individually in the cloud, within the 365 cloud suite, or offered for Home and/or Business.
Review of the Microsoft OneDrive cloud repository. It may be an easy tool and service to save files. If you know roughly what you want to find, most cloud repositories are easy and straightforward to use. Over time, if not managed appropriately, the cloud repository becomes burdensome to manage, e.g. to access and find files. If stuck in the “file folder organization storage” mentality of organizing our content, our cloud storage solution will quickly become unyielding. Getting into habits like tagging your content should help us access files beyond the “Folder Borders”. At the same time, there are huge opportunities to leverage and grow existing platforms, specifically around the process service of [file] Ingestion.
Bulk file loading, e.g. photos from our smartphones, maybe the entire family uploads to the same storage repository
If performed by the “Ingestion Service”, manual user “tagging” of a group of photos, or individual images may be available.
Geotagging may be available either at the time of image capture, or upon the start of the “Ingestion Service”
Facial recognition, comparable to services such as Facebook’s, is, in my experience, not readily available in personal cloud storage repositories.
Auto tagging pictures upon ingestion, if performed, may leverage “Extracted Text” from images. Images become searchable with little human intervention.
Cloud File Repository: Storing Content
I modified existing Microsoft Office files’ “tags”; in this case MS Word and PowerPoint file types were used. I opened the Word file, selected the “File” menu, then “Save As”, then “More Options” under the list of file types. I was then presented with the classic “Save As” form. Just below the “Save as type” list box, there were 3 “metadata” fields to describe the file:
The first two fields are semicolon (;) delimited, and multiple values are allowed. In this test case, I added “CV;resume;career” to the “Tags” field. I then used the MS Windows Snipping Tool that comes with the OS to document the step. I called the file MSWordTags.PNG and saved this screen capture to my OneDrive. Then I saved the document itself on my OneDrive.
Cloud File Repository: Finding Content
I then started up Internet Explorer, and went to the https://onedrive.live.com site to access my cloud content. On the top left corner of the screen, there is a field called “Search Everything”, and I typed in CV.
The search results included ONLY the image screenshot file that contained the letters CV, and not the MS Word file that explicitly had the Tag field with the text value CV.
Looking at the file properties as defined by OneDrive, there was ALSO a field called “Tags”, with no values populated. In other words, the cloud “Ingestion” service did not read the file for metadata and abstract it to the cloud level; there are just two separate sets of metadata describing the same file. To view the cloud file data, select the file and click the ‘i’ with a circle around it. Too many ways to store the same data may lead to inconsistent data.
For the cloud file information / properties, the image file had a field called “Extracted Text”, and this is how the cloud search picked up the “CV” value, via the extracted text rather than the tag.
Oddly, the MS Word file attributes in OneDrive did not offer “Tags” as a field to store metadata in the cloud. The “Tags” field was available when looking at the PNG file. However, the user may add a “Description” in a multiline text field. Tags metadata on images but not MS Word files? Odd.
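For what it’s worth, the Word “Tags” field is stored inside the .docx package as the cp:keywords element of docProps/core.xml, so an ingestion service could lift it to repository-level metadata with nothing but the standard library. The sketch below builds a minimal stand-in package (just the core-properties part) to demonstrate the extraction:

```python
# Extract the semicolon-delimited "Tags" (cp:keywords) field from a
# .docx package, which is an ordinary zip file, using only stdlib.
import io, zipfile
import xml.etree.ElementTree as ET

CP = "http://schemas.openxmlformats.org/package/2006/metadata/core-properties"

def read_tags(docx_bytes: bytes) -> list[str]:
    """Return the Tags values recorded in docProps/core.xml."""
    with zipfile.ZipFile(io.BytesIO(docx_bytes)) as z:
        root = ET.fromstring(z.read("docProps/core.xml"))
    kw = root.findtext(f"{{{CP}}}keywords") or ""
    return [t for t in kw.split(";") if t]

# Build a tiny stand-in package with just the core-properties part.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr(
        "docProps/core.xml",
        f'<cp:coreProperties xmlns:cp="{CP}">'
        f"<cp:keywords>CV;resume;career</cp:keywords></cp:coreProperties>",
    )

print(read_tags(buf.getvalue()))  # ['CV', 'resume', 'career']
```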
Future State (?): If the Cloud Ingestion process can perform an “Extracted Text” process, it may also have other “Ingestion services”, such as “Facial Recognition” from “known good” faces already tagged. e.g. I tag a face from within the OneDrive browser UI, and now when other images are ingested, there can be a correlation between the files.
As a business model, are we going to add a tier just after Cloud File ingestion, maybe exercise a third party suite of cognitive APIs, such as facial recognition? For example, Microsoft OneDrive Ingests a file, and if it’s an image file, routes through to the appropriate IBM Watson API, processes the file, and returns [updated] metadata, and a modified file? Maybe.
Update: Auto Tagging Objects Upon Ingestion
On an image with no tags, I selected the “Edit tags” menu from the Properties pane on the right side of the screen. As a scrolling menu, the option to “Add existing tag” appeared. There were dozens of tags already created with a word, thumbnail image, and the number of times used. Wow. Awesome. The current implementation seems to automatically, upon ingestion, identify objects in the image, and tag the images with those objects, e.g. Building, Beach, Horse, etc.
Presumption that Microsoft OneDrive performs object recognition on images upon file ingestion into the cloud (as opposed to in the Photos app).
“Extracted Text” Metadata Field from within Microsoft OneDrive Image PNG File Properties:
Presumption that Microsoft OneDrive performs OCR on images upon file ingestion into the cloud (as opposed to the Photos app).
“Just give us the Cliffs Notes.”
“Please give me the bird’s eye view.”
AI Email Thread Abstraction and Summarization
A daunting, and highly public, email has landed in your lap..top to respond to. The email thread spans over a dozen people all across the globe. All of the people on the TO list, and some on the CC list, have expressed their points about … something. There are junior technical and very senior business staff on the email. I’ll need to understand the email thread content from the perspective of each person that replied to the thread. That may involve sifting through each of the emails on the thread. Even though the people on the email are fluent in English, their response styles may differ based on culture or seniority (e.g. abstractly written). Also, the technical folks might want to keep the conversation granular and succinct.
Let’s throw a bit of [AI] automation at this problem.
Another step in our AI personal assistant evolution: email thread aggregation and summarization utilizing cognitive APIs / tools such as what IBM Watson has implemented with its Language APIs. Based on the documentation provided for those APIs, the above challenges can be resolved for the reader. A suggestion to an IBM partner for the Watson cognitive cloud: build an ‘email plugin’, if the email product exposes its solution to customization.
A plugin built on top of an email application, flexible enough to allow customization, may be a candidate for Email Thread aggregation and summarization. Email clients may include IBM Notes, Gmail, (Apple) Mail, Microsoft Outlook, Yahoo! Mail, and OpenText FirstClass.
Add this capability to the job description of AI assistants, such as Cortana, Echo, Siri, and Google Now. In fact, this plug-in may not need the connectivity and usage of an AI assistant, just the email plug-in interacting with a suite of cognitive cloud API calls.
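As a stand-in for the cognitive summarization API such a plug-in would actually call, here is a purely local sketch: group replies by sender, then keep each sender’s highest-scoring sentence by word frequency. The scoring is a crude extractive heuristic, not Watson’s method:

```python
# Per-sender thread summarization: one representative sentence per
# author, chosen by a simple word-frequency score.
from collections import Counter, defaultdict
import re

def top_sentence(text: str) -> str:
    """The sentence whose words are most frequent across the text."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    words = Counter(re.findall(r"\w+", text.lower()))
    return max(sentences,
               key=lambda s: sum(words[w] for w in re.findall(r"\w+", s.lower())))

def summarize_thread(messages: list[tuple[str, str]]) -> dict[str, str]:
    """messages: (sender, body) pairs -> one representative sentence per sender."""
    by_sender = defaultdict(list)
    for sender, body in messages:
        by_sender[sender].append(body)
    return {s: top_sentence(" ".join(bodies)) for s, bodies in by_sender.items()}

thread = [
    ("ana@example.com", "The deadline slips a week. Budget is unchanged."),
    ("raj@example.com", "Agreed on the deadline. The deadline risk is vendor delay."),
]
print(summarize_thread(thread))
```

Swapping `top_sentence` for a call to a hosted language API is the only change a production plug-in would need.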
AI Document Abstraction and Summarization
A plug in may also be created for word processors such as Microsoft Word. Once activated within a document, a summary page may be created and prefixed to the existing document. There are several use cases, such as a synopsis of the document.
With minimal human effort marking up the content, we would still be able to derive the contextual metadata and leverage it to create new sentences, and paragraphs of sentences.
I’ve not seen an AI Outlook integration in the list of MS Outlook Add-ins that would bring this functionality to users.
“…companies like Google and Facebook pay top dollar for some really smart people. Only a few hundred souls on Earth have the talent and the training needed to really push the state-of-the-art [AI] forward, and paying for these top minds is a lot like paying for an NFL quarterback. That’s a bottleneck in the continued progress of artificial intelligence. And it’s not the only one. Even the top researchers can’t build these services without trial and error on an enormous scale. To build a deep neural network that cracks the next big AI problem, researchers must first try countless options that don’t work, running each one across dozens and potentially hundreds of machines.”
This article represents a true picture of where we are today for the average consumer and producer of information, and the companies that repurpose information, e.g. in the form of advertisements.
The advancement and current progress of Artificial Intelligence and Machine Learning analogously paints a picture akin to the 1970s, with computers that fill rooms and accept punch cards as input.
Today’s consumers have mobile computing power that is on par to the whole rooms of the 1970s; however, “more compute power” in a tinier package may not be the path to AI sentience. How AI algorithm models are computed might need to take an alternate approach.
In a classical computation system, a bit must be in one state or the other. However, quantum mechanics allows the qubit to be in a superposition of both states at the same time, a property which is fundamental to quantum computing.
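The qubit state described above is conventionally written as a normalized superposition of the two basis states:

```latex
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1, \quad \alpha, \beta \in \mathbb{C}
```

Measurement collapses the state to |0⟩ with probability |α|² or to |1⟩ with probability |β|², and n qubits span a superposition over 2ⁿ basis states, which is where the hoped-for speedup for model search comes from.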
The construction, and validation of Artificial Intelligence, Machine Learning, algorithm models should be engineered on a Quantum Computing framework.
Is there value in providing users the ability to apply “Time Lock Encryption” to files in cloud storage? Files are securely uploaded by their Owner. After upload no one, including the Owner, may decrypt and access / open the file(s). Only after the date and time provided for the time lock passes, files will be decrypted, and optionally an action may be taken, e.g. Email a link to the decrypted files to a DL, or a specific person.
Additionally, files might only be decrypted ‘just in time’ and only for the specific recipients who had received the link. More complex actions may be attached to the time lock release such as script execution using a simple set of rules as defined by the file Owner.
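A sketch of the time-lock gate described above. The “encryption” below is an XOR keystream derived with hashlib, purely illustrative and NOT real cryptography; a production service would use a vetted cipher with the key escrowed (e.g. in an HSM) until the release time:

```python
# Illustrative time-lock: data is sealed with a keystream and refuses
# to unseal before the release timestamp. NOT real cryptography.
import hashlib
from datetime import datetime, timezone

def _keystream(secret: bytes, n: int) -> bytes:
    """Deterministic pseudo-random bytes derived from the secret."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(secret + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def lock(data: bytes, secret: bytes, release_at: datetime) -> dict:
    """Owner uploads; nobody (owner included) may open before release_at."""
    ks = _keystream(secret, len(data))
    return {"release_at": release_at, "blob": bytes(a ^ b for a, b in zip(data, ks))}

def unlock(box: dict, secret: bytes, now: datetime) -> bytes:
    """Decrypt only once the time lock has passed."""
    if now < box["release_at"]:
        raise PermissionError("time lock has not expired")
    ks = _keystream(secret, len(box["blob"]))
    return bytes(a ^ b for a, b in zip(box["blob"], ks))

box = lock(b"will.pdf contents", b"server-held-key",
           datetime(2030, 1, 1, tzinfo=timezone.utc))
```

The post-release actions (email a link, run a script) would hang off the successful `unlock` path.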
The encryption should be the strongest available as defined by the law of the region in which the files reside. Note the open issue with cloud storage and applicable regional laws, i.e. where “in the cloud” the files actually live.
Does this already exist as a 3rd party plugin to an existing cloud solution? Please send me a link to the cloud integration product / plug-in.
Artificial Intelligence (AI) “Assistants”, or “Bots”, are taken to the ‘next level’ when the assistant becomes a proactive entity, based on input from human intelligence experts, that grows with machine learning.
Even the label ‘Assistant’ vs. ‘Life Partner’ implies a greater degree of dynamic, proactive interaction. The crossover to ‘Life Partner’ comes when we go ‘above and beyond’ to help our partners succeed, or even survive the day to day.
Once we experience our current [digital, mobile] ‘assistants’ positively influencing our lives in a more intelligent, proactive manner, an emotional bond ‘grows’, and the investment in this technology will also expand.
Practical Applications Range:
Alcoholics Anonymous Coach / Mentor – enabling the human partner to overcome temporary weakness. Knowledge and “triggers” need to be incorporated into the AI ‘Partner’: a “location / proximity” reminder if the person enters a shopping area that has a liquor store, and the [AI] “Partner” can help “talk them down”.
Understanding ‘data points’ from multiple sources, such as alarms, and calendar events, to derive ‘knowledge’, and create an actionable trigger.
e.g. “Did you remember to take your medicine?” unprompted; “There is a new article in N periodical, that pertains to your medicine. Would you like to read it?”
e.g. 2 unprompted, “Weather calls for N inches of Snow. Did you remember to service your Snow Blower this season?”
FinTech – while in department store XYZ looking to purchase Y over a certain amount, unprompted “Your credit score indicates you are ‘most likely’ eligible to ‘sign up’ for a store credit card, and get N percentage off your first purchase” Multiple input sources used to achieve a potential sales opportunity.
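The “multiple input sources” idea in these examples reduces to correlating feeds and firing a prompt when a rule matches. A sketch of the snow-blower example, where the feed shapes, field names, and thresholds are all hypothetical:

```python
# Multi-source proactive trigger: correlate a weather feed with a
# maintenance log and fire an unprompted suggestion when a rule matches.
def proactive_prompts(weather: dict, maintenance: dict, season: int) -> list[str]:
    """Return any unprompted suggestions derived from the combined sources."""
    prompts = []
    heavy_snow = weather.get("snow_inches", 0) >= 6
    unserviced = maintenance.get("snow_blower_serviced") != season
    if heavy_snow and unserviced:
        prompts.append(
            f"Weather calls for {weather['snow_inches']}\" of snow. "
            "Did you remember to service your snow blower this season?"
        )
    return prompts

print(proactive_prompts({"snow_inches": 8},
                        {"snow_blower_serviced": 2015}, season=2016))
```

Each new knowledge source (calendar, alarms, credit profile) would contribute its own rule to the same loop.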
IBM has a cognitive cloud of AI solutions leveraging IBM’s Watson. Most or all of the 18 web applications they have hosted (with source) are driven by human interactive triggers, as with the “Natural Language Classifier”, which helps build a question-and-answer repository.
There are four things that need to occur to accelerate adoption of the ‘AI Life Partner’:
Knowledge Experts, or Subject Matter Experts (SME) need to be able to “pass on” their knowledge to build repositories. IBM Watson Natural Language Classifier may be used.
The integration of this knowledge into an AI medium, such as a ‘Digital Assistant’ needs to occur with corresponding ‘triggers’
Our current AI ‘Assistants’ need to become [more] proactive as they integrate into our ‘digital’ lives, going beyond setting an alarm clock, hands-free calling, or checking the sports score. Our [AI] “Life Partner” needs to ‘act’ like a buddy and fan of ‘our’ sports team: without prompting, proactively serve up knowledge [based on correlated, multiple sources], and/or take [acceptable] actions.
E.g. FinTech – “Our schedule is open tonight, and there are great seats available, Section N, Seat A, for ABC dollars on StubHub. Shall I make the purchase?”
Partner with vendors to drive FinTech business rules.
Take ‘advantage’ of more knowledge sources, such as the applications we use that collect our data. Use multiple knowledge sources in concert, enabling the AI to correlate data and propose ‘complex’ rules of interaction.
Our AI ‘Life Partners’ may grow in knowledge, and mature the relationship between man and machine. Incorporating derived rules leveraging machine learning, without input of a human expert, will come with risk and reward.
Throughout my career, I’ve worked with several financial services teams to engineer, test, and deploy solutions. Here is a brief list of the FinTech solutions I helped construct, test, and deploy:
3K Global Investment Bankers – proprietary CRM platform, including Business Analytics, Business Objects Universe.
Equity Research platform, crafted based on business expertise.
Custom UI for research analysts, enabled the analysts to create their research, and push into the workflow.
Based on a set of rules, a ‘locked down’ part of the report would “Build Disclosures”, e.g. analyst holds 10% of co.
Custom Documentum workflow would route research to the distribution channels; or direct research to legal review.
(Multiple Financial Org.) Data Warehouse middleware solutions to assist organizations in managing, and monitoring usage of their DW.
Global Derivatives firm, migration of mainframe system to C# client / Server platform
Investment Bankers and Equity Capital Markets (ECMG): built a trading platform so teams may collaborate on Deals/Trades.
Global Asset Management Firm: On boarding and Fund management solutions, custom UI and workflows in SharePoint
A “Transaction Management Solution” targets a mixture of FinTech services, primarily “Payments” Processing.
Target State Capabilities of a Transaction Management Solution:
Fraud Detection: The ability to identify and prevent fraud exists within many levels of the transaction from facilitators of EFT to credit monitoring and scoring agencies. Every touch point of a transaction has its own perspective of possible fraud, and must be evaluated to the extent it can be.
Business experts (SMEs) and technologists continue to expand the practical applications of Artificial Intelligence (AI) every day. Extensive AI fraud detection applications already exist, incorporating human-populated rules engines and AI machine learning (independent rule creation).
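A toy version of the human-populated rules-engine side: each transaction touch point contributes its own rule, and any hit flags the transaction. The fields, thresholds, and touch-point names are hypothetical; the machine-learning side would learn such rules rather than have SMEs author them:

```python
# Layered fraud checks: every touch point evaluates the transaction from
# its own perspective, and any rule that fires adds a flag.
RULES = {
    "eft_processor": lambda t: t["amount"] > 10_000,          # large transfer
    "geo_check":     lambda t: t["country"] != t["card_country"],  # location mismatch
    "velocity":      lambda t: t["tx_last_hour"] > 5,         # unusual frequency
}

def fraud_flags(tx: dict) -> list[str]:
    """Names of every touch-point rule the transaction trips."""
    return [name for name, rule in RULES.items() if rule(tx)]

tx = {"amount": 12_500, "country": "US", "card_country": "US", "tx_last_hour": 1}
print(fraud_flags(tx))  # ['eft_processor']
```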
Consumer “Financial Insurance” Products
Observing a business, end to end transaction may provide visibility into areas of transaction risk. Process and/or technology may be adopted / augmented to minimize the risk.
E.g. the eBay auction process has a risk regarding the changing of hands of currency and merchandise. A “delayed payment”, holding funds until the merchandise has been exchanged, minimizes the risk; this is implemented using PayPal.
In a product lifecycle, the Discovery, Development, and Delivery phases convert concept to product.
For quite some time companies have attempted to tread in this space with mixed results, either through acquisition or build out of their existing platforms. There seems to be significant opportunities within the services, software and infrastructure areas. It will be interesting to see how it all plays out.
Inhibitors to enclosing a transaction within an end to end Transaction Management Solutions (TMS):
Higher level of risk (e.g. business, regulatory) expanding out service offerings
Stretching too thin, beyond the core vision, and losing sight of that vision.
Transforming tech company to hybrid financial services
Automation and streamlining of processes may derive efficiencies that may lead to reductions in staff / workforce.
Multiple platforms performing functions provide redundant capabilities, reduced risk, and more consumer choice.