
Information Architecture: An Afterthought for Content Creation Solutions

Maximizing Digital Asset Reuse

Many applications that let users create their own content, from word processing to graphics and image creation, have typically relied on third-party Content Management System (CMS) / Digital Asset Management (DAM) platforms to collect metadata describing the assets upon ingestion into those platforms. Many of these platforms have been "stood up" to support projects and teams, either for collaboration on an existing project or for reuse of assets on "other" projects. As a person constantly creating content, where do you "park" your digital resources for archiving and reuse? Your local drive, cloud storage, or not archived at all?

Average “Jane” / “Joe” Digital Authors

If I were asked for all the content I've created around a particular topic, or group of topics, across all my collected and ingested digital assets, it could be a herculean search effort spanning multiple platforms. As an independent content creator, I may have digital assets ranging from Microsoft Word documents and Google Sheets spreadsheets to Twitter tweets, Paint.NET (.pdn) graphics, blog posts, etc.

Capturing Content from Microsoft Office Suite Products

Many MS Office content creation products, such as Microsoft Word, have minimal capacity to capture metadata, and where the ability exists, it's buried in the application. In MS Word, for example, a user who selects "Save As" can add "Authors" and Tags. In the latest version of Microsoft Excel, the author of a workbook can add Properties, such as Tags and Categories. It's not clear how this data is utilized outside the application; for example, is tag data searchable after the file is uploaded to or ingested by OneDrive?

Blog Posts: High Visibility into Categorization and Tagging

A blogging platform such as WordPress places the Category and Tag selection fields directly alongside the content being posted. This UI/UX encourages a specific mentality around the creation, categorization, and tagging of content: the structure constantly reminds the author to classify the content so others may discover and consume it. Blog post content is created to be consumed by a wide audience of interested viewers based on the tags and categories selected.

Proactive Categorization and Tagging

Perpetuate content classification through drill-down navigation of a derived Information Architecture taxonomy. As a lightweight example, in the Tags field when editing a WordPress Post, a user types a few characters and an auto-complete dropdown appears, offering previously used tags to select from (sketched below). This is an excellent starting point for other content creation apps.
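A minimal sketch of that prefix-based tag auto-complete, assuming a simple in-memory list of previously used tags (the tag store and tag names are illustrative):

```python
def suggest_tags(prefix, known_tags, limit=10):
    """Return up to `limit` previously used tags starting with `prefix`."""
    prefix = prefix.lower()
    return [t for t in sorted(known_tags) if t.lower().startswith(prefix)][:limit]

known_tags = ["microsoft", "metadata", "machine learning", "taxonomy"]
print(suggest_tags("m", known_tags))  # ['machine learning', 'metadata', 'microsoft']
```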

Users creating blog posts can define a parent/child hierarchy of categories, and the author may select one or more relevant categories to associate with the post.

Artificial Intelligence (AI) Derived Tags

It wouldn't be a post without mentioning AI. Applications that enable user content creation could integrate a tool that, at a minimum, automatically derives an "index" of words, or tags. The way in which this "intelligent index" is derived may be based upon the following (a frequency-counting sketch follows the list):

  • the number of times a word occurs
  • the mention of words in a particular context
  • references to the same word(s) or phrases in other content
    • defined by the same author, and/or across the platform
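As a rough illustration of the first signal, an "intelligent index" could start by counting word occurrences and keeping the most frequent non-trivial terms. The stop-word list and thresholds here are illustrative; a real curation engine would also weigh context and cross-document references:

```python
import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "for"}

def derive_tags(text, top_n=10):
    """Suggest candidate tags: the most frequent non-stop-words in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS and len(w) > 2)
    return [word for word, _ in counts.most_common(top_n)]

print(derive_tags("Metadata makes content reusable. Tag metadata early, tag metadata often."))
# ['metadata', 'tag', 'makes', 'content', 'reusable', 'early', 'often']
```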

This intelligently derived index should be made available to any platform that ingests content from OneDrive, SharePoint, Google Docs, etc. These DAMs (or intelligent cloud storage platforms) can leverage this information for searches across the platforms.

Easy to Retrieve the Desired Content, and Repurpose It

Many content creation applications rely heavily on "Recently Accessed Files" lists within the app. If the Information Architecture/taxonomy hierarchy were presented in the "File Open" section, and a user could drill down on selected categories/subcategories (and/or tags), it would be easier to find the desired content.
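A sketch of that drill-down navigation, assuming a simple nested-dictionary taxonomy (the categories and file names are illustrative):

```python
taxonomy = {
    "Projects": {
        "Marketing": ["q3-campaign.docx", "logo.pdn"],
        "Engineering": ["roadmap.xlsx"],
    },
    "Personal": {"Travel": ["itinerary.docx"]},
}

def drill_down(tree, path):
    """Return the subcategories or files found at the given category path."""
    node = tree
    for category in path:
        node = node[category]
    return node

print(drill_down(taxonomy, ["Projects", "Marketing"]))  # ['q3-campaign.docx', 'logo.pdn']
```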

All Eyes on Content Curation: Creation to Archive
  • Content creation products should all focus on the collection of metadata at the time of content creation.
  • Following the blog posting methodology, content creation should happen alongside metadata tagging.
  • Taxonomy searches (categories and tags with hierarchy) should work from within the content creation applications, and from the operating system level, the "original" Digital Asset Management solution (DAM), e.g., MS Windows, macOS.

 

Microsoft Productivity Suite – Content Creation, Ingestion, Curation, Search, and Repurpose

Auto Curation: AI Rules Engine Processing

There are, of course, 3rd-party platforms that perform very well, are feature-rich, and are agnostic to all file types. For example, within a very short period of time, at low cost, and possibly with a few plugins, a WordPress site can be configured and deployed to suit your Digital Asset Management (DAM) needs. The long-term goal is to incorporate techniques such as Auto Curation for any and all files, leveraging an ever-growing intelligent taxonomy: a taxonomy built on user-defined labels/tags, as well as an AI rules engine with ML techniques. OneDrive, as a cloud storage platform, may bridge the gap between JUST cloud storage and a DAM.

Ingestion and Curation Workflow

Content Creation Apps and Auto Curation

  • The ability for content creation applications, such as Microsoft Word, to capture not only the user-defined tags but also the context of the tags relative to the content.
    • When ingesting a Microsoft PowerPoint presentation, after consuming the file, an Auto Curation process can extract "reusable components" of the file, such as the slide header/name and the correlated content, such as a table, chart, or graphic (see the sketch after this list).
    • Ingesting Microsoft Excel workbooks, Auto Curation may yield "reusable components" stored as metadata tags and their correlated content, such as chart and table names.
    • Ingesting and auto-curating Microsoft Word documents may build a classic index of the most frequently occurring words, augmenting the manually user-defined tags in the file.
    • Ingestion of photos [and videos] into an intelligent cloud storage platform, during the Auto Curation process, may identify commonly identifiable objects, such as trees or people. These objects would be automatically tagged through the Auto Curation process after ingestion.
  • The ability to extract the content file's metadata (objects and text tags) into a standard format that can be read by DAMs, or by intelligent cloud storage platforms with file and metadata search capabilities. Could OneDrive be that intelligent platform?
  • A user can search on a file title or across the manually and auto-curated metadata associated with the file. The DAM or intelligent cloud storage platform provides both sets of search results. "Reusable components" of files are also searchable.
    • For "reusable components" to be parsed out of files as separate entities, a process needs to occur after ingestion's Auto Curation.
  • Content creation applications' user-entry tag/text fields should have "drop-down" access to the search index populated with automatically and manually created tags.
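As a sketch of the PowerPoint case above, the open-source python-pptx library can correlate each slide's title with the tables, charts, and graphics it contains. The output format is illustrative, not a defined standard:

```python
from pptx import Presentation
from pptx.enum.shapes import MSO_SHAPE_TYPE

def extract_components(path):
    """List each slide's reusable components, keyed by the slide's title."""
    components = []
    prs = Presentation(path)
    for index, slide in enumerate(prs.slides, start=1):
        title_shape = slide.shapes.title
        title = title_shape.text if title_shape is not None else f"Slide {index}"
        for shape in slide.shapes:
            kind = {MSO_SHAPE_TYPE.TABLE: "table",
                    MSO_SHAPE_TYPE.CHART: "chart",
                    MSO_SHAPE_TYPE.PICTURE: "graphic"}.get(shape.shape_type)
            if kind:
                components.append({"slide": title, "component": kind})
    return components

print(extract_components("quarterly-review.pptx"))  # file name is illustrative
```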

Auto Curation and Intelligent Cloud Storage

  • The intelligence of Auto Curation should be built into the cloud storage platform, e.g., potentially OneDrive.
  • At a minimum, Auto Curation should update the cloud storage platform's indexing engine to correlate files and metadata.
  • Auto Curation is the "secret sauce" that "digests" the content to automatically build the search engine index, which contains identified objects (e.g., tags and text, or coordinates).
    • Auto Curation may leverage a rules engine (AI) and apply user-configurable rules, such as "keyword density" thresholds (sketched below).
    • Artificial intelligence and machine learning rules may be applied to the content to derive additional labels/tags.
  • If leveraging the version control of the intelligent cloud storage platform, each iteration should re-index the content and update the Auto Curation metadata tags. User-created tags are untouched.
  • If no user-defined labels/tags exist upon ingestion, the user may be prompted for tags.
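A minimal sketch of one such user-configurable rule, a "keyword density" threshold that decides whether a word earns an auto-curated tag (the threshold value is illustrative):

```python
def keyword_density(text, keyword):
    """Fraction of the document's words that match the keyword."""
    words = text.lower().split()
    return words.count(keyword.lower()) / max(len(words), 1)

def density_rule(text, keyword, threshold=0.02):
    """Rule: promote `keyword` to an auto-curated tag when density >= threshold."""
    return keyword_density(text, keyword) >= threshold
```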

Auto Curation and “3rd Party” Sources

In the context of sources such as a Twitter feed, there is as yet no incorporation of feeds into intelligent cloud storage. OneDrive, as intelligent cloud storage, may import feeds from 3rd-party sources, with each Tweet defined as an object that is searchable along with its metadata (e.g., likes; tags).

Operating System, Intelligent Cloud Storage/DAM

The intelligent cloud storage and DAM solutions should have integrated search capabilities, so that at the OS level (mobile or desktop), discovery of content through an OS search of tagged metadata is possible.

Current State

  1. OneDrive has no ability to search Microsoft Word tags.
  2. The UI for all productivity tools must have a comprehensive yet simple design for leveraging an existing taxonomy for manual tagging, plus the ability to add hints for Auto Curation.
    1. Currently, Microsoft Word has two fields to collect metadata about the file, obscurely found in the "Save As" dialog.
      1. The "Save As" dialog box allows a user to add tags and authors, but only in the MS Word desktop version. The online (cloud) version of Word has no such option when saving to Microsoft OneDrive cloud storage.
  3. Auto Curation (artificial intelligence, AI) must inspect the MS productivity suite tools and extract tags automatically, which does not happen today.
  4. No manual tagging or Auto Curation/facial recognition exists for photos.

Hostess with the Mostest – Apple Siri, Amazon Alexa, Microsoft Cortana, Google Assistant

Application Integration Opportunities:

  • Microsoft Office, Google G Suite, Apple iWork
    • Advice is integrated within the application, proactively and reactively: when searching in Microsoft Edge, a blinking circle representing Cortana is illuminated. Cortana says, "I've collected similar articles on this topic." If selected, it presents 10 similar results in a right panel to help you find what you need.
  • Personal Data Access and Management
    • The user can vocally access their personal data and make modifications to that data, e.g., add entries to their calendar, and retrieve the current day's agenda.

Platform Capabilities: Mobile Phone Advantage

Strengthen core telephonic capabilities where the competition, Amazon and Microsoft, is relatively weak.

  • Ability to record conversations and push/store the content in the cloud, e.g., iCloud. A cloud serverless recording mechanism dynamically tags a conversation with "keywords", creating an index into the conversation. Users may search a recording and play back audio clips spanning 10 seconds before and after each tagged occurrence (sketched below).
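A sketch of that keyword index over a transcribed recording, where each entry keeps the timestamp (in seconds) at which a word was spoken; the transcript data is illustrative:

```python
from collections import defaultdict

def build_index(transcript):
    """transcript: (timestamp_seconds, word) pairs from a speech-to-text pass."""
    index = defaultdict(list)
    for ts, word in transcript:
        index[word.lower()].append(ts)
    return index

def playback_clips(index, keyword, pad=10):
    """Clip boundaries spanning `pad` seconds before/after each occurrence."""
    return [(max(0, ts - pad), ts + pad) for ts in index.get(keyword.lower(), [])]

index = build_index([(42, "invoice"), (130, "invoice"), (200, "deadline")])
print(playback_clips(index, "invoice"))  # [(32, 52), (120, 140)]
```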
Calls into the User's Smartphone May Interact Directly with the Digital Assistant
  • Call screening – the digital assistant asks for the name of the caller, the purpose of the call, and whether the matter is "urgent".
    • A generic "purpose" response, or a list of caller purpose items, can be supplied to the caller, e.g., 1) Schedule an Appointment.
    • The smartphone's user would receive the caller's name and purpose as a message back in the UI while the call is in a "hold" state.
    • The smartphone user may decide to accept the call, or reject the call and send the caller to voice mail.
  • A caller may ask to schedule a meeting with the user, and the digital assistant may access the user's calendar to determine availability. The digital assistant may schedule a "tentative" appointment within the user's calendar.
    • If the calendar indicates availability, a "tentative" meeting will be entered. The smartphone user would have a list of tasks from the assistant, one of which is to "affirm" the availability of the scheduled meetings.
  • If a caller would like to know the address of the smartphone user's office, the digital assistant may access a database of "generally available" information and provide it. The smartphone user may use applications like Google Keep, and any note tagged with an "Open Access" label may be accessible to any caller.
  • Custom business workflows may be triggered through the smartphone, such as "Pay by Phone". When a caller calls a business user's smartphone, the call goes to "voice mail" or the "digital assistant" based on the smartphone user's configuration. If the caller reaches the "digital assistant", there may be a list of options to perform, such as a "Request for Service" appointment. The caller would navigate, through voice recognition, one of many workflows defined by the smartphone user.

Platform Capabilities: Mobile Multimedia

Access may come either through your mobile smartphone or through a portable speaker with voice recognition (VR).

  • Streaming media / music to portable device based on interactions with Digital Assistant.
  • Menu to navigate news relevant to you, with the digital assistant reading articles through your portable media device (without a UI)

Third Party Partnerships: Adding User Base, and Expanding Capabilities

These take the form of platform apps (abstraction) or 3rd-party APIs that integrate into the digital assistant, allowing users to directly execute application commands, e.g., play the Spotify song "My Way" by Frank Sinatra.

  • Any "skill set" with specialized knowledge: direct Q&A or instructional guidance, e.g., home improvement, cooking
  • eCommerce Personalized Experience – Amazon
  • Home Automation – doors, thermostats
  • Music – Spotify
  • Navigate Set Top Box (STB) – e.g. find a program to watch
  • Video on Demand (VOD) – e.g. set to record entertainment

 

Apache NiFi on Hortonworks HDF Versus … Microsoft Flow?

Attended a technical discussion last night on Apache NiFi and Hortonworks HDF,  a Meetup @ Honeywell, a Hortonworks client.

Excellent presentations from the Hortonworks team on "NiFi on HDF" solutions architecture and best practices: a powerful solution to process and distribute data in real time, any data, in large quantities, and with resiliency. It's no wonder the US NSA originally developed the ability to consume data in real time, manipulate it, and then send it on its way. Recognizing the commercial applications (benevolent wisdom?), the NSA released the product as open-source software via its technology transfer program.

As a tangent, among other things, I'm currently exploring the capabilities of "Microsoft Flow", which has recently been promoted to GA from its "Preview" release. One resonating question came to mind during the presentations last night:

At its peak maturity (not yet reached), can Microsoft Flow successfully compete with Apache NiFi on Hortonworks HDF?

Discussion Points:

  • The NiFi / HDF solution manages data flows in real time. The Microsoft Flow architecture seems to fall short in this capacity. Is it on the product road map for Flow? Is it a capability Microsoft wants to have?
  • There is a fair amount of architecture / infrastructure on the Hortonworks HDF side that enables the solution as a whole to ingest, process, and push the data in real time. I'm not sure Microsoft Flow is currently engineered on the back end to handle that throughput.
  • The current Microsoft Flow UI may need to be updated to handle this ‘slightly altered’ paradigm of real-time content consumption and distribution.

Comparing Microsoft Flow with NiFi on HDF may be a huge stretch.

Cloud Serverless Computing: Why? and With Whom?

What is Cloud Serverless Computing?

Based on your application use case(s), a cloud serverless computing architecture may reduce ongoing costs for application usage and provide scalability on demand, without the cloud server instance management overhead, i.e., costs and effort.
Note: "Cloud serverless computing" is used interchangeably with Functions as a Service (FaaS), which makes sense from a developer's standpoint: they are coding functions (or methods), and that's the level of abstraction.

Microsoft Flow

 

Microsoft Flow Pricing

As listed below, there are three tiers, including a free tier for personal use or for exploring the platform for your business. The paid Flow plans seem ridiculously inexpensive given what business workflow designers receive for 5 USD or 15 USD per month. Microsoft Flow has abstracted building workflows so almost anyone can build application workflows, or automate manual business workflows, leveraging almost any of the popular applications on the market.

It doesn't seem that 3rd-party [data] connector and template creators receive any direct monetary value from the Microsoft Flow platform, although workflow designers and business owners may be swayed to purchase 3rd-party product licenses to use their core technology.

Microsoft Flow Pricing

Microsoft Azure Functions

Process events with a serverless code architecture.  An event-based serverless compute experience to accelerate development. Scale based on demand and pay only for the resources you consume.
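As a concrete illustration of the model, here is a minimal sketch of an HTTP-triggered Azure Function in Python; the function runs only when invoked, so you pay per execution rather than for an idle server. The binding configuration (function.json) is omitted, and the greeting logic is purely illustrative:

```python
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    # Executes on demand; no server instance to provision or manage.
    name = req.params.get("name", "world")
    return func.HttpResponse(f"Hello, {name}!")
```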

Google Cloud Serverless

Properly designed microservices have a single responsibility and can independently scale. With traditional applications being broken up into hundreds of microservices, traditional platform technologies can lead to a significant increase in management and infrastructure costs. Google Cloud Platform's serverless products mitigate these challenges and help you create cost-effective microservices.

Google Serverless Application Development

 

Google Serverless Analytics and Machine Learning

 

Google Serverless Use Cases

 

Amazon AWS Lambda

AWS provides a set of fully managed services that you can use to build and run serverless applications. You use these services to build serverless applications that don’t require provisioning, maintaining, and administering servers for backend components such as compute, databases, storage, stream processing, message queueing, and more. You also no longer need to worry about ensuring application fault tolerance and availability. Instead, AWS handles all of these capabilities for you, allowing you to focus on product innovation and get faster time-to-market. It’s important to note that Amazon was the first contender in this space with a 2014 product launch.

IBM Bluemix OpenWhisk

Execute code on demand in a highly scalable serverless environment.  Create and run event-driven apps that scale on demand.

  • Focus on essential event-driven logic, not on maintaining servers
  • Integrate with a catalog of services
  • Pay for actual usage rather than projected peaks

The OpenWhisk serverless architecture accelerates development as a set of small, distinct, and independent actions. By abstracting away infrastructure, OpenWhisk frees members of small teams to rapidly work on different pieces of code simultaneously, keeping the overall focus on creating user experiences customers want.

What’s Next?

Serverless Computing is a decision that needs to be made based on the usage profile of your application.  For the right use case, serverless computing is an excellent choice that is ready for prime time and can provide significant cost savings.

There's an excellent article, published July 16th, 2017 by Moshe Kranc, called "Serverless Computing: Ready for Prime Time", which at a high level can help you determine whether your application is a candidate for serverless computing.


See Also:
  1. “Serverless computing architecture, microservices boost cloud outlook” by Mike Pfeiffer
  2. “What is serverless computing? A primer from the DevOps point of view” by J Steven Perry

Applying Artificial Intelligence & Machine Learning to Data Warehousing

Protecting the Data Warehouse with Artificial Intelligence

Teleran is a middleware company whose software monitors and governs OLAP activity between the data warehouse and business intelligence tools, like Business Objects and Cognos. Teleran's suite of tools encompasses a comprehensive analytical and monitoring solution called iSight. In addition, Teleran has a product, iGuard, that leverages artificial intelligence and machine learning to impose real-time query and data access controls. The architecture also allows Teleran's agent to run on a different host than the database, for additional security and to avoid consuming resources on the database host.

Key Features of iGuard:
  • Policy engine prevents “bad” queries before reaching database
  • Patented rule engine resides in-memory to evaluate queries at database protocol layer on TCP/IP network
  • Patented rule engine prevents inappropriate or long-running queries from reaching the data
70 Customizable Policy Templates
SQL Query Policies
  • Create policies using policy templates based on SQL syntax (a toy illustration follows this list):
    • Require JOIN to security table
    • Column combination restriction – e.g., prevents combining customer name and social security #
    • Table JOIN restriction – e.g., prevents joining two different tables in the same query
    • Equi-literal compare requirement – tightly constrains the query, e.g., prevents hunting for sensitive data by requiring an '=' condition
    • DDL/DCL restrictions (Create, Alter, Drop, Grant)
    • DQL/DML restrictions (Select, Insert, Update, Delete)
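To make the first template concrete, here is a toy illustration of the policy idea (not Teleran's engine; the table names are illustrative): block a query that touches a sensitive table unless it JOINs to the security table.

```python
import re

def violates_security_join_policy(sql):
    """True if the query reads the sensitive table without the required JOIN."""
    sql_upper = sql.upper()
    touches_sensitive = "CUSTOMER" in sql_upper
    has_security_join = re.search(r"JOIN\s+SECURITY_TABLE", sql_upper) is not None
    return touches_sensitive and not has_security_join

print(violates_security_join_policy("SELECT name FROM customer"))  # True -> block
```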
Data Access Policies

Blocks access to sensitive database objects

  • By user or user groups and time of day (shift) (e.g. ETL)
    • Schemas
    • Tables/Views
    • Columns
    • Rows
    • Stored Procs/Functions
    • Packages (Oracle)
Connection Policies

Blocks connections to the database

  • White list or black list by
    • DB User Logins
    • OS User Logins
    • Applications (BI, Query Apps)
    • IP addresses
Rule Templates Contain Customizable Messages

Each of the "policy templates" has the ability to send the user querying the database a customized message based on the defined policy. The message back to the user from Teleran should be seamless within the application user's experience.

iGuard Rules Messaging

 

Machine Learning: Curbing Inappropriate, or Long Running Queries

iGuard has the ability to analyze all of the historical SQL passed through to the data warehouse and suggest new, customized policies to cancel queries with certain SQL characteristics. The Teleran administrator sets parameters, such as rows or bytes returned, and then runs the induction process. New rules are suggested for queries that exceed these defined parameters. The induction engine is "smart" enough to look at the repository of queries holistically, and not make determinations based on a single query.
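A sketch of that induction idea, scanning a historical query log and suggesting a policy only when enough queries exceed the administrator's thresholds; the field names and threshold values are illustrative, not Teleran's actual format:

```python
def suggest_policies(query_log, max_rows=1_000_000, min_offenders=25):
    """Suggest a cancel-query rule only if the log shows a holistic pattern."""
    offenders = [q for q in query_log if q["rows_returned"] > max_rows]
    if len(offenders) >= min_offenders:
        return [f"Cancel queries returning more than {max_rows} rows "
                f"({len(offenders)} historical queries would have matched)"]
    return []  # a single outlier query is not enough to suggest a rule
```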

Finally, here is a high-level overview of the implementation architecture of iGuard. For sales or pre-sales technical questions, please visit www.teleran.com.

Teleran Logical Architecture

 

Currently Featured Clients
Teleran Featured Clients

 

Amazon and Microsoft Drinking their own AI Chatbot Champagne?

A relatively new medium of support for businesses, from small shops to global conglomerates, is becoming available in the form of exciting yet embryonic chatbot / digital agent services. Amazon and Microsoft, among others, are diving into this transformative space. The coat of paint is still wet on Amazon Lex and Microsoft Cortana Skills. The MSFT Cortana Skills Kit is not yet available to all developers, but it has been opened to a select set of partners, enabling them to expand Cortana's core knowledge set. Microsoft's Bot Framework is in a "Preview" phase. However, the possibilities are extensive, such as another tier of support for both of these companies, if they turn on their own knowledge repositories using their respective digital agent [chatbot] platforms.

Approach from Inception to Deployment

  • The curation and creation of knowledge content may start with the definition of "goals/intents" and the correlated human utterances that trigger the goal's question-and-answer (Q&A) dialog format: the classic use case (a minimal intent-matching sketch follows this list). The answer to a question may include text, images, and video.
  • Taking goals/intents and utterances to "the next level" involves creating and implementing Process Workflows (PW). A workflow may contain many possible paths for the user to reach their goal from a single triggering utterance. Workflows look very similar to what you might see in a Visio diagram, with multiple logical paths. Instead of presenting users with the answer based upon the single human utterance, the question, the workflow navigates the user through a narrative to:
    • disambiguate the initial human utterance and get a better understanding of the specific user goal/intention. The user's question to the digital agent may have a degree of ambiguity, and workflows enable the AI digital agent to determine the goal through an interactive dialog/inspection. The larger the volume of knowledge, and the closer together the goals/intentions, the more the implementation requires disambiguation.
    • hold an interactive conversation/dialog with the AI digital agent, walking through a process step by step, with text, images, and video inline with the conversation. The AI chat agent may pause the "directions", waiting for the human counterpart to proceed.
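A minimal sketch of the goal/intent-and-utterance model these platforms share; the intents, utterances, and matching logic here are illustrative (real platforms such as Amazon Lex use trained language models, not substring matching):

```python
INTENTS = {
    "CheckOrderStatus": {
        "utterances": ["where is my order", "order status"],
        "answer": "Your order ships tomorrow.",
    },
    "StoreHours": {
        "utterances": ["when are you open", "store hours"],
        "answer": "We are open 9am-6pm, Monday through Saturday.",
    },
}

def match_intent(utterance):
    """Naive matcher: return the first intent whose sample utterance appears."""
    text = utterance.lower()
    for intent, spec in INTENTS.items():
        if any(sample in text for sample in spec["utterances"]):
            return intent
    return None  # ambiguous -> hand off to a disambiguation workflow

print(match_intent("Hi, when are you open today?"))  # StoreHours
```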

Future Opportunities:

  • Amazon to provide billing and implementation / technical support for AWS services through a customized version of its own AWS Lex service? All the code used to provide this digital agent / chatbot might be "open source" for those looking to implement similar [enterprise] services.
  • Digital Agent may allow the user to share their screen, OCR the current section of code from an IDE, and perform a code review on the functions / methods.
  • Microsoft has an "Online Chat" capability for MSDN. I'm not sure how extensive the capability is, or whether it's a true 1:1 chat, which they claim is a 24/7 service. Microsoft has libraries of content from Microsoft Docs, MSDN, and TechNet. If the MSFT Bot Framework can ingest their own articles, users may be able to trigger these goals/intents from utterances, similar to searching for knowledge base articles today.
  • Abstraction, abstraction, abstraction. These AI chatbot/digital agents must move toward wizards for building and deployment, and away from coding, elevating this technology to be configurable by a business user. Solutions have significant possibilities for small companies, and this technology needs to reach their hands. It seems Amazon Lex is well on its way to achieving wizard-driven creation / distribution, but it has a way to go. I'm not sure whether the back-end process execution, e.g., AWS Lambda, will be abstracted any time soon.

Beyond Google Search of Personal Data – Proactive, AI Digital Assistant 

As per a previous post, "Google Searches Your Personal Data (Calendar, Gmail, Photos), and Produces Consolidated Results", why can't the Google Assistant take advantage of the same data sources?

Google may attempt to leapfrog their Digital Assistant competition by taking advantage of their ability to search against all Google products.  The more personal data a Digital Assistant may access, the greater the potential for increased value per conversation.

As a first step, Google's "Personal" search tab in its Search UI has access to Google Calendar, Photos, and Gmail data. No doubt other Google products are coming soon.

Big benefits come not just from letting the consumer search through their personal Google data, but from providing that consolidated view to the AI Assistant. Does the Google [Digital] Assistant already have access to Google Keep data, for example? Is providing Google's "Personal" search results a dependency for broadening the Digital Assistant's access and usage? If so, these interactions are most likely based on a reactive model, rather than proactive dialogs, i.e., the Assistant initiating the conversation with the human.

Note: The “Google App” for mobile platforms does:

“What you need, before you ask. Stay a step ahead with Now cards about traffic for your commute, news, birthdays, scores and more.”

I'm not sure how proactive the Google AI is built to be, but most likely it's barely scratching the surface of what's possible.

Modeling Personal, AI + Human Interactions

The model starts from N accessible data sources, searches for actionable data points, correlates these data points with others, and then escalates to the human as a dynamic or predefined Assistant Consumer Workflow (ACW). The proactive AI digital assistant initiates human contact to engage in commerce without otherwise being triggered by the consumer.

Actionable data point correlations can trigger multiple goals in parallel. However, the execution of goal-based rules would need to be managed: the consumer doesn't want to be bombarded with AI Assistant suggestions, but at the same time, "choice" opportunities may be appropriate, as the Google [mobile] app has implemented with "Cards" of bite-size data, consumable from the UI at the user's discretion.

As an ongoing "background" AI/ML process, the digital assistant's server-side agent may derive correlations between one or more data source records to get a deeper perspective on the person's life, and potentially be proactive about providing input to the consumer's decision-making process.

Bass Fishing Trip

For example,

  • The proactive Google Assistant may suggest booking your annual fishing trip soon: an elevated interaction with the consumer/user.
  • The Assistant may search Gmail records referring to an annual fishing trip "last year" in August: an AI background, server-side parameter/profile search. This uses a predefined Assistant Consumer Workflow (ACW) in the "Annual Events" category, building workflows that are "predefined" for a core set of goals/rules.
  • The AI Assistant may search the user's photo archive on the server side. Any photo metadata could be garnered from the search, including date/time stamps, abstracted to include "season" of year and other synonym tags.
  • Photos from around "August" may be earmarked for Assistant use.
  • Photos may be geo-tagged, e.g., Lake Champlain, which is known for its fishing.
  • All objects in an image may be stored as image metadata. Using image object recognition against all photos in the consumer's repository, goal/rule execution may occur against pictures from last August; the Assistant may identify the "fishing buddies" posing with a huge bass.
  • In addition to the Assistant suggesting the booking, Google's Assistant may bring up "highlighted" photos from the last fishing trip to "encourage" the person to take the trip (a correlation sketch follows this list).
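A sketch of the correlation step behind that scenario: find last August's photos tagged "fishing" and, if any exist, raise a proactive suggestion card. The data fields and card format are illustrative:

```python
from datetime import date

def proactive_trip_suggestion(photos, month=8, tag="fishing"):
    """Correlate photo metadata into a proactive 'card' for the user."""
    matches = [p for p in photos if p["date"].month == month and tag in p["tags"]]
    if matches:
        return {"card": "Time to book your annual fishing trip?",
                "highlights": [p["url"] for p in matches[:3]]}
    return None  # no correlation found; stay quiet rather than bombard the user

photos = [{"date": date(2016, 8, 14), "tags": ["fishing", "lake"], "url": "photo1.jpg"}]
print(proactive_trip_suggestion(photos))
```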

In this type of interaction, the Assistant has the ability to proactively "coerce" and influence the human decision-making process. Building these interactive models of communication, and the "management" process to govern the AI Assistant, is within reach.

Predefined Assistant Consumer/User Workflows (ACW) may be created by third parties, such as travel agencies, or by industry groups, such as foods: the "low-hanging fruit", easy-to-implement "time to get more milk" scenario. Then again, food may not be the best place to start, i.e., Amazon Dash.

 

Using Google to Search Personal Data: Calendar, Gmail, Photos, and …

On June 16th, 2017, this post was reviewed for relevant updates.

As reported by The Verge on May 26th, Google adds a new Personal tab to search results to show Gmail and Photos content.

Google seems to be rolling out a new feature in search results that adds a “Personal” tab to show content from [personal] private sources, like your Gmail account and Google Photos library. The addition of the tab was first reported by Search Engine Roundtable, which spotted the change earlier today.

I've been very vocal about a Google federated search, specifically across the user's data sources, such as Gmail, Calendar, and Keep. Although Google doesn't seem to have implemented federated search across all of a user's Google data sources yet, they've picked a few data sources and started up the mountain.

It seems Google is rolling out this capability iteratively and, as with Agile/Scrum, taking slices of deliverables to get user feedback.

Search Engine Roundtable's coverage didn't indicate that Google has publicly announced this effort; perhaps Google is waiting for more substance, and more stick time.

As initially reported by Search Engine Roundtable, the Gmail results appear in a single-column text output with links to the content, in this case email.

Google Personal Search Results – Gmail

The "Personal" search output appears in this sequence:

  • Agenda (Calendar)
  • Photos
  • Gmail

Each of the three app data sources displayed in the "Personal" search enables the user to drill down into the displayed records, e.g., a specific email.

Google Personal Search Results – Calendar

Group Permissions – Searching

Providing users the ability to search across varied Google repositories (shared calendars, photos, etc.) will enable both business teams and families (e.g., Apple's family iCloud sharing) to collaborate and share more seamlessly. At present, Cloud Search, part of G Suite by Google Cloud, offers search across team/org digital assets:

Use the power of Google to search across your company’s content in G Suite. From Gmail and Drive to Docs, Sheets, Slides, Calendar, and more, Google Cloud Search answers your questions and delivers relevant suggestions to help you throughout the day.

 

Learn More? Google Help

Click here to learn more in "Search results from your Google products". At this time, according to this Google post:

You can search for information from other Google products like Gmail, Google Calendar, and Google+.


Dear Google [Search] Product Owner,

I request Google Docs and Google Keep be in the next data sources to be enabled for the Personal search tab.

Best Regards,

Ian

 

Microsoft to Release AI Digital Agent SDK Integration with Visio and Deploy to Bing Search

Build and deploy a business AI digital assistant with the ease of building Visio diagrams, or "business process workflows". In addition, advanced Visio workflows offer external integration, enabling the workflow to retrieve information from external data sources, e.g., SAP CRM or Salesforce.

As a business Digital Agent subscriber, Microsoft Bing search results would contain the business' AI digital assistant created using Visio. The "Chat" link would invoke the business' custom digital agent. The agent has the ability to answer business questions or lead the user through "complex" workflows. For example, the user may ask whether a particular store has an item in stock, and then place the order from the search results, with a "small" transaction fee to the business. The digital assistant may be hosted with MSFT / Bing or on an external server. Surfacing the digital assistant in search results pushes the transaction to the top of the stack.

Bing Digital Chat Agent

Leveraging its existing technologies, Microsoft could leap into the custom AI digital assistant business, using Visio to design business process workflows and Bing for promotional placement and visibility. Microsoft could charge the business for the digital agent implementation and/or usage licensing.

  • The SDK for Visio that empowers the business user to build business process workflows with ease may have a low- to no-cost monthly license as part of MSFT's cloud pricing model.
  • Microsoft may charge the business a "per chat interaction" fee, either per chat or in bundles with volume discounts.
  • In addition, any revenue generated from the AI digital assistant may be subject to transaction fees by Microsoft.

Why not use Microsoft's Cortana or Google's AI Assistant? Using a "white label" version of an AI assistant enables the user to interact with an agent of the business listed in search, and that agent has business-specific knowledge. The "white label" AI digital agent is also empowered to perform any automation processes integrated into the user-defined business workflows. Examples include:

  • basic knowledge, such as store hours of operation
  • more complex assistance, such as walking a [prospective] client through a process like "How to Sweat Copper Pipes". Many "how to" articles and videos already exist on the Internet through blogs or YouTube. The AI digital assistant, as "curator of knowledge", may recommend existing content or provide its own.
  • proprietary information disclosed in a narrative using the AI digital agent, e.g., "My order number is 123456B. What is the status of my order?"
  • actions, such as employee referrals, e.g., "I spoke with Kate Smith in the store, and she was a huge help finding what I needed. I would like to recommend her." Or: "I would like to re-order my 'favorite' shampoo with my details on file." Frequent patrons may reorder a "named" shopping cart.

Escalation to a human agent is also a feature.  When the business process workflow dictates, the user may escalate to a human in ‘real-time’, e.g. to a person’s smartphone.

Note: As of yet, Microsoft representatives have made no comment relating to this article.