Although I’ve been a huge fan of PlanningPoker.com since 2011, my Scrum product team had more than five members, and the site’s free membership allows up to 5 users. The team I was working with had just started its agile transformation and was trying out the aspects of Agile/Scrum it wanted to adopt. It wasn’t ready to invest in Planning Poker for estimation quite yet, so I went looking and stumbled across a free estimation add-on for Azure DevOps.
Microsoft’s Azure DevOps solution is both a code repository and a requirements repository in one. Requirements are managed from an Agile perspective, through a Product Backlog of user stories. The user story backlog item type contains a field called “Story Points”, which is sometimes configured as “Effort”.
Ground Rules – 50k Overview
All team members select from a predetermined relative-effort scale, such as tee-shirt sizes (XS, S, M, L, XL) or the Fibonacci sequence (0, 1/2, 1, 2, 3, 5, 8, 13, 21, 34…). Each member’s selection stays hidden until the facilitator decides to expose/flip all of them at once. Flipping at once helps remove natural biases, such as matching the tech lead’s selection. The team then discusses and normalizes the values into an agreed selection, such as the average.
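The ground rules above amount to a simple algorithm: collect hidden votes, flip them all at once, then normalize. A minimal sketch in Python, assuming the Fibonacci-style scale and using the nearest scale value to the average as the agreed estimate (the normalization rule is up to the team):

```python
# Illustrative planning-poker round; not tied to any specific tool.
SCALE = [0, 0.5, 1, 2, 3, 5, 8, 13, 21, 34]

def reveal_and_normalize(votes):
    """Flip all hidden votes at once, then snap the team average
    back onto the estimation scale as the agreed value."""
    avg = sum(votes.values()) / len(votes)
    agreed = min(SCALE, key=lambda v: abs(v - avg))
    return votes, agreed

votes = {"dev1": 3, "dev2": 5, "dev3": 5, "lead": 8}
_, agreed = reveal_and_normalize(votes)
print(agreed)  # 5 (average 5.25 snaps to the nearest scale value)
```

In practice the team would discuss outliers before accepting the snapped value; the averaging here is just one possible normalization.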
Integration with Azure DevOps
The interesting thing about this estimation tool is that you can select stories right from the backlog to run the effort estimation process on, and in turn, once the team agrees upon a value, it can be committed to the user story in the backlog. No jumping between user stories, updating and saving field values; everything is performed from the effort estimation tool.
Voice mail is so LAST century. It’s a static communications interface for handling your incoming phone calls, a dinosaur of a communications protocol. Yes, a digital assistant or chatbot should “field” your incoming calls, providing your callers a higher level of service.
Business or Personal?
Why not both? There are use cases which highlight the value of a Digital Assistant answering your phone calls when you’re unavailable.
Trusted Friends and Business Pins
The level of available services may change based upon the caller’s trusted access, such as:
Friends Seeking Your Availability for a Hockey Game Next Week
Business Partners Sharing Access to Information, Such as Invoices
Untrusted Caller Access
The Vetting of Unsolicited Calls, such as robocalls
Defining and Default Dialogs
Users can define dialogs through drag-and-drop workflow diagram tools, making it easy to “build” conversation/dialog flows. In addition, out-of-the-box flows can show administrators opportunities and help them discover the ways an AI digital assistant may be leveraged.
Canned/default dialog templates that handle the most common dialogs/workflows will empower users to implement rapidly.
Any Acquisitions in the Pipeline?
Are the big names in the digital assistant space looking to partner with, or acquire, tools that can easily transform workflows to be leveraged by digital assistants?
Are the components available for third-party product companies to extend mobile OS capabilities today? Or are the mobile OS vendors the only ones in a position to perform these upgrades?
Many applications that enable users to create their own content, from word processing to graphics/image creation, have typically relied upon third-party Content Management Solution (CMS) / Digital Asset Management (DAM) platforms to collect metadata describing assets upon ingestion. Many of these platforms have been “stood up” to support projects/teams, either for collaboration on an existing project or for reuse of assets on “other” projects. As a person constantly creating content, where do you “park” your digital resources for archiving and reuse? Your local drive, cloud storage, or nowhere at all?
Average “Jane” / “Joe” Digital Authors
If I were asked for all the content I’ve created around a particular topic or group of topics from all my collected/ingested digital assets, it may be a herculean search effort spanning multiple platforms. As an independent creator of content, I may have digital assets ranging from Microsoft Word documents, Google Sheets spreadsheets, Twitter tweets, Paint.Net (.pdn) Graphics, Blog Posts, etc.
Capturing Content from Microsoft Office Suite Products
Many of the MS Office content creation products, such as Microsoft Word, have minimal capacity to capture metadata, and where the ability exists, it’s buried in the application. In MS Word, for example, if a user selects “Save As”, they can add “Authors” and “Tags”. In the latest version of Microsoft Excel, the author of a workbook can add properties such as Tags and Categories. It’s not clear how this data is utilized outside the application, e.g. whether the tag data is searchable after being uploaded/ingested by OneDrive.
Blog Posts: High Visibility into Categorization and Tagging
A “blogging platform”, such as WordPress, places the category and tagging selection fields directly alongside the content being posted. This UI/UX enforces a specific mentality around the creation, categorization, and tagging of content: the structure constantly reminds the author to identify the content so others may find and consume it. Blog post content is created to be consumed by a wide audience of interested viewers based on the tags and categories selected.
Proactive Categorization and Tagging
Perpetuate content classification through drill-down navigation of a derived Information Architecture taxonomy. As a lightweight example, in the Tags field when editing a WordPress post, once a user types a few characters, an auto-complete dropdown appears, letting them select one or more previously used tags. An excellent starting point for other content creation apps.
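The auto-complete behavior described above is straightforward to sketch. A minimal, illustrative prefix match over previously used tags (no WordPress internals assumed):

```python
def suggest_tags(prefix, existing_tags):
    """Return previously used tags matching what the author has
    typed so far, mimicking an auto-complete dropdown."""
    p = prefix.lower()
    return sorted(t for t in existing_tags if t.lower().startswith(p))

tags = {"azure", "agile", "ai", "estimation", "scrum"}
print(suggest_tags("a", tags))  # ['agile', 'ai', 'azure']
```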
Users creating blog posts can define a parent/child hierarchy of categories, and the author may select one or more relevant categories to associate with the post.
Artificial Intelligence (AI) Derived Tags
It wouldn’t be a post without mentioning AI. Integrated into applications that enable user content creation could be a tool that, at a minimum, automatically derives an “index” of words, or tags. The way in which this “intelligent index” is derived may be based upon:
number of times a word occurs
mention of words in a particular context
reference of the same word(s) or phrases in other content
defined by the same author, and/or across the platform.
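The first signal in the list, word occurrence, can be sketched as a crude index builder. The stop-word list and cutoff are illustrative assumptions:

```python
import re
from collections import Counter

# A tiny, illustrative stop-word list; a real index would use more.
STOP_WORDS = {"the", "a", "an", "and", "of", "to", "in", "is"}

def derive_index(text, top_n=5):
    """Derive a crude 'intelligent index' from word occurrence
    counts, the first signal listed above."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    return [w for w, _ in counts.most_common(top_n)]

print(derive_index("the cat sat on the mat and the cat slept"))
```

The other signals (context, cross-document reference) would layer on top of this frequency baseline.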
This intelligently derived index of data should be made available to any platforms that ingest content from OneDrive, SharePoint, Google Docs, etc. These DAMs (or Intelligent Cloud Storage platforms) can leverage this information for searches across the platforms.
Easy to Retrieve the Desired Content, and Repurpose It
Many content creation applications heavily rely on “Recently Accessed Files” within the app. If the Information Architecture/taxonomy hierarchy were presented in the “File Open” section, and a user could drill down on selected categories/subcategories (and/or tags), it might be easier to find the desired content.
All Eyes on Content Curation: Creation to Archive
Content creation products should all focus on the collection of metadata at the time of their creation.
Using the Blog Posting methodology, the creation of content should be alongside the metadata tagging
Taxonomy (categories and tags, with hierarchy) searches should be available both from within the content creation applications and at the operating system level, the “original” Digital Asset Management (DAM) solution, e.g. MS Windows, Mac.
There are, of course, 3rd party platforms that perform very well, are feature-rich, and are agnostic to all file types. For example, within a very short period of time, at low cost, and possibly with a few plugins, a WordPress site can be configured and deployed to suit your Digital Asset Management (DAM) needs. The long-term goal is to incorporate techniques such as Auto Curation into any/all files, leveraging an ever-growing intelligent taxonomy, a taxonomy built on user-defined labels/tags, as well as an AI rules engine with ML techniques. OneDrive, as a cloud storage platform, may bridge the gap between JUST cloud storage and a DAM.
Content Creation Apps and Auto Curation
Content creation applications, such as Microsoft Word, should capture not only the user-defined tags but also the context of those tags relative to the content.
When ingesting a Microsoft PowerPoint presentation, after consuming the file, an Auto Curation process can extract “reusable components” of the file, such as the slide header/name and its correlated content, like a table, chart, or graphic.
Ingesting Microsoft Excel and Auto Curation of Workbooks may yield “reusable components” stored as metadata tags, and their correlated content, such as chart and table names.
Ingesting and Auto Curation of Microsoft Word documents may build a classic Index for all the most frequently occurring words, and augment the manually user-defined tags in the file.
Ingestion of photos [and videos] into an Intelligent Cloud Storage platform, during the Auto Curation process, may identify commonly identifiable objects, such as trees or people. These objects would be automatically tagged through the Auto Curation process after ingestion.
Ability to extract the content file metadata, objects and text tags, to be stored in a standard format to be extracted by DAMs, or Intelligent Cloud Storage Platforms with file and metadata search capabilities. Could OneDrive be that intelligent platform?
A user can search on a file’s title or throughout the manually and auto-curated metadata associated with the file; the DAM or Intelligent Cloud Storage platform provides both in its search results. “Reusable components” of files are also searchable.
For “reusable components” to be parsed out of files as separate entities, a process needs to occur after ingestion’s Auto Curation.
In content creation applications, user-entry tag/text fields should have “drop-down” access to the search index populated with auto- and manually created tags.
Auto Curation and Intelligent Cloud Storage
The intelligence of Auto Curation should be built into the Cloud Storage Platform, e.g. potentially OneDrive.
At a minimum, auto curation should update the cloud storage platform indexing engine to correlate files and metadata.
Auto Curation is the ‘secret sauce’ that “digests” the content to automatically build the search engine index, which contains identified objects (e.g. tag and text, or coordinates).
Auto Curation may leverage a rules engine (AI) and apply user configurable rules such as “keyword density” thresholds
Artificial Intelligence, Machine Learning rules may be applied to the content to derive additional labels/tags.
If leveraging version control of the intelligent cloud storage platform, each iteration should “re-index” the content, and update the Auto Curation metadata tags. User-created tags are untouched.
If no user-defined labels/tags exist upon ingestion, the user may be prompted for tags.
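As a toy illustration of the user-configurable “keyword density” rule mentioned above, the sketch below tags any sufficiently long word whose share of the text exceeds a threshold. The threshold and length cutoff are invented for the example:

```python
def auto_curate(text, density_threshold=0.05):
    """Apply a 'keyword density' rule: any word whose share of the
    total word count exceeds the threshold becomes an auto tag.
    Threshold and minimum word length are illustrative."""
    words = text.lower().split()
    total = len(words)
    tags = set()
    for w in set(words):
        if words.count(w) / total >= density_threshold and len(w) > 3:
            tags.add(w)
    return tags

print(auto_curate("fishing trip fishing gear fishing rod", 0.25))
# {'fishing'}
```

A real rules engine would combine several such rules and merge the results with any user-defined tags, leaving the latter untouched.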
Auto Curation and “3rd Party” Sources
For sources such as a Twitter feed, there is currently no incorporation of feeds into an Intelligent Cloud Storage platform. OneDrive, as Intelligent Cloud Storage, may import feeds from third-party sources, with each Tweet defined as an object that is searchable along with its metadata (e.g. likes, tags).
Operating System, Intelligent Cloud Storage/DAM
The Intelligent Cloud Storage and DAM solutions should have integrated search capabilities, so on the OS (mobile or desktop) level, the discovery of content through the OS search of tagged metadata is possible.
OneDrive has no ability to search Microsoft Word tags
The UI for all Productivity Tools must have a comprehensive and simple design for leveraging an existing taxonomy for manual tagging, and the ability to add hints for auto curation
Currently, Microsoft Word has two fields for collecting metadata about the file, obscurely found in the “Save As” dialog.
The “Save As” dialog box allows a user to add tags and authors, but only when using the MS Word desktop version. The online (cloud) version of Word has no such option when saving to Microsoft OneDrive cloud storage.
Auto Curation (artificial intelligence, AI) must inspect the MS productivity suite tools and extract tags automatically, a capability which does not exist today.
No manual tagging or Auto Curation/facial recognition exists.
Advice is integrated within the application, both proactively and reactively: when searching in Microsoft Edge, a blinking circle representing Cortana illuminates. Cortana says, “I’ve collected similar articles on this topic.” If selected, it presents 10 similar results in a right panel to help you find what you need.
Personal Data Access and Management
The user can vocally access their personal data and make modifications to it, e.g. add entries to their calendar, or retrieve the current day’s agenda.
Platform Capabilities: Mobile Phone Advantage
Strengthen core telephonic capabilities, where the competition, Amazon and Microsoft, is relatively weak.
Ability to record conversations, and push/store the content in the cloud, e.g. iCloud. A cloud serverless recording mechanism dynamically tags a conversation with “keywords”, creating an index into the conversation. Users may search recordings and play back audio clips spanning 10 seconds before and after each tagged occurrence.
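The keyword index and clip-playback idea can be sketched as follows; the index format (keyword mapped to the seconds at which it was spoken) is an assumption for illustration:

```python
def find_clips(index, keyword, pad=10):
    """Given an index of keyword -> timestamps (seconds), return
    playback windows 'pad' seconds before and after each hit."""
    return [(max(0, t - pad), t + pad) for t in index.get(keyword, [])]

# Hypothetical index built by the serverless tagging mechanism.
index = {"invoice": [5, 124], "deadline": [300]}
print(find_clips(index, "invoice"))  # [(0, 15), (114, 134)]
```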
Calls into the User’s Smartphones May Interact Directly with the Digital Assistant
Call Screening – The digital assistant asks for the name of the caller, purpose of the call, and if the matter is “Urgent”
A generic “purpose” response, or a list of caller purpose items can be supplied to the caller, e.g. 1) Schedule an Appointment
The smartphone user would receive the caller’s name and purpose as a message in the UI while the call is in a ‘hold’ state.
The smartphone user may decide to accept the call, or reject the call and send the caller to voice mail.
A caller may ask to schedule a meeting with the user, and the digital assistant may access the user’s calendar to determine availability. The digital assistant may schedule a ‘tentative’ appointment within the user’s calendar.
If the calendar indicates availability, a ‘tentative’ meeting is entered. The smartphone user would receive a list of tasks from the assistant, one of which is to ‘affirm’ the availability of the scheduled meetings.
If a caller would like to know the address of the smartphone user’s office, the Digital Assistant may access a database of “generally available” information, and provide it. The Smartphone user may use applications like Google Keep, and any note tagged with a label “Open Access” may be accessible to any caller.
Custom business workflows may be triggered through the smartphone, such as “Pay by Phone”. When a caller calls a business user’s smartphone, the call goes to “voice mail” or the “digital assistant” based on the smartphone user’s configuration. If the caller reaches the “digital assistant”, there may be a list of options to perform, such as a “Request for Service” appointment. Through voice recognition, the caller would navigate one of the many workflows defined by the smartphone user.
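The screening flow described above, ask for name and purpose, hold the call, and let the user accept or reject, can be modeled as a small state machine. Class and method names are invented for illustration:

```python
class CallScreen:
    """Toy model of the digital assistant's call-screening flow."""

    def __init__(self, caller_name, purpose, urgent=False):
        self.caller_name = caller_name
        self.purpose = purpose
        self.urgent = urgent
        self.state = "hold"  # caller waits while the user decides

    def summary(self):
        """Message pushed to the smartphone UI while the call holds."""
        flag = " [URGENT]" if self.urgent else ""
        return f"{self.caller_name}: {self.purpose}{flag}"

    def decide(self, accept):
        """User accepts the call or routes the caller to voice mail."""
        self.state = "connected" if accept else "voicemail"
        return self.state

call = CallScreen("Pat", "Schedule an Appointment", urgent=True)
print(call.summary())      # Pat: Schedule an Appointment [URGENT]
print(call.decide(False))  # voicemail
```

A real implementation would add the calendar lookup and custom workflow branches described above as further states.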
Platform Capabilities: Mobile Multimedia
Either through your mobile smartphone, or through a portable speaker with voice recognition (VR):
Streaming media / music to portable device based on interactions with Digital Assistant.
Menu to navigate relevant (to you) news, and Digital Assistant to read articles through your portable media device (without UI)
Third Party Partnerships: Adding User Base, and Expanding Capabilities
In the form of platform apps (abstraction), or 3rd party APIs which integrate into the Digital Assistant, allowing users to directly execute application commands, e.g. Play Spotify song, My Way by Frank Sinatra.
Any “Skill Set” with specialized knowledge: direct Q&A or instructional guidance – e.g Home Improvement, Cooking
eCommerce Personalized Experience – Amazon
Home Automation – doors, thermostats
Music – Spotify
Navigate Set Top Box (STB) – e.g. find a program to watch
Video on Demand (VOD) – e.g. set to record entertainment
Excellent presentations from the Hortonworks team on “NiFi on HDF” solution architecture and best practices. It’s a powerful solution to process and distribute data in real time, any data, in large quantities, and with resiliency. It’s no wonder the US NSA originally developed the ability to consume data in real time, manipulate it, and then send it on its way. Recognizing the commercial applications (benevolent wisdom?), the NSA released the product as open-source software via its technology transfer program.
As a tangent, among other things, I’m currently exploring the capabilities of “Microsoft Flow”, which has recently been promoted to GA from its ‘Preview’ release. One resonating question came to mind during the presentations last night:
At its peak maturity (not yet reached), can Microsoft Flow successfully compete with Apache NiFi on Hortonworks HDF?
The NiFi / HDF solution manages data flows in real-time. The Microsoft Flow architecture seems to fall short in this capacity. Is it on the product road map for Flow? Is it a capability Microsoft wants to have?
There is a bit of architecture/infrastructure on the Hortonworks HDF side that enables the solution as a whole to ingest, process, and push data in real time. I’m not sure Microsoft Flow is currently engineered on the back end to handle the throughput.
The current Microsoft Flow UI may need to be updated to handle this ‘slightly altered’ paradigm of real-time content consumption and distribution.
The comparison between Microsoft Flow and NiFi on HDF may, admittedly, be a huge stretch.
Serverless computing is a cloud computing code execution model in which the cloud provider fully manages starting and stopping virtual machines as necessary to serve requests, and requests are billed by an abstract measure of the resources required to satisfy the request, rather than per virtual machine per hour. Despite the name, it does not actually involve running code without servers: serverless computing is so named because the business or person that owns the system does not have to purchase, rent, or provision servers or virtual machines for the back-end code to run.
Based on your application Use Case(s), Cloud Serverless Computing architecture may reduce ongoing costs for application usage, and provide scalability on demand without the Cloud Server Instance management overhead, i.e. costs and effort.
Note: “Cloud Serverless Computing” is used interchangeably with Functions as a Service (FaaS), which makes sense from a developer’s standpoint: they are coding functions (or methods), and that is the level of abstraction.
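To make the FaaS abstraction concrete, here is a minimal handler in the AWS Lambda style; the event shape is an assumption, since each provider defines its own:

```python
def handler(event, context=None):
    """The provider starts a worker, invokes this function per
    request, and bills for the execution rather than for an idle
    server. The 'event' payload shape here is illustrative."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

print(handler({"name": "serverless"}))
```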
Create automated workflows between apps and services to get notifications, synchronize files, collect data, and more. Although not the traditional serverless computing implementation, it’s the quickest way to perform application services without having to procure application servers. Depending on your microservice (connector + template) definitions, you may not need to write a single line of code; it can all be done through the Flow console.
Connectors are “enablers” to connect to [data] sources in order to extract or insert data, typically one Connector per service, such as Twitter.
Templates utilize connectors and enable workflow designers to build business process workflows. The manufactured workflows execute their activities either event-trigger driven or ad hoc, via manual execution through the portal or the Microsoft Flow mobile apps.
154 service connectors exist. Several “premium” connectors require a nominal monthly fee (5 USD). For example, the Oracle Database connector empowers the workflow designer to insert, update, select, and delete rows in a table.
Automating business processes by designing workflows to turn repetitive tasks into multi-step workflows
Microsoft Flow Pricing
As listed below, there are three tiers, including a free tier for personal use or for exploring the platform for your business. The paid Flow plans seem ridiculously inexpensive based on what business workflow designers receive for 5 USD or 15 USD per month. Microsoft Flow has abstracted workflow building so almost anyone can build application workflows, or automate manual business workflows, leveraging almost any of the popular applications on the market.
It doesn’t seem that third-party [data] connector and template creators receive any direct monetary value from the Microsoft Flow platform, although workflow designers and business owners may be swayed to purchase third-party product licenses for the use of their core technology.
Properly designed microservices have a single responsibility and can independently scale. With traditional applications being broken up into hundreds of microservices, traditional platform technologies can lead to a significant increase in management and infrastructure costs. Google Cloud Platform’s serverless products mitigate these challenges and help you create cost-effective microservices.
AWS provides a set of fully managed services that you can use to build and run serverless applications. You use these services to build serverless applications that don’t require provisioning, maintaining, and administering servers for backend components such as compute, databases, storage, stream processing, message queueing, and more. You also no longer need to worry about ensuring application fault tolerance and availability. Instead, AWS handles all of these capabilities for you, allowing you to focus on product innovation and get faster time-to-market. It’s important to note that Amazon was the first contender in this space with a 2014 product launch.
Execute code on demand in a highly scalable serverless environment. Create and run event-driven apps that scale on demand.
Focus on essential event-driven logic, not on maintaining servers
Integrate with a catalog of services
Pay for actual usage rather than projected peaks
The OpenWhisk serverless architecture accelerates development as a set of small, distinct, and independent actions. By abstracting away infrastructure, OpenWhisk frees members of small teams to rapidly work on different pieces of code simultaneously, keeping the overall focus on creating user experiences customers want.
Serverless Computing is a decision that needs to be made based on the usage profile of your application. For the right use case, serverless computing is an excellent choice that is ready for prime time and can provide significant cost savings.
Protecting the Data Warehouse with Artificial Intelligence
Teleran is a middleware company whose software monitors and governs OLAP activity between the data warehouse and business intelligence tools, like Business Objects and Cognos. Teleran’s suite of tools encompasses a comprehensive analytical and monitoring solution called iSight. In addition, Teleran has a product, iGuard, that leverages artificial intelligence and machine learning to impose real-time query and data access controls. The architecture also allows Teleran’s agent to sit on a different host from the database, for additional security and to avoid consuming resources on the database host.
Key Features of iGuard:
Policy engine prevents “bad” queries before reaching database
Patented rule engine resides in-memory to evaluate queries at database protocol layer on TCP/IP network
Patented rule engine prevents inappropriate or long-running queries from reaching the data
70 Customizable Policy Templates
SQL Query Policies
Create policies using policy templates based on SQL Syntax:
Require JOIN to Security Table
Column Combination Restriction – Ex. Prevents combining customer name and social security #
Table JOIN restriction – Ex. Prevents joining two different tables in same query
Equi-literal Compare requirement – Tightly Constrains Query. Ex. Prevents hunting for sensitive data by requiring an '=' condition
By user or user groups and time of day (shift) (e.g. ETL)
Blocks connections to the database
White list or black list by
DB User Logins
OS User Logins
Applications (BI, Query Apps)
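A few of the policy ideas above, a login white list, a required JOIN to a security table, and a restricted column combination, can be sketched as simple checks. All table, column, and user names are hypothetical; this is not Teleran's actual rule syntax:

```python
def check_query(sql, db_user, allowed_users=("etl_svc", "bi_app")):
    """Evaluate a query against illustrative policy templates and
    return the list of violations (empty means the query may pass)."""
    violations = []
    s = sql.lower()
    if db_user not in allowed_users:
        violations.append(f"login '{db_user}' not on white list")
    if "join security_acl" not in s:
        violations.append("missing JOIN to security table")
    if "customer_name" in s and "ssn" in s:
        violations.append("restricted column combination: name + SSN")
    return violations
```

In the real product these rules sit in-memory at the database protocol layer, canceling bad queries before they reach the data; the sketch only shows the rule-evaluation idea.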
Rule Templates Contain Customizable Messages
Each of the “Policy Templates” can send the user querying the database a customized message based on the defined policy. The message back from Teleran should be seamless to the application user’s experience.
Machine Learning: Curbing Inappropriate, or Long Running Queries
iGuard can analyze all of the historical SQL passed through to the data warehouse and suggest new, customized policies to cancel queries with certain SQL characteristics. The Teleran administrator sets parameters, such as rows or bytes returned, and then runs the induction process. New rules are suggested for queries that exceed these defined parameters. The induction engine is “smart” enough to look at the repository of queries holistically, and it does not make determinations based on a single query.
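The induction idea, suggest a policy only when a pattern of oversized results repeats across the query history, might look like this sketch. The thresholds and the query-log shape are illustrative, not Teleran defaults:

```python
def induce_policies(query_log, max_rows=100_000, min_occurrences=3):
    """Scan the query history holistically and suggest a policy only
    for SQL patterns that repeatedly exceed the row threshold, never
    from a single offending query."""
    offenders = {}
    for q in query_log:
        if q["rows_returned"] > max_rows:
            key = q["sql_pattern"]  # e.g. a normalized table/column set
            offenders[key] = offenders.get(key, 0) + 1
    return [p for p, n in offenders.items() if n >= min_occurrences]

log = [{"sql_pattern": "SELECT * FROM sales", "rows_returned": 500_000}] * 3
log += [{"sql_pattern": "SELECT id FROM dim_date", "rows_returned": 10}]
print(induce_policies(log))  # ['SELECT * FROM sales']
```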
A relatively new medium of support for businesses, from small shops to global conglomerates, is becoming available based on the exciting yet embryonic chatbot / digital agent services. Amazon and Microsoft, among others, are diving into this transformative space. The coat of paint is still wet on Amazon Lex and Microsoft Cortana Skills. The MSFT Cortana Skills Kit is not yet available to all developers but has been opened to a select set of partners, enabling them to expand Cortana’s core knowledge set. Microsoft’s Bot Framework is in its “Preview” phase. However, the possibilities are extensive, such as another tier of support for both of these companies if they turn on their own knowledge repositories using their respective digital agent [chatbot] platforms.
Approach from Inception to Deployment
The curation and creation of knowledge content may begin with the definition of ‘goals/intents’ and their correlated human utterances, which trigger the goal’s question-and-answer (Q&A) dialog format: the classic use case. The answer to the question may include text, images, and video.
Taking goals/intents and utterances to ‘the next level’ involves creating/implementing Process Workflows (PW). A workflow may contain many possible paths for the user to reach their goal from a single triggered utterance. Workflows look very similar to what you might see in a Visio diagram, with multiple logical paths. Instead of presenting users with an answer based upon a single human utterance, the question, the workflow navigates users through a narrative to:
disambiguate the initial human utterance, and get a better understanding of the specific user goal/intention. The user’s question to the digital agent may have a degree of ambiguity, and workflows enable the AI digital agent to determine the goal through an interactive dialog/inspection. The larger the volume of knowledge, and the closer together the goals/intentions, the more the implementation will require disambiguation.
hold an interactive conversation/dialog with the AI digital agent to walk through a process step by step, including text, images, and video inline with the conversation. The AI chat agent may pause the ‘directions’, waiting for the human counterpart to proceed.
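A toy utterance-to-intent matcher illustrates both the Q&A trigger and the disambiguation step: score each goal/intent by keyword overlap and ask a clarifying question on a tie. Purely illustrative; the intent names and keyword sets are invented:

```python
def match_intent(utterance, intents):
    """Score each goal/intent by keyword overlap with the user's
    utterance; disambiguate by asking when multiple intents tie."""
    words = set(utterance.lower().split())
    scores = {name: len(words & kw) for name, kw in intents.items()}
    best = max(scores.values())
    candidates = [n for n, s in scores.items() if s == best and s > 0]
    if len(candidates) == 1:
        return candidates[0]
    if candidates:
        return f"clarify: did you mean {' or '.join(sorted(candidates))}?"
    return "fallback"

intents = {
    "reset_password": {"reset", "password", "forgot"},
    "billing_question": {"bill", "invoice", "charge"},
}
print(match_intent("I forgot my password", intents))  # reset_password
```

Production platforms like Lex or the Bot Framework use trained language models rather than keyword overlap, but the goal/utterance/disambiguation structure is the same.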
Could Amazon provide billing and implementation/technical support for AWS services through a customized version of its own AWS Lex service? All the code used to provide this digital agent / chatbot may be ‘open source’ for those looking to implement similar [enterprise] services.
Digital Agent may allow the user to share their screen, OCR the current section of code from an IDE, and perform a code review on the functions / methods.
Microsoft has an ‘Online Chat’ capability for MSDN. I’m not sure how extensive the capability is, or if it’s a true 1:1 chat, which they claim is a 24/7 service. Microsoft has libraries of content from Microsoft Docs, MSDN, and TechNet. If the MSFT Bot Framework can ingest their own articles, users may be able to trigger these goals/intents from utterances, similar to searching for knowledge base articles today.
Abstraction, abstraction, abstraction. These AI chatbot/digital agents must move toward wizards for building and deployment, and stay away from coding, elevating this technology to be configurable by a business user. The solutions have significant possibilities for small companies, and this technology needs to reach their hands. It seems Amazon Lex is well on its way to wizard-driven creation/distribution, but it still has a way to go. I’m not sure whether the back-end process execution, e.g. AWS Lambda, will be abstracted any time soon.
Google may attempt to leapfrog their Digital Assistant competition by taking advantage of their ability to search against all Google products. The more personal data a Digital Assistant may access, the greater the potential for increased value per conversation.
As a first step, Google’s “Personal” Search tab in their Search UI has access to Google Calendar, Photos, and your Gmail data. No doubt other Google products are coming soon.
The big benefits are not just letting the consumer search through their personal Google data, but providing that consolidated view to the AI assistant. Does the Google [Digital] Assistant already have access to Google Keep data, for example? Is providing Google’s “Personal” search results a dependency for broadening the digital assistant’s access and usage? If so, these interactions are most likely based on a reactive model, rather than proactive dialogs, i.e. the assistant initiating the conversation with the human.
“What you need, before you ask. Stay a step ahead with Now cards about traffic for your commute, news, birthdays, scores and more.”
I’m not sure how proactive the Google AI is built to be, but most likely it’s barely scratching the surface of what’s possible.
Modeling Personal, AI + Human Interactions
Starting from N accessible data sources, the assistant searches for actionable data points, correlates those data points to others, and then escalates to the human via a dynamic or predefined Assistant Consumer Workflow (ACW). Proactively, the AI digital assistant initiates human contact to engage in commerce without otherwise being triggered by the consumer.
Actionable data point correlations can trigger multiple goals in parallel. However, the execution of goal based rules would need to be managed. The consumer doesn’t want to be bombarded with AI Assistant suggestions, but at the same time, “choice” opportunities may be appropriate, as the Google [mobile] App has implemented ‘Cards’ of bite size data, consumable from the UI, at the user’s discretion.
As an ongoing ‘background’ AI / ML process, Digital Assistant ‘server side’ agent may derive correlations between one or more data source records to get a deeper perspective of the person’s life, and potentially be proactive about providing input to the consumer decision making process.
The proactive Google Assistant may suggest to book your annual fishing trip soon. Elevated Interaction to Consumer / User.
The assistant may search Gmail records referring to an annual fishing trip ‘last year’ in August: an AI background, server-side parameter/profile search, driven by a predefined Assistant Consumer Workflow (ACW) in the “Annual Events” category. This means building workflows ‘predefined’ for a core set of goals/rules.
The AI assistant may search the user’s photo archive on the server side. Any photo metadata could be garnered from the search, including date/time stamps, abstracted to include the ‘season’ of the year, and other synonym tags.
Photos from around ‘August’ may be earmarked for Assistant use
Photos may be geo tagged, e.g. Lake Champlain, which is known for its fishing.
All objects in an image may be stored as image metadata. Using image object recognition against all photos in the consumer’s repository, goal/rule execution may occur against pictures from last August; the assistant may identify the “fishing buddies” posing with a huge bass.
In addition to the Assistant making the suggestion re: booking the trip, Google’s Assistant may bring up ‘highlighted’ photos from last fishing trip to ‘encourage’ the person to take the trip.
In this type of interaction, the assistant has the ability to proactively ‘coerce’ and influence the human decision-making process. Building these interactive models of communication, and the ‘management’ process to govern the AI assistant, is within reach.
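The fishing-trip walkthrough above can be condensed into a sketch of a proactive correlation workflow; the data shapes and matching rule (same month, ‘annual’ in the subject) are invented for illustration:

```python
from datetime import date

def proactive_suggestions(emails, photos, today):
    """Correlate mail mentioning a recurring ('annual') event with
    photo metadata from the same season, and surface a suggestion
    card with highlight photos when that season comes around again."""
    cards = []
    for mail in emails:
        if "annual" in mail["subject"].lower():
            month = mail["date"].month
            related = [p for p in photos if p["date"].month == month]
            if related and today.month == month:
                cards.append({
                    "suggestion": f"Book your {mail['subject']} soon?",
                    "highlight_photos": [p["id"] for p in related],
                })
    return cards
```

A real assistant would use trained models over far richer signals (geo tags, recognized objects), but the escalate-to-human card at the end is the same ACW pattern described above.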
Predefined Assistant Consumer/User Workflows (ACW) may be created by third parties, such as travel agencies, or by industry groups, such as food: “low-hanging fruit” that is easy to implement, like “time to get more milk”. Then again, food may not be the best place to start, i.e. Amazon Dash.