Journey maps are an excellent tool for deriving requirements, as well as for better understanding the customer. Much as a paper-based use case describes an “Actor” moving through a business workflow, a journey map visualizes the customer/user experience. The article below is a primer on the creation and usage of a journey map.
Summary: Journey maps combine two powerful instruments—storytelling and visualization—in order to help teams understand and address customer needs. While maps take a wide variety of forms depending on context and business goals, certain elements are generally included, and there are underlying guidelines that help make them most successful.
What Is a Customer Journey Map?
In its most basic form, journey mapping starts by compiling a series of user goals and actions into a timeline skeleton. Next, the skeleton is fleshed out with user thoughts and emotions in order to create a narrative. Finally, that narrative is condensed into a visualization used to communicate insights that will inform design processes.
Although I’ve been a huge fan of PlanningPoker.com since 2011, my Scrum product team consisted of more than five members, and their free membership allows up to 5 users. The team I was working with had just started their agile transformation and was trying out the aspects of Agile/Scrum they wanted to adopt. They weren’t about to invest in Planning Poker for estimation quite yet, so I went looking and stumbled across an estimation tool offered as a free add-on to Azure DevOps.
Microsoft’s Azure DevOps solution is both a code and requirements repository in one. Requirements are managed from an Agile perspective, through a product backlog of user stories. The user story backlog item type contains a field called “Story Points”, sometimes configured as “Effort”.
Ground Rules – A 50,000-Foot Overview
All team members select from a predetermined relative-effort scale, such as tee-shirt sizes (XS, S, M, L, XL) or the Fibonacci sequence (0, 1/2, 1, 2, 3, 5, 8, 13, 21, 34…). Each member’s selection stays hidden until the facilitator decides to expose/flip all of the team’s selections at once. Flipping at once helps remove natural biases, such as simply matching the tech lead’s selection. After the reveal, there’s a team discussion to normalize the values into an agreed selection, such as the average value.
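As a toy illustration of that flow (the team members, values, and the use of a simple average are all illustrative, not part of any particular tool):

```python
from statistics import mean

# The relative-effort scale the team agreed to use (a Fibonacci variant)
SCALE = [0, 0.5, 1, 2, 3, 5, 8, 13, 21, 34]

def flip(hidden_votes):
    """Expose every member's hidden selection at once, then suggest a
    starting point (the average) for the normalization discussion."""
    assert all(v in SCALE for v in hidden_votes.values()), "pick values from the scale"
    suggestion = mean(hidden_votes.values())
    return hidden_votes, suggestion

# Hypothetical round: selections are collected privately, then flipped together
votes = {"dev_a": 3, "dev_b": 5, "tech_lead": 8}
revealed, suggestion = flip(votes)
print(revealed, "-> discuss and normalize, e.g. around", suggestion)
```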
Integration with Azure DevOps
The interesting thing about this estimation tool is that you can select stories for the effort estimation process right from the backlog, and in turn, once the team agrees upon a value, it can be committed to the user story in the backlog. There is no jumping between user stories to update and save field values; everything is performed from the effort estimation tool.
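For teams that want to script the same “commit the agreed value” step, the Azure DevOps REST API exposes the Story Points field directly. A minimal sketch, assuming a hypothetical organization, project, and personal access token (Scrum-process projects use the Effort field instead):

```python
import requests

ORG = "https://dev.azure.com/your-org"    # hypothetical organization URL
PROJECT = "YourProject"                   # hypothetical project name
PAT = "your-personal-access-token"        # hypothetical PAT with work-item write scope

def set_story_points(work_item_id, points):
    """Write the team's agreed estimate back to a user story's Story Points field."""
    url = f"{ORG}/{PROJECT}/_apis/wit/workitems/{work_item_id}?api-version=6.0"
    patch = [{
        "op": "add",
        # Scrum-process projects expose Microsoft.VSTS.Scheduling.Effort instead
        "path": "/fields/Microsoft.VSTS.Scheduling.StoryPoints",
        "value": points,
    }]
    resp = requests.patch(
        url,
        json=patch,
        headers={"Content-Type": "application/json-patch+json"},
        auth=("", PAT),  # PAT goes in the password slot with a blank username
    )
    resp.raise_for_status()

# Example: commit an agreed value of 5 to work item 1234
# set_story_points(1234, 5)
```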
Serverless computing is a cloud computing code execution model in which the cloud provider fully manages starting and stopping virtual machines as necessary to serve requests, and requests are billed by an abstract measure of the resources required to satisfy the request, rather than per virtual machine, per hour. Despite the name, it does not actually involve running code without servers. Serverless computing is so named because the business or person that owns the system does not have to purchase, rent or provision servers or virtual machines for the back-end code to run.
Based on your application use case(s), a cloud serverless computing architecture may reduce ongoing costs for application usage and provide scalability on demand, without the cloud server instance management overhead, i.e., its costs and effort.
Note: cloud serverless computing is used interchangeably with Functions as a Service (FaaS), which makes sense from a developer’s standpoint: they are coding functions (or methods), and that’s the level of abstraction.
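To make the FaaS abstraction concrete, the unit of deployment is just a function that the provider invokes per event and bills per invocation. A minimal sketch in the style of an AWS Lambda handler behind an HTTP trigger (the event shape shown is the API Gateway proxy format; the greeting itself is just an example):

```python
import json

def handler(event, context):
    """Runs only when an event (e.g., an HTTP request) arrives; no server to manage."""
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```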
Microsoft Flow: create automated workflows between apps and services to get notifications, synchronize files, collect data, and more. Although not a traditional serverless computing implementation, it’s the quickest way to stand up application services without having to procure application servers. Depending on your microservice (connector + template) definitions, you may not need to write a single line of code; it can all be done through the Flow console.
Connectors are “enablers” to connect to [data] sources in order to extract or insert data, typically one Connector per service, such as Twitter.
Templates utilize connectors and enable workflow designers to build business process workflows. The resulting workflows execute their activities either driven by an event trigger or ad hoc, via manual execution through the portal or the Microsoft Flow mobile apps.
At the time of writing, 154 service connectors exist. Several “Premium” connectors require a nominal monthly fee (5 USD). For example, the Oracle Database connector lets the workflow designer insert, update, select, and delete rows in a table.
Automate business processes by designing workflows that turn repetitive tasks into multi-step flows.
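Flows can also be kicked off from outside the designer; one common pattern is a flow built on the “When an HTTP request is received” trigger, which exposes a URL that any script or system can POST to. A hedged sketch; the trigger URL and payload fields below are entirely hypothetical and depend on the JSON schema the flow designer defined:

```python
import requests

# Hypothetical URL copied from a flow's "When an HTTP request is received" trigger
FLOW_TRIGGER_URL = "https://prod-00.westus.logic.azure.com/workflows/<id>/triggers/manual/paths/invoke"

# Field names must match the schema defined on the trigger
payload = {"customer": "Contoso", "status": "new-order"}

resp = requests.post(FLOW_TRIGGER_URL, json=payload, timeout=10)
resp.raise_for_status()   # the flow now runs its connector/template steps
```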
Microsoft Flow Pricing
As listed below, there are three tiers, including a free tier for personal use or for exploring the platform for your business. The paid Flow plans seem ridiculously inexpensive given what business workflow designers receive for 5 USD or 15 USD per month. Microsoft Flow has abstracted workflow building so that almost anyone can build application workflows, or automate manual business workflows, leveraging almost any of the popular applications on the market.
It doesn’t seem like third-party [data] connector and template creators receive any direct monetary value from the Microsoft Flow platform, although workflow designers and business owners may be swayed to purchase third-party product licenses in order to use their core technology.
Properly designed microservices have a single responsibility and can scale independently. With traditional applications being broken up into hundreds of microservices, traditional platform technologies can lead to a significant increase in management and infrastructure costs. Google Cloud Platform’s serverless products mitigate these challenges and help you create cost-effective microservices.
AWS provides a set of fully managed services that you can use to build and run serverless applications. You use these services to build serverless applications that don’t require provisioning, maintaining, and administering servers for backend components such as compute, databases, storage, stream processing, message queueing, and more. You also no longer need to worry about ensuring application fault tolerance and availability. Instead, AWS handles all of these capabilities for you, allowing you to focus on product innovation and get faster time-to-market. It’s important to note that Amazon was the first contender in this space with a 2014 product launch.
Execute code on demand in a highly scalable serverless environment. Create and run event-driven apps that scale on demand.
Focus on essential event-driven logic, not on maintaining servers
Integrate with a catalog of services
Pay for actual usage rather than projected peaks
The OpenWhisk serverless architecture accelerates development by structuring applications as a set of small, distinct, and independent actions. By abstracting away infrastructure, OpenWhisk frees members of small teams to rapidly work on different pieces of code simultaneously, keeping the overall focus on creating the user experiences customers want.
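OpenWhisk actions are exactly such small, independent pieces of code; in Python, an action is a file exposing a main(args) function that returns a dictionary. A minimal sketch (the action name and parameter are illustrative):

```python
# hello.py - a minimal OpenWhisk action: one small, independent unit of logic
def main(args):
    name = args.get("name", "stranger")
    return {"greeting": f"Hello, {name}"}
```

It would be deployed and invoked with the wsk CLI, e.g. `wsk action create hello hello.py` followed by `wsk action invoke hello --result --param name World`.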
Adopting serverless computing is a decision that should be based on your application’s usage profile. For the right use case, serverless computing is an excellent choice that is ready for prime time and can provide significant cost savings.
The ultimate goal, in my mind, is for a search engine to let you upload an image, analyze it, and find comparable images within some degree of variation, as dictated by the search properties. The search engine may also derive metadata from the uploaded image, such as attributes specific to the types of objects in the image; for example, determining whether a person [object] is “Joyful” or “Angry”.
As of the writing of this article, the Yahoo and Microsoft Bing search engines do not have the capability to upload an image, perform image/pattern recognition, and return results. Behold, Google’s search engine has the ability to use some type of pattern matching and find instances of your image across the World Wide Web. From the Google Search home page, select “Images”, or after a text search, select the “Images” menu item. From there, an additional icon appears: a camera with the hint text “Search by Image”. Select the camera icon, and you are presented with options for how Google can acquire your image, e.g., an upload or an image URL.
Select the “Upload an Image” tab, choose a file, and upload. I used a fictional character, Max Headroom. The search results were very good (see below). I also attempted an uncommon shape, and it did not meet my expectations. The poor performance in matching this possibly “unique” shape is most likely due to how the Google image classifier model was defined and the training data used to test the classifier model. If the shape truly is “unique”, the Google Search Image Engine did its job.
Google Image Search Results – Max Headroom
Google Image Search Results – Odd Shaped Metal Object
The Google Search Image Engine was able to “classify” the image as “metal”, so that’s good. However, I would have liked to see better matches under the “Visually Similar Images” section. Again, this is probably due to the image classification process, and potentially the diversity of image samples.
A Few Questions for Google
How often is the classifier modeling process executed (i.e., how often is the classifier trained), and how often is the model tested?
How are new images incorporated into the classifier model?
Are user-uploaded images now included in the model (after model training is run again)?
Is Google Search Image incorporating ALL Internet images into its classifier model(s)?
Is an alternate AI image-recognition process used beyond classifier models?
I’m not sure whether the Cloud Vision API uses the same technology as Google’s Search Image Engine, but it’s worth noting. From the Cloud Vision API starting page, go to the “Try the API” section and upload your image. I tried a number of samples, including my odd-shaped metal object, and I think it performed fairly well on the “labels” (i.e., image attributes).
Using the Google Cloud Vision API to determine whether there were any web matches for my odd-shaped metal object, the search came up with no results. In contrast, Google’s Search Image Engine produced some “similar” web results.
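The same label and web-match checks can be run programmatically with the Cloud Vision API client library (google-cloud-vision). A hedged sketch; the file name is hypothetical, and the client assumes GOOGLE_APPLICATION_CREDENTIALS points at a service-account key:

```python
from google.cloud import vision   # pip install google-cloud-vision

client = vision.ImageAnnotatorClient()

with open("odd-shaped-metal.jpg", "rb") as f:      # hypothetical local image
    image = vision.Image(content=f.read())

# "Labels" are the image attributes shown in the Try-the-API demo
for label in client.label_detection(image=image).label_annotations:
    print(label.description, round(label.score, 2))

# Web detection looks for matching / visually similar images across the web
web = client.web_detection(image=image).web_detection
for page in web.pages_with_matching_images:
    print(page.url)
```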
Finally, I tested the Google Cloud Vision API with a self portrait image. THIS was so cool.
The API brought back several image attributes specific to “Faces”. It attempts to identify certain complex facial attributes, such as emotions, e.g., Joy and Sorrow.
The API brought back the “standard” set of labels, which shows how the classifier identified this image as a “Person” based on features such as Forehead and Chin.
Finally, the Google Cloud Vision API brought back the web references; for example, it identified me as a Project Manager, and picked up an obscure reference to Zurg in my Twitter bio.
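For the face test, the equivalent API call is face detection, which returns the emotion attributes as likelihood values rather than scores. A short sketch along the same lines (the image file name is hypothetical):

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("self-portrait.jpg", "rb") as f:         # hypothetical image
    image = vision.Image(content=f.read())

for face in client.face_detection(image=image).face_annotations:
    # Emotions come back as likelihood enums (VERY_UNLIKELY ... VERY_LIKELY)
    print("joy:", face.joy_likelihood.name)
    print("sorrow:", face.sorrow_likelihood.name)
```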
The Google Cloud Vision API and Google’s own baked-in Search Image Engine are extremely enticing, but they still have a way to go in terms of accuracy. Of course, I tried using my face in the Google Search Image Engine, and the “Visually Similar Images” didn’t retrieve any images of me, or even a distant cousin (maybe?).
Are you ready for a challenge, and 150,000 USD to begin pursuing it?
That’s just SBIR Phase I, Concept Development (~6 months). The second phase, Prototype Development, may be funded at up to 1 MM USD and last 24 months.
The Small Business Innovation Research (SBIR) program is a highly competitive program that encourages domestic small businesses to engage in Federal Research/Research and Development (R/R&D) that has the potential for commercialization. Through a competitive awards-based program, SBIR enables small businesses to explore their technological potential and provides the incentive to profit from its commercialization. By including qualified small businesses in the nation’s R&D arena, high-tech innovation is stimulated and the United States gains entrepreneurial spirit as it meets its specific research and development needs.
The program’s goals are four-fold:
Stimulate technological innovation.
Meet Federal research and development needs.
Foster and encourage participation in innovation and entrepreneurship by socially and economically disadvantaged persons.
Increase private-sector commercialization of innovations derived from Federal research and development funding.
According to CNBC’s “Mad Money” host Jim Cramer, Salesforce was turned off by a more fundamental problem that’s been hurting Twitter for years: trolls.
“What’s happened is, a lot of the bidders are looking at people with lots of followers and seeing the hatred,” Cramer said on CNBC’s “Squawk on the Street,” citing a recent conversation with Benioff. “I know that the haters reduce the value of the company…I know that Salesforce was very concerned about this notion.”
“…Twitter’s troll problem isn’t anything new if you’ve been following the company for a while.”
Anyone with a few neurons will recognize that bots on Twitter are, in some cases, a huge turnoff. I like periodic famous quotes as much as the next person, but bots have been invading Twitter for a long time, and they become a detractor to using the platform. The solution, in fact, is quite easy: reCAPTCHA, a web service that determines whether the user is a human and not a robot. Once a calendar week, Twitter users should be required to complete the quick and easy “I’m not a robot” process, delivered through an integrated reCAPTCHA Twitter DM and/or a “pinned” reCAPTCHA tweet that sticks to the top of their feed.
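On the server side, validating a completed reCAPTCHA is a single call to Google’s siteverify endpoint; how Twitter would wire this into DMs or pinned tweets is, of course, my speculation. A minimal verification sketch (the secret key is a placeholder):

```python
import requests

RECAPTCHA_SECRET = "your-site-secret"   # placeholder: issued by the reCAPTCHA admin console

def is_human(recaptcha_token):
    """Verify the token the reCAPTCHA widget returned for this user."""
    resp = requests.post(
        "https://www.google.com/recaptcha/api/siteverify",
        data={"secret": RECAPTCHA_SECRET, "response": recaptcha_token},
        timeout=5,
    )
    return resp.json().get("success", False)
```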
Additionally, an AI rules engine may identify particular patterns of bot activity, flag them, and force the user to go through the human validation process within 24 hours. If users try to ‘get around’ the bot/human identification process, maybe by tweaking their tweets, Google may employ AI machine-learning algorithms to feed new patterns to the “Bot” rules engine.
Every Twitter user identified as “Human” would have a miniaturized picture of Leonardo da Vinci’s “Vitruvian Man” placed next to the “Verified Account” check mark. Maybe there’s a fig leaf too.
In addition, a user MAY declare that an account IS a bot, and there are certainly valid reasons to utilize bots. Instead of the “Man” icon, Twitter may allow users to pick a bot icon, perhaps a miniaturized Bender, the character from the TV show “Futurama”. Twitter could collect additional information on bots for an enhanced user experience, e.g., categories and subcategories.
reCAPTCHA is owned by Google, so maybe, in some far-out, distant universe, a doppelgänger Google would buy Twitter and either phase out G+ or integrate it with Twitter.
If trolls/bots are such a huge issue, why hasn’t Twitter addressed it? What is Google using to deal with the issue?
The prescribed method seems too easy and cheap to implement, so I must be missing something. Politics maybe? Twitter calling upon a rival, Google (G+) to help craft a solution?
At this stage in the growth and maturity of the AI personal assistant platform, there are many commands and options that common users cannot formulate due to a lack of knowledge and experience. Using natural language to formulate questions has gotten better over the years, but assistance/guidance in formulating requests would maximize intent/goal accuracy.
A key usability feature of many integrated development environments (IDEs) is “Intelligent Code Completion”, which guides programmers to produce correct, functional syntax. This feature also unburdens the programmer from looking up the syntax of each command, saving significant time. As usage of the AI personal assistant grows, and its capabilities along with it, the number of commands and parameters required to use it will also increase.
AI Leveraging Intelligent Command Completion
For each command parameter [level/tree], a drop-down list may appear giving users a set of options for the next parameter. A delimiter such as a period (.) indicates to the AI parser that another set of command options must be presented to the person entering the command. These options typically take the form of drop-down lists concatenated to the right of the command formulated so far. Vocally, parent/child commands and parameters may be supplied in a similar fashion.
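A toy sketch of that period-delimited guidance: a command tree where each “.” asks the parser for the next drop-down of options. The apps and parameters below are illustrative only:

```python
# Illustrative command tree; a real assistant would build this from installed apps
COMMAND_TREE = {
    "Spotify": {"Song": {}, "Playlist": {}},
    "Order": {"Food": {"FavoriteItalianRestaurant": {}}},
}

def complete(partial):
    """Return the next-level options for a dot-delimited partial command."""
    node = COMMAND_TREE
    for part in filter(None, partial.split(".")):
        if part not in node:
            return []            # unknown branch: nothing to suggest
        node = node[part]
    return sorted(node)          # contents of the next drop-down list

print(complete(""))              # ['Order', 'Spotify']
print(complete("Spotify."))      # ['Playlist', 'Song']
```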
AI Personal Assistant Language Syntax
Adding another AI parser on top of the existing syntax parser may allow hierarchical commands to be executed. The AI command examples I have in mind use a hierarchy of commands and parameters to perform a function. One such command leverages one of my contacts and a ‘List123’ object. The ‘List123’ parameter may be a ‘note’ on my smartphone that contains a list of food we would like to order. The command may place the order through my contact’s email address or fax number, or by calling the business’s main number and using AI text-to-speech functionality.
All personal data, such as Favorite Italian Restaurant and Favorite Lunch Special, could be placed in the AI personal assistant’s ‘Settings’. A group of settings may be stored as key-value pairs that serve as shorthand in conversations with the AI assistant.
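A small sketch of how those key-value settings might be applied as shorthand before a request is parsed; the keys come from the example above, and the values are invented placeholders:

```python
# Hypothetical personal-assistant settings stored as key-value pairs
SETTINGS = {
    "Favorite Italian Restaurant": "Luigi's Trattoria",   # placeholder value
    "Favorite Lunch Special": "penne alla vodka",         # placeholder value
}

def expand_shorthand(phrase):
    """Replace recognized shorthand keys with their stored values."""
    for key, value in SETTINGS.items():
        phrase = phrase.replace(key, value)
    return phrase

print(expand_shorthand("Order my Favorite Lunch Special from my Favorite Italian Restaurant"))
```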
A majority of users are most likely unsure of many of the options available within the AI personal assistant’s command structure. Intelligent command [code] completion gives users visibility into the available commands and parameters.
For those without a programming background, intelligent “command” completion is somewhat similar to the autocomplete in Google’s search box, predicting possible choices as the user types. With an AI personal assistant, the user is guided to their desired command, whereas Google’s autocomplete requires some sense of the end-result command. Intelligent code completion typically displays all possible commands in a drop-down list next to the constructor period (.), so the user may have no prior knowledge of the next parameter and still find it in the drop-down list. An additional feature would let the user hover over one of the commands/parameters to show a brief ‘help text’ popup.
Note that Microsoft’s Cortana AI assistant provides a text box in addition to speech input, so another syntax parser could be allowed and enabled through the existing user interface. Siri, however, seems to have only voice recognition input and no text input.
Is Siri handling the iOS ‘Global Search’ requests ‘behind the scenes’? If so, the textual parsing, i.e., the period (.) separator, would work. Siri does provide some cursory guidance on what information the AI may be able to provide: “Some things you can ask me:”
With only voice recognition input, use the Voice Driven Menu Navigation & Selection approach as described below.
Voice Driven, Menu Navigation and Selection
The current AI personal assistant abstraction layer may be too abstract for some users. Consider the difference between these two commands:
Play The Rolling Stones song Sympathy for the Devil.
Has the benefit of natural language and can handle simple tasks, like “Call Mom”.
However, there may be many commands that can be performed by a multitude of installed platform applications.
Spotify.Song.Sympathy for the Devil
Enables the user to select the specific application they would like the task to be performed by.
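Routing the explicit form is straightforward to sketch: the first two segments pick the application and action, and the remainder is the argument. This is my illustration of the idea, not how any existing assistant actually parses input:

```python
def parse_command(command):
    """Split an explicit, dot-delimited command into app, action, and argument."""
    app, action, argument = command.split(".", 2)   # the argument may contain spaces
    return {"app": app, "action": action, "argument": argument}

print(parse_command("Spotify.Song.Sympathy for the Devil"))
# {'app': 'Spotify', 'action': 'Song', 'argument': 'Sympathy for the Devil'}
```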
A voice-driven menu will enable users to understand the capabilities of the AI assistant. Through the use of a voice-interactive menu, users may ‘drill down’ to the action they desire to be performed, e.g., “Press # or say XYZ”.
Optionally, the voice menu, depending upon the application, may have a customer service feature, and forward the interaction to the proper [calling or chat] queue.
Update – 9/11/16
I just installed Microsoft Cortana for iOS, and at a glance, the application has a leg up on the competition:
The Help menu gives a fair number of examples by category; much better guidance than iOS/Siri provides.
The ability to enter/type or speak commands provides the needed flexibility for user input.
Some people are uncomfortable ‘talking’ to their smartphones; it can feel awkward talking to a machine.
The ability to type in commands may alleviate voice-command entry errors from speech-to-text translation.
There is an opportunity to expand the AI syntax parser to include ‘programmatic’ commands, giving the user a more granular command set, e.g., “Intelligent Command Completion”. As the capabilities of the platform grow, it will be a challenge to interface with and maximize the AI personal assistant’s capabilities.
A review of the Microsoft OneDrive cloud repository. It may be an easy tool and service for saving files: if you roughly know what you want to find, most cloud repositories are easy and straightforward to use. Over time, though, if not managed appropriately, the cloud repository becomes burdensome to manage, e.g., to access and find files. If we stay stuck in the “file folder” mentality of organizing our content, our cloud storage solution will quickly become unyielding. Getting into habits like tagging your content should help us access files beyond the “folder borders”. Beyond that, there are huge opportunities to leverage and grow existing platforms, specifically around the process/service of [file] ingestion.
Bulk file loading, e.g., photos from our smartphones; maybe the entire family uploads to the same storage repository.
If performed by the “Ingestion Service”, manual user “tagging” of a group of photos, or of individual images, may be available.
Geotagging may be available either at the time of image capture or upon the start of the “Ingestion Service”.
Facial recognition, compared to the likes of services such as Facebook, is, based on my experience, not readily available in personal cloud storage repositories.
Auto-tagging pictures upon ingestion, if performed, may leverage “Extracted Text” from images; images become searchable with little human intervention.
Cloud File Repository: Storing Content
I modified the “tags” of existing Microsoft Office files; in this case, MS Word and PowerPoint file types were used. I opened the Word file, selected the “File” menu, then “Save As”, then “More Options” under the list of file types. I was then presented with the classic “Save As” form. Just below the “Save as type” list box, there were three “metadata” fields to describe the file:
The first two fields are semicolon (;) delimited, and multiple values are allowed. In this test case, I added “CV;resume;career” to the “Tags” field. I then used the MS Windows Snipping Tool that comes with the OS to document the step, calling the screen capture MSWordTags.PNG and saving it to my OneDrive. Then I saved the document itself on my OneDrive.
Cloud File Repository: Finding Content
I then started up Internet Explorer and went to https://onedrive.live.com to access my cloud content. In the top left corner of the screen, there is a field called “Search Everything”, and I typed in CV.
The search results included ONLY the image screenshot file that contained the letters CV, and not the MS Word file that explicitly had the Tag field with the text value CV.
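The same search can be reproduced programmatically through the Microsoft Graph drive search endpoint, which makes it easy to see exactly which files the index matches for a term. A hedged sketch, assuming an OAuth access token with the Files.Read scope:

```python
import requests

ACCESS_TOKEN = "<bearer token>"   # placeholder: a Microsoft Graph token with Files.Read

resp = requests.get(
    "https://graph.microsoft.com/v1.0/me/drive/root/search(q='CV')",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
for item in resp.json().get("value", []):
    print(item["name"])   # which files OneDrive's index actually matched for "CV"
```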
Looking at the file properties as defined by OneDrive, there was ALSO a field called “Tags” with no values populated. In other words, the cloud “ingestion” service did not read the file’s metadata and abstract it to the cloud level; there are just two separate sets of metadata describing the same file. To view the cloud file data, select the file and click the “i” with a circle around it. There are too many ways to store the same data, which may lead to inconsistent data.
In the cloud file information/properties, the image file had a field called “Extracted Text”, and this is how the search picked up the “CV” value when I searched my cloud files.
Oddly, the MS Word file attributes in OneDrive did not offer “tags” as a field to store metadata in the cloud. The “tags” field was available when looking at the PNG file. However, the user may add a “Description” in a multiline text field. Tags metadata on images but not on MS Word files? Odd.
Future State (?): If the cloud ingestion process can perform an “Extracted Text” step, it may also host other “ingestion services”, such as “Facial Recognition” based on “known good” faces already tagged; e.g., I tag a face from within the OneDrive browser UI, and when other images are ingested, there can be a correlation between the files.
As a business model, are we going to add a tier just after cloud file ingestion, perhaps exercising a third-party suite of cognitive APIs, such as facial recognition? For example, Microsoft OneDrive ingests a file and, if it’s an image, routes it to the appropriate IBM Watson API, which processes the file and returns [updated] metadata and possibly a modified file? Maybe.
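A rough sketch of that kind of post-ingestion hook, routing image files to a cognitive service; everything here (the hook, the stand-in vision call, the metadata shape) is hypothetical:

```python
import mimetypes

def on_file_ingested(path, metadata):
    """Hypothetical hook run just after a file lands in the cloud repository."""
    mime, _ = mimetypes.guess_type(path)
    if mime and mime.startswith("image/"):
        # Stand-in for whichever cognitive API the store integrates (e.g., facial
        # recognition or text extraction); results are folded into the file's metadata.
        metadata.update(call_vision_api(path))
    return metadata

def call_vision_api(path):
    # Placeholder: a real integration would POST the file to a vision endpoint
    return {"tags": [], "extracted_text": ""}

print(on_file_ingested("family-photo.jpg", {"owner": "me"}))
```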
Update: Auto Tagging Objects Upon Ingestion
On an image with no tags, I selected the “Edit tags” menu from the Properties pane on the right side of the screen. In a scrolling menu, the option to “Add existing tag” appeared. There were dozens of tags already created, each with a word, a thumbnail image, and the number of times used. Wow. Awesome. The current implementation seems to automatically identify objects in the image upon ingestion and tag the images with those objects, e.g., Building, Beach, Horse, etc.
Presumption that Microsoft OneDrive performs object recognition on images upon file ingestion into the cloud (as opposed to in the Photos app).
“Extracted Text” metadata field from within the Microsoft OneDrive image (PNG) file properties:
Presumption that Microsoft OneDrive performs OCR on images upon file ingestion into the cloud (as opposed to the Photos app).