Are you adequately prepared for your next litigation? Would going into court with an army of co-counsel make you feel more confident, more prepared? Make sure you bring along the AI Whispering Digital Co-Counsel: a co-counsel that doesn't break a sweat, never gets nervous, and is always prepared. It even takes the opportunity to learn while on the job, via machine learning.
The whispering digital agent advises litigators with "just-in-time" rebuttals citing historical precedent, for example. The Digital Co-Counsel analyzes the dialog within the courtroom to identify 'goals', the intent of the conversation(s), and identifies the current workflow, which may be cross or direct examination, an opening statement, or a closing argument.
Real-time observation of a court case, with advice based on:
Observed dialog between all parties involved in the case, such as opposing counsel, witnesses, and subject matter experts, may trigger "guidance" from the Digital Co-Counsel based on a compound of utterances and the identified workflow.
Court case evidence submitted may be digitized and analyzed based on a [predetermined] combination of identified attributes. This evidence, in turn, may be rebutted by counter-arguments, alternate 'perspectives', or 'evidence' presented in rebuttal.
The introduction of 'bias' toward the opposing counsel.
Implementation of the Digital Co-Counsel may be through a smartphone application, paired with a Bluetooth earpiece used throughout the case.
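The workflow-identification step above can be sketched with a simple keyword-scoring classifier. This is a minimal illustration, not a real implementation; the stage names and trigger phrases are my own assumptions:

```python
# Hypothetical sketch: map courtroom utterances to a workflow stage
# using keyword scoring. Stage names and phrase lists are illustrative
# assumptions, not a real Digital Co-Counsel implementation.

WORKFLOW_KEYWORDS = {
    "opening_statement": {"ladies and gentlemen", "the evidence will"},
    "direct_examination": {"please state your name", "what did you observe"},
    "cross_examination": {"isn't it true", "you testified earlier"},
    "closing_argument": {"in conclusion", "the evidence has shown"},
}

def identify_workflow(utterances):
    """Score each stage by how many of its keyword phrases appear."""
    text = " ".join(u.lower() for u in utterances)
    scores = {stage: sum(phrase in text for phrase in phrases)
              for stage, phrases in WORKFLOW_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"
```

A real system would use a trained intent model rather than keywords, but the shape of the problem, utterances in, workflow stage out, is the same.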
My opinions are my own, and do not necessarily reflect my employer’s viewpoint.
Microsoft Outlook has had an AI email rules engine for years and years, from a simple wizard to an advanced rule-construction user interface. Oh, the things you can do. Based on a wide array of 'out of the box' identifiers and highly customizable conditions, Outlook may take action on the client side of the email transaction or on the server side. What types of actions? Everything from 'out of the box' options to highly customized transactions. And yes, Outlook (in conjunction with MS Exchange) may be considered a digital asset management (DAM) tool.
When email arrives in an inbox, then based on "From", "Subject", the contents of the email, and a long list of other attributes, Outlook [optionally with Exchange] may, for example, push the email and any attached content to a server folder, perhaps Amazon S3, or something as simple as an Exchange folder.
Then, optionally, a 'backend' workflow may be triggered, for example with Microsoft Flow. Where you go from there has almost infinite potential.
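A rule of this kind can be sketched as a small match-and-route function. This is a minimal illustration of the pattern, not Outlook's actual rules engine; the field names, rule shape, and S3 bucket are assumptions:

```python
# Sketch of an Outlook-style rule: if an email matches on sender and
# subject keywords, route its attachments to a destination (a string
# standing in for an S3 bucket or Exchange folder). The rule/email
# field names are illustrative assumptions.

def matches(rule, email):
    return (rule["from"] in email["from"]
            and rule["subject_contains"].lower() in email["subject"].lower())

def apply_rules(rules, email):
    """Return the list of (attachment, destination) routing actions."""
    actions = []
    for rule in rules:
        if matches(rule, email):
            actions += [(att, rule["destination"]) for att in email["attachments"]]
    return actions

rules = [{"from": "invoices@example.com",          # hypothetical sender
          "subject_contains": "invoice",
          "destination": "s3://finance-inbox/"}]   # hypothetical bucket

email = {"from": "invoices@example.com",
         "subject": "Invoice #42",
         "attachments": ["invoice42.pdf"]}
```

A real deployment would hand the resulting actions to a backend workflow (e.g. a Flow) rather than return them.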
Analogously, categorization based on 'some set' of rules, as in Gmail's new Inbox UI, is not new to the industry, but now Google has the ability. For example, "Group By" in Google's new Inbox could be a huge timesaver. Enabling the user to act across predefined email categories, such as deleting all "promotional" emails, could be extremely useful. However, I've not yet seen the AI rules that classify a particular email as "promotional" versus "financial". Google is implying that these 'out of the box' email categories reflect very similar patterns in how users interact with, and act on, each category.
Google may continue to follow in the footsteps of Microsoft, possibly adding the initiation of workflows based on predetermined criteria. Maybe Google will expose its AI (Email) Rules Engine for users to customize their workflows, just as Microsoft did so many years ago.
Although Microsoft's Outlook (and Exchange) may have been seen as a Digital Asset Management (DAM) tool in the past, the user's email inbox/folder size quota could have been identified as one of the few inhibitors. The workaround, of course: service accounts with vastly higher folder quotas.
Aren't AI digital assistants just like search engines? Both try to recognize your question or human utterance as best they can and serve up the requested content, e.g. the classic FAQ. The difference in the FAQ use case is that the proprietary information from the company hosting the digital assistant may not be available on the internet.
Another difference between the digital assistant and a search engine is the assistant's ability to 'guide' a person through a series of questions, enabling elaboration, to provide the user a more precise answer.
The digital assistant may use an interactive dialog to guide the user through a process, not just supply the 'most correct' responses. Many people have flocked to YouTube for this kind of instructional, interactive medium. When multiple workflow paths can be followed, the digital assistant has the upper hand.
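A guided, multi-path dialog like this can be modeled as a small decision tree, where each answer narrows the path until a precise leaf answer is reached. A minimal sketch, with wholly illustrative insurance-FAQ content:

```python
# Sketch: a guided dialog as a decision tree. Each dict node asks a
# question; the user's answer selects the next node until a string
# leaf (the precise answer) is reached. Content is illustrative.

DIALOG = {
    "question": "Is your question about a claim or a policy?",
    "claim": {
        "question": "Do you want status or to dispute a decision?",
        "status": "Claim status is available under My Claims.",
        "dispute": "Disputes require form D-100 within 30 days.",
    },
    "policy": "Policy documents are under My Account > Documents.",
}

def run_dialog(tree, answers):
    """Walk the tree using a list of user answers; return leaf text or next question."""
    node = tree
    for answer in answers:
        if isinstance(node, str):        # already at a leaf
            break
        node = node[answer]
    return node if isinstance(node, str) else node["question"]
```

A search engine flattens this into one query; the assistant's advantage is being able to ask the clarifying question in the middle.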
The digital assistant also has the capability of interfacing with third parties (e.g. data stores with API access). For example, a digital assistant hosted by a medical insurance company might not only check the status of a claim but also send correspondence to a medical practitioner on your behalf. It's a huge pain to call the insurance company, then the doctor's office, then the insurance company again. Even the HIPAA release could be authenticated in real time, inline during the chat. A digital assistant may even be able to create a chat session with multiple participants.
One capability where digital assistants clearly beat search engines is the ability to 'escalate' at any time during the interaction, queuing the person for the next available human agent.
According to CNBC’s “Mad Money” host Jim Cramer, Salesforce was turned off by a more fundamental problem that’s been hurting Twitter for years: trolls.
“What’s happened is, a lot of the bidders are looking at people with lots of followers and seeing the hatred,” Cramer said on CNBC’s “Squawk on the Street,” citing a recent conversation with Benioff. “I know that the haters reduce the value of the company…I know that Salesforce was very concerned about this notion.”
…Twitter’s troll problem isn’t anything new if you’ve been following the company for a while.”
Anyone with a few neurons will recognize that bots on Twitter are, in some cases, a huge turnoff. I like periodic famous quotes as much as the next person, but bots have invaded Twitter for a long time, and they detract from using the platform. The solution is in fact quite easy: reCAPTCHA, a web service that determines whether the user is a human and not a robot. Twitter users should be required to complete the quick and easy "I'm not a robot" process once a calendar week, via an integrated reCAPTCHA Twitter DM and/or a "pinned" reCAPTCHA tweet that sticks to the top of their feed.
Additionally, an AI rules engine may identify particular patterns of bot activity, flag them, and force the user to go through the human-validation process within 24 hours. If users try to 'get around' the bot/human identification process, maybe by tweaking their tweets, Google may employ machine-learning algorithms to feed new patterns back to the "bot" rules engine.
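A sketch of what such a bot-pattern rules engine might look like at its simplest. The signals and thresholds below are arbitrary, illustrative assumptions:

```python
# Illustrative heuristics a rules engine might use to flag bot-like
# accounts for re-verification. Signal names and thresholds are
# arbitrary assumptions, not anything Twitter actually uses.

def flag_bot_like(account):
    """Return the list of triggered rule names for an account dict."""
    flags = []
    if account["tweets_per_day"] > 200:
        flags.append("high_volume")
    if account["duplicate_ratio"] > 0.8:        # share of near-identical tweets
        flags.append("repetitive_content")
    if account["mean_seconds_between_tweets"] < 5:
        flags.append("inhuman_cadence")
    return flags
```

Any account returning a non-empty flag list would be routed to the reCAPTCHA challenge; machine learning would refine the thresholds over time.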
Every Twitter user identified as "human" would get a miniaturized "Vitruvian Man" by Leonardo da Vinci placed next to the "Verified Account" check mark. Maybe there's a fig leaf, too.
In addition, a user MAY declare that an account IS a bot, and there are certainly valid reasons to run bots. Instead of the "Man" icon, Twitter may allow users to pick a bot icon, perhaps a miniaturized Bender from the TV show "Futurama". Twitter could collect additional information on bots for an enhanced user experience, e.g. categories and subcategories.
reCAPTCHA is owned by Google, so maybe, in some far out distant universe, a Doppelgänger Google would buy Twitter, and either phase out or integrate G+ with Twitter.
If trolls/bots are such a huge issue, why hasn’t Twitter addressed it? What is Google using to deal with the issue?
The prescribed method seems too easy and cheap to implement, so I must be missing something. Politics maybe? Twitter calling upon a rival, Google (G+) to help craft a solution?
The AI personal assistant with the "most usage", spanning connectivity across all smart devices, will be the anchor users gravitate to for controlling their 'automated' lives. An Amazon commercial just aired depicting a dad with his daughter, who was crying about her boyfriend, who happened to be in the front yard yelling for her. The dad says to Amazon's Alexa, "sprinklers on," and yes, the boyfriend got soaked.
What is so special about the top spot for the AI personal assistant? Controlling the 'funnel' through which all information is accessed and all actions are taken means the intelligent ability to:
Serve up content / information, which could then be mixed in with advertisements, or ‘intelligent suggestions’ based on historical data, i.e. machine learning.
Proactive, suggestive actions may lead to sales of goods and services. e.g. AI Personal Assistant flags potential ‘buys’ from eBay based on user profiles.
Three main sources of AI Personal Assistant value add:
A portal to the "outside" world. E.g., if I need information, I wouldn't "surf the web"; I would ask Cortana to "research" XYZ. In the business intelligence / data warehousing space, a business analyst may need to run a few queries to get the information they want; in the same vein, Microsoft Cortana may come back to you several times to ask "for your guidance".
An abstraction layer between the user and their apps; The user need not ‘lift a finger’ to any app outside the Personal Assistant with noted exceptions like playing a game for you.
User profiles derived from the first two points, i.e. data collection on everything from spending habits to other day-to-day rituals.
Proactive and chatty assistants may win "Assistant of Choice" on all platforms. Being proactive means collecting data more often than when it's just you asking ad-hoc questions. Proactive, geo-aware AI personal assistants may make "timely, appropriate interruptions" (notifications) based on time and location. E.g. "Don't forget milk," says Siri as you're passing the grocery store. Around the time I leave work, Google Maps tells me whether I'll hit traffic, and my ETA.
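The geo-aware reminder idea reduces to a geofence check: fire a notification when the user's location comes within some radius of a saved place. A minimal sketch using the standard haversine distance formula; the radius, reminder shape, and coordinates are illustrative:

```python
import math

# Sketch of a geo-aware reminder: fire when the user comes within a
# radius of a saved place. Haversine gives great-circle distance on a
# sphere; reminder fields and the 150 m radius are illustrative.

def haversine_m(lat1, lon1, lat2, lon2):
    """Distance in meters between two lat/lon points."""
    r = 6371000  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def due_reminders(reminders, lat, lon, radius_m=150):
    """Return the reminder texts whose saved place is within radius."""
    return [r["text"] for r in reminders
            if haversine_m(lat, lon, r["lat"], r["lon"]) <= radius_m]
```

On a phone, the OS location service would invoke this check on significant location changes rather than polling continuously.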
It's possible for a [non-native] AI personal assistant to become the 'abstraction' layer on top of ANY mobile OS (iOS, Android), the funnel through which all actions/requests are triggered.
Microsoft Cortana has an iOS app and widget, which wraps around the OS. Tighter integration may be possible but is not allowed by iOS, the iPhone, and Apple. Note: Google's Allo does not provide an iOS widget at the time of this writing.
A potential antitrust issue for mobile smartphone maker Apple: iOS should allow the 'substitution' of a competitive AI personal assistant, triggered in the same manner as the native Siri, i.e. the "press and hold the Home button" capability that launches the default packaged iOS assistant.
Reminiscent of the Microsoft IE Browser / OS antitrust violations in the past.
Holding the iPhone Home button brings up Siri. There should be an OS setting to swap which assistant is used as the mobile OS default. Today, the iPhone/iPad iOS only supports Siri under the Settings menu.
ANY AI personal assistant should be allowed to replace the default OS assistant, from Amazon's Alexa and Microsoft's Cortana to any startup with the expertise and resources needed to build and deploy a personal assistant solution. Has Apple taken steps to tightly couple Siri with its iOS?
AI Personal Assistant "Wish List":
Interactive, voice-menu-driven dialog. The AI personal assistant should know which [mobile] apps are installed, as well as their actionable, hierarchical taxonomy of features/functions. The assistant should, for example, ask which application the user wants to use, and if the user doesn't know, the assistant should list the apps verbally/visually. After the user selects an app, the assistant should then provide a list of function choices for that application, e.g. "Press 1 for Play Song."
The interactive voice menu should also provide a level of abstraction when available; e.g. the user need not select an app and can just say "Create Reminder". There may be several applications on the smartphone that do the same thing, such as note taking and reminders. In the OS Settings, under a new 'AI Personal Assistant' menu, the installed applications compatible with this service layer should be listed, grouped by sets of categories defined by the mobile OS.
Capability to interact with IoT using user defined workflows. Hardware and software may exist in the Cloud.
Ever tighter integration with native as well as 3rd party apps, e.g. Google Allo and Google Keep.
Apple could already be making the changes as a natural course of their product evolution. Even if the ‘big boys’ don’t want to stir up a hornet’s nest, all you need is VC and a few good programmers to pick a fight with Apple.
At this stage in the growth and maturity of the AI personal assistant as an application platform, there are many commands and options that common users cannot formulate due to a lack of knowledge and experience.
A key usability feature of many integrated development environments (IDEs) is "intelligent code completion", which guides programmers to produce correct, functional syntax. It also unburdens the programmer from looking up the syntax for each command reference, saving significant time. As usage of the AI personal assistant grows, and its capabilities along with it, the amount of "command and parameter" knowledge required to use the assistant will also increase.
AI Leveraging Intelligent Command Completion
For each command parameter [level/tree], a drop-down list may appear, giving users a set of options for the next parameter. A delimiter such as a period (.) tells the AI parser that another set of command options must be presented to the person entering the command. These options are typically drop-down lists concatenated to the right of the formulated command.
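The dot-delimited completion described above can be sketched as a walk over a command tree, returning the valid options at each level. The app/command hierarchy here is an illustrative assumption:

```python
# Sketch of "Intelligent Command Completion" over a dot-delimited
# command tree. The apps and commands below are illustrative
# assumptions, not a real assistant's taxonomy.

COMMAND_TREE = {
    "Spotify": {"Song": None, "Playlist": None},
    "Reminders": {"Create": None, "List": None},
}

def complete(partial):
    """Given '' or 'App.' or 'App.Cmd', return the valid next-level options."""
    parts = [p for p in partial.split(".") if p]
    node = COMMAND_TREE
    for part in parts:
        if node is None or part not in node:
            return []                    # leaf reached or unknown segment
        node = node[part]
    return sorted(node) if node else []
```

Typing `Spotify.` would surface `Playlist` and `Song` in the drop-down, exactly as an IDE surfaces members after the constructor period.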
AI Personal Assistant Language Syntax
Adding another AI parser on top of the existing syntax parser may allow commands like these to be executed:
These AI command examples use a hierarchy of commands and parameters to perform a function. One of the commands leverages one of my contacts and a 'List123' object. The 'List123' parameter may be a 'note' on my smartphone that contains a list of food we would like to order. The command may place the order through my contact's email address or fax number, or by calling the business's main number using AI text-to-speech functionality.
All personal data, such as Favorite Italian Restaurant and Favorite Lunch Special, could be placed in the AI personal assistant's 'Settings'. A group of settings may be stored as key-value pairs, which act as shorthand in conversations with the assistant.
A majority of users are most likely unaware of many of the options available within the AI personal assistant's command structure. Intelligent command [code] completion gives users visibility into the available commands and parameters.
For those without a programming background, intelligent "command" completion is loosely similar to autocomplete in Google's search box, predicting possible choices as the user types. With an AI personal assistant, the user is guided to their desired command; Google's autocomplete, by contrast, requires some sense of the end-result command. Intelligent code completion typically displays all possible commands in a drop-down list next to the constructor period (.); the user may have no knowledge of the next parameter without that choice list. An additional feature lets the user hover over one of the commands/parameters to show a brief 'help text' popup.
Note: Microsoft's Cortana AI assistant provides a text box in addition to speech input, so another syntax parser could be enabled through the existing user interface. Siri, however, seems to accept only voice input, with no text entry.
Is Siri handling the iOS 'Global Search' requests 'behind the scenes'? If so, the textual parsing, i.e. the period (.) separator, would work. Siri does provide some cursory guidance on what information the AI may be able to provide: "Some things you can ask me:"
With only voice recognition input, use the Voice Driven Menu Navigation & Selection approach as described below.
Voice Driven, Menu Navigation and Selection
The current AI personal assistant abstraction layer may be too abstract for some users. Consider the difference between these two commands:
Play The Rolling Stones song Sympathy for the Devil.
The first has the benefit of natural language, and can handle simple tasks, like "Call Mom". However, many commands can be performed by a multitude of installed platform applications.
Spotify.Song.Sympathy for the Devil
The second enables the user to select the specific application they would like the task to be performed by.
A voice driven menu will enable users to understand the capabilities of the AI Assistant. Through the use of a voice interactive menu, users may ‘drill down’ to the action they desire to be performed. e.g. “Press # or say XYZ”
Optionally, the voice menu, depending on the application, may offer a customer-service feature and forward the interaction to the proper [calling or chat] queue.
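A voice-driven numbered menu of this kind might look like the following sketch, which accepts either the item number or the spoken item name; the menu content is illustrative:

```python
# Sketch of a voice-driven numbered menu ("Press 1 or say Play Song").
# Menu items are illustrative assumptions.

MENU = ["Play Song", "Create Reminder", "Customer Service"]

def prompt(menu):
    """Build the spoken prompt listing every option."""
    return "; ".join(f"Press {i} or say '{item}'" for i, item in enumerate(menu, 1))

def select(menu, user_input):
    """Accept either the item number or the spoken item name; None if no match."""
    if user_input.strip().isdigit():
        idx = int(user_input) - 1
        return menu[idx] if 0 <= idx < len(menu) else None
    matches = [m for m in menu if m.lower() == user_input.strip().lower()]
    return matches[0] if matches else None
```

Selecting "Customer Service" is where the hand-off to a human queue, as described above, would hook in.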
Update – 9/11/16
I just installed Microsoft Cortana for iOS, and at a glance the application has a leg up on the competition.
The Help menu gives a fair number of examples by category, much better guidance than iOS/Siri provides.
The ability to enter\type or speak commands provides the needed flexibility for user input.
Some people are uncomfortable 'talking' to their smartphones; it's awkward talking to a machine.
The ability to type commands may alleviate voice-entry errors in speech-to-text translation.
The opportunity to expand the AI syntax parser to include 'programmatic' commands gives the user a more granular command set, e.g. "Intelligent Command Completion". As the capabilities of the platform grow, it will be a challenge to surface and maximize them.
The 2016 Olympic opening ceremonies had just started, and I thought briefly about events I wanted to see. I’m not a huge fan of the Olympics mostly because of the time commitment. However, if I happen to be in front of the TV when the events are on, depending upon the event, I’m happy to watch, and can get drawn in easily.
As the Olympics unfolded, I caught a few minutes of an event here and there, just by happening to be in front of a TV. Searching for any particular event never crossed my mind, even with the ease and power of search engines like Bing and Google. The widgets built into search engines' results, showing Olympic standings inline with other results, were a great time saver.
However, why oh why didn't the broadcasting network, NBC, create a calendar of 2016 Olympic events that could easily be imported into Google Calendar or Microsoft Outlook? Even Star Trek fans can add a Star Dates calendar to their Google Calendar.
Olympic ratings are hurting? Any one of these organizations could have created a shared calendar for all or a subset of Olympic events. Maybe you just want a calendar that shows all the aquatic events?
Olympic team sponsors, from soda to fast food: why oh why did you paint your consumer goods with pictures of javelin throwers and swimmers, but not put a QR code on the side of your containers directing consumers to your sponsored team's schedule, "importable" into Google Calendar or Microsoft Outlook?
If sponsors or the broadcasting network, NBC, had created these shareable calendars, they would now be inside the personal calendars of consumers. A calendar entry pop-up may not only display the current competition; the body of the event may also contain [URL] links to stream the event live, links to each team player's stats, and other interesting facts relating to the event.
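Such a shareable calendar is just iCalendar (.ics) text, which both Google Calendar and Outlook import natively. A minimal sketch of generating one; the event data below is made up for illustration:

```python
# Sketch: emit iCalendar (.ics) text that Google Calendar or Outlook
# can import. One VEVENT per competition; the sample event, UID, and
# streaming URL are illustrative assumptions.

def make_ics(events):
    lines = ["BEGIN:VCALENDAR", "VERSION:2.0", "PRODID:-//Example//Olympics//EN"]
    for e in events:
        lines += ["BEGIN:VEVENT",
                  f"UID:{e['uid']}",
                  f"DTSTART:{e['start']}",
                  f"DTEND:{e['end']}",
                  f"SUMMARY:{e['summary']}",
                  f"DESCRIPTION:{e['description']}",   # e.g. a streaming link
                  "END:VEVENT"]
    lines.append("END:VCALENDAR")
    return "\r\n".join(lines)            # RFC 5545 uses CRLF line endings

sample = [{"uid": "swim-100m@example.org",            # hypothetical event
           "start": "20160810T210000Z", "end": "20160810T220000Z",
           "summary": "Swimming: 100m Final",
           "description": "Live stream: https://example.org/stream"}]
```

Serve the generated text at a stable URL and users can subscribe to it, so schedule changes propagate automatically.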
Also, if a team sponsor is the one creating the custom calendar for the Olympic events, like USA Swimming's sponsor Marriott, the streamed live events may now be controlled by the sponsor; yes, all advertising during the streaming session would be controlled by the sponsor. All Marriott! The links in the team sponsor's calendar entries may not only carry their own streaming links to the live events but include any feature-rich, relevant related content.
Given the millions sponsors spend, for an IT project that could cost a fraction of the advertising budget and add significant ROI, it boggles the mind why every sponsor isn't doing this, or something similar, right now. The tech is relatively inexpensive and readily available, so why not now? If you know of any implementations, please drop me a note.
One noted exception: the "Google app" [for the iPhone] leverages alerts for all types of things, from a warning on traffic conditions for your ride home to the start of the women's beam gymnastics Olympic event. Select the alert, and it opens a 'micro' portal listing the people competing in the event, with detailed athlete profiles including picture, country of origin, and medals won. There is also a tab showing the event's future schedule.
It looks like Microsoft has created a generic, product-independent workflow platform.
Microsoft has software solutions like MS Outlook, with an [email] rules engine built in. SharePoint has a workflow solution within the SharePoint platform, typically governing the content flowing through its system.
Microsoft Flow is a different animal. It seems like Microsoft has built a ‘generic’ rules engine for processing almost any event. The Flow product:
You start using the product from one of two areas: a) "My Flows", where you may view existing and create new [work]flows; b) "Activity", which shows "Notifications" and "Failures".
Select "My Flows", and the user may "Create [a workflow] from Blank" or "Browse Templates". The existing templates were created by Microsoft and also by third parties, implying a marketplace.
Select "Create from Blank", and the user gets a single drop-down list of events, a culmination of events across internet products. The implication is that any product and event could be "made compatible" with Microsoft Flow.
The drop-down list of events has a "Product – Event" format. As the list of products and events grows, we should see at least two separate drop-down lists: one for products, and a sub-list of each product's events.
Several Example Events Include:
“Dropbox – When a file is created”
“Facebook – When there is a new post to my timeline”
“Project Online – When a new task is created”
“RSS – When a feed item is published”
“Salesforce – When an object is created”
The list of products, as well as their events, may need a business analyst to rationalize the use cases.
Once an event is selected, event-specific details may be required, e.g. Twitter account details or a OneDrive "watch" folder.
Next, a condition may be added to the [work]flow, possibly specific to the event type, e.g. OneDrive file-type properties [contain] XYZ value. There is also an "advanced mode" using a conditional scripting language.
There is "IF YES" and "IF NO" logic, which then allows the user to select one [or more] actions to perform.
Several Action Examples Include:
“Excel – Insert Rows”
“FTP – Create File”
“Google Drive – List files in folder”
“Mail – Send email”
“Push Notification – Send a push notification”
Again, it seems like an eclectic bunch of products, actions, and events strung together to have a system to proof-of-concept.
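Flow's trigger/condition/action shape is a classic event-condition-action (ECA) pattern, which can be sketched generically. The product and event names here are illustrative stand-ins, not Flow's actual API:

```python
# Sketch of Microsoft Flow's trigger / condition / IF YES-IF NO /
# action shape as a generic event-condition-action rule. Trigger and
# action names are illustrative assumptions.

def run_flow(flow, event):
    """Run one flow against an event; return the actions that fired."""
    if event["type"] != flow["trigger"]:
        return []                                    # trigger didn't match
    branch = "if_yes" if flow["condition"](event) else "if_no"
    return [action(event) for action in flow.get(branch, [])]

flow = {
    "trigger": "OneDrive.FileCreated",
    "condition": lambda e: e["file"].endswith(".pdf"),
    "if_yes": [lambda e: f"Mail.SendEmail({e['file']})"],
    "if_no":  [lambda e: f"PushNotification.Send({e['file']})"],
}
```

Everything Flow adds beyond this core, connectors, authentication, retries, a template marketplace, is packaging around that one pattern.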
The Templates list is a predefined set of workflows that may interest anyone who does not want to start from scratch. The UI provides several ways to filter, list, and search through templates.
The product is applicable to everyday life, from the individual home user and small business to the enterprise. At this stage it seems Beta at best, or more accurately, just past a clickable prototype: I ran into several errors trying to go through basic use cases, i.e. adding rules.
Despite the “Preview” launch, Microsoft has showed us the power in [work]flow processing regardless of the service platform provider, e.g. Box, DropBox, Facebook, GitHub, Instagram, Salesforce, Twitter, Google, MailChimp, …
Microsoft may be the glue combining service providers who expose their services to MSFT Flow functionality.
e.g. language translation; e.g. visual recognition.
WordPress – Create a Post
A new text file is dropped into a specific folder on Box, Dropbox, etc. being 'monitored' by MSFT Flow [? additional code may be required by the user for 'polling' capabilities]
OR a new text file is attached and emailed to a specific email account folder 'watched' by MSFT Flow.
Event trigger – automatic read of the new text file
Styling may occur if HTML coding is used
Action – post to a blog
When 'ANY' event occurs, a custom message is sent using Skype to a single Skype account or a group of them.
On several 'eligible' events, such as "File Creation" in Box, the file (or a shared file URL) may be sent to the Skype account.
When 'ANY' event occurs, a custom mobile text message is sent to a single phone number or a group of them.
An event occurs for "File Creation", e.g. in Box; after passing a "Condition", actions occur:
The IBM Watson Cognitive API's Text to Speech runs, and the product of the action is placed in the same Box folder.
Action: using Microsoft Edge (powered by MSN), in the "My news feed" tab, enable an action to publish "Cards", such as app notifications.
Challenges \ Opportunities \ Unknowns
Third-party companies' existing, published [cloud; web service] APIs may not need any modification to integrate with Microsoft Flow; however, business approval may be required to use an API in this manner.
It is unclear whether Flow templates need to be created by the product owner, e.g. Telestream, or by a knowledgeable third party, following the Android, iOS, and/or MSFT mobile-app model.
It is unclear whether the MSFT Flow app may be licensed individually in the cloud, within the Office 365 cloud suite, or offered for home and/or business.
A review of the Microsoft OneDrive cloud repository. It may be an easy tool and service for saving files; if you roughly know what you want to find, most cloud repositories are easy and straightforward to use. Over time, though, if not managed appropriately, the cloud repository becomes burdensome to manage, e.g. to access and find files. If we're stuck in the "file folder" mentality of organizing our content, our cloud storage solution will quickly become unyielding. Habits like tagging your content should help us access files beyond the "folder borders". On the flip side, there are huge opportunities to leverage and grow existing platforms, specifically around the process service of [file] ingestion.
Bulk file loading, e.g. photos from our smartphones, maybe the entire family uploads to the same storage repository
If performed by the "Ingestion Service", manual user "tagging" of a group of photos, or of individual images, may be available.
Geotagging may be available either at the time of image capture, or upon the start of the "Ingestion Service".
Facial recognition, akin to services such as Facebook's, is, in my experience, not readily available in personal cloud storage repositories.
Auto tagging pictures upon ingestion, if performed, may leverage “Extracted Text” from images. Images become searchable with little human intervention.
Cloud File Repository: Storing Content
I modified the "tags" of existing Microsoft Office files; in this case MS Word and PowerPoint file types were used. I opened the Word file, selected the "File" menu, "Save As", then "More Options" under the list of file types. I was then presented with the classic "Save As" form. Just below the "Save as type" list box were three "metadata" fields to describe the file:
The first two fields are semicolon (;) delimited, and multiple values are allowed. In this test case, I added "CV;resume;career" to the "Tags" field. I then used the MS Windows Snipping Tool that comes with the OS to document the step, named the screen capture MSWordTags.PNG, and saved it to my OneDrive. Then I saved the document itself to OneDrive.
Cloud File Repository: Finding Content
I then started Internet Explorer and went to https://onedrive.live.com to access my cloud content. In the top left corner of the screen is a field called "Search Everything", and I typed in CV.
The search results included ONLY the image screenshot file that contained the letters CV, and not the MS Word file that explicitly had the Tag field with the text value CV.
Looking at the file properties as defined by OneDrive, there was ALSO a field called "Tags", with no values populated. In other words, the cloud "ingestion" service did not read the file's metadata and surface it at the cloud level; there are just two separate sets of metadata describing the same file. To view the cloud file data, select the file, then the "i" with a circle around it. Too many ways to store the same data may lead to inconsistent data.
In the cloud file information/properties, the image file had a field called "Extracted Text", and this is how the search picked up the value CV when I searched my cloud files for the "CV" tag.
Oddly, the MS Word file's attributes in OneDrive did not offer "Tags" as a field for storing metadata in the cloud; the "Tags" field was available when looking at the PNG file. However, the user may add a "Description" in a multiline text field. Tags metadata on images but not on MS Word files? Odd.
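For what it's worth, the "Tags" field of an Office file is plain to read: a .docx is a zip archive whose docProps/core.xml carries the tags in a <cp:keywords> element, so an ingestion service could surface them at the cloud level. A sketch that builds a minimal, fake docx-like zip in memory (so it is self-contained) and reads the tags back:

```python
import io
import zipfile
import xml.etree.ElementTree as ET

# Sketch: an Office .docx is a zip whose docProps/core.xml stores the
# "Tags" field as <cp:keywords>. An ingestion service could read it
# like this. fake_docx builds a minimal stand-in file for the demo;
# a real .docx contains many more parts.

CP = "http://schemas.openxmlformats.org/package/2006/metadata/core-properties"

def read_tags(docx_bytes):
    """Return the Tags (keywords) string stored in an Office file."""
    with zipfile.ZipFile(io.BytesIO(docx_bytes)) as z:
        root = ET.fromstring(z.read("docProps/core.xml"))
    node = root.find(f"{{{CP}}}keywords")
    return node.text if node is not None else ""

def fake_docx(tags):
    """Build a minimal docx-like zip carrying only core.xml with tags."""
    core = (f'<cp:coreProperties xmlns:cp="{CP}">'
            f"<cp:keywords>{tags}</cp:keywords></cp:coreProperties>")
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as z:
        z.writestr("docProps/core.xml", core)
    return buf.getvalue()
```

So indexing Office tags alongside image tags would be a small addition to the ingestion pipeline, not a new capability.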
Future state (?): if the cloud ingestion process can perform an "Extracted Text" pass, it may also offer other "ingestion services", such as "facial recognition" seeded from "known good" faces already tagged. E.g., I tag a face within the OneDrive browser UI, and when other images are ingested, they can be correlated with it.
As a business model, will we add a tier just after cloud file ingestion, perhaps exercising a third-party suite of cognitive APIs such as facial recognition? For example, Microsoft OneDrive ingests a file; if it's an image file, OneDrive routes it to the appropriate IBM Watson API, which processes the file and returns [updated] metadata and a modified file? Maybe.
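Such a post-ingestion tier could be sketched as a simple dispatcher that routes files by type to one or more cognitive services and merges the returned metadata. The service functions here are stand-ins, not real Watson or OneDrive APIs:

```python
# Sketch of a post-ingestion routing tier: dispatch a file by type to
# hypothetical cognitive services and merge their metadata. The
# service functions are stand-ins, not real Watson/OneDrive APIs.

def ocr_service(name):        # stand-in for a text-extraction API
    return {"extracted_text": f"text from {name}"}

def face_service(name):       # stand-in for a face-recognition API
    return {"faces": []}

ROUTES = {".png": [ocr_service, face_service],
          ".jpg": [ocr_service, face_service]}

def ingest(filename):
    """Return metadata gathered from every service for this file type."""
    ext = filename[filename.rfind("."):].lower()
    meta = {"filename": filename}
    for service in ROUTES.get(ext, []):
        meta.update(service(filename))
    return meta
```

The merged metadata would then be written back to the cloud file's properties, making the content searchable with no human tagging.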
Update: Auto Tagging Objects Upon Ingestion
On an image with no tags, I selected the “Edit tags” menu from the Properties pane on the right side of the screen. As a scrolling menu, the option to “Add existing tag” appeared. There were dozens of tags already created with a word, thumbnail image, and the number of times used. Wow. Awesome. The current implementation seems to automatically, upon ingestion, identify objects in the image, and tag the images with those objects, e.g. Building, Beach, Horse, etc.
Presumption that Microsoft OneDrive performs object recognition on images upon file ingestion into the cloud (as opposed to in the Photos app).
"Extracted Text" Metadata Field from within Microsoft OneDrive Image PNG File Properties:
Presumption that Microsoft OneDrive performs OCR on images upon file ingestion into the cloud (as opposed to the Photos app).