Category Archives: Education

Hey Siri, Ready for an Antitrust Lawsuit Against Apple? Guess Who’s Suing.

The AI personal assistant with the “most usage,” spanning connectivity across all smart devices, will be the anchor toward which users gravitate to control their ‘automated’ lives. An Amazon commercial recently aired depicting a dad with his daughter; the daughter was crying about her boyfriend, who happened to be in the front yard yelling for her. The dad says to Amazon’s Alexa, “Sprinklers on,” and yes, the boyfriend got soaked.

What is so special about the top spot for the AI personal assistant? Controlling the ‘funnel’ through which all information is accessed and all actions are taken means the intelligent ability to:

  • Serve up content and information, which can then be mixed with advertisements or ‘intelligent suggestions’ based on historical data, i.e. machine learning.
  • Make proactive, suggestive recommendations that may lead to sales of goods and services, e.g. the AI personal assistant flags potential ‘buys’ from eBay based on user profiles.

Three main sources of AI Personal Assistant value add:

  • A portal to the “outside” world. If I need information, I wouldn’t “surf the web”; I would ask Cortana to go “research” XYZ. In the business intelligence / data warehousing space, a business analyst may need to run a few queries to get the information they want; by the same token, Microsoft Cortana may come back to you several times to ask “for your guidance.”
  • An abstraction layer between the user and their apps. The user need not ‘lift a finger’ in any app outside the personal assistant, with noted exceptions like playing a game for you.
  • User profiles derived from the first two points, i.e. data collection on everything from spending habits to other day-to-day rituals.

Proactive and chatty assistants may win “Assistant of Choice” on all platforms. Being proactive means collecting data more often than when it’s just you asking questions ad hoc. Proactive AI personal assistants that are geo-aware may make “timely, appropriate interruptions” (notifications) based on time and location. E.g. “Don’t forget milk,” says Siri as you’re passing the grocery store. Around the time I leave work, Google Maps tells me whether I have traffic and my ETA.
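As a thought experiment, here is a minimal Python sketch of how such a geofenced reminder could be checked on each location fix. Everything here (the coordinates, the radius, the reminder store) is illustrative, not any assistant’s real API:

```python
import math

EARTH_RADIUS_M = 6_371_000

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters (haversine)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

# Hypothetical reminder store: (label, latitude, longitude, trigger radius in meters)
reminders = [
    ("Don't forget milk", 40.7415, -73.9897, 150),
]

def check_geofences(current_lat, current_lon):
    """Return reminder labels whose geofence the user has just entered."""
    return [
        label
        for label, lat, lon, radius in reminders
        if distance_m(current_lat, current_lon, lat, lon) <= radius
    ]

# As the device reports a new location fix, fire any matching reminders.
for message in check_geofences(40.7417, -73.9895):
    print(f"Reminder: {message}")
```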

It’s possible for a [non-native] AI personal assistant to become the abstraction layer on top of ANY mobile OS (iOS, Android), becoming the funnel through which all actions and requests are triggered.

Microsoft Cortana has an iOS app and widget, which is wrapped around the OS. Tighter integration may be possible but is not allowed by iOS and Apple. Note: Google’s Allo does not provide an iOS widget at the time of this writing.

A potential antitrust violation by smartphone maker Apple: iOS should allow for the ‘substitution’ of a competitive AI personal assistant, triggered in the same manner as the native Siri, i.e. the “press and hold the Home button” capability that launches the default packaged iOS assistant.
This is reminiscent of the Microsoft Internet Explorer / Windows antitrust case of the past.

Holding the iPhone Home button brings up Siri. There should be an OS setting to swap out which assistant is used as the mobile OS default. Today, iOS on the iPhone / iPad only supports Siri under the Settings menu.

ANY AI personal assistant should be allowed to replace the default OS assistant, from Amazon’s Alexa and Microsoft’s Cortana to any startup with the expertise and resources needed to build and deploy a personal assistant solution. Has Apple taken steps to tightly couple Siri with its iOS?

AI Personal Assistant “Wish” List:

  • Interactive, voice-menu-driven dialog. The AI personal assistant should know which [mobile] apps are installed, as well as their actionable, hierarchical taxonomy of features and functions. The assistant should, for example, ask which application the user wants to use, and if the user doesn’t know, the assistant should verbally / visually list the apps. After the user selects the app, the assistant should then provide a list of function choices for that application, e.g. “Press 1 for ‘Play Song’” (see the sketch after this list).
    • The interactive voice menu should also provide a level of abstraction where available; e.g. the user need not select an app and can just say “Create Reminder.” There may be several applications on the smartphone that do the same thing, such as note taking and reminders. In the OS Settings, under a new “AI Personal Assistant” menu, the installed applications compatible with this assistant service layer should be listed, grouped into categories defined by the mobile OS.
  • Capability to interact with IoT devices using user-defined workflows. The hardware and software may live in the cloud.
  • Ever-tighter integration with native as well as 3rd-party apps, e.g. Google Allo and Google Keep.
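To make the wish list concrete, here is a minimal Python sketch of the kind of app/function taxonomy and two-level voice menu described above. The apps, categories, and actions are invented for illustration; no mobile OS exposes such a registry today:

```python
# Hypothetical registry of installed apps and their actionable functions.
APP_TAXONOMY = {
    "Music": {
        "category": "Media",
        "actions": ["Play Song", "Pause", "Next Track"],
    },
    "Notes": {
        "category": "Productivity",
        "actions": ["Create Note", "Create Reminder", "Search Notes"],
    },
}

def list_menu(options):
    """Read back a numbered voice menu, e.g. 'Press 1 for Play Song'."""
    for i, option in enumerate(options, start=1):
        print(f"Press {i} for \"{option}\"")

def resolve_action(action):
    """Abstraction layer: map a bare request like 'Create Reminder'
    to every installed app that can handle it."""
    return [app for app, info in APP_TAXONOMY.items() if action in info["actions"]]

list_menu(APP_TAXONOMY.keys())               # first level: pick an app
list_menu(APP_TAXONOMY["Music"]["actions"])  # second level: pick a function
print(resolve_action("Create Reminder"))     # -> ['Notes']
```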

Apple could already be making these changes as a natural course of its product evolution. Even if the ‘big boys’ don’t want to stir up a hornet’s nest, all you need is VC funding and a few good programmers to pick a fight with Apple.

Cloud Storage: Ingestion, Management, and Sharing

Cloud storage solutions need differentiation that matters: a tipping point to select one platform over another.


Differentiation may come in the form of:

  • Collaborative content creation software: for example, Dropbox Paper enables individuals or teams to produce content while leveraging the storage platform for, e.g., version control.
  • Embedded integration within a suite of content creation applications, such as Microsoft Office and OneDrive.
  • Making the storage solution available to developers, as with AWS S3 and Box; developers may create apps powered by the Box Platform or custom integrations with Box (see the sketch after this list).
  • iCloud enables users to back up their smartphones, while tightly integrating with the capture and sharing of content, e.g. Photos.
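As a small illustration of the “storage as developer platform” bullet, here is a boto3 sketch against AWS S3. The bucket name and key layout are placeholders, and credentials are assumed to come from the standard AWS configuration:

```python
import boto3

s3 = boto3.client("s3")

# Upload a locally created document into the app's content store.
s3.upload_file(
    Filename="contract_draft.docx",
    Bucket="example-content-bucket",
    Key="cases/1234/contract_draft.docx",
)

# List everything ingested for that case so the app can render it.
response = s3.list_objects_v2(Bucket="example-content-bucket", Prefix="cases/1234/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```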

Cloud Content Lifecycle Categories:

  • Content Creation
    • 3rd Party (e.g. Camera) or Integrated Platform Products
  • Content Ingestion
    • Capture Content and Associated Metadata (see the sketch after this list)
  • Content Collaboration
    • Share, Update and Distribution
  • Content Discovery
    • Surface Content; Searching and Drill Down
  • Retention Rules
    • Auto expire pointer to content, or underlying content
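A minimal sketch of the ingestion step follows: capture a file plus the metadata that later powers collaboration, discovery, and retention. The record layout is an assumption for illustration, not any platform’s actual schema:

```python
import hashlib
import mimetypes
import pathlib
from datetime import datetime, timezone

def ingest(path_str, labels=None):
    """Capture a file (assumed to exist locally) plus its associated metadata."""
    path = pathlib.Path(path_str)
    data = path.read_bytes()
    return {
        "name": path.name,
        "content_type": mimetypes.guess_type(path.name)[0],
        "size_bytes": len(data),
        "sha256": hashlib.sha256(data).hexdigest(),  # dedupe / integrity check
        "ingested_at": datetime.now(timezone.utc).isoformat(),
        "labels": labels or [],   # drives content discovery
        "expires_at": None,       # set later by retention rules
    }

record = ingest("notes/meeting.txt", labels=["team-x", "q3-planning"])
print(record)
```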

Cloud Content Ingestion Services:

[Figure: Cloud Ingestion Services]

Applying Gmail Labels Across All Google Assets: Docs, Photos, Contacts + Dashboard, Portal View

Google applications contain [types of] assets, either created within the application or imported into it. In Gmail, you have objects, i.e. emails, and Gmail enables users to add metadata to an email in the form of tags, or “Labels.” Labeling emails is a very easy way to organize these assets. If you’re a bit more organized, you may even devise a logical taxonomy to classify your emails.

An email can also be put into a folder, which is completely different from what we are talking about with labels. An email may be placed into a folder within a parent-child folder hierarchy; only the name of the folder and its position in the hierarchy provide this relational metadata.

For personal use, or for small to medium-sized businesses, users may want to categorize all of the Google “objects” from each Google app, so why isn’t there a capability to apply labels across all Google app assets? If you work at a law firm, for example, have documents in Google Docs, and use Google for email, it would be ideal to leverage a company-wide taxonomy and, upon any internal search, discover all objects logically grouped in a container by labels.

For each Google asset, such as an email in Gmail, users may apply N labels.

A [Google] dashboard, or portal view, may be used to display and access Google assets across Google applications, grouped by labels. A Google Apps “portal search” may consist of queries that contain asset labels. A relational Google object repository, spanning all object types (e.g. Google Docs), may be leveraged to store metadata about each Google asset and its relationships.
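A toy version of that relational repository, using SQLite, might look like the following. The schema and the sample case label are assumptions for illustration; Google exposes no such cross-application store today:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE assets (id INTEGER PRIMARY KEY, app TEXT, title TEXT);
CREATE TABLE labels (asset_id INTEGER, label TEXT);
""")
db.executemany("INSERT INTO assets VALUES (?, ?, ?)", [
    (1, "Gmail",  "Deposition schedule"),
    (2, "Docs",   "Brief draft v3"),
    (3, "Photos", "Site inspection 2017-03-14"),
])
db.executemany("INSERT INTO labels VALUES (?, ?)", [
    (1, "case-4512"), (2, "case-4512"), (3, "case-4512"), (2, "legal-review"),
])

# "Portal search": every asset, from any app, carrying the case label.
rows = db.execute("""
    SELECT a.app, a.title FROM assets a
    JOIN labels l ON l.asset_id = a.id
    WHERE l.label = ?
""", ("case-4512",)).fetchall()
for app, title in rows:
    print(f"[{app}] {title}")
```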

A [Google] dashboard, or portal view, may be organized around individuals (e.g. personal), teams, or an organization. So, in a law firm, for example, a case-number label could be applied to Google Docs, Google Photos (i.e. photos and videos), and, of course, Gmail.

A relatively simple feature to implement, with a lot of value for Google’s clients, i.e. us. So why isn’t it implemented?

Better yet, now that facial recognition is implemented in Photos (and videos), applying Google labels to media assets may allow a rule-based engine to correlate emails to photos.

Google Search has expanded into the mobile Google app.

Leveraging Google “Cards,” developers may create cards for a single Google asset or a group of them. Grouping of Google assets may be applied using labels. As Google assets move through a business or personal workflow, additional metadata, such as more labels, may be added to each asset.

Expanding upon this solution, scripts may be created to “push” assets through a workflow, perhaps using Google Cloud Functions. Google Cards may serve as “the bit” that informs users when they have new items to process in a workflow.

Labels may be used as workflow states, such as “Document Ready for Legal Review” or “Legal Document Review Completed” (see the sketch below).
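A minimal sketch of that label-driven workflow step follows. The transition table and the notification “Card” payload are invented for illustration; a real deployment might run something like this inside a Google Cloud Function fired when an asset’s labels change:

```python
# Hypothetical state machine: current label -> next label.
WORKFLOW = {
    "Document Ready for Legal Review": "Legal Document Review Completed",
}

def advance(asset):
    """Move an asset to the next workflow state by swapping its label,
    and return the audiences who should see a new 'Card'."""
    for current, nxt in WORKFLOW.items():
        if current in asset["labels"]:
            asset["labels"].remove(current)
            asset["labels"].append(nxt)
            return [("legal-team", f"'{asset['title']}' is now: {nxt}")]
    return []

doc = {"title": "Brief draft v3",
       "labels": ["case-4512", "Document Ready for Legal Review"]}
for audience, card_text in advance(doc):
    print(audience, "->", card_text)
```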

AI Assistant Summarizing Email Threads and Complex Documents

“Give me the 50k foot level on that topic.”
“Just give us the CliffsNotes.”
“Please give me the bird’s eye view.”

AI Email Thread Abstraction and Summarization

A daunting, and highly public, email has landed in your lap(top) to respond to. The email thread spans more than a dozen people across the globe. All of the people on the TO list, and some on the CC list, have expressed their points about … something. There are junior technical and very senior business staff on the email. I’ll need to understand the thread’s content from the perspective of each person who replied, which may involve sifting through every email on the thread. Even though the people on the thread are fluent in English, their response styles may differ based on culture or seniority (e.g. abstractly written). Also, the technical folks might want to keep the email conversation granular and succinct.
Let’s throw a bit of [AI] automation at this problem.
As another step in our AI personal assistant evolution, consider email thread aggregation and summarization utilizing cognitive APIs and tools, such as what IBM Watson has implemented with its language APIs. Based on the documentation for those APIs, the above challenges can be addressed for the reader. A suggestion to an IBM partner for the Watson cognitive cloud: build an email plugin, if the email product opens its solution to customization.
A plugin built on top of an email application, flexible enough to allow customization, may be a candidate for email thread aggregation and summarization. Candidate email clients include IBM Notes, Gmail, (Apple) Mail, Microsoft Outlook, Yahoo! Mail, and OpenText FirstClass.
Add this capability to the job description of AI assistants such as Cortana, Echo, Siri, and Google Now. In fact, this plug-in may not even need an AI assistant; just the email plug-in interacting with a suite of cognitive cloud API calls would do.
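As a self-contained stand-in for the cognitive summarization step, here is a naive extractive sketch: score each sentence by word frequency and keep the top sentence per sender. A production plugin would call a cloud NLP service instead; the thread below is fabricated for illustration:

```python
import re
from collections import Counter

thread = [
    {"from": "ana@example.com", "body": "The migration slipped a week. "
        "Root cause is the schema change. We need sign-off before Friday."},
    {"from": "raj@example.com", "body": "Sign-off is fine from my side. "
        "Please confirm the rollback plan covers the schema change."},
]

def top_sentence(text):
    """Return the sentence whose words are most frequent across the text."""
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return max(sentences,
               key=lambda s: sum(freq[w] for w in re.findall(r"[a-z']+", s.lower())))

# One-line abstraction of each participant's position in the thread.
for msg in thread:
    print(f"{msg['from']}: {top_sentence(msg['body'])}")
```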

AI Document Abstraction and Summarization

A plug-in may also be created for word processors such as Microsoft Word. Once activated within a document, a summary page may be created and prefixed to the existing document. There are several use cases, such as generating a synopsis of the document.
With minimal human effort spent marking up the content, we would still be able to derive the contextual metadata and leverage it to create new sentences, and paragraphs of sentences.
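A hedged sketch of how such a Word plug-in might prefix a synopsis, using the python-docx library. The summarize function is a placeholder for the cognitive-API call, and the file name is illustrative:

```python
from docx import Document  # pip install python-docx

def prepend_summary(path, summarize):
    """Prefix a generated synopsis to an existing Word document."""
    doc = Document(path)
    full_text = "\n".join(p.text for p in doc.paragraphs)
    summary = summarize(full_text)  # stand-in for a cognitive-API call
    # insert_paragraph_before adds content ahead of the first paragraph,
    # so insert the heading first, then the summary body.
    first = doc.paragraphs[0]
    first.insert_paragraph_before("Synopsis")
    first.insert_paragraph_before(summary)
    doc.save(path)

# Usage with a trivial placeholder summarizer (first 40 words):
prepend_summary("report.docx", lambda t: " ".join(t.split()[:40]) + " …")
```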
Update:
I’ve not seen an AI integration in the list of MS Outlook add-ins that would bring this functionality to users.

Building AI Is Hard—So Facebook Is Building AI That Builds AI

“…companies like Google and Facebook pay top dollar for some really smart people. Only a few hundred souls on Earth have the talent and the training needed to really push the state-of-the-art [AI] forward, and paying for these top minds is a lot like paying for an NFL quarterback. That’s a bottleneck in the continued progress of artificial intelligence. And it’s not the only one. Even the top researchers can’t build these services without trial and error on an enormous scale. To build a deep neural network that cracks the next big AI problem, researchers must first try countless options that don’t work, running each one across dozens and potentially hundreds of machines.”


This article paints a true picture of where we are today for the average consumer and producer of information, and for the companies that repurpose information, e.g. in the form of advertisements.
The current progress of artificial intelligence and machine learning is analogous to the 1970s: computers that filled rooms and accepted punch cards as input.
Today’s consumers have mobile computing power on par with those whole rooms of the 1970s; however, “more compute power” in a tinier package may not be the path to AI sentience. How AI algorithm models are computed might need to take an alternate approach.
In a classical computation system, a bit would have to be in one state or the other. However quantum mechanics allows the qubit to be in a superposition of both states at the same time, a property which is fundamental to quantum computing.
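In standard textbook notation (independent of any vendor’s hardware), a qubit’s state is the superposition

$$|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,$$

where measurement yields 0 with probability $|\alpha|^2$ and 1 with probability $|\beta|^2$; a classical bit is the special case where one amplitude is 1 and the other is 0.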
The construction and validation of artificial intelligence and machine learning algorithm models should be engineered on a quantum computing framework.

AI Personal Assistants are “Life Partners”

Artificial intelligence (AI) “assistants,” or “bots,” are taken to the ‘next level’ when the assistant becomes a proactive entity, built on input from human experts and growing with machine learning.

Even the distinction between an ‘assistant’ and a ‘life partner’ implies a greater degree of dynamic, proactive interaction. The crossover to ‘life partner’ happens when we go above and beyond to help our partners succeed, or even survive the day-to-day.

Once we experience our current [digital, mobile] ‘assistants’ positively influencing our lives in a more intelligent, proactive manner, an emotional bond ‘grows’, and the investment in this technology will also expand.

Practical applications range widely:

  • Alcoholics Anonymous coach or mentor, enabling the human partner to overcome temporary weakness. Knowledge and “triggers” need to be incorporated into the AI ‘partner,’ e.g. a location/proximity reminder if the person enters a shopping area that has a liquor store, with the [AI] partner helping to “talk them down.”
  • Understanding ‘data points’ from multiple sources, such as alarms and calendar events, to derive ‘knowledge’ and create an actionable trigger (see the rule sketch after this list).
    • e.g., unprompted: “Did you remember to take your medicine?”; “There is a new article in periodical N that pertains to your medicine. Would you like to read it?”
    • e.g. 2, unprompted: “Weather calls for N inches of snow. Did you remember to service your snow blower this season?”
  • FinTech – while in department store XYZ looking to purchase an item over a certain amount, unprompted: “Your credit score indicates you are ‘most likely’ eligible to sign up for a store credit card and get N percent off your first purchase.” Multiple input sources are used in concert to surface a potential sales opportunity.
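To make the trigger idea concrete, here is a minimal, illustrative rule-engine sketch in Python. The data sources, thresholds, and the store-card offer are all made-up examples, not any vendor’s API:

```python
def fintech_offer(context):
    """Hypothetical rule fusing location, cart, and credit data points."""
    if (context["location"] == "department_store"
            and context["cart_total"] > 200
            and context["credit_score"] >= 700):
        return ("Your credit score suggests you're likely eligible for the "
                "store card: sign up and get 15% off this purchase.")
    return None

RULES = [fintech_offer]

def evaluate(context):
    """Run every rule against the fused data points; return proposed prompts."""
    prompts = []
    for rule in RULES:
        message = rule(context)
        if message:
            prompts.append(message)
    return prompts

snapshot = {"location": "department_store", "cart_total": 350, "credit_score": 720}
for prompt in evaluate(snapshot):
    print(prompt)
```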

IBM has a cognitive cloud of AI solutions leveraging IBM’s Watson. Most or all of the 18 web applications they host (with source) are driven by human-interactive triggers, as with the “Natural Language Classifier,” which helps build a question-and-answer repository.

There are four things that need to happen to accelerate adoption of the ‘AI Life Partner’:

  1. Knowledge experts, or subject matter experts (SMEs), need to be able to “pass on” their knowledge to build repositories. IBM Watson’s Natural Language Classifier may be used.
  2. The integration of this knowledge into an AI medium, such as a ‘digital assistant,’ needs to occur with corresponding ‘triggers.’
  3. Our current AI ‘assistants’ need to become [more] proactive as they integrate into our digital lives, going beyond setting an alarm clock, hands-free calling, or checking a sports score. Our [AI] “Life Partner” needs to act like a buddy and a fan of ‘our’ sports team: without prompting, proactively serve up knowledge [based on correlated, multiple sources], and/or take [acceptable] actions.
    1. E.g. FinTech – “Our schedule is open tonight, and there are great seats available, Section N, Seat A, for ABC dollars on StubHub. Shall I make the purchase?”
      1. Partner with vendors to drive FinTech business rules.
  4. Take advantage of more knowledge sources, such as the applications we use that collect our data. Using multiple knowledge sources in concert enables the AI to correlate data and propose ‘complex’ rules of interaction.

Our AI ‘life partners’ may grow in knowledge and mature the relationship between man and machine. Incorporating rules derived via machine learning, without the input of a human expert, will come with both risk and reward.

The Race Is On to Control Artificial Intelligence, and Tech’s Future

Amazon, Google, IBM and Microsoft are using high salaries and games pitting humans against computers to try to claim the standard on which all companies will build their A.I. technology.

In this fight — no doubt in its early stages — the big tech companies are engaged in tit-for-tat publicity stunts, circling the same start-ups that could provide the technology pieces they are missing and, perhaps most important, trying to hire the same brains.

For years, tech companies have used man-versus-machine competitions to show they are making progress on A.I. In 1997, an IBM computer beat the chess champion Garry Kasparov. Five years ago, IBM went even further when its Watson system won a three-day match on the television trivia show “Jeopardy!” Today, Watson is the centerpiece of IBM’s A.I. efforts.

Today, only about 1 percent of all software apps have A.I. features, IDC estimates. By 2018, IDC predicts, at least 50 percent of developers will include A.I. features in what they create.

Source: The Race Is On to Control Artificial Intelligence, and Tech’s Future – The New York Times

The next “tit-for-tat” publicity stunt should most definitely be a battle with robots, exactly like BattleBots, except…

  1. Use A.I. to consume vast amounts of video footage from previous bot battles, identifying the key elements of bot design that gave a bot the ‘upper hand.’ From a human cognition perspective, this exercise may be subjective; the BattleBots scoring process can play a factor in 1) conceiving designs and 2) defining ‘rules’ of engagement.
  2. Use A.I. to produce BattleBot designs for humans to assemble.
  3. Autonomous battles, bot on bot, based on Artificial Intelligence battle ‘rules’ acquired from the input and analysis of video footage.