
AI Digital Assistants versus Search Engines

Aren’t AI Digital Assistants just like Search Engines? They both try to recognize your question, or human utterance, as best as possible to serve up your requested content, e.g. the classic FAQ. The difference in the FAQ use case is that the proprietary information from the company hosting the digital assistant may not be available on the public internet.

Another difference between the Digital Assistant and a Search Engine is the ability of the Digital Assistant to ‘guide’ a person through a series of questions, enabling elaboration, to provide the user with a more precise answer.

The Digital Assistant may use an interactive dialog to guide the user through a process, not just supply the ‘most correct’ responses. Many people have flocked to YouTube for this type of instructional, interactive medium. When multiple workflow paths can be followed, the Digital Assistant has the upper hand.

The Digital Assistant has the capability of interfacing with 3rd parties (e.g. data stores with API access). For example, a Digital Assistant hosted by a medical insurance company may have the ability to not only check the status of a claim, but also send correspondence to a medical practitioner on your behalf. It is a huge pain to call the insurance company, then the doctor’s office, then the insurance company again. Even the HIPAA release could be authenticated in real time, in line during the chat. A digital assistant may also be able to create a chat session with multiple participants.
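As a sketch of that kind of third-party integration, an assistant ‘skill’ might call a claims API mid-chat. The endpoint, fields, and insurer below are entirely hypothetical; a real integration would also verify the HIPAA release in line.

```python
# Hypothetical sketch: a Digital Assistant skill calling a third-party
# claims API on the user's behalf. The URL and payload are illustrative.
import json
from urllib import request

def check_claim_status(claim_id: str, auth_token: str) -> dict:
    """Query the (hypothetical) insurer's claims API from inside the chat."""
    req = request.Request(
        f"https://api.example-insurer.com/claims/{claim_id}",
        headers={"Authorization": f"Bearer {auth_token}"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

# Inside the chat session, the assistant might then say:
# "Your claim 12345 is 'pending review'. Shall I message Dr. Smith's office?"
```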

Another capability Digital Assistants hold over Search Engines is the ability to ‘escalate’ at any time during the interaction: people are queued for the next available human agent.

There have been attempts in the past, such as Ask.com (originally known as Ask Jeeves), a question-answering-focused e-business, and Google Questions and Answers (Google Otvety, Google Ответы), a free knowledge market offered by Google that allowed users to collaboratively find good answers to their questions through the web (also referred to as Google Knowledge Search).

My opinions are my own, and do not reflect my employer’s viewpoint.

AI Personal Assistants Need Remedial Guidance for Their Users

Providing Intelligent ‘Code’ Completion

At this stage in the growth and maturity of the AI Personal Assistant platform, there are many commands and options that common users cannot formulate due to a lack of knowledge and experience.

A key usability feature of many integrated development environments (IDEs) is their capability to use “Intelligent Code Completion” to guide their programmers to produce correct, functional syntax. This feature also unburdens the programmer from needing to look up syntax for each command reference, saving significant time. As the usage of the AI Personal Assistant grows, and its capabilities along with it, the amount of “command and parameters” knowledge required to use the AI Personal Assistant will also increase.

AI Leveraging Intelligent Command Completion

For each command parameter [level/tree], a drop-down list may appear, giving users a set of options to select for the next parameter. A delimiter such as a period (.) indicates to the AI parser that another set of command options must be presented to the person entering the command. These options are typically shown in drop-down lists concatenated to the right of the partially formulated command.
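To make the idea concrete, here is a minimal sketch of intelligent command completion over a dot-delimited command hierarchy. The command tree and its entries are hypothetical examples, not an actual assistant vocabulary.

```python
# Minimal sketch of "Intelligent Command Completion" for a dot-delimited
# assistant command language. The command tree below is hypothetical.
COMMAND_TREE = {
    "Order": {
        "Food": {
            "Focacceria": {"List123": {}, "Large Pizza": {}},
            "FavoriteItalianRestaurant": {"FavoriteLunchSpecial": {}},
        }
    },
    "Spotify": {"Song": {}, "Help": {}},
}

def complete(partial: str) -> list[str]:
    """Return the next-level options for a dot-delimited partial command."""
    node = COMMAND_TREE
    for token in filter(None, partial.split(".")):
        if token not in node:
            return []          # unknown token: nothing to suggest
        node = node[token]
    return sorted(node)        # candidates for the drop-down list

print(complete("Order.Food"))  # ['FavoriteItalianRestaurant', 'Focacceria']
```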

AI Personal Assistant Language Syntax

Adding another AI parser on top of the existing syntax parser may allow commands like these to be executed:

  • Abstraction (e.g. no application specified)
    • Order.Food.Focacceria.List123
    • Order.Food.FavoriteItalianRestaurant.FavoriteLunchSpecial
  • Application Parser
    • Seamless.Order.Food.Focacceria.Large Pizza

These AI command examples use a hierarchy of commands and parameters to perform the function. One of the above commands leverages one of my contacts and a ‘List123’ object. The ‘List123’ parameter may be a ‘note’ on my smartphone that contains a list of food we would like to order. The command may place the order through my contact’s email address, fax number, or by calling the business’ main number and using AI text-to-speech functionality.

All personal data, such as Favorite Italian Restaurant and Favorite Lunch Special, could be placed in the AI Personal Assistant ‘Settings’. A group of settings may be listed as Key-Value pairs that serve as shorthand in conversations involving the AI Assistant, as in the sketch below.
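As a rough illustration, a parser might expand those Key-Value Settings before dispatching the dotted command. The names below are illustrative, not an actual assistant API.

```python
# Hypothetical sketch: resolving a dot-delimited command against
# Key-Value "Settings" before dispatching it. Names are illustrative.
SETTINGS = {
    "FavoriteItalianRestaurant": "Focacceria",
    "FavoriteLunchSpecial": "Large Pizza",
}

def parse_command(command: str) -> list[str]:
    """Split on the period delimiter and expand Settings aliases."""
    return [SETTINGS.get(token, token) for token in command.split(".")]

# "Order.Food.FavoriteItalianRestaurant.FavoriteLunchSpecial" becomes
# ['Order', 'Food', 'Focacceria', 'Large Pizza'] -- the same request as
# the fully spelled-out example, minus the explicit application.
print(parse_command("Order.Food.FavoriteItalianRestaurant.FavoriteLunchSpecial"))
```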

A majority of users are most likely unsure of many of the options available within the AI Personal Assistant command structure. Intelligent command [code] completion empowers users with visibility into the available commands and parameters.

For those without a programming background, Intelligent “Command” Completion is similar to the autocomplete in Google’s Search text box, which predicts possible choices as the user types. With an AI Personal Assistant, the user is guided to their desired command, whereas Google’s autocomplete requires some sense of the end result. Intelligent code completion typically displays all possible commands in a drop-down list next to the constructor period (.), so the user may have no knowledge of the next parameter without the drop-down choice list. An additional feature enables the user to hover over one of the commands/parameters to show a brief ‘help text’ popup.

Note, Microsoft’s Cortana AI assistant provides a text box in addition to speech input, so another syntax parser could be enabled through the existing user interface. Siri, however, seems to accept only voice recognition input, with no text input.

Is Siri handling the iOS ‘Global Search’ requests ‘behind the scenes’? If so, the textual parsing, i.e. the period (.) separator, would work. Siri does provide some cursory guidance on what information the AI may be able to provide: “Some things you can ask me:”

With only voice recognition input, the Voice-Driven Menu Navigation and Selection approach described below can be used.

Voice-Driven Menu Navigation and Selection

The current AI personal assistant abstraction layer may be too abstract for some users. Consider the difference between these two commands:

  • Play The Rolling Stones song Sympathy for the Devil.
    • Has the benefit of natural language, and can handle simple tasks, like “Call Mom”
    • However, there may be many commands that can be performed by a multitude of installed platform applications.

Versus

  • Spotify.Song.Sympathy for the Devil
    • Enables the user to select the specific application they would like a task to be performed by.
  • Spotify Help
    • A voice-driven menu will enable users to understand the capabilities of the AI Assistant. Through the use of a voice-interactive menu, users may ‘drill down’ to the action they desire to be performed, e.g. “Press # or say XYZ” (see the sketch after this list).
    • Optionally, the voice menu, depending upon the application, may have a customer service feature and forward the interaction to the proper [calling or chat] queue.
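Here is a minimal sketch of how such a voice-driven menu might drill down and hand off to a customer-service queue. The menu content and prompts are invented for illustration.

```python
# Hypothetical sketch of a voice-driven help menu: the user "drills down"
# through spoken options until reaching an action or a live-agent queue.
MENU = {
    "spotify help": {
        "play a song": "Say the artist and title, e.g. 'Sympathy for the Devil'.",
        "manage playlists": "Say 'create', 'rename', or 'delete'.",
        "customer service": "Forwarding you to the next available agent...",
    }
}

def drill_down(menu: dict, utterance: str):
    node = menu.get(utterance.lower())
    if node is None:
        return "Sorry, say 'help' to hear the options again."
    if isinstance(node, dict):
        # Read back the next level of choices: "Press # or say XYZ"
        return "Say one of: " + ", ".join(node)
    return node  # leaf: perform the action / forward to the queue

print(drill_down(MENU, "Spotify Help"))
```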

Update – 9/11/16

  • I just installed Microsoft Cortana for iOS, and at a glance, the application has a leg up on the competition.
    • The Help menu gives a fair number of examples by category. Much better guidance than iOS/Siri.
    • The ability to enter/type or speak commands provides the needed flexibility for user input.
      • Some people are uncomfortable ‘talking’ to their smartphones; it feels awkward talking to a machine.
      • The ability to type in commands may alleviate voice command entry errors in speech-to-text translation.
      • The opportunity to expand the AI syntax parser to include ‘programmatic’ commands allows the user a more granular command set, e.g. “Intelligent Command Completion”. As the capabilities of the platform grow, it will be a challenge to interface with and maximize the AI Personal Assistant’s capabilities.

AI Personal Assistants are “Life Partners”

Artificial Intelligence (AI) “Assistants”, or “Bots”, are taken to the ‘next level’ when the assistant becomes a proactive entity, built on the input of human intelligence experts and growing with machine learning.

Even the distinction between an ‘Assistant’ and a ‘Life Partner’ implies a greater degree of dynamic, proactive interaction. The crossover to becoming a ‘Life Partner’ occurs when we go ‘above and beyond’ to help our partners succeed, or even survive the day-to-day.

Once we experience our current [digital, mobile] ‘assistants’ positively influencing our lives in a more intelligent, proactive manner, an emotional bond ‘grows’, and the investment in this technology will also expand.

Practical Applications:

  • Alcoholics Anonymous coach/mentor – enabling the human partner to overcome temporary weakness. Knowledge and “triggers” need to be incorporated into the AI ‘Partner’, e.g. a “location/proximity” reminder if the person enters a shopping area that has a liquor store, where the [AI] “Partner” helps ‘talk them down’.
  • Understanding ‘data points’ from multiple sources, such as alarms and calendar events, to derive ‘knowledge’ and create an actionable trigger (sketched after this list).
    • e.g. unprompted: “Did you remember to take your medicine?”; “There is a new article in N periodical that pertains to your medicine. Would you like to read it?”
    • e.g. 2, unprompted: “Weather calls for N inches of snow. Did you remember to service your snow blower this season?”
  • FinTech – while in department store XYZ looking to purchase Y over a certain amount, unprompted: “Your credit score indicates you are ‘most likely’ eligible to sign up for a store credit card and get N percent off your first purchase.” Multiple input sources are used to surface a potential sales opportunity.
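A rough sketch of the trigger idea above, assuming collected ‘data points’ arrive as a simple dictionary. The rules, field names, and messages are hypothetical.

```python
# Hypothetical sketch of a proactive trigger engine: each rule correlates
# 'data points' from multiple sources and yields an unprompted message.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Trigger:
    name: str
    condition: Callable[[dict], bool]   # evaluated against collected data points
    message: str

TRIGGERS = [
    Trigger("medicine reminder",
            lambda d: d.get("hour") == 9 and not d.get("medicine_taken"),
            "Did you remember to take your medicine?"),
    Trigger("snow blower service",
            lambda d: d.get("forecast_snow_inches", 0) > 3
                      and not d.get("snow_blower_serviced"),
            "Weather calls for snow. Did you remember to service your snow blower?"),
]

def proactive_messages(data_points: dict) -> list[str]:
    return [t.message for t in TRIGGERS if t.condition(data_points)]

print(proactive_messages({"hour": 9, "forecast_snow_inches": 6}))
```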

IBM has a cognitive cloud of AI solutions leveraging IBM’s Watson. Most, if not all, of the 18 web applications hosted there (with source) are driven by human interactive triggers, as with the “Natural Language Classifier”, which helps build a question-and-answer repository.

There are four things that need to occur to accelerate adoption of the ‘AI Life Partner’:

  1. Knowledge Experts, or Subject Matter Experts (SMEs), need to be able to “pass on” their knowledge to build repositories. The IBM Watson Natural Language Classifier may be used.
  2. The integration of this knowledge into an AI medium, such as a ‘Digital Assistant’, needs to occur, with corresponding ‘triggers’.
  3. Our current AI ‘Assistants’ need to become [more] proactive as they integrate into our ‘digital’ lives, going beyond setting an alarm clock, hands-free calling, or checking the sports score. Our [AI] “Life Partner” needs to ‘act’ like a buddy and fan of ‘our’ sports team: without prompting, proactively serve up knowledge [based on correlated, multiple sources], and/or take [acceptable] actions.
    1. E.g. FinTech – “Our schedule is open tonight, and there are great seats available, Section N, Seat A for ABC dollars on Stubhub.  Shall I make the purchase?”
      1. Partner with vendors to drive FinTech business rules.
  4. Take ‘advantage’ of more knowledge sources, such as the applications we use that collect our data.  Use multiple knowledge sources in concert, enabling the AI to correlate data and propose ‘complex’ rules of interaction.

Our AI ‘Life Partners’ may grow in knowledge and mature the relationship between man and machine. Incorporating derived rules leveraging machine learning, without the input of a human expert, will come with both risk and reward.

Apple iOS Email: Boldly Building an AI Rules Engine

When selecting the ‘flag’ option on an email, one of the menu options shown is ‘Notify Me…’. When anyone replies to that email thread, I am notified.

This Apple iOS email feature, ‘Notify Me…’, seems like a toe dip into an AI email rules engine, with a single condition and no customization. Is a full-blown engine, akin to Outlook’s, in the Apple product roadmap? Has this feature been ‘out there’ for a while, and I just missed it?

Regardless, a more powerful, robust AI rules engine that keeps the iOS design simple and elegant could enhance the business-savvy user’s experience.
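For illustration, the ‘Notify Me…’ behavior generalizes naturally to condition/action pairs. This is a speculative sketch, not Apple’s implementation; the rule set and message fields are invented.

```python
# Minimal sketch of the kind of email rules engine hinted at by
# 'Notify Me...': condition/action pairs evaluated per incoming message.
FLAGGED_THREADS = {"thread-42"}

RULES = [
    {"when": lambda m: m["thread_id"] in FLAGGED_THREADS,
     "then": lambda m: notify(f"New reply in: {m['subject']}")},
    {"when": lambda m: m["sender"].endswith("@example-insurer.com"),
     "then": lambda m: move_to_folder(m, "Claims")},
]

def notify(text): print("NOTIFY:", text)
def move_to_folder(m, folder): print("MOVE:", m["subject"], "->", folder)

def on_message(message: dict):
    for rule in RULES:
        if rule["when"](message):
            rule["then"](message)

on_message({"thread_id": "thread-42", "subject": "Q3 budget", "sender": "a@b.com"})
```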

Notify Me Feature

Entertainment Portals: Streaming VOD and Live Broadcasts, Games, News

Netflix is a subscription-based film and television program rental service that offers media to subscribers via Internet streaming.

Amazon Instant Video is an Internet video-on-demand service. It offers television shows and films for rental or purchase, with selected titles offered free to customers with an Amazon Prime subscription.

These are bland definitions of what are coalescing into entertainment portals, encompassing multiple media types:

  • Games
  • Movies
  • Music
  • Photos
  • News
  • Social [Platform Integration]
  • Television
  • YouTube

Entertainment Portals:

All or some of the above media types, licensed for distribution,  are served through one or more portals.

Licensing content to be offered across several platforms requires a robust DAM. Digital asset management (DAM) consists of the management tasks and decisions surrounding the ingestion, annotation, cataloguing, storage, retrieval, and distribution of digital assets. DAM products and processes look like they will continue to bloom as distribution models are ‘experimented’ with by providers such as:

  • Amazon [Instant]
  • Apple ecosphere
  • AOL
  • Cablevision – Optimum
  • Facebook [social]
  • G+ [social]
  • MSN
  • Netflix
  • ReMake – a fictitious entertainment portal
    • the project team iterates through user design input and remakes the UI [and workflow] bi-weekly based on consumer feedback
  • Twitter [social]
  • Verizon FiOS
  • Yahoo

Segmented portals, containing one or two media types:

  • Music and music games, e.g. “name that tune”

Industry Standards for Interfaces to/from Entertainment Portals:

  • Search Catalog [by …]
    • The API returns a ‘streamable’/playable URL for a VOD or broadcast feed.
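A minimal sketch of what such a standard cross-portal interface might look like. The class names and URL are hypothetical placeholders for real portals like Netflix or Amazon; the query string is left unencoded for brevity.

```python
# Hypothetical sketch of a cross-portal "Search Catalog" interface: every
# portal implements the same call and returns a playable URL per title.
from dataclasses import dataclass

@dataclass
class CatalogResult:
    title: str
    portal: str
    stream_url: str      # 'streamable'/playable URL for a VOD or broadcast feed

class PortalCatalog:
    def search(self, query: str) -> list[CatalogResult]:
        raise NotImplementedError

class ExamplePortal(PortalCatalog):     # stand-in for Netflix, Amazon, etc.
    def search(self, query: str) -> list[CatalogResult]:
        return [CatalogResult(query, "ExamplePortal",
                              f"https://portal.example/play?title={query}")]

for result in ExamplePortal().search("Sympathy for the Devil"):
    print(result.portal, result.stream_url)
```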

Companies Turn Toward “Data Sifters” & “Data Banks” to Commoditize on ‘Smart Object’ Data

Anyone who is anti “Big Brother”: this may not be the article for you; in fact, skip it. 🙂

In the not-so-distant future, “Data Sifter” companies consisting of Subject Matter Experts (SMEs) across all verticals may process your data feeds collected from ‘smart objects’. Consumers will be encouraged to submit their smart data to ‘data sifters’, who will offer incentives such as a reduction in insurance premiums.

Everything from activity trackers and home automation to vehicular automation data may be captured and aggregated. The data collected can then be sliced and diced to provide macro and micro views of the information. On the abstract, or macro, level the information may allow for demographic, statistical correlations, which may contribute to corporate strategy.

On a granular view, the data will provide “data sifters” the opportunity to sift through ‘smart’ object data to perform analyses and correlations that lead to actionable information.

Is it secure? Do you care if a hacker steals your weight loss information? In fact, you might feel more nervous if only the intended parties are allowed to collect the information. Collected ‘Smart Object’ data enables SMEs to correlate the data into:
  • Canned, ‘intelligent’ reports targeted to specific subject matter, or across silos of data
  • ‘Universes’ (i.e. Business Objects) of data that may be ‘mined’ by consumer-approved, ‘trusted’ third-party companies, e.g. your insurance companies
  • Actionable information based on AI subject matter rules engines

Consumers may have the option of sharing their personal data with specific companies by proxy, through a ‘data bank’, down to the individual data point collected. The sharing of personal data or information:

  1. may lower [or raise] your insurance premiums
  2. may provide discounts on preventive health care products and services, e.g. vitamins to yoga classes
  3. may enable targeted, affordable medicine, which may redirect the doctor’s choice to an alternate; the MD would be contacted to validate the alternate.
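A toy sketch of per-data-point consent in such a ‘data bank’: sharing is granted by proxy, per company, down to the individual data point. Company names and data points are invented.

```python
# Hypothetical sketch of a 'data bank' consent model.
CONSENTS = {
    ("acme-insurance", "heart_rate"): True,
    ("acme-insurance", "location"): False,
    ("city-research-lab", "steps"): True,
}

def share(company: str, data_point: str, value):
    """Release a data point only if the consumer has granted consent."""
    if CONSENTS.get((company, data_point), False):
        return {"company": company, data_point: value}
    return None   # consent withheld: the data bank does not release it

print(share("acme-insurance", "heart_rate", 62))   # released
print(share("acme-insurance", "location", "NYC"))  # withheld -> None
```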

The ‘smart object’ data collected may be harnessed by thousands of affinity groups to provide very discrete products and services.  The power of this collected ‘smart data’ and correlated information stretches beyond any consumer relationship experienced today.

At some point, health insurance companies may require you to wear a tracker to increase or slash premiums.  Auto Insurance companies may offer discounts for access to car smart data to make sure suggested maintenance guidelines for service are met.

You may approve your “data bank” to give access to specific soliciting government agencies or private research firms looking to analyze data for their studies. You may qualify based on the demographic, abstracted data points collected. Incentives provided may be tax credits or paid studies.

‘Smart Object’ Adoption and Affordability

If ‘Smart Objects’, Internet of Things (IoT) enabled, are cost-prohibitive, here are a few ways to increase their adoption:
  1. [US] tax coupons that enable the buyer to save money at the time of purchase. For example, a 100 USD discount applied at the time of purchase of an activity tracker, with the stipulation that you agree, at some point, to participate in a study.
  2. Government subsidies: offsetting the cost of ‘Smart Objects’ through annual tax deductions. Today, tax incentives may allow you to purchase a ‘Smart Object’ if the cost is an itemized medical tax deduction, such as an activity tracker that monitors your heart rate, if your medical condition requires it.
  3. Auto, life, homeowners, and health policy holders may qualify for additional insurance deductions.
  4. Affinity-branded ‘Smart Objects’: for example, the American Lung Association may sell a logo-branded activity tracker, and people may sponsor the owner of the tracking pedometer to raise funds for the cause.
The World Bank has a repository of data, World DataBank, which seems to store a large depth of information:

“World Bank Open Data: free and open access to data about development in countries around the globe.”
Here is the article that inspired me to write this article:
Smart Object Data Ecosystem

CES 2013 Show: Huawei, and iPhone 5S/U, or U for Unsatisfied

At the show they had a red phone under a glass case, and it looked top secret. At first glance, when you approached the booth, the sales team seemed on the defensive about their product and their placement in the marketplace, in the same arena as Samsung. As the conversation progressed, a more relaxed approach began to take place, and they even took their phone out of the glass case for me. I must say that they seem to be trying to bring their A game with a quad-core processor; albeit at 1.5 GHz, it was still an impressive device, and the specs may be found here.

We had a candid conversation, and I said that to play in the global markets, they need to break the 4 GB barrier. Apple has now stunned the mobile community with the 64-bit processor, getting ready to raise the roof on memory. However, as anyone who understands addressing knows, 64-bit addresses each take up more memory than their 32-bit counterparts, i.e. it takes more memory to run each application; but if you have raised the amount of memory on the device, no problem. Unfortunately, the Apple iPhone 5S has not raised the memory, but the implication is that the 64-bit processor is the first step, getting their OS and developers ready to manage more memory. I think it might have been in the cards, but it’s too late.

As a consequence, fewer applications may run on a 64-bit addressable processor without running out of memory in multi-threaded mode. Here is an example of registers under the Windows chip architecture; not exactly a like-for-like comparison, but the analogy is similar.

Win 32 v. 64
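To make the addressing overhead concrete, this short snippet reports the pointer size of whichever build it runs under and works out the footprint difference for a pointer-heavy structure.

```python
# Quick illustration of the 32- vs 64-bit addressing overhead discussed
# above: a pointer doubles from 4 to 8 bytes, so pointer-heavy programs
# grow their memory footprint on 64-bit builds.
import ctypes

pointer_bytes = ctypes.sizeof(ctypes.c_void_p)   # 4 on 32-bit, 8 on 64-bit
print(f"{pointer_bytes * 8}-bit build: each pointer costs {pointer_bytes} bytes")

# For a structure holding a million pointers, that difference alone is:
million_ptrs_mb_32 = 1_000_000 * 4 / 1e6   # ~4 MB
million_ptrs_mb_64 = 1_000_000 * 8 / 1e6   # ~8 MB
print(f"1M pointers: {million_ptrs_mb_32:.0f} MB vs {million_ptrs_mb_64:.0f} MB")
```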


As a side note, back to the Asian market: here is an infographic which walks through the opportunities in those consumer markets. Very interesting.

APJE Smartphone Vendor


Report: Apple testing 64-bit iPhone processor with a “motion tracking” chip — Tech News and Analysis

Report: Apple testing 64-bit iPhone processor with a “motion tracking” chip — Tech News and Analysis.

The 64-bit chip would certainly make the mobile race interesting. A like-for-like comparison of processing speed, with a similar app on the Android, Apple, and Microsoft mobile operating systems, would be ideal, with the results published.


Maybe It’s Just My Gym? Where is my Digital Workout Assistant?

I just got back from my gym, and there were brand new machines. So I was wondering: where is my digital workout assistant on the machine?

An RF tag on each weight communicates with another RF tag on the machine with a UPC reader, and WiFi connects the machine to the main hub so the gym can track your progress.

I envisioned a person walking up to a machine and passing their gym ID tag across the UPC reader, which registers them and ‘checks them in’ on the screen. Then the person working out sets their weight capacity and starts their reps. Once the reps start, the RF tag reads the maximum weight lifted and passes that packet of data to the exercise machine’s main system, which stores all of their data until they ‘check out’ with their gym ID at the UPC reader. If the person forgets to ‘check out’, either at the next ‘check in’ or after a timeout of inactivity, the data is sent to the main gymnasium hub via WiFi, and optionally to your smartphone via an optional NFC component on the workout machine, or through the main terminal once they are done with the activity and ‘check out’ from the gym.
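A minimal sketch of that check-in/record/check-out flow, assuming the machine buffers RF tag reads per member and flushes them to the gym hub on check-out (a background sweep could do the same for sessions idle past the timeout). All names are illustrative.

```python
# Hypothetical sketch of the gym check-in/check-out flow described above.
import time

SESSIONS = {}            # member_id -> list of (timestamp, machine, weight, reps)
TIMEOUT_SECONDS = 15 * 60  # inactivity timeout before an automatic flush

def check_in(member_id: str):
    SESSIONS[member_id] = []

def record_set(member_id: str, machine: str, weight_lbs: int, reps: int):
    SESSIONS[member_id].append((time.time(), machine, weight_lbs, reps))

def check_out(member_id: str):
    sets = SESSIONS.pop(member_id, [])
    send_to_hub(member_id, sets)   # WiFi upload so the gym can track progress

def send_to_hub(member_id, sets):
    print(f"uploading {len(sets)} sets for {member_id}")

check_in("member-001")
record_set("member-001", "leg press", 180, 12)
check_out("member-001")
```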

This will allow you to track your workout and get a pictorial view of how you are progressing and how often; the gym may also recommend, dynamically, how to adjust your plan based on your targeted exercise goals. But maybe, just maybe, it’s only my gym.


Bubble Head Bob Takes Dictation: Human Factors to Voice Recognition

As I continue to ponder why I am so averse to speech-to-text voice recognition, beyond the issues I have with its inconsistent accuracy, needing to speak slowly at times, and having to articulate every word crystal clearly without deviation, I still am having trouble relating to a machine. Hold the jokes from the peanut gallery. Yes, even on my Android phone, I find it difficult to talk to an app with a microphone on a screen, or to an image of a piece of paper on a technical device. Frankly, if the technical issues went away, maybe, just maybe, I might talk to my phone. I may be xenophobic toward androids, or a computer or robot taking dictation; I’ll have to add that to the list to talk to my therapist about. Does anyone know of an application with a bobble head of various faces that will ask the questions, take dictation, and repeat your last phrase to confirm, such as “Did you say ‘that’ or ‘fat’? I don’t understand the context of the sentence.”

Add a familiar, animated face to whom you would speak, as well as a periodic word and sentence validation feature, to voice recognition and speech-to-text dictation applications, and there might be more acceptance of these applications. Too soon, too quick?
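A toy sketch of that validation loop: when the recognizer’s confidence in a word is low, the animated face asks the speaker to confirm between the top alternatives before accepting the transcription. The threshold and interface are invented for illustration.

```python
# Hypothetical sketch: confirm low-confidence words before transcribing them.
def confirm_low_confidence(hypotheses: list[tuple[str, float]], threshold=0.85):
    """hypotheses: (word, confidence) pairs from a speech-to-text engine."""
    best_word, best_score = hypotheses[0]
    if best_score >= threshold:
        return best_word
    runner_up = hypotheses[1][0] if len(hypotheses) > 1 else None
    prompt = f"Did you say '{best_word}'" + (f" or '{runner_up}'?" if runner_up else "?")
    answer = input(prompt + " ")   # the bobble-head face would speak this aloud
    return answer.strip() or best_word

# e.g. the engine hears 'fat' (0.55) vs 'that' (0.41): ask before transcribing.
# confirm_low_confidence([("fat", 0.55), ("that", 0.41)])
```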