
Cryptocurrency + Quantum Computing := Encryption Fail, The Next Y2K

Over the last several months I’ve been researching Quantum Computing (QC) and trying to determine how far we’ve come from theory to practical implementation. It seems we are in the early commercial prototype phase.

Practical Application of QC

The most discussed application of Quantum Computing is cracking encryption. Encrypted data that might take months or years to decipher with our current supercomputing capabilities may take hours or minutes once the full potential of Quantum Computing is realized.
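As a rough, back-of-the-envelope illustration (my own numbers, using textbook asymptotics rather than any engineering estimate): the best known classical factoring attack, the General Number Field Sieve (GNFS), is sub-exponential in the key size, while Shor’s algorithm is polynomial. A short Python sketch of the gap:

    import math

    def gnfs_ops(bits):
        """Approximate GNFS cost: exp((64/9)^(1/3) (ln n)^(1/3) (ln ln n)^(2/3)), n ~ 2^bits."""
        ln_n = bits * math.log(2)
        return math.exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

    def shor_gates(bits):
        """Rough Shor's algorithm scaling: O((log n)^3) = O(bits^3) gates, constants ignored."""
        return bits ** 3

    for bits in (1024, 2048, 4096):
        print(f"{bits}-bit RSA: GNFS ~10^{math.log10(gnfs_ops(bits)):.0f} ops, "
              f"Shor ~10^{math.log10(shor_gates(bits)):.1f} gates")

The comparison ignores constants and quantum error-correction overhead, but the shape of the curves is the whole story: one grows sub-exponentially, the other polynomially.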

Bitcoin and Ethereum Go Boom

One source paraphrased: once quantum computing is actualized, encryption will advance in lockstep, and a new cryptology paradigm will be implemented to secure our data. This kind of optimism has no place in the “Real World”, and most certainly not in the world financial markets. Are there hedge funds that rightfully hedge against the cryptocurrency / QC risk?

Where is the Skepticism?

Is anyone researching next steps in the evolution of cryptography/encryption, hedging the risk that marketplace encryption won’t be ready? The lack of fervor in the development of “Quantum Computing Ready” encryption has me speechless. Government organizations like DARPA, and programs like SBIR, should already be at the conceptual level, if not the prototype phase, with next-generation cryptology.

Too Many Secrets

“Sneakers”, a classic fictional action movie with a fantastic cast, has a plot in which a mathematician secretly develops the ultimate code-breaking device, and everyone is out to possess it. An excellent movie, soon to be non-fiction..?


People Turn Toward “Data Banks” to Monetize Their Purchase and User Behavior Profiles

If you are anti “Big Brother”, this may not be the article for you; in fact, skip it. 🙂

The Pendulum Swings Away from GDPR

In the not-so-distant future, “Data Bank” companies consisting of Subject Matter Experts (SMEs) across all verticals may process the data feeds collected from your purchase and user behavior profiles. Consumers will be encouraged to submit their data profiles to a Data Bank, which will offer incentives ranging from reduced insurance premiums to cash-back rewards.

Everything from activity trackers and home automation to vehicular automation data may be captured and aggregated. The data collected can then be sliced and diced to provide macro and micro views of the information. At the abstract, macro level, the information may allow for demographic and statistical correlations that contribute to corporate strategy. At the granular level, the data gives “data banks” the opportunity to sift through the data to perform analysis and correlations that lead to actionable information.

Is it secure? Do you care if a hacker steals your weight-loss information? It may not be an issue if the collected Purchase and User Behavior Profiles are aggregated into a blockchain ledger. Data Curators and Aggregators work with SMEs to correlate the data into:

  • Canned, ‘intelligent’ reports targeted at a specific subject matter, or spanning silos of data types
  • ‘Universes’ (i.e. Business Objects) of data that may be ‘mined’ by consumer-approved, ‘trusted’ third-party companies, e.g. your insurance company
  • Actionable information based on AI subject-matter rules engines, with the rules kept transparent to the consumer

“Data Banks” may be required to show customers who agreed to sell their data examples of the specific rows of data sold on a “Data Market”.

Consumers may have the option of sharing their personal data with specific companies by proxy, through a ‘data bank’, granular down to the individual data point collected. Sharing of Purchase and User Behavior Profiles:

  1. may lower [or raise] your insurance premiums
  2. may provide discounts on preventive health care products and services, e.g. vitamins to yoga classes
  3. may enable targeted, affordable medicine, which may redirect the doctor’s choice to an alternate; the MD would be contacted to validate the alternate

The curated data collected may be harnessed by thousands of affinity groups to offer very discrete products and services. Correlated Purchase and User Behavior Profile information stretches beyond any consumer relationship experienced today.

At some point, health insurance companies may require you to wear a tracker to raise or slash premiums. Auto insurance companies may offer discounts for access to car smart data, to make sure suggested maintenance guidelines are met.

You may approve your “data bank” to give access to specific soliciting government agencies or private firms looking to analyze data for their studies. You may qualify based on the demographic, abstracted data points collected; incentives may include tax credits or paid studies.

Purchase and User Behavior Profiles:  Adoption and Affordability

If ‘Data Banks’ are allowed to collect Internet of Things (IoT) device profiles, but the devices themselves are cost prohibitive, here are a few ways to increase their adoption:

  1. [US] tax coupons that let the buyer save money at the time of purchase. For example, a 100 USD discount applied at the time of purchase of an Activity Tracker, with the stipulation that you agree, at some point, to participate in a study.
  2. Government subsidies covering the cost of aggregating and archiving Purchase and Behavioral profiles, through annual tax deductions. Today, tax incentives may already allow you to purchase an IoT device if the cost qualifies as an itemized medical tax deduction, such as an Activity Tracker that monitors your heart rate, if your medical condition requires it.
  3. Additional insurance deductions for Auto, Life, Homeowners, and Health policyholders.
  4. Affinity-branded IoT devices; the American Lung Association, for example, may sell a logo-branded Activity Tracker, and people may sponsor the owner of the tracker to raise funds for the cause.

The World Bank has a repository of data, World DataBank, which stores a large depth of information:

“World Bank Open Data: free and open access to data about development in countries around the globe.”

Here is the article that inspired this post:

http://www.marketwatch.com/story/you-might-be-wearing-a-health-tracker-at-work-one-day-2015-03-11

Privacy and Data Protection Create Data Markets

Initiatives such as the General Data Protection Regulation (GDPR) and other privacy initiatives that seek to restrict access to your data to you as the “owner” create, as a byproduct, opportunities for you to sell your data.

Blockchain: Purchase and User Behavior Profiles

As your “vault”, “Data Banks” will collect and maintain your two primary datasets:

  1. As a consumer of goods and services, a Purchase Profile is established and evolves over time. Online purchases are automatically collected, curated, appended with metadata, and stored in a data vault [blockchain]. “Offline” purchases may, at some point, become hybrid [on/off]line purchases as traditional monetary exchanges advance, and would follow the online transaction model.
  2. User Behavior (UB) profiles, both on and offline, will be collected and stored for analytical purposes. A user behavior “session” is a use case of activity where YOU are the prime actor. Each session creates a single UB transaction, also stored in the “Data Vault”. UB use cases may not lead to any purchases.

Not all Purchase and User Behavior profiles are created equal. E.g., one person’s profile may show a higher monthly spend than another’s. The consumer who purchases more may be entitled to more benefits.

These datasets, wholly owned by the consumer, are safely stored, propagated, and made immutable with a solution such as a blockchain ledger.
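As a minimal sketch of that “data vault” idea (my own toy illustration, a hash chain rather than a full distributed blockchain): each profile record is bound to its predecessor by a hash, so any after-the-fact edit is detectable.

    import hashlib, json, time

    def make_block(record, prev_hash):
        """Wrap one purchase / behavior record in a block tied to its predecessor."""
        block = {"time": time.time(), "record": record, "prev": prev_hash}
        payload = json.dumps(block, sort_keys=True).encode()
        block["hash"] = hashlib.sha256(payload).hexdigest()
        return block

    def verify(chain):
        """Recompute every hash; an edited record or broken link fails verification."""
        for i, block in enumerate(chain):
            body = {k: v for k, v in block.items() if k != "hash"}
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != block["hash"]:
                return False
            if i and block["prev"] != chain[i - 1]["hash"]:
                return False
        return True

    chain = [make_block({"item": "activity tracker", "usd": 99}, "0" * 64)]
    chain.append(make_block({"item": "vitamins", "usd": 20}, chain[-1]["hash"]))
    print(verify(chain))  # True; tampering with either record makes this False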

AI Email Workflows Eliminate the Need for Manual Email Responses

When I read the article “How to use Gmail templates to answer emails faster”, I thought: wow, what a 1990s throwback!

Microsoft Outlook has had an AI email rules engine for years and years, from a simple wizard to an advanced rule-construction user interface. Oh, the things you can do. Based on a wide array of ‘out of the box’ identifiers to highly customizable conditions, MS Outlook may take action on the client side of the email transaction or on the server side. What types of actions? All kinds, ranging from ‘out of the box’ to highly customized. And yes, Outlook (in conjunction with MS Exchange) may be identified as a digital asset management (DAM) tool.

Email comes into an inbox and, based on “from”, “subject”, the contents of the email, and a long list of other attributes, MS Outlook [optionally with MS Exchange] may, for example, push the email and any attached content to a server folder, perhaps to Amazon AWS S3, or simply to an MS Exchange folder.

Then, optionally, a ‘backend’ workflow may be triggered, for example with Microsoft Flow. Where you go from there has almost infinite potential.
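The pattern underneath all of this is simple. A minimal sketch of an email rules engine (generic Python, not Outlook’s actual rules API; the folder names and helper actions are invented):

    def save_attachments(msg, folder):
        # Placeholder action: in practice, push to Amazon S3 or an Exchange folder.
        print(f"saving attachments of {msg['subject']!r} to {folder}")

    def move(msg, folder):
        print(f"moving {msg['subject']!r} to {folder}")

    # Each rule pairs a condition over message attributes with an action.
    RULES = [
        (lambda m: "invoice" in m["subject"].lower(),
         lambda m: save_attachments(m, "Invoices")),
        (lambda m: m["from"].endswith("@news.example.com"),
         lambda m: move(m, "Promotions")),
    ]

    def process(message):
        for condition, action in RULES:
            if condition(message):
                action(message)
                break  # stop at first match (Outlook offers this via "stop processing more rules")

    process({"from": "billing@vendor.example.com", "subject": "Invoice #42"})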

Analogously, Google Gmail’s new Inbox UI categorizes email based on ‘some set’ of rules. That is not new to the industry, but now Google has the capability. For example, “Group By” in Google’s new Inbox could be a huge timesaver: enabling the user to perform actions across predefined email categories, such as deleting all “promotional” emails, could be extremely successful. However, I’ve not yet seen the AI rules that identify particular emails as “promotional” versus “financial”. Google is implying these ‘out of the box’ email categories, and the way users interact and take action, are extremely similar per category.

Google may continue to follow in the footsteps of Microsoft, possibly adding the initiation of workflows based on predetermined criteria. Maybe Google will expose its AI (Email) Rules Engine for users to customize their workflows, just as Microsoft did so many years ago.

Although Microsoft’s Outlook (and Exchange) may have been seen as a Digital Asset Management (DAM) tool in the past, the user’s email Inbox folder size could have been identified as one of the few inhibitors. The workaround, of course, is using service accounts with vastly higher folder quotas / sizes.

My opinions do not reflect those of my employer.

Hey Siri, Ready for an Antitrust Lawsuit Against Apple? Guess Who’s Suing.

The AI personal assistant with the most usage, spanning connectivity across all smart devices, will be the anchor toward which users gravitate to control their ‘automated’ lives. An Amazon commercial just aired depicting a dad with his daughter; the daughter was crying about her boyfriend, who happened to be in the front yard yelling for her. The dad says to Amazon’s Alexa, “sprinklers on”, and yes, the boyfriend got soaked.

What is so special about the top spot for the AI Personal Assistant? Controlling the ‘funnel’ through which all information is accessed and actions are taken means the intelligent ability to:

  • Serve up content / information, which could then be mixed with advertisements or ‘intelligent suggestions’ based on historical data, i.e. machine learning.
  • Take proactive, suggestive actions that may lead to sales of goods and services, e.g. the AI Personal Assistant flags potential ‘buys’ from eBay based on user profiles.

Three main sources of AI Personal Assistant value add:

  • A portal to the “outside” world. E.g., if I need information, I wouldn’t “surf the web”; I would ask Cortana to go “research” XYZ. In the Business Intelligence / data warehousing space, a business analyst may need to run a few queries to get the information they want; by the same token, Microsoft Cortana may come back to you several times to ask “for your guidance”.
  • An abstraction layer between the user and their apps. The user need not ‘lift a finger’ in any app outside the Personal Assistant, with noted exceptions like playing a game for you.
  • User Profiles derived from the first two points, i.e. data collection on everything from spending habits to other day-to-day rituals.

Proactive and chatty assistants may win “Assistant of Choice” on all platforms. Being proactive means collecting data more often than when it’s just you asking questions ad hoc. Proactive AI Personal Assistants that are geo-aware may make “timely, appropriate interruptions” (notifications) based on time and location. E.g., “Don’t forget milk,” says Siri as you’re passing the grocery store. Around the time I leave work, Google Maps tells me if I have traffic and my ETA.

It’s possible for a [non-native] AI Personal Assistant to become the ‘abstract’ layer on top of ANY mobile OS (iOS, Android), the funnel through which all actions / requests are triggered.

Microsoft Cortana has an iOS app and widget, which is wrapped around the OS. Tighter integration may be possible but is not allowed by iOS, the iPhone, and Apple. Note: Google’s Allo does not provide an iOS widget at the time of this writing.

A potential antitrust violation by smartphone maker Apple: iOS should allow for the ‘substitution’ of a competitive AI Personal Assistant, triggered in the same manner as the native Siri, via the “press and hold home button” capability that launches the default packaged iOS assistant. This is reminiscent of the Microsoft IE browser / OS antitrust violations of the past.

Holding the iPhone Home button brings up Siri. There should be an OS setting to swap out which assistant is used as the mobile OS default. Today, the iPhone / iPad iOS only supports “Siri” under the Settings menu.

ANY AI personal assistant should be allowed to replace the default OS personal assistant, from Amazon’s Alexa and Microsoft’s Cortana to any startup with the expertise and resources needed to build and deploy a personal assistant solution. Has Apple taken steps to tightly couple Siri with its iOS?

AI Personal Assistant ‘Wish’ List:

  • Interactive, Voice-Menu-Driven Dialog; the AI Personal Assistant should know what installed [mobile] apps exist, as well as their actionable, hierarchical taxonomy of features / functions. The Assistant should, for example, ask which application the user wants to use, and if the user doesn’t know, the assistant should verbally / visually list the apps. After the user selects the app, the Assistant should then provide a list of function choices for that application, e.g. “Press 1 for ‘Play Song’”.
    • The interactive voice menu should also provide a level of abstraction when available; e.g. the user need not select the app and may just say “Create Reminder”. There may be several applications on the smartphone that do the same thing, such as note taking and reminders. In the OS Settings, under the soon-to-be-new menu ‘AI Personal Assistant’, the installed applications compatible with this “AI Personal Assistant” service layer should be listed, grouped by categories defined by the mobile OS.
  • Capability to interact with IoT using user defined workflows.  Hardware and software may exist in the Cloud.
  • Ever tighter integration with native as well as 3rd party apps, e.g. Google Allo and Google Keep.

Apple could already be making these changes as a natural course of product evolution. Even if the ‘big boys’ don’t want to stir up a hornet’s nest, all you need is VC money and a few good programmers to pick a fight with Apple.

AI Personal Assistants Need Remedial Guidance for Their Users

Providing Intelligent ‘Code’ Completion

At this stage in the growth and maturity of the AI Personal Assistant as an application platform, there are many commands and options that common users cannot formulate due to a lack of knowledge and experience. Using natural language to formulate questions has gotten better over the years, but assistance / guidance in formulating requests would maximize intent / goal accuracy.

A key usability feature of many integrated development environments (IDEs) is their capability to use “Intelligent Code Completion” to guide programmers toward correct, functional syntax. This feature also unburdens the programmer from looking up the syntax of each command reference, saving significant time. As usage of the AI Personal Assistant grows, and its capabilities along with it, the number of commands and parameters required to use the AI Personal Assistant will also increase.

AI Leveraging Intelligent Command Completion

For each command parameter [level\tree], a drop-down list may appear giving users a set of options for the next parameter. A delimiter such as a period (.) indicates to the AI parser that another set of command options must be presented to the person entering the command. These options are typically drop-down lists concatenated to the right of the formulated command. Vocally, parent / child commands and parameters may be supplied in a similar fashion.
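A minimal sketch of that guidance (the command taxonomy here is invented for illustration): store the commands as a tree and, at each period, offer the children of the node typed so far.

    # Hypothetical command taxonomy; each period (.) drills one level deeper.
    TAXONOMY = {
        "Order": {"Food": {"Focacceria": {"List123": {}, "Large Pizza": {}}}},
        "Play": {"Song": {}, "Playlist": {}},
    }

    def complete(partial):
        """Return the drop-down options to show after the last period,
        e.g. 'Order.Food.' -> the known restaurants."""
        node = TAXONOMY
        for token in [t for t in partial.split(".") if t]:
            node = node.get(token, {})
        return sorted(node)

    print(complete(""))             # ['Order', 'Play']
    print(complete("Order.Food."))  # ['Focacceria']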

AI Personal Assistant Language Syntax

Adding another AI parser on top of the existing syntax parser may allow commands like these to be executed:

  • Abstraction (e.g. no application specified)
    • Order.Food.Focacceria.List123
    • Order.Food.FavoriteItalianRestaurant.FavoriteLunchSpecial
  • Application Parser
    • Seamless.Order.Food.Focacceria.Large Pizza

These AI command examples use a hierarchy of commands and parameters to perform the function. One of the above commands leverages one of my contacts and a ‘List123’ object. The ‘List123’ parameter may be a ‘note’ on my smartphone that contains a list of food we would like to order. The command may place the order through my contact’s email address or fax number, or by calling the business’s main number and using AI text-to-speech functionality.

All personal data, such as Favorite Italian Restaurant and Favorite Lunch Special, could be placed in the AI Personal Assistant ‘Settings’. A group of settings may be listed as key-value pairs, which may be considered shorthand for conversations involving the AI Assistant.
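Continuing the sketch above (all names hypothetical), the parser could simply expand those Settings key-value pairs before dispatching, which is all the ‘shorthand’ amounts to:

    # Hypothetical user Settings: key-value shorthand usable inside commands.
    SETTINGS = {
        "FavoriteItalianRestaurant": "Focacceria",
        "FavoriteLunchSpecial": "Large Pizza",
    }

    def parse(command):
        """Split a dotted command into tokens, expanding any Settings alias."""
        return [SETTINGS.get(token, token) for token in command.split(".")]

    print(parse("Order.Food.FavoriteItalianRestaurant.FavoriteLunchSpecial"))
    # ['Order', 'Food', 'Focacceria', 'Large Pizza']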

A majority of users are most likely unaware of many of the options available within the AI Personal Assistant command structure. Intelligent command [code] completion empowers users with visibility into the available commands and parameters.

For those without a programming background, Intelligent “Command” Completion is loosely similar to the autocomplete in Google’s Search text box, predicting possible choices as the user types. Google’s autocomplete, however, requires some sense of the end-result command; with the guidance provided by an AI Personal Assistant, the user is led to their desired command. Intelligent code completion typically displays all possible commands in a drop-down list next to the constructor period (.); the user may have no knowledge of the next parameter without the drop-down choice list. An additional feature enables the AI Personal Assistant to show a brief ‘help text’ popup when hovering over one of the commands \ parameters.

Note: Microsoft’s Cortana AI assistant provides a text box in addition to speech input, so another syntax parser could be enabled through the existing user interface. Siri, however, seems to accept only voice input, with no text input.

Is Siri handling the iOS ‘Global Search’ requests ‘behind the scenes’? If so, the textual parsing, i.e. the period (.) separator, would work. Siri does provide some cursory guidance on what information it can supply: “Some things you can ask me:”

With only voice recognition input, use the voice-driven menu navigation and selection approach described below.

Voice-Driven Menu Navigation and Selection

The current AI personal assistant abstraction layer may be too abstract for some users. Consider the difference between these two commands:

  • Play The Rolling Stones song Sympathy for the Devil.
    • Has the benefit of natural language, and can handle simple tasks, like “Call Mom”
    • However, there may be many commands that can be performed by a multitude of installed platform applications.

Versus

  • Spotify.Song.Sympathy for the Devil
    • Enables the user to select the specific application they would like a task to be performed by.
  • Spotify Help
    • A voice-driven menu will enable users to understand the capabilities of the AI Assistant. Through an interactive voice menu, users may ‘drill down’ to the action they want performed, e.g. “Press # or say XYZ” (see the sketch after this list).
    • Optionally, depending upon the application, the voice menu may have a customer service feature and forward the interaction to the proper [calling or chat] queue.
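A sketch of that drill-down (typed input stands in for speech here, and the menu data is invented):

    # Invented app/function menu; a voice UI would read these options aloud.
    MENU = {
        "Spotify": {"Play Song": None, "Play Playlist": None},
        "Seamless": {"Order Food": None},
    }

    def drill_down(menu, path=()):
        """Present numbered options level by level until a leaf action is chosen."""
        options = sorted(menu)
        for i, name in enumerate(options, 1):
            print(f"Press {i} or say '{name}'")
        choice = options[int(input("> ")) - 1]
        node = menu[choice]
        return path + (choice,) if node is None else drill_down(node, path + (choice,))

    # drill_down(MENU) might return ('Spotify', 'Play Song')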

Update – 9/11/16

  • I just installed Microsoft Cortana for iOS, and at a glance the application has a leg up on the competition.
    • The Help menu gives a fair number of examples by category. Much better guidance than iOS / Siri.
    • The ability to enter\type or speak commands provides the needed flexibility for user input.
      • Some people are uncomfortable ‘talking’ to their smartphones; it feels awkward talking to a machine.
      • The ability to type commands may alleviate voice-entry errors in speech-to-text translation.
      • Expanding the AI syntax parser to include ‘programmatic’ commands gives the user a more granular command set, e.g. “Intelligent Command Completion”. As the capabilities of the platform grow, it will be a challenge to interface with and maximize AI Personal Assistant capabilities.

Microsoft Flow – Platform Review

It looks like Microsoft has created a generic, product-independent workflow platform.

Microsoft has software solutions with built-in rules engines, like the [email] rules engine in MS Outlook. SharePoint has a workflow solution within the SharePoint platform, typically governing the content flowing through its system.

Microsoft Flow is a different animal. It seems Microsoft has built a ‘generic’ rules engine for processing almost any event. The Flow product:

  1. Start using the product from one of two areas: a) “My Flows”, where I may view existing and create new [work]flows; b) “Activity”, which shows “Notifications” and “Failures”.
  2. Select “My Flows”, and the user may “Create [a workflow] from Blank” or “Browse Templates”. The existing templates were created by Microsoft and by third parties, implying a marketplace.
  3. Select “Create from Blank”, and the user gets a single drop-down list of events, a culmination of events across Internet products. The implication is that any product and event could be “made compatible” with MSFT Flow.
    1. The drop-down list of events has the format “Product – Event”. As the list of products and events grows, we should see at least two separate drop-down lists: one for products, and a sub-list of product-specific events.
    2. Several Example Events Include:
      1. “Dropbox – When a file is created”
      2. “Facebook – When there is a new post to my timeline”
      3. “Project Online – When a new task is created”
      4. “RSS – When a feed item is published”
      5. “Salesforce – When an object is created”
    3. The list of products, as well as their events, may need a business analyst to rationalize the use cases.
  4. Once an event is selected, event-specific details may be required, e.g. Twitter account details, or a OneDrive “watch” folder.
  5. Next, a condition may be added to the [work]flow; it may be specific to the event type, e.g. OneDrive file-type properties [contain] XYZ value. There is also an “advanced mode” using a conditional scripting language.
  6. There is “IF YES” and “IF NO” logic, which then allows the user to select one [or more] actions to perform.
    1. Several Action Examples Include:
      1. “Excel – Insert Rows”
      2. “FTP – Create File”
      3. “Google Drive – List files in folder”
      4. “Mail – Send email”
      5. “Push Notification – Send a push notification”
    2. Again, it seems like an eclectic bunch of products, actions, and events strung together to have a system to POC.
  7. The Templates list is a predefined set of workflows for anyone who does not want to start from scratch. The UI provides several ways to filter, list, and search through templates.
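Stripped of the UI, every Flow boils down to event → condition → action. A minimal sketch of that shape (the connector strings below are stand-ins, not the real Flow connectors):

    # One rule: an event source, a condition over the event, and two action lists.
    FLOW = {
        "event": "OneDrive - When a file is created",
        "condition": lambda e: e["name"].endswith(".txt"),  # "advanced mode" = a script here
        "if_yes": [lambda e: print(f"Mail - Send email about {e['name']}")],
        "if_no": [lambda e: print(f"ignoring {e['name']}")],
    }

    def run(flow, event):
        branch = "if_yes" if flow["condition"](event) else "if_no"
        for action in flow[branch]:
            action(event)

    run(FLOW, {"name": "notes.txt"})  # Mail - Send email about notes.txt
    run(FLOW, {"name": "photo.jpg"})  # ignoring photo.jpg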

Applicable to everyday life, from the individual home user and small business to the enterprise. At this stage the product seems beta at best, or more accurately, just past clickable prototype. I ran into several errors trying to go through basic use cases, i.e. adding rules.

Despite the “Preview” launch, Microsoft has shown us the power of [work]flow processing regardless of the service platform provider, e.g. Box, DropBox, Facebook, GitHub, Instagram, Salesforce, Twitter, Google, MailChimp, …

Microsoft may be the glue that combines service providers who expose their services to MSFT Flow functionality.

[Screenshot: Create from Blank – Select Condition]

[Screenshot: Create Rule from Template]

[Screenshot: Create from Blank Rule Building UI]

Update June 28th, 2016:

Opportunities for Event, Condition, Action Rules

  • Transcoding [cloud] Services
  • [IBM Watson] Cognitive APIs
    • e.g. Language Translation; e.g. Visual Recognition
  • WordPress – Create a Post
    • A new text file is dropped into a specific folder on Box, DropBox, etc. being ‘monitored’ by MSFT Flow [?]; additional code may be required from the user for ‘polling’ capabilities.
    • OR a new text file is attached and emailed to a specific email account folder ‘watched’ by MSFT Flow.
    • Event triggers – Automatic read of new text file
      • stylizing may occur if HTML coding used
    • Action – Post to a Blog
  • ‘ANY’ event occurs, and a custom message is sent using Skype to a single Skype account or a group of them.
    • On several ‘eligible’ events, such as “File Creation” in Box, the file (or a shared file URL) may be sent to the Skype account.
  • ‘ANY’ event occurs, and a custom mobile text message is sent to a single phone number or a group of them.
  • A “File Creation” event occurs, e.g. in Box; after passing a “Condition”, actions occur:
    • The IBM Watson Cognitive API’s Text to Speech runs, and the product of the action is placed in the same Box folder.
  • Action: using Microsoft Edge (powered by MSN), in the “My news feed” tab, enable an action to publish “Cards”, such as app notifications.

Challenges \ Opportunities \ Unknowns

  • Third-party companies’ existing, published [cloud; web service] APIs may not need any modification to integrate with Microsoft Flow; however, business approval may be required to use an API in this manner.
  • It is unclear whether Flow Templates need to be created by the product owner, e.g. Telestream, or by a knowledgeable third party, following the Android, iOS, and/or MSFT mobile app model.
  • It is unclear whether the MSFT Flow app will be licensed individually in the cloud, within the Office 365 cloud suite, or offered for Home and\or Business.

Applying Gmail Labels Across All Google Assets: Docs, Photos, Contacts + Dashboard, Portal View

Google applications contain [types of] assets, either created within the application or imported into it. In Gmail, you have objects, emails, and Gmail enables users to add metadata to an email in the form of tags, or “Labels”. Labeling emails is a very easy way to organize these assets. If you’re a bit more organized, you may even devise a logical taxonomy to classify your emails.

An email can also be put into a folder, which is completely different from labeling. An email may be placed into a folder within a parent-child folder hierarchy; only the name of the folder, and its position in the hierarchy, provides this relational metadata.

For personal use, or for small to medium-sized businesses, users may want to categorize all of the Google “objects” from each Google App, so why isn’t there a capability to apply labels across all Google App assets? If you work at a law firm, for example, have documents in Google Docs, and use Google for email, it would be ideal to leverage a company-wide taxonomy, and upon any internal search discover all objects logically grouped in a container by labels.

For each Google asset, such as an email in Gmail, users may apply N number of labels.

A [Google] dashboard, or portal view, may be used to display and access Google assets across Google applications, grouped by labels. A Google Apps “Portal Search” may consist of queries that contain asset labels. A relational Google object repository containing assets across all object types (e.g. Google Docs) may be leveraged to store metadata about each Google asset and its relationships.

A [Google] dashboard, or portal view, may be organized around individuals (e.g. personal), teams, or an organization. So, in a law firm, for example, a case-number label could be applied to Google Docs, Google Photos (i.e. photos and videos), and of course, Gmail.
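A toy model of the repository such a search would need (my own sketch, not a Google API): one index mapping labels to typed asset references, queryable across applications.

    from collections import defaultdict

    # Cross-application label index: label -> {(app, asset_id), ...}
    index = defaultdict(set)

    def apply_label(app, asset_id, *labels):
        for tag in labels:
            index[tag].add((app, asset_id))

    apply_label("Gmail", "msg-001", "Case-1138", "Billing")
    apply_label("Docs", "doc-042", "Case-1138")
    apply_label("Photos", "img-007", "Case-1138")

    # One "portal search" returns every asset for the case, across apps:
    print(sorted(index["Case-1138"]))
    # [('Docs', 'doc-042'), ('Gmail', 'msg-001'), ('Photos', 'img-007')]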

A relatively simple feature to implement, with a lot of value for Google’s clients, us? So, why isn’t it implemented?

Better yet, once facial recognition is implemented in Photos (and videos), applying Google labels to media assets may allow for correlation of emails to photos with a rule-based engine.

Google Search has expanded into the mobile Google app.

Leveraging Google “Cards”, developers may create Cards for a single Google asset or a group of them. Grouping of Google assets may be applied using “Labels”. As Google assets go through a business or personal workflow, additional metadata, such as additional “Labels”, may be added to the asset.

Expanding upon this solution,  scripts may be created to “push” assets through a workflow, perhaps using Google Cloud Functions.  Google “Cards” may be leveraged as “the bit” that informs users when they have new items to process in a workflow.

Metadata, or labels, such as “Document Ready for Legal Review” or “Legal Document Review Completed”, may be used to drive the workflow.
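Extending the toy label index above, a workflow step is just a label swap plus a notification “Card” (again, purely illustrative):

    from collections import defaultdict

    index = defaultdict(set)  # same label -> assets index as above
    index["Document Ready for Legal Review"].add(("Docs", "doc-042"))

    NEXT_LABEL = {"Document Ready for Legal Review": "Legal Document Review Completed"}

    def advance(app, asset_id, current):
        """Move an asset to the next workflow label and emit a notification Card."""
        nxt = NEXT_LABEL[current]
        index[current].discard((app, asset_id))
        index[nxt].add((app, asset_id))
        print(f"Card: {app}/{asset_id} -> {nxt}")

    advance("Docs", "doc-042", "Document Ready for Legal Review")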

Building AI Is Hard—So Facebook Is Building AI That Builds AI

“…companies like Google and Facebook pay top dollar for some really smart people. Only a few hundred souls on Earth have the talent and the training needed to really push the state-of-the-art [AI] forward, and paying for these top minds is a lot like paying for an NFL quarterback. That’s a bottleneck in the continued progress of artificial intelligence. And it’s not the only one. Even the top researchers can’t build these services without trial and error on an enormous scale. To build a deep neural network that cracks the next big AI problem, researchers must first try countless options that don’t work, running each one across dozens and potentially hundreds of machines.”


This article paints a true picture of where we are today for the average consumer and producer of information, and for the companies that repurpose information, e.g. in the form of advertisements.
The advancement and current progress of Artificial Intelligence and Machine Learning paint a picture akin to the 1970s: computers that filled rooms and accepted punch cards as input.
Today’s consumers have mobile computing power on par with those whole rooms of the 1970s; however, “more compute power” in a tinier package may not be the path to AI sentience. How AI algorithm models are computed might need to take an alternate approach.
In a classical computation system, a bit must be in one state or the other. Quantum mechanics, however, allows the qubit to be in a superposition of both states at the same time, a property fundamental to quantum computing.
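In the standard notation, a qubit’s state is

    |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad \alpha, \beta \in \mathbb{C}, \quad |\alpha|^2 + |\beta|^2 = 1,

where measurement yields 0 with probability |\alpha|^2 and 1 with probability |\beta|^2, and n qubits span a 2^n-dimensional state space. That exponential state space is what makes quantum hardware attractive for searching the enormous model configuration spaces described above.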
The construction and validation of Artificial Intelligence and Machine Learning algorithm models should be engineered on a Quantum Computing framework.

Time Lock Access: Seal Files in Cloud Storage

Is there value in providing users the ability to apply “Time Lock Access” to files in cloud storage? Files are securely uploaded by their owner. After upload, no one, including the owner, may access / open the file(s). Only after the time lock’s date and time have passed do the files become available for access, and an action may be taken, e.g. automatically emailing a link to the files. More complex actions may be attached to the time-lock release, such as script execution using a simple set of rules defined by the file owner.
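A minimal sketch of the gate itself (illustrative only; a real service would enforce this server-side, inside the storage provider):

    from datetime import datetime, timezone

    # Hypothetical lock table: file name -> moment the seal expires (UTC).
    LOCKS = {"report.pdf": datetime(2026, 1, 1, tzinfo=timezone.utc)}

    def open_file(name, on_release=lambda n: print(f"emailing link to {n}")):
        """Deny access to everyone, including the owner, until the lock expires;
        then run the owner-defined release action."""
        if datetime.now(timezone.utc) < LOCKS[name]:
            raise PermissionError(f"{name} is time-locked until {LOCKS[name]:%Y-%m-%d %H:%M} UTC")
        on_release(name)

    open_file("report.pdf")  # raises PermissionError until the unlock moment passes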

Does this solution already exist? Please send me a link to the cloud integration product / plug-in.

AI Personal Assistants are “Life Partners”

Artificially Intelligent (AI) “Assistants”, or “Bots”, are taken to the ‘next level’ when the assistant becomes a proactive entity, built on input from human experts and growing through machine learning.

Even the distinction between an ‘Assistant’ and a ‘Life Partner’ implies a greater degree of dynamic, proactive interaction. The crossover to ‘Life Partner’ happens when we go ‘above and beyond’ to help our partners succeed, or even survive the day-to-day.

Once we experience our current [digital, mobile] ‘assistants’ positively influencing our lives in a more intelligent, proactive manner, an emotional bond grows, and investment in this technology will also expand.

Practical Applications:

  • Alcoholics Anonymous coach / mentor, enabling the human partner to overcome temporary weakness. Knowledge and “triggers” need to be incorporated into the AI ‘Partner’, e.g. a “location / proximity” reminder if the person enters a shopping area that has a liquor store, with the [AI] “Partner” helping to “talk them down”.
  • Understanding ‘data points’ from multiple sources, such as alarms and calendar events, to derive ‘knowledge’ and create an actionable trigger.
    • e.g. unprompted: “Did you remember to take your medicine?”; “There is a new article in periodical N that pertains to your medicine. Would you like to read it?”
    • e.g. unprompted: “Weather calls for N inches of snow. Did you remember to service your snow blower this season?”
  • FinTech: while in department store XYZ looking to purchase item Y over a certain amount, unprompted: “Your credit score indicates you are ‘most likely’ eligible to sign up for a store credit card and get N percent off your first purchase.” Multiple input sources are used to create a potential sales opportunity.
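A sketch of how such triggers might be wired (the signals and thresholds are invented): each trigger correlates a couple of inputs and yields an unprompted message.

    # Invented context an assistant might assemble from location, weather, and profile data.
    context = {
        "nearby": ["grocery store", "liquor store"],
        "forecast_snow_inches": 8,
        "snow_blower_serviced": False,
    }

    TRIGGERS = [
        (lambda c: "liquor store" in c["nearby"],
         "You're near a trigger location. Want to talk, or call your sponsor?"),
        (lambda c: c["forecast_snow_inches"] >= 6 and not c["snow_blower_serviced"],
         "Weather calls for heavy snow. Did you remember to service your snow blower?"),
    ]

    for condition, message in TRIGGERS:
        if condition(context):
            print(message)  # delivered unprompted, as a notification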

IBM has a cognitive cloud of AI solutions leveraging IBM’s Watson. Most, if not all, of the 18 web applications hosted (with source) are driven by human interactive triggers, as with the “Natural Language Classifier”, which helps build a question-and-answer repository.

Four things need to occur to accelerate adoption of the ‘AI Life Partner’:

  1. Knowledge Experts, or Subject Matter Experts (SMEs), need to be able to “pass on” their knowledge to build repositories. IBM Watson’s Natural Language Classifier may be used.
  2. The integration of this knowledge into an AI medium, such as a ‘Digital Assistant’, needs to occur, with corresponding ‘triggers’.
  3. Our current AI ‘Assistants’ need to become [more] proactive as they integrate into our ‘digital’ lives, going beyond setting an alarm clock, hands-free calling, or checking the sports score. Our [AI] “Life Partner” needs to ‘act’ like a buddy and a fan of ‘our’ sports team: without prompting, proactively serve up knowledge [based on correlated, multiple sources], and/or take [acceptable] actions.
    1. E.g. FinTech: “Our schedule is open tonight, and there are great seats available, Section N, Seat A, for ABC dollars on StubHub. Shall I make the purchase?”
      1. Partner with vendors to drive FinTech business rules.
  4. Take ‘advantage’ of more knowledge sources, such as the applications we use that collect our data. Use multiple knowledge sources in concert, enabling the AI to correlate data and propose ‘complex’ rules of interaction.

Our AI ‘Life Partners’ may grow in knowledge and mature the relationship between man and machine. Incorporating derived rules leveraging machine learning, without the input of a human expert, will come with risk and reward.