
Applying Artificial Intelligence & Machine Learning to Data Warehousing

Protecting the Data Warehouse with Artificial Intelligence

Teleran is a middleware company whose software monitors and governs OLAP activity between the data warehouse and business intelligence tools such as Business Objects and Cognos. Teleran’s suite of tools encompasses a comprehensive analytical and monitoring solution called iSight. In addition, Teleran has a product, iGuard, that leverages artificial intelligence and machine learning to impose real-time query and data access controls. The architecture also allows Teleran’s agent to run on a host separate from the database, adding security and preventing the agent from consuming resources on the database host.

Key Features of iGuard:
  • Policy engine prevents “bad” queries before they reach the database
  • Patented rule engine resides in memory and evaluates queries at the database protocol layer on the TCP/IP network
  • Patented rule engine prevents inappropriate or long-running queries from reaching the data
70 Customizable Policy Templates
SQL Query Policies
  • Create policies using policy templates based on SQL syntax (a sketch of such checks follows this list):
    • Require a JOIN to a security table
    • Column Combination Restriction – e.g., prevents combining customer name and Social Security number
    • Table JOIN Restriction – e.g., prevents joining two specific tables in the same query
    • Equi-literal Compare Requirement – tightly constrains the query; e.g., prevents hunting for sensitive data by requiring an ‘=’ condition
    • DDL/DCL restrictions (CREATE, ALTER, DROP, GRANT)
    • DQL/DML restrictions (SELECT, INSERT, UPDATE, DELETE)
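
To make the policy model concrete, here is a minimal sketch, in Python, of how two of these template types, a column-combination restriction and an equi-literal compare requirement, might be evaluated against an incoming query. This is not Teleran’s actual implementation; the policy tables and the naive string matching are illustrative assumptions only.

```python
import re

# Illustrative policies only -- Teleran's actual rule engine is patented,
# in-memory, and operates at the database protocol layer; this sketch just
# demonstrates the *kinds* of checks the templates describe.

def violates_column_combination(sql: str, forbidden_pairs) -> bool:
    """Flag queries that reference both columns of a restricted pair,
    e.g. customer_name together with social_security_no."""
    lowered = sql.lower()
    return any(a in lowered and b in lowered for a, b in forbidden_pairs)

def violates_equi_literal(sql: str, guarded_columns) -> bool:
    """Require an '=' comparison against a literal for guarded columns,
    preventing broad 'hunting' predicates like LIKE or > ranges."""
    lowered = sql.lower()
    for col in guarded_columns:
        if col in lowered and not re.search(rf"{col}\s*=\s*('[^']*'|\d+)", lowered):
            return True
    return False

FORBIDDEN_PAIRS = [("customer_name", "social_security_no")]
GUARDED_COLUMNS = ["social_security_no"]

query = "SELECT customer_name, social_security_no FROM customers WHERE social_security_no LIKE '1%'"
if violates_column_combination(query, FORBIDDEN_PAIRS) or \
        violates_equi_literal(query, GUARDED_COLUMNS):
    print("Query blocked by policy; returning message to user instead of the database.")
```
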
Data Access Policies

Blocks access to sensitive database objects

  • By user or user group, and by time of day / shift (e.g., an ETL window)
    • Schemas
    • Tables/Views
    • Columns
    • Rows
    • Stored Procs/Functions
    • Packages (Oracle)
Connection Policies

Blocks connections to the database

  • Whitelist or blacklist by:
    • DB User Logins
    • OS User Logins
    • Applications (BI, Query Apps)
    • IP addresses
Rule Templates Contain Customizable Messages

Each of the “Policy Templates” can send the user querying the database a customized message based on the defined policy. The message Teleran returns should be seamless to the application user’s experience.

iGuard Rules Messaging

Machine Learning: Curbing Inappropriate or Long-Running Queries

iGuard can analyze all of the historical SQL passed through to the data warehouse and suggest new, customized policies to cancel queries with certain SQL characteristics. The Teleran administrator sets parameters such as rows or bytes returned, and then runs the induction process. New rules are suggested for queries that exceed these defined parameters. The induction engine is “smart” enough to look at the repository of queries holistically, rather than making determinations based on a single query.
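
A rough sketch of what that induction step might look like, with a toy query log and administrator-set thresholds as the article describes; the pattern grouping and rule format are my assumptions, since the real engine’s statistics are not public:

```python
from collections import defaultdict

# Toy query history: (normalized SQL pattern, rows returned, bytes returned).
# Thresholds are administrator-set parameters; everything else is illustrative.
history = [
    ("SELECT * FROM sales", 9_000_000, 4_500_000_000),
    ("SELECT * FROM sales", 8_700_000, 4_100_000_000),
    ("SELECT id FROM customers WHERE id = ?", 1, 120),
]

MAX_ROWS, MAX_BYTES, MIN_OCCURRENCES = 1_000_000, 1_000_000_000, 2

stats = defaultdict(list)
for pattern, rows, size in history:
    stats[pattern].append((rows, size))

# Look at each pattern holistically: suggest a cancel rule only when the
# pattern *repeatedly* exceeds the thresholds, never from a single query.
for pattern, runs in stats.items():
    offenders = [r for r in runs if r[0] > MAX_ROWS or r[1] > MAX_BYTES]
    if len(offenders) >= MIN_OCCURRENCES:
        print(f"Suggested rule: cancel queries matching {pattern!r}")
```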

Finally, here is a high-level overview of the implementation architecture of iGuard. For sales or pre-sales technical questions, please visit www.teleran.com.

Teleran Logical Architecture

Currently Featured Clients
Teleran Featured Clients

Google Search Enables Users to Upload Images for Searching with Visual Recognition. Yahoo and Bing…Not Yet

The ultimate goal, in my mind, is a search engine with the capability to upload an image; the engine then analyzes the image and finds comparable images within some degree of variation, as dictated in the search properties. The search engine may also derive metadata from the uploaded image, such as attributes specific to the image object(s) types; for example, determining whether a person [object] is “Joyful” or “Angry”.

As of the writing of this article, the Yahoo and Microsoft Bing search engines do not have the capability to upload an image, perform image/pattern recognition, and return results. Behold, Google’s search engine has the ability to use some type of pattern matching and find instances of your image across the World Wide Web. From the Google Search home page, select “Images”, or after a text search, select the “Images” menu item. From there, an additional icon appears: a camera with the hint text “Search by Image”. Select the camera icon, and you are presented with options on how Google can acquire your image, e.g., upload, or an image URL.

Google Search Upload Images

Select the “Upload an Image” tab, choose a file, and upload. I used a fictional character, Max Headroom. The search results were very good (see below). I also attempted an uncommon shape, and it did not meet my expectations. The poor performance in matching this possibly “unique” shape is most likely due to how the Google image classifier model was defined, and to the training data used to test the classifier model. If the shape truly is “unique”, the Google Search Image Engine did its job.

Google Image Search Results – Max Headroom
Max Headroom Google Search Results

Google Image Search Results – Odd Shaped Metal Object
Google Search Results – Odd Shaped Metal Object

The Google Search Image Engine was able to “classify” the image as “metal”, so that’s good. However, I would have liked to see better matches under the “Visually Similar Images” section. Again, this is probably due to the image classification process, and potentially the diversity of image samples.

A Few Questions for Google

How often is the classifier modeling process executed (i.e., training the classifier), and how often is the model tested? How are new images incorporated into the classifier model? Are user-uploaded images included in the model (after training is run again)? Is Google Search Image incorporating ALL Internet images into its classifier model(s)? Is an alternate AI image recognition process used beyond classifier models?

Behind the Scenes

In addition, Google has provided a Cloud Vision API as part of their Google Cloud Platform.

I’m not sure if the Cloud Vision API uses the same technology as Google’s Search Image Engine, but it’s worth noting. After reaching the Cloud Vision API starting page, go to the “Try the API” section and upload your image. I tried a number of samples, including my odd-shaped metal object, and I think it performed fairly well on the “labels” (i.e., image attributes).
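
For readers who want to go beyond the “Try the API” page, here is a minimal sketch using the google-cloud-vision Python client, covering the label, web, and face annotations discussed below. It assumes a configured Google Cloud project with credentials; the image file name is mine, and exact response fields may vary by client version.

```python
# pip install google-cloud-vision; requires Google Cloud credentials
# (e.g. GOOGLE_APPLICATION_CREDENTIALS). The image path is my own sample.
from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("odd_shaped_metal.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# Labels: the classifier's general attributes, e.g. "metal"
for label in client.label_detection(image=image).label_annotations:
    print(f"Label: {label.description} (score {label.score:.2f})")

# Web detection: entities and visually similar images found on the web
web = client.web_detection(image=image).web_detection
for entity in web.web_entities:
    print(f"Web entity: {entity.description} (score {entity.score:.2f})")

# Faces: complex attributes such as joy and sorrow likelihoods
for face in client.face_detection(image=image).face_annotations:
    print(f"Joy: {face.joy_likelihood.name}, Sorrow: {face.sorrow_likelihood.name}")
```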

Odd Shaped Metal Sample Image

Using the Google Cloud Vision API to determine whether there were any web matches for my odd-shaped metal object, the search came up with no results. In contrast, Google’s Search Image Engine produced some “similar” web results.

Odd Shaped Metal Sample Image Web Results

Finally, I tested the Google Cloud Vision API with a self-portrait image. THIS was so cool.

Google Vision API – Face Attributes

The API brought back several image attributes specific to “Faces”. It attempts to identify certain complex facial attributes, such as emotions, e.g., joy and sorrow.

Google Vision API – Labels

The API brought back the “standard” set of labels, which show how the classifier identified this image as a “Person”, based on features such as “Forehead” and “Chin”.

Google Vision API – Web

Finally, the Google Cloud Vision API brought back web references: it identified me as a Project Manager, and even surfaced an obscure reference to Zurg in my Twitter bio.

The Google Cloud Vision API, and Google’s own baked-in Search Image Engine, are extremely enticing, yet both have a ways to go in terms of accuracy. Of course, I tried using my face in the Google Search Image Engine, and the “Visually Similar Images” didn’t retrieve any images of me, or even of a distant cousin (maybe?).

Google Image Search Engine: Ian Face Image

Smartphone AI Digital Assistant Encroaching on the Virtual Receptionist

Businesses already exist that have developed and sell virtual receptionist software, which handles many caller needs (e.g., call routing).

However, AI digital assistants such as Alexa, Cortana, Google Now, and Siri have an opportunity to stretch their capabilities even further. Leveraging technologies such as natural language processing (NLP) and speech recognition (SR), as well as APIs into the smartphone OS’s answer/calling capabilities, functionality can be expanded to include the following (a sketch of the call-screening flow appears after the list):

  • Call Screening – The digital assistant asks for the name of the caller, the purpose of the call, and whether the matter is “Urgent”.
    • A generic “purpose” response, or a list of caller purpose items, can be supplied to the caller, e.g., 1) Schedule an Appointment.
    • The smartphone’s user would receive the caller’s name and purpose as a message in the UI while the call is held in a ‘hold’ state.
    • The smartphone user may decide to accept the call, or reject the call and send the caller to voice mail.
  • Call / Digital Assistant Capabilities
    • The digital assistant may schedule a ‘tentative’ appointment within the user’s calendar. If the caller asks to schedule a meeting, the digital assistant would access the user’s calendar to determine availability. If the calendar indicates availability, a ‘tentative’ meeting is entered. The smartphone user would have a list of tasks from the assistant, one of which is to ‘affirm’ availability for the meetings scheduled.
    • Allow recall of ‘generally available’ information.  If a caller would like to know the address of the smartphone user’s office, the Digital Assistant may access a database of generally available information, and provide it.  The Smartphone user may use applications like Google Keep, and any note tagged with a label “Open Access” may be accessible to any caller.
    • Join the smartphone user’s social network, such as LinkedIn. If the caller knows the phone number of the person, but is unable to find the user through the social network directory, an invite may be requested by the caller.
    • Custom business workflows may also be triggered through the smartphone, such as “Pay by Phone”.
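
Here is a toy sketch of the call-screening flow above. The assistant, telephony hooks, and messaging callbacks are all hypothetical; real smartphone OSes would need to expose answer/hold APIs for any of this to work.

```python
# Hypothetical call-screening state machine; all callbacks are stand-ins
# for telephony and notification APIs that do not (yet) exist for this use.
PURPOSES = {"1": "Schedule an Appointment", "2": "General Inquiry"}

def screen_call(ask, notify_user, user_decision):
    caller = ask("Who may I say is calling?")
    choice = ask("Press 1 to schedule an appointment, 2 for anything else.")
    purpose = PURPOSES.get(choice, "Unknown")
    urgent = ask("Is the matter urgent? (yes/no)").strip().lower() == "yes"

    # Caller is parked in a 'hold' state while the user sees a summary.
    notify_user(f"{caller} is calling about '{purpose}'. Urgent: {urgent}")
    return "accept" if user_decision() else "voicemail"

# Simulated interaction in place of real telephony callbacks:
answers = iter(["Jane Doe", "1", "no"])
result = screen_call(lambda prompt: next(answers),
                     lambda msg: print("To user:", msg),
                     lambda: False)
print("Call routed to:", result)
```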

Small Business Innovation Research (SBIR) Grants Still Open Through 2017

Entrepreneurs / Science Guys (and Gals),

Are you ready for a challenge, and for 150,000 USD to begin pursuing it?

That’s just SBIR Phase I, Concept Development (~6 months). The second phase, Prototype Development, may be funded up to 1 MM USD and last 24 months.

The Small Business Innovation Research (SBIR) program is a highly competitive program that encourages domestic small businesses to engage in Federal Research/Research and Development (R/R&D) that has the potential for commercialization. Through a competitive awards-based program, SBIR enables small businesses to explore their technological potential and provides the incentive to profit from its commercialization. By including qualified small businesses in the nation’s R&D arena, high-tech innovation is stimulated and the United States gains entrepreneurial spirit as it meets its specific research and development needs.

The program’s goals are four-fold:
  1. Stimulate technological innovation.
  2. Meet Federal research and development needs.
  3. Foster and encourage participation in innovation and entrepreneurship by socially and economically disadvantaged persons.
  4. Increase private-sector commercialization of innovations derived from Federal research and development funding.

For more information on the program, please click here to download the latest SBIR Overview, which should have everything you need to know about the initiative.

Time is quickly running out to 1) pick one of the solicitation topics provided by the US government, and 2) submit your proposal.

I queried the SBIR database of topics up for contracts and grants: Phase I; Program = SBIR; Year = 2017.

The query produced 18 contract/grant opportunities. Here are a few I thought would be interesting:

PAS-17-022
PAR-17-108
RFA-ES-17-004
RFA-DA-17-010

Click Here for the current, complete list of topics by the SBIR.

Autonomous Software Layer for Vehicles through 3rd Party Integrators / Vendors

It seems that car manufacturers, among others, are building autonomous hardware (i.e., the vehicle and its sensors) as well as the software to govern its usage. Few companies are separating the hardware and software layers to explicitly carve out the autonomous software.

Yes, there are benefits to tightly coupling the autonomous hardware and software:

1. Proprietary implementations and intellectual property – Implementing autonomous vehicles within a single corporate entity may ‘fast track’ patents, and mitigate NDA challenges / risks

2. Synergies with two (or more) teams working in unison to implement functional goals. However, this may also be accomplished through two organizations with tightly coupled teams. Engaged, strong team leadership must be in place to eliminate corp-to-corp blockers and ensure deliverables.

There are also advantages to having two separate organizations: one owning the software layer, and the other the vehicle hardware implementation, i.e., the sensors:

1. Separating autonomous vehicle hardware from the AI software enables multiple, strong, alternate corporate perspectives. These perspectives allow for a stronger, yet balanced, approach to implementation.

2. The AI software for autonomous vehicles, if contractually allowed, may work with multiple vehicle brands, implementing similar capabilities. Vehicles then have capabilities and innovations shared across the car industry. The AI software may even become a standard for implementing autonomous vehicles across the industry.

3. Working with multiple hardware/vehicle manufacturers may enable software APIs that form a layer of implementation abstraction (see the sketch after this list). These APIs may encourage similar approaches to implementation, reduce redundancy, and serve as ‘the gold standard’ in the industry.

4. We already see commercial adoption of autonomous vehicle features such as “Auto Lane Change” and “Automatic Emergency Braking”, so it makes sense to adopt standards through 3rd-party AI software integrators/vendors.

5. Incorporating Checks and Balances to instill quality into the product and the process that governs it.
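
To illustrate point 3, here is a minimal sketch of what such an abstraction API might look like. The interface and class names are my own invention; no such industry standard exists today.

```python
# Hypothetical abstraction layer between third-party autonomy software and
# vendor hardware. Invented for illustration; not an existing standard.
from abc import ABC, abstractmethod

class VehicleHardware(ABC):
    """Contract each vehicle manufacturer would implement once."""

    @abstractmethod
    def read_sensors(self) -> dict:
        """Return a normalized snapshot, e.g. {'forward_gap_m': ..., 'speed_mps': ...}."""

    @abstractmethod
    def apply_brakes(self, intensity: float) -> None:
        """intensity in [0.0, 1.0]."""

class EmergencyBraking:
    """Vendor-neutral autonomy feature written against the API only."""

    def __init__(self, hw: VehicleHardware, min_gap_m: float = 5.0):
        self.hw, self.min_gap_m = hw, min_gap_m

    def tick(self) -> None:
        snapshot = self.hw.read_sensors()
        if snapshot.get("forward_gap_m", float("inf")) < self.min_gap_m:
            self.hw.apply_brakes(1.0)  # same logic, any manufacturer
```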

In summation, car parts are typically not built in one geographic location, but through global collaboration. Autonomous software for vehicles should likewise be externalized, so that safety and security requirements can be vetted without bias. A standards organization “with teeth” could orchestrate input from the industry and collectively devise “best practices” for autonomous vehicles.

Kosher ‘Like’ Certifications and Oversight of Autonomous Vehicle Implementations

Do AI rules engines “deliberate” any differently between rules with moral weight and rules with none at all? Rhetorical…?

The ethics that will explicitly and implicitly be built into implementations of autonomous vehicles involve a full stack of technology, plus “business” input. In addition, implementations may vary between manufacturers and countries.

In the world of Kosher certification, there are several authorities that provide oversight of the process of food preparation and delivery. These authorities have their own seals of approval. In lieu of Kosher authorities, who will play the analogous morality, seal-of-approval role? Vehicle insurance companies? Car insurance will be rewritten when it comes to autonomous cars. Some cars may have a higher deductible, or the cost of the policy may rise, based upon the autonomous implementation.

Conditions Under Consideration:

1. If the autonomous vehicle is in a position of saving a single life in the vehicle, and killing one or more people outside the vehicle, what will the autonomous vehicle do?

1.1 What happens if the passenger in the autonomous vehicle is a child/minor? Does the rule execution change?

1.2 What if the outside party is a procession, a condensed population of people? Will the decision change?

The more sensors, the more input to the decision process.

Microsoft to Release AI Digital Agent SDK Integration with Visio and Deploy to Bing Search

Build and deploy a business AI Digital Assistant with the ease of building Visio diagrams, or ‘Business Process Workflows’. In addition, advanced Visio workflows offer external integration, enabling the workflow to retrieve information from external data sources, e.g., SAP CRM or Salesforce.

For a business subscribing to the Digital Agent, Microsoft Bing search results will contain the business’s AI Digital Assistant created using Visio. The ‘Chat’ link will invoke the business’s custom Digital Agent. The agent can answer business questions or lead the user through “complex” workflows. For example, the user may ask whether a particular store has an item in stock, and then place the order from the search results, with a ‘small’ transaction fee charged to the business. The Digital Assistant may be hosted with MSFT/Bing or on an external server. Applying the Digital Assistant to search results pushes the transaction to the surface of the stack.

Bing Digital Chat Agent
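
Since Microsoft has said nothing about such a product (see the note at the end of this post), the following is purely a sketch of how a Visio-drawn business process workflow might be represented and walked by a chat agent. The node names, structure, and stock-check hook are all invented here.

```python
# Purely speculative sketch: a Visio-style business process workflow
# represented as a graph of prompts and transitions, walked turn by turn.
WORKFLOW = {
    "start":      {"prompt": "Hi! Ask me about stock or store hours.",
                   "next": {"stock": "ask_item", "hours": "hours"}},
    "ask_item":   {"prompt": "Which item would you like me to check?",
                   "next": {"*": "check_stock"}},
    "hours":      {"prompt": "We are open 9am-6pm, Monday to Saturday.",
                   "next": {}},
    "check_stock": {"prompt": "Let me look that up...",
                    "next": {}},
}

def run_turn(node_id: str, user_text: str) -> str:
    """Advance one step through the workflow based on the user's reply."""
    node = WORKFLOW[node_id]
    for keyword, target in node["next"].items():
        if keyword == "*" or keyword in user_text.lower():
            return target
    return node_id  # no match: stay on this node and re-prompt

state = "start"
print(WORKFLOW[state]["prompt"])
state = run_turn(state, "Do you have the item in stock?")
print(WORKFLOW[state]["prompt"])
```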

Leveraging their existing technologies, Microsoft will leap into the custom AI digital assistant business, using Visio to design business process workflows, and Bing for promotional placement and visibility. Microsoft can charge the business for the Digital Agent implementation and/or usage licensing.

  • The SDK for Visio that empowers the business user to build business process workflows with ease may have a low to no cost monthly licensing as a part of MSFT’s cloud pricing model.
  • Microsoft may charge the business a “per chat interaction” fee, either per chat or in bundles with volume discounts.
  • In addition, any revenue generated from the AI Digital Assistant, may be subject to transactional fees by Microsoft.

Why not use Microsoft’s Cortana, or Google’s AI Assistant? Using a ‘white label’ version of an AI assistant enables the user to interact with an agent of the search-listed business, and that agent has business-specific knowledge. The ‘white label’ AI digital agent is also empowered to perform any automation processes integrated into the user-defined business workflows. Examples include:

  • basic knowledge such as store hours of operation
  • more complex assistance, such as walking a prospective client through a process like “How to Sweat Copper Pipes”. Many “how to” articles and videos already exist on the Internet through blogs or YouTube. The AI digital assistant, as “curator of knowledge”, may ‘recommend’ existing content, or provide the business’s own content.
  • Proprietary information can be disclosed in a narrative using the AI digital agent, e.g.  My order number is 123456B.  What is the status of my order?
  • Actions, such as employee referrals, e.g., “I spoke with Kate Smith in the store, and she was a huge help finding what I needed. I would like to recommend her.” Or, e.g., “I would like to re-order my ‘favorite’ shampoo with my details on file.” Frequent patrons may reorder a ‘named’ shopping cart.

Escalation to a human agent is also a feature.  When the business process workflow dictates, the user may escalate to a human in ‘real-time’, e.g. to a person’s smartphone.

Note: As of yet, Microsoft representatives have made no comment relating to this article.

Intent Recognition: AI Digital Agents’ Best Ways to Interpret User Goals

Goal/intent recognition, not natural language processing (NLP) or voice recognition, may be the most difficult aspect of the AI Digital Agent’s workload.

Challenges of the Digital Agent
  • Many goals with very similar human utterances / syntax exist.
  • Just as when humans interpret human utterances, many possibilities exist, and misinterpretation occurs.
  • Meeting someone for the first time, without historical context, places an additional burden on the interpreter of the intent.
  • There are innumerable ways to ask the same question, or to request information, all achieving a similar or the same goal.
Opportunities for Goal / Intent Accuracy
  • Business Process Workflows  may enable a very broad ‘category’ of subject matter to be disambiguated as the user traverses the workflow.  The intended goal may be derived from asking ‘narrowing’ questions, until the ‘goal’ is reached, or the user ‘falls out’ of the workflow.
  • Methodologies such as leveraging regex to interpret utterances are difficult to create and maintain (see the sketch after this list).
  • Utterances, their structure, and their correlation to Business Process Workflows are still a necessity. However, as the knowledge base grows, so does the complexity of curating the content. A librarian, or Content Curator, may be required to integrate new information, deprecate stale content, and update workflows.
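
A small sketch of the regex approach mentioned above, and why it is hard to maintain: every new phrasing needs another pattern, and patterns start to collide as the table grows. The intents and patterns here are illustrative.

```python
import re

# Illustrative intent table. Notice how many patterns one intent already
# needs, and how 'check order' phrasings could start colliding with
# 'cancel order' as the table grows -- the maintenance burden noted above.
INTENT_PATTERNS = {
    "check_order_status": [
        r"\bwhere is my order\b",
        r"\bstatus of (my )?order\b",
        r"\border .* (arrive|shipped)\b",
    ],
    "cancel_order": [
        r"\bcancel (my )?order\b",
    ],
}

def recognize_intent(utterance: str):
    text = utterance.lower()
    for intent, patterns in INTENT_PATTERNS.items():
        if any(re.search(p, text) for p in patterns):
            return intent
    return None  # fall back to narrowing questions in the workflow

print(recognize_intent("What's the status of order 123456B?"))  # check_order_status
print(recognize_intent("I never got my package"))               # None -> narrow
```
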
Ongoing Partnership between the Digital Agent and a Human
  • Business Process Workflows may be initially designed and implemented by Subject Matter Experts (SMEs). However, the SMEs might not have predicted all valid variations of the workflow that achieve a different outcome for the triggered goal.
  • As the user traverses a workflow, they may encounter a limiting boundary, such as a Boolean question that should have more than two options. Some digital assistants may enable a user to take an alternate path by leveraging ‘human assisted’ goal achievement, such as escalation of a chat. The ‘human assisted’ path may surface a third option, and this new option may be added to the Business Process Workflow for future use.

AI Email Workflows Eliminate the Need for Manual Email Responses

When I read the article “How to use Gmail templates to answer emails faster,” I thought: wow, what a 1990s throwback!

Microsoft Outlook has had an AI email rules engine for years and years, ranging from a simple wizard to an advanced rule-construction user interface. Oh, the things you can do. Based on a wide array of ‘out of the box’ identifiers and highly customizable conditions, MS Outlook may take action on the client side of the email transaction or on the server side. What types of actions? All kinds, ranging from ‘out of the box’ behaviors to a high degree of customization. And yes, Outlook (in conjunction with MS Exchange) may even be considered a digital asset management (DAM) tool.

Email comes into an inbox; based on “from”, “subject”, the contents of the email, and a long list of other attributes, MS Outlook (optionally with MS Exchange) may push the email and any attached content to a server folder, perhaps to Amazon AWS S3, or to something as simple as an MS Exchange folder.

Then, optionally, a ‘backend’ workflow may be triggered, for example, with the use of Microsoft Flow. Where you go from there has almost infinite potential; a toy sketch of the pattern follows.
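
This is a toy sketch of the condition/action rule pattern described above, not Outlook’s actual rules engine or the Microsoft Flow API; the bucket, folder, and workflow names are invented for illustration.

```python
# Toy condition/action email rules, in the spirit of Outlook rules plus a
# backend workflow trigger. Names and destinations are invented examples.
def invoice_to_archive(msg: dict) -> bool:
    return "invoice" in msg["subject"].lower() and msg["from"].endswith("@vendor.com")

def archive_and_trigger(msg: dict) -> None:
    # A real setup might call boto3's s3.put_object(...) here and then
    # kick off a backend workflow (e.g. Microsoft Flow / Power Automate).
    print(f"Archiving '{msg['subject']}' to s3://mail-archive/invoices/")
    print("Triggering backend workflow: record-invoice")

RULES = [(invoice_to_archive, archive_and_trigger)]

incoming = {"from": "billing@vendor.com", "subject": "Invoice #4412", "body": "..."}
for condition, action in RULES:
    if condition(incoming):
        action(incoming)
```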

Analogously, Google Gmail’s new Inbox UI uses categorization based on ‘some set’ of rules. This is not new to the industry, but now Google has the capability as well. For example, “Group By” through Google’s new Inbox could be a huge timesaver. Enabling the user to perform actions across predefined email categories, such as deleting all “promotional” emails, could be extremely successful. However, I’ve not yet seen the AI rules that identify particular emails as “promotional” versus “financial”. Google is implying that these ‘out of the box’ email categories, and the ways users interact with and take action on them, are extremely similar per category.

Google may continue to follow in the footsteps of Microsoft, possibly adding the initiation of workflows based on predetermined criteria. Maybe Google will expose its AI (Email) Rules Engine for users to customize their workflows, just as Microsoft did so many years ago.

Although Microsoft’s Outlook (and Exchange) may have been seen as a digital asset management (DAM) tool in the past, the user’s email inbox folder size could have been identified as one of the few inhibitors. The workaround, of course: use service accounts with vastly higher folder quotas/sizes.

My opinions do not reflect those of my employer.

AI Digital Assistants versus Search Engines

Aren’t AI digital assistants just like search engines? They both try to recognize your question or human utterance as best as possible to serve up your requested content, e.g., a classic FAQ. The difference in the FAQ use case is that the proprietary information from the company hosting the digital assistant may not be available on the Internet.

Another difference between the Digital Assistant and a Search Engine is the ability of the Digital Assistant to ‘guide’ a person through a series of questions, enabling elaboration, to provide the user a more precise answer.

The Digital Assistant may use an interactive dialog to guide the user through a process, and not just supply the ‘most correct’ responses. Many people have flocked to YouTube as an instructional, interactive medium. When multiple workflow paths can be followed, the Digital Assistant has the upper hand.

The Digital Assistant is capable of interfacing with 3rd parties (e.g., data stores with API access). For example, a digital assistant hosted by a medical insurance company may be able not only to check the status of a claim, but also to send correspondence to a medical practitioner on your behalf. It is a huge pain to call the insurance company, then the doctor’s office, then the insurance company again. Even the HIPAA release could be authenticated in real time, inline during the chat. A digital assistant may be able to create a chat session with multiple participants.
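
A hypothetical sketch of such a 3rd-party integration, a digital assistant skill calling an insurer’s claims API: the endpoint, fields, and token are all invented, and a real integration would also need HIPAA-compliant authentication and consent.

```python
# Hypothetical assistant skill calling an invented insurer claims API.
import json
import urllib.request

CLAIMS_API = "https://api.example-insurer.com/v1/claims/{claim_id}"  # invented

def check_claim_status(claim_id: str, token: str) -> str:
    req = urllib.request.Request(
        CLAIMS_API.format(claim_id=claim_id),
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        claim = json.load(resp)
    return f"Claim {claim_id} is currently: {claim['status']}"

# In the chat, the assistant fills the slots from the conversation:
# user: "What's the status of claim 98765?"
# assistant -> check_claim_status("98765", session_token)
```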

One capability Digital Assistants hold over search engines is the ability to ‘escalate’ at any time during the interaction. Users are then queued for the next available human agent.

There have been attempts in the past. Ask.com (originally known as Ask Jeeves) is a question-answering-focused e-business. Google Questions and Answers (Google Otvety, Google Ответы) was a free knowledge market offered by Google that allowed users to collaboratively find good answers to their questions through the web (also referred to as Google Knowledge Search).

My opinions are my own, and do not reflect my employer’s viewpoint.