Tag Archives: Machine Learning

Follow the Breadcrumbs: Identify and Transform

Trends – High Occurrence, Word Associations

Over the last two decades, I’ve been involved in several solutions that incorporated artificial intelligence and, in some cases, machine learning. I’ve understood these solutions at the architectural level and, in some cases, taken a deeper dive.

I’ve had the urge to perform a data trending exercise where we not only identify existing trends, similar to “out of the box” Twitter capabilities, but also augment “the message” as trends unfold. This is probably AI 101, but I wanted to submerge myself in a Data Science project to understand it. My solution statement: given a list of my interests, derive sentence fragments from Twitter, traverse each tweet, and parse each word off as a possible “breadcrumb”. Then remove the stop words, and voilà: words that can identify trends, and can be used to create or modify trends.
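
To make the breadcrumb idea concrete, here is a minimal sketch in Python. The stop-word list is a tiny illustrative subset; a real pass would pull in a full stop-word corpus (e.g. NLTK’s).

```python
import re

# Tiny illustrative subset; a real pass would use a full stop-word corpus.
STOP_WORDS = {"a", "an", "and", "are", "for", "in", "is", "of", "on", "or", "the", "to", "with"}

def breadcrumbs(tweet: str) -> list[str]:
    """Traverse the tweet, parsing each word off as a possible breadcrumb."""
    words = re.findall(r"[#@]?\w+", tweet.lower())  # keep hashtags/mentions intact
    return [w for w in words if w not in STOP_WORDS]

print(breadcrumbs("Machine Learning is the key to identifying trends on Twitter"))
# -> ['machine', 'learning', 'key', 'identifying', 'trends', 'twitter']
```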

Finally, to give the breadcrumbs, and those “words of interest”, greater depth, we can enrich the data using the Oxford Dictionaries API with entries from their thesaurus, such as synonyms.
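
As a sketch (assuming Oxford’s v2 thesaurus endpoint and your own app_id / app_key credentials; the response parsing below follows the documented lexicalEntries structure, so verify against the current API docs):

```python
import requests

APP_ID, APP_KEY = "your_app_id", "your_app_key"  # placeholders for your own credentials

def synonyms(word: str, lang: str = "en") -> set[str]:
    """Fetch synonyms for a breadcrumb from the Oxford Dictionaries v2 thesaurus endpoint."""
    url = f"https://od-api.oxforddictionaries.com/api/v2/thesaurus/{lang}/{word.lower()}"
    resp = requests.get(url, headers={"app_id": APP_ID, "app_key": APP_KEY},
                        params={"fields": "synonyms"})
    resp.raise_for_status()
    found: set[str] = set()
    for lexical_entry in resp.json()["results"][0].get("lexicalEntries", []):
        for entry in lexical_entry.get("entries", []):
            for sense in entry.get("senses", []):
                found.update(s["text"] for s in sense.get("synonyms", []))
    return found

# e.g. synonyms("learning") might include "study", "schooling", "education"
```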

Gotta Have a Hobby

It’s been a while now that I’ve been hooked on Microsoft Power Automate, formerly known as Microsoft Flow. It’s relatively inexpensive and has the capability to be a tremendous resource for almost ANY project. There is a FREE version, and the paid version is $15 per month; it’s a no-brainer to pick the $15 tier with the bonus data connectors.

I’ve had the opportunity to explore the platform and create workflows. Some fun examples: initially, using MS Flow, I parsed RSS feeds, and if a criterion was met, I’d get an email. I did the same with a Twitter feed. I then kicked it up a notch and inserted these records of interest into a database. The library of Templates and Connectors is staggering, and I suggest you take a look if you’re in a position where you need to collect and transform data, followed by a load and a notification process.

What Problem are we Trying to Solve?

How are trends formed, and what factors influence them? Who are the most influential people providing input to a trend? Does influence vary by location? Does language play a role in how trends develop? End goal: driving trends, not just observing them.

Witches Brew – Experiment Ingredients:

Obtaining and Scrubbing Data

Articles I’ve read regarding Data Science projects revolve around five steps:

  1. Obtain Data
  2. Scrub Data
  3. Explore Data
  4. Model Data
  5. Interpret Data

The rest of this post will mostly revolve around steps 1 and 2. Here is a great article that goes through each of the steps in more detail: 5 Steps of a Data Science Project Lifecycle

Capturing and Preparing the Data

The data set is arguably the most important aspect of machine learning. A data set that doesn’t conform to a representative distribution (the bell curve), or that consists of all outliers, will produce an inaccurate reflection of the present and a poor prediction of the future.

First, I created a table of search criteria based on topics that interest me.

Search Criteria List

Then I created a Microsoft Flow for each of the search criteria to capture tweets with the search text, and insert the results into a database table.

MS Flow – Twitter: Ingestion of Learning Tweets

Of the 7,450 total tweets collected across all the search criteria, 548 tweets were from the search criterion “Learning” (22).

Data Ingestion – Twitter

After you’ve obtained the data, you will need to parse the tweet text into “breadcrumbs” that lead a path back to the search criteria.

Machine Learning and Structured Query Language (SQL)

This entire predictive trend analysis would be much easier with a more restrictive syntax language like SQL instead of English tweets. SQL statements are easier to parse and correlate. For example, a SQL statement has a predictable structure: SELECT Col1, Col2 FROM TableA WHERE Col2 = ‘ABC’. Based on the data set size, we may be able to extrapolate and correlate rows returned to provide valuable insights, e.g. the projected performance impact of a query on the data warehouse.
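
For instance, a quick sketch with the open-source sqlparse library pulls the correlatable parts (columns, table, predicate) straight out of that example statement:

```python
import sqlparse
from sqlparse.sql import Identifier, IdentifierList, Where
from sqlparse.tokens import Keyword

stmt = sqlparse.parse("SELECT Col1, Col2 FROM TableA WHERE Col2 = 'ABC'")[0]

columns, tables, predicate = [], [], ""
after_from = False
for token in stmt.tokens:
    if token.ttype is Keyword and token.value.upper() == "FROM":
        after_from = True                       # identifiers after FROM are tables
    elif isinstance(token, IdentifierList):
        columns = [t.get_real_name() for t in token.get_identifiers()]
    elif isinstance(token, Identifier):
        (tables if after_from else columns).append(token.get_real_name())
    elif isinstance(token, Where):
        predicate = token.value                 # "WHERE Col2 = 'ABC'"

print(columns, tables, predicate)
# -> ['Col1', 'Col2'] ['TableA'] WHERE Col2 = 'ABC'
```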

R language and R Studio

Preparing Data Sets Using Tools Designed to Perform Data Science.

The R language and RStudio seem to be very powerful when dealing with large data sets, and the syntax makes it easy to “clean” the data set. However, I still prefer SQL Server and a decent query tool; maybe my opinion will change over time. The most helpful things I’ve seen in RStudio are creating new data frames and the ability to roll back to a point in time, i.e. a previous version of the data set.

Changing a column’s data type on the fly in RStudio is also immensely valuable. For example, suppose the data in a column are integers, but the table/column definition is a string (varchar). In a SQL database, the user would have to drop the table, recreate it with the new data type, and then reload the data. Not so with R.
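
The sketches in this post are in Python, so here is the pandas analogue of that R convenience (the same idea of converting a column in place, not the R syntax itself):

```python
import pandas as pd

df = pd.DataFrame({"tweet_id": ["101", "102", "103"]})  # ingested as strings
print(df.dtypes)   # tweet_id    object  (pandas' string type)

# One line instead of drop table / recreate / reload:
df["tweet_id"] = df["tweet_id"].astype(int)
print(df.dtypes)   # tweet_id    int64
# (the R equivalent is df$tweet_id <- as.integer(df$tweet_id))
```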

Email Composer: Persona Point of View (POV) Reviews

First there was spell check; next came the thesaurus, synonyms, and contextual grammar suggestions; and now, Persona Point of View Reviews. Between the immensely accurate and omnipresent #Grammarly and #Google’s #Gmail predictive text, I started thinking about the next step in the AI and human partnership for crafting communications.

Google Gmail Predictive Text

Google Gmail’s predictive text had me thinking about AI possibilities within email, and it occurred to me: I understand what I’m trying to communicate to my email recipients, but do I really know how my message is being interpreted?

Google Gmail has an eerily accurate autosuggest capability: as you type out a sentence, Gmail suggests the word or words you plan on typing next, with suggested sentence fragments appearing to the right of the cursor. It’s like reading your mind: the most common word or words predicted to come next in the composer’s email.
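
Gmail’s Smart Compose is built on neural language models, but the core notion of surfacing the most common continuation can be sketched with a toy bigram counter:

```python
from collections import Counter, defaultdict

# Toy training corpus; a real model would train on a huge volume of email text.
corpus = ("thanks for the update please let me know if you have any questions "
          "please let me know if this works for you thanks for your help").split()

next_word = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word[current][following] += 1

def suggest(word: str) -> str | None:
    """Return the most common word observed after `word`, if any."""
    counts = next_word.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("please"))  # -> 'let'
print(suggest("thanks"))  # -> 'for'
```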

Personas

In the software development world, a persona is a categorization or grouping of people who play a similar role or behave in a consistent fashion. For example, consider the lifecycle of parking meters, where the primary goal is the collection of parking fees. In this case, personas may include the “meter attendant” and “the consumer”. These two personas have different goals, and how they behave can be categorized. There are many such roles within and outside a business context.

Many software development tools that enable people to collect and track user stories or requirements also allow you to define and correlate personas with user stories.

As in the case of email composition, once the email has been written, the composer may choose to select a category of people whose perspective they would like to view it from. Can the email application define categories of recipients, and then preview these emails from their respective viewpoints?

What will the selected persona derive from the words arranged in a particular order? What meaning will they attribute to the email?

Use Personas in the formulation of user stories/requirements; understand how Personas will react to “the system”, and changes to the system.

Finally, the use of the [email composer] solution is based on “actors” or “personas”. What personas come “out of the box”? What personas will need to be derived through the email composer’s setup of these categories of people? Wizard-based persona definitions?

There are already software development tools, like Azure DevOps (ADO), that empower teams to manage product backlogs and correlate “User Stories”, or “Product Backlog Items”, with personas. These are static, completely user-defined personas, with no intelligence to correlate user stories with personas; users of ADO must create these links.

Now, technology can assist us in considering the intended audience: a systematic perspective that uses artificial intelligence to inspect your email from a selected “point of view” (a persona) of the intended recipient. Maybe your email would otherwise be misconstrued as abrasive and not draw the intended response.

Deep Learning vs Machine Learning – Overview & Differences – Morioh

Machine learning and deep learning are two subsets of artificial intelligence which have garnered a lot of attention over the past two years. If you’re here looking to understand both terms in the simplest way possible, there’s no better place to be.
— Read on morioh.com/p/78e1357f65b0

Riddle of the Sphinx: Improving Machine Learning

Data Correlations Require Perspective

As I was going to St. Ives,

I met a man with seven wives,

Each wife had seven sacks,

Each sack had seven cats,

Each cat had seven kits:

Kits, cats, sacks, and wives,

How many were there going to St. Ives?

One.

This short example may confound man and machine. How does a rules engine work, and how does it make correlations to derive an answer to this and other riddles? If an AI rules engine is wrong in trying to solve this riddle, how does it use machine learning to adjust and tune its “model” to draw an alternate conclusion?

Training rules engines using machine learning and complex riddles may require AI to define relationships not previously considered, analogous to how a boy or a man considers solving riddles. The man has more experiences than the boy, widening his model and increasing the possible answer sets. But how to conclude the best answer? Interpretations of a question’s sentence fragments may differ over a lifetime, hence the man may have more context for the number of ways the question may be interpreted.

Adding Context: Historical and Pop Culture

Some riddles are thousands of years old. They may have spawned from another culture in another time, and survived and evolved to take on a whole new meaning. Understanding the context of the riddle may be the clue to solving it.

Layers of historical culture provide context to a riddle, and the significance of a word or phrase may differ wildly from one period of history to another. When you think of “periods of history”, you might think of the pinnacle of the Roman Empire, or you may compare the 1960s, the ’70s, the ’80s, and so on.

Asking a question of an AI rules engine, such as a chatbot, may require contextual elements, such as geographic location and “period in history”: additional dimensions to a data model.

Many chatbots have no need for additional context or a referential subtext; they are simply “Expert Systems in a box”. Now digital assistants may face the need for additional dimensions of context, as general-knowledge digital agents spanning expertise without bounds.

Sophocles: The Sphinx’s Riddle

Written in the fifth century B.C., Oedipus the King is one of the most famous pieces of literature of all time, so it makes sense that it gave us one of the most famous riddles of all time.

What goes on four legs in the morning, on two legs at noon, and on three legs in the evening?

A human.

Humans crawl on hands and knees (“four legs”) as babies, walk on two legs in mid-life (representing “noon”), and use a walking stick or cane (“three legs”) in old age.

A modern interpretation of the riddle may not allow for the correlation needed to solve it. “Three legs”, i.e. a cane, may be elusive when we think of the elderly on the four wheels of a wheelchair.

In all sincerity, this article is not about an AI rules engine “firing rules” using a time dimension, such as:

  • Not letting a person gain entry to a building after a certain period of time, or…
  • Providing a time dimension to “Parental Controls” on a firewall / router, e.g. the Internet is “cut off” after 11 PM.

Adding a date/time dimension to the question may produce an alternate question. The context of the time changes the “nature” of the question, and therefore the answer as well.

IBM didn’t inform people when it used their Flickr photos for facial recognition training – The Verge

The problem is more widespread than highlighted in the article. It’s not just these high-profile companies using “public domain” images to annotate with facial recognition notes and train machine learning (ML) models. Anyone can scan the Internet for images of people and build a vast library of faces. These faces can then be used to train ML models. In fact, using public domain images from “the Internet” cuts across multiple data sources, not just Flickr, which increases the sample size and may improve the model.

The rules around the use of “Public Domain” image licensing may need to be updated. One possibly simple solution: add a watermark to any image that does not have permission to be used for facial recognition model training. All image processors could then be required to include a preprocessor that detects the watermark and, if found, skips the image so it is not included in the training of models.
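
A sketch of that preprocessor idea, assuming a has_watermark() detector exists (reliably detecting a standardized opt-out watermark is its own computer-vision problem):

```python
from pathlib import Path

def has_watermark(image_path: Path) -> bool:
    """Hypothetical detector for a standardized 'do not train' watermark."""
    return False  # placeholder; a real implementation would inspect the pixels

def training_images(image_dir: str):
    """Yield only the images permitted for facial recognition training."""
    for path in sorted(Path(image_dir).glob("*.jpg")):
        if has_watermark(path):
            continue  # watermark found: skip the image entirely
        yield path
```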

Source: IBM didn’t inform people when it used their Flickr photos for facial recognition training – The Verge

Man Trains Dog. Dog Trains AI Model. Cats Rule the World.

Researchers Teach AI to Think like a Dog

Source: Researchers teach AI to think like a dog and find out what they know about the world – The Verge

Animals could provide a new source of training data for AI systems.

To train AI to think like a dog, the researchers first needed data. They collected this in the form of videos and motion information captured from a single dog, a Malamute named Kelp. A total of 380 short videos were taken from a GoPro camera mounted to the dog’s head, along with movement data from sensors on its legs and body.

They captured a dog going about its daily life — walking, playing fetch, and going to the park.

Researchers analyzed Kelp’s behavior using deep learning, an AI technique that can be used to sift patterns from data, matching the motion data of Kelp’s limbs and the visual data from the GoPro with various doggy activities.

The resulting neural network trained on this information could predict what a dog would do in certain situations. If it saw someone throwing a ball, for example, it would know that the reaction of a dog would be to turn and chase it.

The predictive capacity of their AI system was very accurate, but only in short bursts. In other words, if the video shows a set of stairs, then you can guess the dog is going to climb them. But beyond that, life is simply too varied to predict.

Dogs “clearly demonstrate visual intelligence, recognizing food, obstacles, other humans, and animals,” so does a neural network trained to act like a dog show the same cleverness?

It turns out yes.

Researchers applied two tests to the neural network, asking it to identify different scenes (e.g., indoors, outdoors, on stairs, on a balcony) and “walkable surfaces” (which are exactly what they sound like: places one can walk). In both cases, the neural network was able to complete these tasks with decent accuracy using just the basic data it had of a dog’s movements and whereabouts.

Dog AI Model Training

Politics around Privacy: Implementing Facial and Object Recognition

This Article is Not…

about deconstructing existing functionality of entire Photo Archive and Sharing platforms.

It is…

to bring awareness to the masses about corporate decisions to omit advanced capabilities for cataloguing photos, such as object recognition and advanced metadata tagging.

Backstory: The Asks / Needs

Every day my family takes tons of pictures, and the pictures are bulk loaded up to the cloud using cloud storage services such as Dropbox, OneDrive, Google Photos, or iCloud. A selected set of photos is uploaded to our favourite social networking platforms (e.g. Facebook, Instagram, Snapchat, and/or Twitter).

Every so often, I will pause and create a photobook, or print out pictures from the last several months. The kids may have a project for school and need to print out, e.g., a family portrait or just a picture of Mom and the kids. In order to find these photos, I have to manually go through our collection of photographs in our cloud storage services, or identify the photos in our social network libraries.

Social Networking Platform Facebook

As far back as I can remember, the social networking platform Facebook has had the ability to tag faces in photos uploaded to the platform. There are restrictions on the privacy side, such as whom you can tag, but the capability exists. The Facebook platform also automatically identifies faces within photos, i.e. places a box around faces in a photo to make the person-tagging capability easier. So, in essence, there is an “intelligent capability” to identify faces in a photo. The Facebook platform allows you to see “Photos of You”, but what seems to be missing is the ability to search for all photos of Fred Smith, a friend of yours, even if all his photos are public. By design, that sounds fit for the purpose of the networking platform.

Auto Curation

  1. Automatically upload new images, in bulk or one at a time, to a Cloud Storage Service (with or without online printing capabilities, e.g. photobooks), and an automated curation process begins.
  2. The Auto Curation process scans photos for:
    1. “Commonly Identifiable Objects”, such as #Car, #Clock, #Fireworks, and #People
    2. Previously tagged objects and faces, which are used to automatically tag newly uploaded photos.
    3. Once auto curation runs several times, and people are manually #tagged, the auto curation process will “learn” faces. Any new auto curation run should then be able to recognize tagged people in new pictures.
  3. The Auto Curation process emails / notifies the library owners of the ingestion results, e.g. Jane Doe and John Smith photographed at Disney World on a date/time stamp, i.e. a report of the executed ingestion and auto curation process (a rough sketch of this pass follows this list).
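
Here is that rough sketch of the auto curation pass in Python; detect_objects() and recognize_faces() are hypothetical stand-ins for whatever vision model or service performs the actual recognition:

```python
from dataclasses import dataclass, field

@dataclass
class Photo:
    filename: str
    tags: set[str] = field(default_factory=set)
    people: set[str] = field(default_factory=set)

def detect_objects(photo: Photo) -> set[str]:
    """Hypothetical: commonly identifiable objects, e.g. {'#Car', '#Fireworks'}."""
    return set()

def recognize_faces(photo: Photo, learned_faces: dict[str, object]) -> set[str]:
    """Hypothetical: names matched against faces learned from manual #tagging."""
    return set()

def auto_curate(photos: list[Photo], learned_faces: dict[str, object]) -> str:
    for photo in photos:
        photo.tags |= detect_objects(photo)
        photo.people |= recognize_faces(photo, learned_faces)
    tagged = sum(1 for p in photos if p.tags or p.people)
    return f"Ingested {len(photos)} photos; auto-tagged {tagged}."  # emailed to owner
```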

Manual Curation

After upload and the auto curation process, optionally, it’s time to manually tag people’s faces and any “objects” you would like to track; e.g. a car aficionado may #tag a vehicle’s make/model with additional descriptive tags. Using the photo curator function on the Cloud Storage Service, the user can tag any “objects” in the photo using Rectangle or Lasso Select.

Curation to Take Action

Once photo libraries are curated, the library owner(s) can:

  • Automatically build albums based on one or more #tags
  • Smart Albums automatically update, e.g. after ingestion and Auto Curation. Albums are tag-sensitive and update with new pics that contain certain people or objects. The user / librarian may dictate the logic for tags.

Where is this Functionality??

Why are many major companies not implementing facial (and object) recognition? Google and Microsoft certainly seem to have the capability and company size to produce the technology.

Is it possible Google and Microsoft are subject to more scrutiny than a Shutterfly? Do privacy concerns, at the moment, leave others to become the trailblazers in this area?

Applying Artificial Intelligence & Machine Learning to Data Warehousing

Protecting the Data Warehouse with Artificial Intelligence

Teleran is a middleware company whose software monitors and governs OLAP activity between the data warehouse and business intelligence tools, like Business Objects and Cognos. Teleran’s suite of tools encompasses a comprehensive analytical and monitoring solution called iSight. In addition, Teleran has a product, iGuard, that leverages artificial intelligence and machine learning to impose real-time query and data access controls. The architecture also allows Teleran’s agent to run on a different host than the database, for additional security and to avoid consuming resources on the database host.

Key Features of iGuard:
  • Policy engine prevents “bad” queries before they reach the database
  • Patented rule engine resides in-memory to evaluate queries at the database protocol layer on the TCP/IP network
  • Patented rule engine prevents inappropriate or long-running queries from reaching the data
70 Customizable Policy Templates

SQL Query Policies

  • Create policies using policy templates based on SQL syntax:
    • Require JOIN to security table
    • Column combination restriction – e.g. prevents combining customer name and social security # in the same query (a toy illustration follows this list)
    • Table JOIN restriction – e.g. prevents joining two particular tables in the same query
    • Equi-literal compare requirement – tightly constrains a query, e.g. prevents hunting for sensitive data by requiring an ‘=’ condition
    • DDL/DCL restrictions (Create, Alter, Drop, Grant)
    • DQL/DML restrictions (Select, Insert, Update, Delete)
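
To give a flavor of the column combination restriction, here is a toy string scan only; iGuard’s patented engine evaluates queries at the database protocol layer, not like this:

```python
# Block any query that selects customer name together with social security number.
RESTRICTED_COMBINATIONS = [{"customer_name", "ssn"}]

def violates_policy(sql: str) -> bool:
    referenced = {word.strip(",") for word in sql.lower().split()}
    return any(combo <= referenced for combo in RESTRICTED_COMBINATIONS)

print(violates_policy("SELECT customer_name, ssn FROM customers"))  # -> True
print(violates_policy("SELECT customer_name FROM customers"))       # -> False
```
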
Data Access Policies

Blocks access to sensitive database objects

  • By user or user groups and time of day (shift) (e.g. ETL)
    • Schemas
    • Tables/Views
    • Columns
    • Rows
    • Stored Procs/Functions
    • Packages (Oracle)
Connection Policies

Blocks connections to the database

  • White list or black list by
    • DB User Logins
    • OS User Logins
    • Applications (BI, Query Apps)
    • IP addresses
Rule Templates Contain Customizable Messages

Each of the “Policy Templates” can send the user querying the database a customized message based on the defined policy. The message back to the user from Teleran should be seamless to the application user’s experience.

iGuard Rules Messaging

Machine Learning: Curbing Inappropriate or Long-Running Queries

iGuard can analyze all of the historical SQL passed through to the data warehouse and suggest new, customized policies to cancel queries with certain SQL characteristics. The Teleran administrator sets parameters, such as rows or bytes returned, and then runs the induction process. New rules are suggested for queries that exceed these defined parameters. The induction engine is “smart” enough to look at the repository of queries holistically and not make determinations based on a single query.
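
Conceptually (and only conceptually; the actual induction engine is Teleran’s own technology), the process resembles scanning the query history as a whole and proposing a rule only when a pattern repeatedly exceeds the thresholds:

```python
from collections import Counter

MAX_ROWS = 1_000_000   # administrator-defined threshold
MIN_OCCURRENCES = 3    # holistic: never react to a single bad query

history = [            # (normalized query pattern, rows returned)
    ("SELECT * FROM sales", 5_000_000),
    ("SELECT * FROM sales", 7_200_000),
    ("SELECT * FROM sales", 6_100_000),
    ("SELECT id FROM dim_date", 3_650),
]

offenders = Counter(pattern for pattern, rows in history if rows > MAX_ROWS)
suggested_rules = [pattern for pattern, n in offenders.items() if n >= MIN_OCCURRENCES]
print(suggested_rules)  # -> ['SELECT * FROM sales']
```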

Finally, here is a high-level overview of the implementation architecture of iGuard. For sales or pre-sales technical questions, please visit www.teleran.com.

Teleran Logical Architecture

Currently Featured Clients

Teleran Featured Clients