Animals could provide a new source of training data for AI systems.
To train AI to think like a dog, the researchers first needed data. They collected this in the form of videos and motion information captured from a single dog, a Malamute named Kelp. A total of 380 short videos were taken from a GoPro camera mounted to the dog’s head, along with movement data from sensors on its legs and body.
They captured a dog going about its daily life — walking, playing fetch, and going to the park.
Researchers analyzed Kelp’s behavior using deep learning, an AI technique that can be used to sift patterns from data, matching the motion data of Kelp’s limbs and the visual data from the GoPro with various doggy activities.
The resulting neural network trained on this information could predict what a dog would do in certain situations. If it saw someone throwing a ball, for example, it would know that the reaction of a dog would be to turn and chase it.
Their AI system's predictions were quite accurate, but only in short bursts. In other words, if the video shows a set of stairs, then you can guess the dog is going to climb them. But beyond that, life is simply too varied to predict.
Dogs “clearly demonstrate visual intelligence, recognizing food, obstacles, other humans, and animals,” so does a neural network trained to act like a dog show the same cleverness?
It turns out yes.
Researchers applied two tests to the neural network, asking it to identify different scenes (e.g., indoors, outdoors, on stairs, on a balcony) and “walkable surfaces” (which are exactly what they sound like: places a dog can walk). In both cases, the neural network was able to complete these tasks with decent accuracy using just the basic data it had of a dog’s movements and whereabouts.
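To make the second test concrete, here is a toy sketch of the scene-classification task: predicting a scene label from motion features alone. The feature vectors, labels, and nearest-centroid classifier below are all invented stand-ins; the actual research used deep networks trained on the GoPro video and sensor streams.

```python
# Toy stand-in for scene prediction from motion data alone.
# Features and labels are invented for illustration only.
from math import dist

# Hypothetical (feature_vector, scene_label) training pairs,
# e.g. [mean stride length, body tilt, head-bob frequency]
TRAINING = [
    ((0.9, 0.1, 2.0), "stairs"),
    ((1.4, 0.0, 1.1), "outdoors"),
    ((0.6, 0.0, 0.8), "indoors"),
]

def predict_scene(features):
    """Nearest-centroid stand-in for the trained network."""
    return min(TRAINING, key=lambda pair: dist(pair[0], features))[1]
```

The point is only the shape of the problem: motion features in, scene label out, no video required at inference time.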
This post is about deconstructing the existing functionality of entire Photo Archive and Sharing platforms, and about bringing awareness to the masses regarding corporate decisions to omit advanced capabilities such as cataloguing photos, object recognition, and advanced metadata tagging.
Backstory: The Asks / Needs
Every day my family takes tons of pictures, and the pictures are bulk loaded up to The Cloud using Cloud Storage Services, such as DropBox, OneDrive, Google Photos, or iCloud. A selected set of photos are uploaded to our favourite Social Networking platform (e.g. Facebook, Instagram, Snapchat, and/or Twitter).
Every so often, I will take pause and create a Photobook, or print out pictures from the last several months. The kids may have a school project requiring a printout, e.g. a Family Portrait or just a picture of Mom and the kids. In order to find these photos, I have to manually go through our collection of photographs in our Cloud Storage Services, or identify the photos in our Social Network libraries.
Social Networking Platform Facebook
For as long as I can remember, the Social Networking platform Facebook has had the ability to tag faces in photos uploaded to the platform. There are restrictions on the privacy side, such as whom you can tag, but the capability still exists. The Facebook platform also automatically identifies faces within photos, i.e. places a box around faces in a photo to make the person-tagging capability easier. So, in essence, there is an “intelligent capability” to identify faces in a photo. The Facebook platform lets you see “Photos of You”, but what seems to be missing is the ability to search for all photos of Fred Smith, a friend of yours, even if all his photos are public. By design, that fits the purpose of the networking platform.
Automatically upload new images, in bulk or one at a time, to a Cloud Storage Service (with or without Online Printing Capabilities, e.g. Photobooks), and an automated curation process begins.
The Auto Curation process scans photos for:
“Commonly Identifiable Objects”, such as #Car, #Clock, #Fireworks, and #People
Auto Curation of new photos: based on previously tagged objects and faces, matching objects and faces in newly uploaded photos will be automatically tagged.
Once auto curation runs several times, and people are manually #tagged, the auto curation process will “learn” faces. Any new auto curation process executed should be able to recognize tagged people in new pictures.
The Auto Curation process emails / notifies the library owners of the ingestion process results, e.g. “Jane Doe and John Smith photographed at Disney World” with a date / time stamp, i.e. a report of the executed ingestion and auto curation process.
After the upload and auto curation process, optionally, it’s time to manually tag people’s faces, and any ‘objects’ which you would like to track; e.g. a car aficionado might #tag a vehicle’s make/model with additional descriptive tags. Using the photo curator function on the Cloud Storage Service, the user can tag any “objects” in the photo using Rectangle or Lasso Select.
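A minimal sketch of the auto curation pass described above, with the face / object recognizers stubbed out as precomputed inputs. Every name and structure here is hypothetical; a real service would call detection models and a face database.

```python
# Sketch of one auto-curation pass: turn recognized objects and
# known faces into #tags. Recognition itself is stubbed out.

KNOWN_FACES = {"Jane Doe", "John Smith"}   # "learned" from prior manual #tags

def auto_curate(photo):
    """Return the set of #tags to attach to a newly ingested photo."""
    tags = {"#" + obj.capitalize() for obj in photo["objects"]}
    tags |= {"#" + face for face in photo["faces"] if face in KNOWN_FACES}
    return tags

report = auto_curate({
    "objects": ["car", "fireworks"],
    "faces": ["Jane Doe", "Stranger"],     # unknown faces stay untagged
})
```

The returned tag set is what the ingestion report / notification would summarize for the library owner.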
Curation to Take Action
Once photo libraries are curated, the library owner(s) can:
Automatically build albums based on one or more #tags
Smart Albums automatically update, e.g. after ingestion and Auto Curation. Albums are tag sensitive and update with new pics that contain certain people or objects. The user/librarian may dictate the logic for tags.
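The Smart Album behavior could be sketched as a simple tag filter that re-evaluates after each ingestion; the photo records below are hypothetical:

```python
# Sketch of Smart Album logic: an album defined by a set of required
# #tags that is re-evaluated over the whole library after each ingestion.

def smart_album(photos, required_tags):
    """Return ids of photos carrying ALL the required #tags (AND logic)."""
    return [p["id"] for p in photos if required_tags <= p["tags"]]

PHOTOS = [
    {"id": "img1", "tags": {"#Car", "#John Smith"}},
    {"id": "img2", "tags": {"#Car"}},
]
album = smart_album(PHOTOS, {"#Car", "#John Smith"})   # only img1 has both tags
```

OR logic or more elaborate rules would be where the user/librarian “dictates logic for tags.”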
Where is this Functionality??
Why are major companies not implementing facial (and object) recognition? Google and Microsoft would seem to have the capability and the scale to produce the technology.
Is it possible Google and Microsoft are subject to more scrutiny than a Shutterfly? Do privacy concerns, at the moment, leave room for others to become trailblazers in this area?
Protecting the Data Warehouse with Artificial Intelligence
Teleran is a middleware company whose software monitors and governs OLAP activity between the Data Warehouse and Business Intelligence tools, like Business Objects and Cognos. Teleran’s suite of tools encompasses a comprehensive analytical and monitoring solution called iSight. In addition, Teleran has a product, iGuard, that leverages artificial intelligence and machine learning to impose real-time query and data access controls. The architecture also allows Teleran’s agent to run on a different host from the database, for additional security and to avoid consuming resources on the database host.
Key Features of iGuard:
Policy engine prevents “bad” queries before reaching database
Patented rule engine resides in-memory to evaluate queries at database protocol layer on TCP/IP network
Patented rule engine prevents inappropriate or long-running queries from reaching the data
70 Customizable Policy Templates
SQL Query Policies
Create policies using policy templates based on SQL Syntax:
Require JOIN to Security Table
Column Combination Restriction – Ex. Prevents combining customer name and social security #
Table JOIN restriction – Ex. Prevents joining two different tables in same query
Equi-literal Compare requirement – Tightly Constrains Query Ex. Prevents hunting for sensitive data by requiring ‘=‘ condition
By user or user groups and time of day (shift) (e.g. ETL)
Blocks connections to the database
White list or black list by
DB User Logins
OS User Logins
Applications (BI, Query Apps)
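As an illustration only (not Teleran’s actual implementation), the policy checks listed above amount to rejecting a query before it reaches the database when it violates a rule. The specific policies and login lists below are invented:

```python
# Invented policy-engine sketch: block a query at the protocol layer
# before it reaches the database. Policies loosely mirror the list above.

FORBIDDEN_COMBO = {"customer_name", "ssn"}   # column-combination restriction
ALLOWED_LOGINS = {"etl_user", "bi_app"}      # white list of DB user logins

def evaluate(query, login):
    q = query.lower()
    if login not in ALLOWED_LOGINS:
        return "blocked: login not on white list"
    if FORBIDDEN_COMBO <= set(q.replace(",", " ").split()):
        return "blocked: restricted column combination"
    if "join" in q and "security_table" not in q:
        return "blocked: JOIN must include security table"
    return "allowed"
```

A real engine would parse SQL rather than match tokens, and would return the customizable policy message to the user’s application.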
Rule Templates Contain Customizable Messages
Each of the “Policy Templates” has the ability to send the user querying the database a customized message based on the defined policy. The message back to the user from Teleran should be seamless within the application user’s experience.
Machine Learning: Curbing Inappropriate, or Long Running Queries
iGuard has the ability to analyze all of the historical SQL passed through to the Data Warehouse, and suggest new, customized policies to cancel queries with certain SQL characteristics. The Teleran administrator sets parameters such as rows or bytes returned, and then runs the induction process. New rules will be suggested which exceed these defined parameters. The induction engine is “smart” enough to look at the repository of queries holistically and not make determinations based on a single query.
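The induction idea could be sketched like this: scan historical query statistics and suggest a blocking rule only when a pattern of excess recurs, not for a single outlier. The log format and thresholds here are hypothetical:

```python
# Sketch of holistic rule induction over a query history.
# A rule is suggested only when the same query signature repeatedly
# exceeds the administrator's threshold, never from one outlier.
from collections import Counter

def suggest_rules(history, max_rows, min_occurrences=3):
    """history: list of (query_signature, rows_returned) tuples."""
    excess = Counter(sig for sig, rows in history if rows > max_rows)
    return sorted(sig for sig, n in excess.items() if n >= min_occurrences)
```

The administrator would review the suggested rules before activating them as policies.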
Google may attempt to leapfrog their Digital Assistant competition by taking advantage of their ability to search against all Google products. The more personal data a Digital Assistant may access, the greater the potential for increased value per conversation.
As a first step, Google’s “Personal” Search tab in their Search UI has access to Google Calendar, Photos, and your Gmail data. No doubt other Google products are coming soon.
Big benefits come not just from letting the consumer search through their personal Google data, but from providing that consolidated view to the AI Assistant. Does the Google [Digital] Assistant already have access to Google Keep data, for example? Is providing Google’s “Personal” search results a dependency to broadening the Digital Assistant’s access and usage? If so, these interactions are most likely based on a reactive model, rather than proactive dialogs, i.e. the Assistant initiating the conversation with the human.
“What you need, before you ask. Stay a step ahead with Now cards about traffic for your commute, news, birthdays, scores and more.”
I’m not sure how much proactivity the Google AI is built to provide, but most likely, it’s barely scratching the surface of what’s possible.
Modeling Personal, AI + Human Interactions
The model: starting from N accessible data sources, search for actionable data points, correlate those data points to others, and then escalate to the human through a dynamic or predefined Assistant Consumer Workflow (ACW). The proactive AI Digital Assistant initiates human contact to engage in commerce without otherwise being triggered by the consumer.
Actionable data point correlations can trigger multiple goals in parallel. However, the execution of goal based rules would need to be managed. The consumer doesn’t want to be bombarded with AI Assistant suggestions, but at the same time, “choice” opportunities may be appropriate, as the Google [mobile] App has implemented ‘Cards’ of bite size data, consumable from the UI, at the user’s discretion.
As an ongoing ‘background’ AI / ML process, Digital Assistant ‘server side’ agent may derive correlations between one or more data source records to get a deeper perspective of the person’s life, and potentially be proactive about providing input to the consumer decision making process.
The proactive Google Assistant may suggest booking your annual fishing trip soon, an elevated interaction with the consumer / user.
The Assistant may search Gmail records referring to an annual fishing trip ‘last year’ in August: an AI, background, server-side parameter / profile search, driven by a predefined Assistant Consumer Workflow (ACW) in an “Annual Events” category. Such workflows are ‘predefined’ for a core set of goals/rules.
The AI Assistant may search the user’s photo archive on the server side. Any photo metadata could be garnered from the search, including date / time stamps, abstracted to include the ‘Season’ of the year, and other synonym tags.
Photos from around ‘August’ may be earmarked for Assistant use
Photos may be geo tagged, e.g. Lake Champlain, which is known for its fishing.
All objects in the image may be stored as image metadata. Using image object recognition against all photos in the consumer’s repository, goal / rule execution may occur against pictures from last August; the Assistant may identify the “fishing buddies” posing with a huge “Bass fish”.
In addition to the Assistant making the suggestion re: booking the trip, Google’s Assistant may bring up ‘highlighted’ photos from last fishing trip to ‘encourage’ the person to take the trip.
In this type of interaction, the Assistant has the ability to proactively ‘coerce’ and influence the human decision making process. Building these interactive models of communication, and the ‘management’ process to govern the AI Assistant, is within reach.
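The “Annual Events” workflow sketched in this example might look like the following, with the Gmail and photo data sources reduced to plain lists; all of the matching rules here are assumptions:

```python
# Sketch of an "Annual Events" ACW: correlate mail mentions of an event
# with photo metadata from the same season, then surface a suggestion
# plus highlight photos to "encourage" the trip.

def annual_event_suggestion(mail_subjects, photos, keyword, month):
    mentioned = any(keyword in s.lower() for s in mail_subjects)
    highlights = [p for p in photos
                  if p["month"] == month and keyword in p["tags"]]
    if mentioned and highlights:
        return {"suggest": f"Book your annual {keyword} trip",
                "highlight_photos": [p["id"] for p in highlights]}
    return None   # no correlation, stay quiet

suggestion = annual_event_suggestion(
    mail_subjects=["Re: fishing trip photos", "Invoice"],
    photos=[{"id": "p1", "month": 8, "tags": {"fishing"}},
            {"id": "p2", "month": 1, "tags": {"skiing"}}],
    keyword="fishing", month=8)
```

Returning `None` when the correlation is weak is one way to keep the Assistant from bombarding the consumer.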
Predefined Assistant Consumer / User Workflows (ACW) may be created by third parties, such as Travel Agencies, or by industry groups, such as food suppliers. The “low hanging fruit”, easiest to implement, may be a “time to get more milk” workflow. Or food may not be the best place to start, i.e. Amazon Dash already occupies that space.
Google seems to be rolling out a new feature in search results that adds a “Personal” tab to show content from [personal] private sources, like your Gmail account and Google Photos library. The addition of the tab was first reported by Search Engine Roundtable, which spotted the change earlier today.
I’ve been very vocal about a Google Federated Search, specifically across the user’s data sources, such as Gmail, Calendar, and Keep. Although it doesn’t seem that Google has implemented Federated Search across all of the user’s Google data sources yet, they’ve picked a few data sources and started up the mountain.
It seems Google is rolling out this capability iteratively, and as with Agile/Scrum, the aim is to get user feedback and deliver in slices.
The Search Engine Roundtable article didn’t seem to indicate that Google has publicly announced this effort; perhaps Google is waiting for more substance, and more stick time.
As initially reported by Search Engine Roundtable, the output of Gmail results appear in a single column text output with links to the content, in this case email.
The “Personal Search” output appears to be sequenced by data source. Each of the three app data sources displayed in the “Personal” search enables the user to drill down into the records displayed, e.g. a specific email.
Group Permissions – Searching
Providing users the ability to search across varied Google repositories (shared calendars, photos, etc.) will enable both business teams and families (e.g. Apple’s family iCloud share) to collaborate and share more seamlessly. At present, Cloud Search, part of G Suite by Google Cloud, offers search across team / org digital assets:
Use the power of Google to search across your company’s content in G Suite. From Gmail and Drive to Docs, Sheets, Slides, Calendar, and more, Google Cloud Search answers your questions and delivers relevant suggestions to help you throughout the day.
Are you adequately prepared for your next litigation? Does going into court with an army of Co-Counsel make you feel more confident, more prepared? Make sure you bring along the AI Whispering Digital Co-Counsel: Co-Counsel that doesn’t break a sweat, doesn’t get nervous, and is always prepared. It even takes the opportunity to learn while on the job, via machine learning.
The whispering digital agent advises litigators with “just-in-time” rebuttals, citing historical precedent, for example. The Digital Co-Counsel analyzes the dialog within the courtroom to identify ‘goals’, the intent of the conversation(s). The Digital Co-Counsel also identifies the current workflow, which may be Cross or Direct Examination, Opening Statement, or Closing Argument.
Realtime observation of a court case and advice based on:
Observed dialog interactions between all parties involved in the case, such as opposing counsel, witnesses, subject matter experts, may trigger “guidance” from the Digital Co-Counsel based on a compound of utterances, and identified workflow.
Court case evidence submitted may be digitized and analyzed based on a [predetermined] combination of identified attributes of the submitted evidence. This evidence, in turn, may be rebutted by counterarguments, alternate ‘perspectives’, or counter-evidence.
The introduction of ‘bias’ toward the opposing counsel.
Implementation of the Digital Co-Counsel may be through a Smartphone application, with a Bluetooth earpiece used throughout the case.
My opinions are my own, and do not necessarily reflect my employer’s viewpoint.
The AI personal assistant with the “most usage” spanning connectivity across all smart devices, will be the anchor upon which users will gravitate to control their ‘automated’ lives. An Amazon commercial just aired which depicted a dad with his daughter, and the daughter was crying about her boyfriend who happened to be in the front yard yelling for her. The dad says to Amazon’s Alexa, sprinklers on, and yes, the boyfriend got soaked.
What is so special about top spot for the AI Personal Assistant? Controlling the ‘funnel’ upon which all information is accessed, and actions are taken means the intelligent ability to:
Serve up content / information, which could then be mixed in with advertisements, or ‘intelligent suggestions’ based on historical data, i.e. machine learning.
Proactive, suggestive actions may lead to sales of goods and services, e.g. the AI Personal Assistant flags potential ‘buys’ from eBay based on user profiles.
Three main sources of AI Personal Assistant value add:
A portal to the “outside” world; e.g. if I need information, I wouldn’t “surf the web”, I would ask Cortana to go “Research” XYZ. In the Business Intelligence / data warehousing space, a business analyst may need to run a few queries in order to get the information they want. By the same token, Microsoft Cortana may come back to you several times to ask “for your guidance”.
An abstraction layer between the user and their apps; The user need not ‘lift a finger’ to any app outside the Personal Assistant with noted exceptions like playing a game for you.
User Profiles derived from the first two points; I.e. data collection on everything from spending habits, or other day to day rituals.
Proactive and chatty assistants may win “Assistant of Choice” on all platforms. Being proactive means collecting data more often than when it’s just you asking questions ad hoc. Proactive AI Personal Assistants that are Geo Aware may make “timely, appropriate interruptions” (notifications) based on time and location. E.g. “Don’t forget milk,” says Siri, as you’re passing the grocery store. Around the time I leave work, Google Maps tells me if I have traffic and my ETA.
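A geo-aware interruption rule like the “don’t forget milk” example could be sketched as follows; the coordinates, radius, and time window are invented:

```python
# Toy geo-aware, time-aware interruption rule. A reminder fires only
# when the user is near a tagged place during a chosen time window.
from math import dist

REMINDERS = [{"text": "Don't forget milk",
              "place": (40.7410, -73.9897),   # hypothetical grocery store
              "hours": range(16, 20)}]        # evening-commute window only

def due_notifications(location, hour, radius=0.01):
    """Return reminder texts that are due at this location and hour."""
    return [r["text"] for r in REMINDERS
            if hour in r["hours"] and dist(location, r["place"]) <= radius]
```

A production system would use proper geodesic distance and OS geofencing APIs rather than raw coordinate distance.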
It’s possible for the [non-native] AI Personal Assistant to become the ‘abstract’ layer on top of ANY mobile OS (iOS, Android), and is the funnel by which all actions / requests are triggered.
Microsoft Cortana has an iOS app and widget, which is wrapped around the OS. Tighter integration may be possible but not allowed by iOS, the iPhone, and the Apple Co. Note: Google’s Allo does not provide an iOS widget at the time of this writing.
Antitrust violation by mobile smartphone maker Apple: iOS must allow for the ‘substitution’ of a competitive AI Personal Assistant to be triggered in the same manner as the native Siri, “press and hold home button” capability that launches the default packaged iOS assistant Siri.
Reminiscent of the Microsoft IE Browser / OS antitrust violations in the past.
Holding the iPhone Home button brings up Siri. There should be an OS setting to swap out which Assistant is to be used with the mobile OS as the default. Today, the iPhone / iPad iOS only supports “Siri” under the Settings menu.
ANY AI Personal Assistant should be allowed to replace the default OS Personal Assistant, from Amazon’s Alexa or Microsoft’s Cortana to any startup company with the expertise and resources needed to build and deploy a Personal Assistant solution. Has Apple taken steps to tightly couple Siri with its iOS?
AI Personal Assistant “Wish” List:
Interactive, Voice Menu Driven Dialog; The AI Personal Assistant should know what installed [mobile] apps exist, as well as their actionable, hierarchical taxonomy of feature / functions. The Assistant should, for example, ask which application the user wants to use, and if not known by the user, the assistant should verbally / visually list the apps. After the user selects the app, the Assistant should then provide a list of function choices for that application; e.g. “Press 1 for “Play Song”
The interactive voice menu should also provide a level of abstraction when available; e.g. the user need not select the app, and can just say “Create Reminder”. There may be several applications on the Smartphone that do the same thing, such as Note Taking and Reminders. In the OS Settings, under a new ‘AI Personal Assistant’ menu, installed applications compatible with this “AI Personal Assistant” service layer should be listed, grouped by sets of categories defined by the Mobile OS.
Capability to interact with IoT using user defined workflows. Hardware and software may exist in the Cloud.
Ever tighter integration with native as well as 3rd party apps, e.g. Google Allo and Google Keep.
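The hierarchical app / function taxonomy in the wish list above might be sketched like this, including resolution of a bare intent such as “Create Reminder” across categories; the app names and categories are made up:

```python
# Sketch of the actionable, hierarchical app/function taxonomy the
# Assistant would hold: category -> apps -> spoken functions.
TAXONOMY = {
    "Music": {"apps": {"PlayTunes": ["Play Song", "Shuffle"]}},
    "Notes": {"apps": {"QuickNote": ["Create Reminder", "New Note"],
                       "ListKeeper": ["Create Reminder"]}},
}

def apps_for(category):
    """Apps the Assistant would verbally / visually list for a category."""
    return sorted(TAXONOMY[category]["apps"])

def resolve_intent(intent):
    """All apps, in any category, that can handle a spoken intent."""
    return sorted(app for cat in TAXONOMY.values()
                  for app, funcs in cat["apps"].items() if intent in funcs)
```

When `resolve_intent` returns more than one app, the Assistant would fall back to the numbered voice menu (“Press 1 for …”) described above.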
Apple could already be making the changes as a natural course of their product evolution. Even if the ‘big boys’ don’t want to stir up a hornet’s nest, all you need is VC and a few good programmers to pick a fight with Apple.
Amazon, Google, IBM and Microsoft are using high salaries and games pitting humans against computers to try to claim the standard on which all companies will build their A.I. technology.
In this fight — no doubt in its early stages — the big tech companies are engaged in tit-for-tat publicity stunts, circling the same start-ups that could provide the technology pieces they are missing and, perhaps most important, trying to hire the same brains.
The next “tit-for-tat” publicity stunt should most definitely be a battle with robots, exactly like BattleBots, except…
Use A.I. to consume vast amounts of video footage from previous bot battles, while identifying key elements of bot design that gave a bot the ‘upper hand’. From a human cognition perspective, this exercise may be subjective. The BattleBot scoring process can play a factor in 1) conceiving designs, and 2) defining ‘rules’ of engagement.
Use A.I. to produce BattleBot designs for humans to assemble.
Autonomous battles, bot on bot, based on Artificial Intelligence battle ‘rules’ acquired from the input and analysis of video footage.