Tag Archives: AI Induction Engine

The Race Is On to Control Artificial Intelligence, and Tech’s Future

Amazon, Google, IBM and Microsoft are using high salaries and games pitting humans against computers to try to claim the standard on which all companies will build their A.I. technology.

In this fight — no doubt in its early stages — the big tech companies are engaged in tit-for-tat publicity stunts, circling the same start-ups that could provide the technology pieces they are missing and, perhaps most important, trying to hire the same brains.

For years, tech companies have used man-versus-machine competitions to show they are making progress on A.I. In 1997, an IBM computer beat the chess champion Garry Kasparov. Five years ago, IBM went even further when its Watson system won a three-day match on the television trivia show “Jeopardy!” Today, Watson is the centerpiece of IBM’s A.I. efforts.

Today, only about 1 percent of all software apps have A.I. features, IDC estimates. By 2018, IDC predicts, at least 50 percent of developers will include A.I. features in what they create.

Source: The Race Is On to Control Artificial Intelligence, and Tech’s Future – The New York Times

The next “tit-for-tat” publicity stunt should most definitely be a battle with robots, exactly like BattleBots, except…

  1. Use A.I. to consume vast amounts of video footage from previous bot battles, identifying the key elements of bot design that gave a bot the ‘upper hand’.  From a human cognition perspective this judgment may be subjective, so the BattleBots scoring process can play a factor in 1) conceiving designs, and 2) defining ‘rules’ of engagement.
  2. Use A.I. to produce BattleBot designs for humans to assemble.
  3. Autonomous battles, bot on bot, based on battle ‘rules’ the A.I. acquired from analyzing that video footage.
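
Step 1 could start as something very simple: tally how often each design feature shows up in winning versus losing bots, then rank features by win rate. The feature names and battle records below are invented for illustration; a real system would extract them from footage and official scoring.

```python
from collections import defaultdict

# Hypothetical battle records: the design features each bot had, and
# whether it won. In practice these would come from video analysis.
battles = [
    ({"wedge", "low_profile"}, True),
    ({"spinner", "high_clearance"}, False),
    ({"wedge", "spinner"}, True),
    ({"flipper", "high_clearance"}, False),
]

wins = defaultdict(int)
appearances = defaultdict(int)
for features, won in battles:
    for feature in features:
        appearances[feature] += 1
        if won:
            wins[feature] += 1

# Rank features by observed win rate: a crude proxy for 'upper hand'.
win_rate = {f: wins[f] / appearances[f] for f in appearances}
for feature, rate in sorted(win_rate.items(), key=lambda kv: -kv[1]):
    print(feature, round(rate, 2))
```

With enough real battle data, the top-ranked features would feed step 2, the A.I.-produced designs for humans to assemble.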

Alzheimer’s Afflicted: Technology to Help Remember Habitual Activities

Anyone ever walk into a room and forget why on Earth you were there?  Were you about to get a cup of coffee, or your car keys?  Wonderful!  It’s frustrating at my level of distraction; now magnify that to the Nth degree: Alzheimer’s.  Apply a rules and induction engine, and poof!  A step further away from a managed care facility.

Teaching the A.I. induction and rules engine may require the help of your 10-year-old grandson.  It’s relatively easy, but you might need him to sleep over for a day or two.

It’s all about variations on the same theme: tag a location, such as a room in an apartment, and tag an action, such as getting a cup of coffee from the kitchen.  The repetitive nature of activities with location tags lets the engine draw conclusions from historical behavior, and the more action and location tag pairs it sees, the ‘smarter’ it becomes about your habitual activities.  The calculations also form a bell curve, a way to prioritize the most probable location/action tags when suggesting a behavior; the ‘outliers’ on the bell curve have the lowest probability of occurrence.
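
A minimal sketch of that idea: count how often each action has been observed at a location, and rank actions by probability so the most likely habit is suggested first and the outliers fall to the bottom. The history entries here are made up for illustration.

```python
from collections import Counter

# Hypothetical history of (location, action) tags logged over time.
history = [
    ("kitchen", "get coffee"),
    ("kitchen", "get coffee"),
    ("kitchen", "get water"),
    ("hallway", "get car keys"),
    ("kitchen", "get coffee"),
]

def suggest(location, history):
    """Rank actions seen at this location by observed probability."""
    counts = Counter(action for loc, action in history if loc == location)
    total = sum(counts.values())
    return [(action, n / total) for action, n in counts.most_common()]

print(suggest("kitchen", history))
# Most probable action first: ("get coffee", 0.75), then ("get water", 0.25)
```

Walk into the kitchen looking puzzled, and the engine’s best guess is the coffee.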

In addition, RFID tags installed in your apartment will increase the effectiveness of the ‘advice’ engine by adding more granular location tags.

Microchip RFID compared to the size of a grain of rice.
Beyond this ‘black box’, a small, lightweight computer (a smartphone) integrates Bluetooth, NFC, and Wi-Fi antennas; add a mobile application and you’re set.  A small, high-quality Bluetooth microphone interacts with the app.  There’s also potential for exploring beyond the home.

Kidding, you don’t need that grandson to help.  Speak into the mic, say “Train”, go into the room, and say your activity: “coffee”.  The app will correlate your location and action.  Everyone loves to be included in the Internet of Things, so app features like alerts for deviations from the location ‘map’ are possible.
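
The voice “Train” flow could be sketched as a tiny state machine: the word “Train” arms the trainer, and the next utterance is paired with the current location reading (from RFID, Wi-Fi, or whatever positioning is available). The class and phrases below are assumptions for illustration, not a real app’s API.

```python
class HabitTrainer:
    """Minimal sketch: pair a spoken activity with the current location tag."""

    def __init__(self):
        self.log = []           # accumulated (location, activity) pairs
        self.training = False   # True after the user says "Train"

    def hear(self, phrase, location):
        if phrase == "Train":
            self.training = True        # next utterance names the activity
        elif self.training:
            self.log.append((location, phrase))
            self.training = False

trainer = HabitTrainer()
trainer.hear("Train", location="hallway")
trainer.hear("coffee", location="kitchen")   # logged as ("kitchen", "coffee")
```

Each logged pair feeds the habit history that the suggestion engine ranks by frequency.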

In earnest, I am fairly certain that this type of solution already exists.  Barriers to adoption could include the computer/smartphone generational gap.  Otherwise, someone is already producing the solution, and I just wasted a bus ride home.

Additionally, this software may be integrated with Apple’s Siri, Google Now, Yahoo Index, or Microsoft Cortana, as an extension of the personal assistant.

Samsung and Cambridge to Produce Interactive Avatar

BBC News – Is this interactive avatar the face of the future?

I read this article and instantly saw a logical progression: take the eye and facial tracking software, such as that built into the Samsung Galaxy S4, and integrate it with a cost-effective version of the Cambridge project.  There are many applications:

  • S Voice Drive, or another voice recognition component driving smartphone features, may display, instead of the typical microphone, a ‘friendly’ avatar chosen from several options, e.g. a famous star, a comedian, an actress, or a sports athlete.  The eye and facial tracking software may then ask you which smartphone functions you want to perform.
  • An A.I. induction engine, i.e. a learning rules engine, may record your facial gestures, eye movements, and sounds, even inflection, as data points to correlate, so responses can be proactive rather than reactive, e.g. the avatar would say, “Should I call your wife?  You seem tense, and you may want to call her to relax.”
  • This is a slippery slope with respect to an A.I. providing advice based on human output such as eye movements and facial gestures.  It seems people are, at present, more comfortable with mechanical A.I. interactions, such as an eye movement to turn a page, read mail, or make a phone call.  These very mechanical processes allow people to feel more comfortable with the technology.
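
The proactive-avatar idea in the second bullet could be sketched as a simple correlation table: observed cues (facial gesture, voice tone) are tallied against what the user did next, and the most frequent pairing becomes the suggestion. The cue and action names are invented for illustration; real input would come from the tracking software.

```python
from collections import Counter, defaultdict

# Hypothetical observations: (facial_cue, voice_tone) paired with what
# the user then chose to do.
observed = [
    (("furrowed_brow", "tense"), "call wife"),
    (("furrowed_brow", "tense"), "call wife"),
    (("smile", "calm"), "read mail"),
]

# Tally which action most often follows each cue.
suggestions = defaultdict(Counter)
for cue, action in observed:
    suggestions[cue][action] += 1

def proactive_suggestion(cue):
    """Return the action most often correlated with this cue, if any."""
    if cue in suggestions:
        return suggestions[cue].most_common(1)[0][0]
    return None

print(proactive_suggestion(("furrowed_brow", "tense")))
```

A furrowed brow and a tense voice have twice preceded a call to the wife, so that becomes the avatar’s proactive offer; an unseen cue yields no suggestion at all.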