Tag Archives: Image Recognition

Who’s at the Front Door…Again?

Busy Time of Year, Happy Holidays

The holiday season brings lots of people to your front door. If you have a front door camera, you may be getting frequent alerts letting you know there is motion at the door. It would be great if front doorbell cameras could take the next step: incorporate #AI facial/image recognition and notify you through #iOS notifications WHO is at the front door and, in some cases, which “uniformed” person is at the door, e.g. a FedEx or UPS delivery person.

Ring iOS Notification

This facial recognition technology is already baked into Microsoft #OneDrive Photos and Apple #iCloud Photos. It wouldn’t be a huge leap to apply facial and object recognition to catalog the people who come to your front door, as well as to recognize the uniforms they are wearing, e.g., a UPS delivery person.

iCloud and OneDrive Photos identify faces in your images and group them by likeness, so the owner of the photo gallery can label a group of faces as, for example, Grandma. It may take one extra step for the camera owner to log in to the image/video storage service and classify a group of video-derived stills containing Grandma’s face. Facebook (Meta) can also tag the faces within pictures you upload and share, and the Facebook app can “guess” faces based on previously uploaded images.

There would be no need to launch the Ring app to see who’s at the front door. Facial recognition could remove the step of checking what the motion at the front door is and simply post the iOS notification with the “who’s there”.

That’s one less step: no launching the Ring app to see who is at the front door.
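Below is a minimal sketch of how such a flow could work. Everything here is hypothetical: embed_face() and send_push() stand in for a face-embedding model and an iOS push service, and none of it is part of the Ring or iOS APIs.

```python
import numpy as np

# Hypothetical helpers (stand-ins, not part of any real Ring or iOS SDK).
# embed_face(path) would return a face-embedding vector for a doorbell snapshot;
# send_push(message) would post the iOS notification.
from doorbell_toolkit import embed_face, send_push  # hypothetical module

# Embeddings the household has already labeled (faces and uniforms).
KNOWN_VISITORS = {
    "Grandma": np.load("grandma.npy"),
    "UPS delivery person": np.load("ups_uniform.npy"),
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify_visitor(snapshot_path, threshold=0.8):
    """Return the best-matching known label, or None if nobody clears the threshold."""
    visitor = embed_face(snapshot_path)
    best_label, best_score = None, threshold
    for label, known in KNOWN_VISITORS.items():
        score = cosine(visitor, known)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

def on_motion(snapshot_path):
    who = identify_visitor(snapshot_path)
    # Fall back to today's generic "motion" alert when the face isn't recognized.
    send_push(f"{who} is at the front door" if who else "Motion at the front door")
```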

Bose AR, Audio Augmented Reality – Use Cases

I’ve been enamored with Bose products for well over a decade. However, we’ve seen other quality brands enter the high-fidelity audio market over that time. Beyond the quality design of their classic audio products, can Bose Augmented Reality (Bose AR) be the market differentiator?

Bose: Using a Bose-AR-equipped wearable, a smartphone, and an app enabled with Bose AR, the new platform lets you hear what you see.

It sounds like Bose may come out with an initial design, sunglasses, but turn to third-party hardware manufacturers of all sorts to integrate Bose AR into other wearable products.

Bose Augmented Reality isn’t just about audio. The devices will use sensors to track head motions for gesture controls and work with GPS from a paired smartphone to track location.  The company also aspires to combine visual information with the Bose AR platform.
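A rough sketch of how those two sensor streams could come together: a geofence check against the paired phone’s GPS selects an audio clip, and a simple pitch threshold on the head-tracking data acts as a “nod to play” gesture. Every name below is invented for illustration; this is not the Bose AR SDK.

```python
import math

# Invented landmark table: (name, latitude, longitude, audio clip).
LANDMARKS = [
    ("Lincoln Memorial", 38.8893, -77.0502, "gettysburg_address.mp3"),
]

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance in meters (haversine formula)."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby_clip(phone_lat, phone_lon, radius_m=75):
    """Return (name, clip) for the first landmark inside the geofence, if any."""
    for name, lat, lon, clip in LANDMARKS:
        if distance_m(phone_lat, phone_lon, lat, lon) <= radius_m:
            return name, clip
    return None

def is_nod(head_pitch_degrees, threshold=20):
    """Treat a sharp downward head pitch from the wearable's sensors as 'play'."""
    return head_pitch_degrees <= -threshold

# Example: the wearer nods while standing near a landmark.
hit = nearby_clip(38.8895, -77.0501)
if hit and is_nod(-25):
    name, clip = hit
    print(f"Playing {clip} for {name}")
```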

Bose AR Use Cases

  • A Bose Augmented Reality device could reenact historical events or speeches from landmarks and statues as you visit them.
  • The Bose and NFL partnership could be leveraged to get these AR units into football players’ helmets. Audio cues from the on-field leader, the quarterback, could be dynamically replayed/relayed to the receiver at the appropriate time for the required action.
  • Audio directions to your gate when your GPS detects that you’ve arrived at the airport, or at any other destination from your calendar. Audio cues would be richer the more access you grant to calendars, to-do lists, etc.
  • Combine visual information with the Bose AR platform, too, so you could hear a translation of a sign you’re looking at.
  • Hear the history of a painting in a museum.

Time until it’s in consumers’ hands? TBD. Bose’s objective is to have the developer kit, including a pair of glasses, available later this year.

When I was on vacation in Athens, Greece, I created a post about Greek actors running tours in their ancient, native garb. Bose AR could be a complementary offering to that tour, which includes live, local Greek actors playing out scenes in the ancient ruins. Record the scenes, and interact with them while walking through the Greek ruins in your Bose AR (Augmented Reality) glasses.

Greece, Prosperity, and Taxes: The World Will Come See You in AR

Please take a moment to prioritize the use cases, or add your own.

Takeaway

I’m a cheerleader for Bose, among several others in this space, but I question a Bose AR headset that produces high-fidelity sound. Most of the use cases listed should be able to “get along OK” with average-quality sound. Maybe high-definition AR games with a high level of realism might benefit from high-quality sound. However, their site reads like Bose is positioning themselves as a component to be integrated into other AR headsets, i.e. a “Bose-AR-equipped wearable”.

Google Search Enables Users to Upload Images for Searching with Visual Recognition. Yahoo and Bing…Not Yet

The ultimate goal, in my mind, is for a search engine to let you upload an image, analyze it, and find comparable images within some degree of variation, as dictated by the search properties. The search engine may also derive metadata from the uploaded image, such as attributes specific to the detected object types. For example, determine whether a person [object] is “Joyful” or “Angry”.
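Conceptually, that kind of reverse image search can be framed as embedding the uploaded image and ranking an index of images by similarity, with a cutoff playing the role of the “degree of variation”. The sketch below assumes a hypothetical embed() function; a production engine like Google’s is obviously far more elaborate.

```python
import numpy as np

# Hypothetical: embed(path) maps an image to a fixed-length feature vector.
from vision_model import embed  # placeholder, not a real library

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(query_path, index, min_similarity=0.7, top_k=10):
    """Rank indexed images by similarity to the query image.

    `index` maps image URL -> precomputed embedding; `min_similarity` is the
    allowed degree of variation dialed in by the search properties.
    """
    q = embed(query_path)
    scored = [(url, cosine(q, vec)) for url, vec in index.items()]
    hits = [(url, s) for url, s in scored if s >= min_similarity]
    return sorted(hits, key=lambda hit: hit[1], reverse=True)[:top_k]
```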

As of the writing of this article, the Yahoo and Microsoft Bing search engines do not have the capability to upload an image, perform image/pattern recognition, and return results. Google’s search engine, however, can use some type of pattern matching to find instances of your image across the world wide web. From the Google Search home page, select “Images” (or, after a text search, select the “Images” menu item). An additional icon appears: a camera with the hint text “Search by Image”. Select the camera icon, and you are presented with options for how Google can acquire your image, e.g. an upload or an image URL.

Google Search Upload Images

Select the “Upload an Image” tab, choose a file, and upload. I used a fictional character, Max Headroom. The search results were very good (see below). I also attempted an uncommon shape, and it did not meet my expectations. The poor matching of this possibly “unique” shape is most likely due to how the Google Image Classifier Model was defined and the training data used to test it. If the shape truly is “unique”, the Google Search Image Engine did its job.

Google Image Search Results – Max Headroom

Max Headroom Google Search Results


Google Image Search Results – Odd Shaped Metal Object

Google Search Results – Odd Shaped Metal Object

The Google Search Image Engine was able to “Classify” the image as “metal”, so that’s good. However, I would have liked to see better matches under the “Visually Similar Images” section. Again, this is probably due to the image classification process and, potentially, the diversity of image samples.

A Few Questions for Google

How often is the Classifier Modeling process executed (i.e. training the classifier), and how often is the model tested? How are new images incorporated into the Classifier Model? Are user-uploaded images now included in the Model (after training is run again)? Is Google Search Image incorporating ALL Internet images into Classifier Model(s)? Is an alternate AI image recognition process used beyond Classifier Models?

Behind the Scenes

In addition, Google has provided a Cloud Vision API as part of their Google Cloud Platform.

I’m not sure if the Cloud Vision API uses the same technology as Google’s Search Image Engine, but it’s worth noting. After reaching the Cloud Vision API starting page, go to the “Try the API” section and upload your image. I tried a number of samples, including my odd-shaped metal object. I think it performed fairly well on the “labels” (i.e. image attributes).

Odd Shaped Metal Sample Image
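For reference, the same kind of label results can be pulled programmatically with the google-cloud-vision Python client. A minimal sketch, assuming credentials are already configured and using a made-up local file name:

```python
from google.cloud import vision  # pip install google-cloud-vision

client = vision.ImageAnnotatorClient()  # assumes GOOGLE_APPLICATION_CREDENTIALS is set

with open("odd_shaped_metal.jpg", "rb") as f:  # hypothetical file name
    image = vision.Image(content=f.read())

# Label detection returns descriptive attributes such as "metal" with confidence scores.
response = client.label_detection(image=image)
for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")
```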

Using the Google Cloud Vision API to determine whether there were any web matches for my odd-shaped metal object, the search came up with no results. In contrast, Google’s Search Image Engine produced some “similar” web results.

Odd Shaped Metal Sample Image Web Results
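On the Cloud Vision side, the web-match lookup is exposed through the client’s web detection feature, which returns web entities, pages with matching images, and visually similar images. Another minimal sketch under the same assumptions:

```python
from google.cloud import vision  # pip install google-cloud-vision

client = vision.ImageAnnotatorClient()
with open("odd_shaped_metal.jpg", "rb") as f:  # hypothetical file name
    image = vision.Image(content=f.read())

response = client.web_detection(image=image)
web = response.web_detection

for entity in web.web_entities:               # e.g. "Metal"
    print("entity:", entity.description, entity.score)
for page in web.pages_with_matching_images:   # came back empty for this image
    print("matching page:", page.url)
for similar in web.visually_similar_images:
    print("similar image:", similar.url)
```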

Finally, I tested the Google Cloud Vision API with a self-portrait image. THIS was so cool.

Google Vision API – Face Attributes

The API brought back several image attributes specific to “Faces”. It attempts to identify certain complex facial attributes, such as emotions, e.g. Joy and Sorrow.
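Those emotion attributes come back as likelihood buckets (VERY_UNLIKELY through VERY_LIKELY) on each detected face. A minimal sketch with the same Python client and a made-up file name:

```python
from google.cloud import vision  # pip install google-cloud-vision

client = vision.ImageAnnotatorClient()
with open("self_portrait.jpg", "rb") as f:  # hypothetical file name
    image = vision.Image(content=f.read())

response = client.face_detection(image=image)
for face in response.face_annotations:
    # Emotions are reported as likelihood buckets, not probabilities.
    print("joy:", vision.Likelihood(face.joy_likelihood).name)
    print("sorrow:", vision.Likelihood(face.sorrow_likelihood).name)
    print("anger:", vision.Likelihood(face.anger_likelihood).name)
    print("surprise:", vision.Likelihood(face.surprise_likelihood).name)
```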

Google Vision API – Labels

The API also brought back the “Standard” set of labels, which show how the classifier identified this image as a “Person”, e.g. Forehead and Chin.

Google Vision API – Web

Finally, the Google Cloud Vision API brought back the web references: for example, it identified me as a Project Manager and surfaced an obscure reference to Zurg from my Twitter bio.

The Google Cloud Vision API and Google’s own baked-in Search Image Engine are extremely enticing, yet they still have a ways to go in terms of accuracy. Of course, I tried using my face in the Google Search Image Engine, and the “Visually Similar Images” didn’t retrieve any images of me, or even a distant cousin (maybe?).

Google Image Search Engine: Ian Face Image


Facebook Gifts Modified: ‘Like’ a Pic with the Caption ‘Nice Dress’, and AI Suggests ‘Buy Now’ and Presents Vendors

If Facebook uses facial recognition, why not expand to cover vendor / partner library catalogs, use the AI Image Recognition to identify objects, and ‘read’ and recognize simple phrases from the captions or comments of pictures.  If the caption says ‘nice dress’, you can use the AI image recognition rules engine to suggest N list of vendors, local and web, with the lowest price, and ‘Buy Now’ if you ‘Like’ the picture.