
IBM didn’t inform people when it used their Flickr photos for facial recognition training – The Verge

The problem is more widespread than the article highlights.  It’s not just these high-profile companies using “public domain” images to annotate with facial recognition notes and train machine learning (ML) models.  Anyone can scan the Internet for images of people and build a vast library of faces, and these faces can then be used to train ML models.  In fact, using public domain images from “the Internet” cuts across multiple data sources, not just Flickr, which increases the sample size and may improve the model.

The rules around the use of “Public Domain” image licensing may need to be updated, and a possibly simple solution exists: add a watermark to any image that does not have permission to be used for facial recognition model training.  All image processors could then be required to include a preprocessor that detects the watermark in an image and, if found, skips that image so it is excluded from model training, as sketched below.
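A minimal sketch of what that preprocessor could look like, assuming the opt-out mark rides in image metadata as a stand-in for a real visible or steganographic watermark detector (the marker value and helper names here are hypothetical):

```python
from pathlib import Path

from PIL import Image

# Hypothetical opt-out marker; a real scheme would need to standardize this.
OPT_OUT_MARKER = b"NO-FR-TRAINING"
EXIF_USER_COMMENT = 0x9286  # standard EXIF UserComment tag ID

def opted_out(path: Path) -> bool:
    """Return True if the image carries the (hypothetical) opt-out marker."""
    with Image.open(path) as img:
        comment = img.getexif().get(EXIF_USER_COMMENT, b"")
        if isinstance(comment, str):
            comment = comment.encode()
        return OPT_OUT_MARKER in comment

def collect_training_images(folder: Path) -> list[Path]:
    """Gather candidate images, skipping any flagged as opted out."""
    return [p for p in folder.glob("*.jpg") if not opted_out(p)]
```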

Source: IBM didn’t inform people when it used their Flickr photos for facial recognition training – The Verge

Politics around Privacy: Implementing Facial and Object Recognition

This Article is Not…

about deconstructing the existing functionality of entire Photo Archive and Sharing platforms.

It is…

to bring awareness to the masses about corporate decisions to omit advanced capabilities for cataloguing photos, object recognition, and advanced metadata tagging.

Backstory: The Asks / Needs

Every day my family takes tons of pictures, and the pictures are bulk uploaded to The Cloud using Cloud Storage Services, such as Dropbox, OneDrive, Google Photos, or iCloud.  A selected set of photos is uploaded to our favourite Social Networking platforms (e.g. Facebook, Instagram, Snapchat, and/or Twitter).

Every so often, I will take pause and create a Photobook, or print out pictures from the last several months.  The kids may have a project for school that needs a printed photo, e.g. a Family Portrait or just a picture of Mom and the kids.  In order to find these photos, I have to manually go through our collection of photographs in our Cloud Storage Services, or identify the photos in our Social Network libraries.

Social Networking Platform Facebook

For as long as I can remember, the Social Networking platform Facebook has had the ability to tag faces in photos uploaded to the platform.  There are restrictions on the privacy side, such as whom you can tag, but the capability exists.  The Facebook platform also automatically identifies faces within photos, i.e. it places a box around faces in a photo to make tagging people easier.  So, in essence, there is an “intelligent capability” to identify faces in a photo.  The Facebook platform allows you to see “Photos of You”, but what seems to be missing is the ability to search for all photos of Fred Smith, a friend of yours, even if all his photos are public.  By design, that sounds fit for the purpose of the networking platform.

Auto Curation

  1. Automatically upload new images, in bulk or one at a time, to a Cloud Storage Service (with or without Online Printing Capabilities, e.g. Photobooks), and an automated curation process begins.
  2. The Auto Curation process scans photos for:
    1. “Commonly Identifiable Objects”, such as #Car, #Clock, #Fireworks, and #People.
    2. Previously tagged objects and faces; matches found in newly uploaded photos are automatically tagged.
    3. Once Auto Curation runs several times, and people are manually #tagged, the process will “learn” faces, so any new Auto Curation run should be able to recognize tagged people in new pictures.
  3. The Auto Curation process emails / notifies the library owners with the results of the ingestion, e.g. “Jane Doe and John Smith photographed at Disney World on <date / time stamp>”, i.e. a report of the executed ingestion and auto curation process (see the sketch after this list).
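A rough sketch of that loop; detect_objects and recognize_faces are placeholders for whatever vision service the Cloud Storage provider would expose:

```python
from dataclasses import dataclass, field

@dataclass
class Photo:
    path: str
    tags: set[str] = field(default_factory=set)

def detect_objects(photo: Photo) -> set[str]:
    """Placeholder: return common-object tags, e.g. {"#Car", "#People"}."""
    return set()

def recognize_faces(photo: Photo, known_faces: dict[str, list[float]]) -> set[str]:
    """Placeholder: match faces against previously learned face signatures."""
    return set()

def auto_curate(new_photos: list[Photo], known_faces: dict[str, list[float]]) -> str:
    for photo in new_photos:
        photo.tags |= detect_objects(photo)
        photo.tags |= recognize_faces(photo, known_faces)
    tagged = sum(1 for p in new_photos if p.tags)
    # This summary backs the notification email to the library owners.
    return f"Ingested {len(new_photos)} photos; auto-tagged {tagged} of them."
```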

Manual Curation

After the upload and auto curation process, it is optionally time to manually tag people’s faces, and any ‘objects’ you would like to track; a car aficionado, for example, might #tag a vehicle’s make/model with additional descriptive tags.  Using the photo curator function on the Cloud Storage Service, you can tag any “objects” in the photo using Rectangle or Lasso Select.
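One assumed way such a manual tag could be stored, pairing the selection region (rectangle or lasso polygon) with the user’s #tags; the field names are illustrative:

```python
from dataclasses import dataclass

@dataclass
class TagRegion:
    photo_path: str
    tags: list[str]                                     # e.g. ["#Car", "#Mustang"]
    rectangle: tuple[int, int, int, int] | None = None  # x, y, width, height
    lasso: list[tuple[int, int]] | None = None          # polygon vertices

# A car aficionado tagging a vehicle's make/model in one photo:
region = TagRegion("2019/img_0042.jpg", ["#Car", "#Mustang"],
                   rectangle=(120, 80, 400, 260))
```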

Curation to Take Action

Once photo libraries are curated, the library owner(s) can:

  • Automatically build albums based on one or more #tags
  • Smart Albums automatically update, e.g. after ingestion and Auto Curation.  Albums are tag-sensitive and update with new pics that contain certain people or objects; the user / librarian may dictate the logic for tags (see the sketch below).
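A sketch of the tag logic a Smart Album might apply after each ingestion run; the all_of / any_of split is just one assumed way the user / librarian could dictate the logic:

```python
from types import SimpleNamespace

def smart_album(photos, all_of=frozenset(), any_of=frozenset()):
    """Keep photos carrying every tag in all_of and, if any_of is
    non-empty, at least one tag from any_of."""
    return [p for p in photos
            if set(all_of) <= p.tags
            and (not any_of or p.tags & set(any_of))]

library = [
    SimpleNamespace(path="img_001.jpg", tags={"#DisneyWorld", "Jane Doe"}),
    SimpleNamespace(path="img_002.jpg", tags={"#Car"}),
]
# Disney photos featuring either Jane or John:
album = smart_album(library, all_of={"#DisneyWorld"},
                    any_of={"Jane Doe", "John Smith"})
```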

Where is this Functionality??

Why are many major companies not implementing facial (and object) recognition?  Google and Microsoft certainly seem to have the capability and the scale to produce the technology.

Is it possible Google and Microsoft are subject to more scrutiny than a Shutterfly?  Do privacy concerns, for the moment, leave room for others to become trailblazers in this area?

Google Search Enables Users to Upload Images for Searching with Visual Recognition. Yahoo and Bing…Not Yet

The ultimate goal, in my mind, is for a search engine to let you upload an image, analyze it, and find comparable images within some degree of variation, as dictated in the search properties.  The search engine may also derive metadata from the uploaded image, such as attributes specific to the image object(s) types; for example, determining whether a person [object] is “Joyful” or “Angry”.

As of the writing of this article, the Yahoo and Microsoft Bing search engines do not offer the capability to upload an image, perform image/pattern recognition, and return results.  Behold, Google’s search engine has the ability to use some type of pattern matching and find instances of your image across the world wide web.  From the Google Search “home page”, select “Images”, or after a text search, select the “Images” menu item.  From there an additional icon appears: a camera with the hint text “Search by Image”.  Select the camera icon, and you are presented with options for how Google can acquire your image, e.g. an upload or an image URL.

Google Search Upload Images

Select the “Upload an Image” tab, choose a file, and upload.  I used a fictional character, Max Headroom.  The search results were very good (see below).  I also attempted an uncommon shape, and it did not meet my expectations.  The poor performance in matching this possibly “unique” shape is most likely due to how the Google Image Classifier Model was defined and the training data used to test the classifier model.  Then again, if the shape really is “unique”, the Google Search Image Engine did its job.

Google Image Search Results – Max Headroom


Google Image Search Results – Odd Shaped Metal Object

The Google Search Image Engine was able to “Classify” the image as “metal”, so that’s good.  However, I would have liked to see better matches under the “Visually Similar Images” section.  Again, this is probably due to the image classification process and, potentially, the diversity of the image samples.

A Few Questions for Google

How often is the Classifier Modeling process executed (i.e. how often is the classifier trained), and how is the model tested?  How are new images incorporated into the Classifier Model?  Are user-uploaded images included in the Model once training runs again?  Is Google Search Image incorporating ALL Internet images into its Classifier Model(s)?  Is an alternate AI image recognition process used beyond Classifier Models?

Behind the Scenes

In addition, Google has provided a Cloud Vision API as part of their Google Cloud Platform.

I’m not sure if the Cloud Vision API uses the same technology as Google’s Search Image Engine, but it’s worth noting.  After reaching the Cloud Vision API starting page, go to the “Try the API” section and upload your image.  I tried a number of samples, including my odd shaped metal object, and I think it performed fairly well on the “labels” (i.e. image attributes).
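For those who want to go past the “Try the API” page, the same label detection can be called from the google-cloud-vision Python client; a minimal sketch, assuming credentials are already configured (e.g. via GOOGLE_APPLICATION_CREDENTIALS) and using a stand-in filename:

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("odd_shaped_metal.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)
for label in response.label_annotations:
    print(label.description, round(label.score, 2))  # e.g. "metal 0.92"
```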

Odd Shaped Metal Sample Image

Using the Google Cloud Vision API to determine whether there were any web matches for my odd shaped metal object, the search came up with no results.  In contrast, Google’s Search Image Engine produced some “similar” web results.

Odd Shaped Metal Sample Image Web Results

Finally, I tested the Google Cloud Vision API with a self-portrait image.  THIS was so cool.

Google Vision API – Face Attributes

The API brought back several image attributes specific to “Faces”.  It attempts to identify complex facial attributes, such as emotions, e.g. Joy and Sorrow.
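The same Python client exposes these as likelihood values through its face detection feature; a minimal sketch (filename again a stand-in):

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("self_portrait.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.face_detection(image=image)
for face in response.face_annotations:
    # Likelihood is an enum, e.g. VERY_UNLIKELY .. VERY_LIKELY
    print("joy:", vision.Likelihood(face.joy_likelihood).name,
          "| sorrow:", vision.Likelihood(face.sorrow_likelihood).name)
```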

Google Vision API – Labels

The API brought back the “standard” set of Labels, which show how the Classifier identified this image as a “Person”, with labels such as Forehead and Chin.

Google Vision API – Web

Finally, the Google Cloud Vision API brought back the web references; for example, it identified me as a Project Manager and surfaced an obscure reference to Zurg in my Twitter bio.
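Those references come from the API’s web detection feature; a minimal sketch of pulling the web entities and matching pages:

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("self_portrait.jpg", "rb") as f:
    image = vision.Image(content=f.read())

web = client.web_detection(image=image).web_detection
for entity in web.web_entities:          # descriptive entities, e.g. "Project Manager"
    print(entity.description, round(entity.score, 2))
for page in web.pages_with_matching_images:
    print(page.url)                      # pages where the image (or a close match) appears
```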

The Google Cloud Vision API, and Google’s own baked-in Search Image Engine, are extremely enticing, yet they still have a way to go in terms of accuracy.  Of course, I tried using my face in the Google Search Image Engine, and the “Visually Similar Images” didn’t retrieve any images of me, or even a distant cousin (maybe?)

Google Image Search Engine: Ian Face Image


NFC Replaces Secure Digital Memory for Data Transfer in Art Galleries?

NFC (Near Field Communication) has significant potential for the transfer of information, and has already proven to be a lightweight technology for transferring and storing data.  At this year’s CES conference we already saw business cards enable the transfer of songs from an NFC-enabled business card to a car radio.  Samsung has enabled this technology in their smartphones to transfer data such as videos and pictures.

There will come a day, soon, when we have built-in storage in a device, such as a picture frame or television, and an NFC card will allow the transfer of information to a temporary buffer in the device for playing music, watching videos, or looking at pictures.  That day is not far off.  Yes, those LCD picture frames in your home that take SD memory are outdated.

Apple has acquired a company with technology that enables an LCD touch screen to raise a keyboard through the screen, giving the user tactile contact with the keyboard.  We may go back to typing on a keyboard without looking, as we did with smartphones that had physical keyboards.  I envision an art gallery with huge LCD screens all around the room, where switching the artist on display is as easy as walking over to each LCD picture frame and tapping the frame, enabled by this raised, tactile LCD technology.  In the artist’s creation, the paint of the brushstrokes may appear raised from the LCD canvas, a three-dimensional effect on the picture frame.  An artist would make brush strokes using a digital brush, pressing as you would on a canvas, and choosing the appropriate paint could record the additional information required to display a three-dimensional painting.

Picture that.

Addendum:

After additional research, there is one inhibitor which may pose a significant barrier: NFC provides optimal data transfer only for smaller data packets.

The maximum data transfer rate of NFC (424 kbit/s) is slower than that of Bluetooth V2.1 (2.1 Mbit/s), as noted in Wikipedia.

The speed of MicroSD Speed Class 10 is 10 MB/s, significantly greater; the advanced UHS (Ultra High Speed) classes go further still: UHS-I supports 50 MB/s, and UHS-II has a theoretical maximum transfer rate of 312 MB/s.
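A quick back-of-the-envelope comparison, converting the quoted rates to a common unit (note NFC and Bluetooth are quoted in bits per second, the cards in bytes) and timing a hypothetical 100 MB video transfer:

```python
# Quoted rates converted to MB/s (decimal units, for simplicity).
rates_mb_s = {
    "NFC (424 kbit/s)": 424 / 8 / 1000,     # ~0.053 MB/s
    "Bluetooth 2.1 (2.1 Mbit/s)": 2.1 / 8,  # ~0.26 MB/s
    "MicroSD Class 10": 10.0,
    "UHS-I": 50.0,
    "UHS-II (theoretical)": 312.0,
}

size_mb = 100  # a hypothetical 100 MB video
for name, rate in rates_mb_s.items():
    print(f"{name}: {size_mb / rate:,.0f} s")
# NFC needs roughly half an hour for what UHS-I moves in two seconds.
```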

Although NFC, or Bluetooth for that matter, supports the conceptual idea of tap-and-transfer of large data at high rates into internal memory buffers in devices, the reality is that WiFi connectivity speeds outweigh both NFC and Bluetooth, and MicroSD, as a physical medium, outweighs NFC / Bluetooth as well.  If this idea had merit today, you would need a WiFi-connected device to get the maximum throughput without physical media such as Secure Digital, or you would continue to leverage physical media for the transfer while still using the memory buffer as temporary storage in the device, as noted in the article.