The problem is more widespread than the article highlights. It’s not just these high-profile companies using “public domain” images to annotate with facial-recognition metadata and to train machine learning (ML) models. Anyone can scan the Internet for images of people and build a vast library of faces, and these faces can then be used to train ML models. In fact, using public domain images from “the Internet” cuts across multiple data sources, not just Flickr, which increases the sample size and may improve the model.
The rules around “Public Domain” image licensing may need to be updated, and one possibly simple solution is to add a watermark to any image whose owner has not granted permission for facial-recognition model training. All image processors could then be required to include a preprocessor that detects the watermark and, if found, excludes the image from model training.
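The proposed preprocessing step could be sketched roughly as follows. This is a minimal illustration, not a real watermark decoder: `detect_optout_watermark` is a hypothetical function (here standing in for an invisible-watermark detector, faked with a magic byte prefix), and the filter simply drops any flagged image before training.

```python
# Sketch of a training-data preprocessor that skips opt-out watermarked
# images. `detect_optout_watermark` is hypothetical; a real detector would
# decode an invisible watermark rather than check a byte prefix.

def detect_optout_watermark(image_bytes: bytes) -> bool:
    """Stand-in for a 'no facial-recognition training' watermark check."""
    return image_bytes.startswith(b"NO-FR-TRAIN")

def filter_training_images(images: list) -> list:
    """Return only the images eligible for model training."""
    return [img for img in images if not detect_optout_watermark(img)]

images = [b"NO-FR-TRAIN\x89PNG...", b"\x89PNG...plain image..."]
eligible = filter_training_images(images)
print(len(eligible))  # 1 -- the watermarked image is excluded
```

The key point is that the check runs before training, so opted-out faces never enter the model at all.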
The ultimate goal, in my mind, is the capability within a search engine to upload an image, have the search engine analyze it, and find comparable images within some degree of variation, as dictated by the search properties. The search engine may also derive metadata from the uploaded image, such as attributes specific to the types of objects in the image. For example, determine whether a person [object] is “Joyful” or “Angry”.
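The “comparable images within some degree of variation” idea can be sketched with a classic perceptual-hashing approach: hash each image, then rank candidates by Hamming distance between hashes. This is only an illustrative sketch; production engines like Google’s almost certainly use learned embeddings rather than a simple average hash.

```python
# Minimal "similar image" sketch: average-hash each (already downscaled,
# grayscale) image, then compare hashes by Hamming distance. Smaller
# distance = more visually similar.

def average_hash(pixels):
    """pixels: 2-D list of grayscale values for a small, resized image."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Count of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

a = [[10, 200], [12, 198]]
b = [[11, 199], [13, 197]]   # near-duplicate of `a`
c = [[200, 10], [198, 12]]   # very different layout
print(hamming(average_hash(a), average_hash(b)))  # 0 -> strong match
print(hamming(average_hash(a), average_hash(c)))  # 4 -> poor match
```

The “degree of variation” search property then becomes a simple distance threshold.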
As of this writing, the Yahoo and Microsoft Bing search engines do not have the capability to upload an image, perform image/pattern recognition, and return results. Google’s search engine, however, can use some type of pattern matching to find instances of your image across the web. From the Google Search home page, select “Images”, or after a text search, select the “Images” menu item. An additional icon then appears: a camera with the hint text “Search by Image”. Select the camera icon, and you are presented with options for how Google can acquire your image, e.g. an upload or an image URL.
Select the “Upload an Image” tab, choose a file, and upload. I used a fictional character, Max Headroom. The search results were very good (see below). I also attempted an uncommon shape, and it did not meet my expectations. The poor performance in matching this possibly “unique” shape is most likely due to how the Google image classifier model was defined, and to the training data used to test the classifier model. Then again, if the shape truly is “unique”, the Google Search Image Engine did its job.
Google Image Search Results – Max Headroom
Google Image Search Results – Odd Shaped Metal Object
The Google Search Image Engine was able to “classify” the image as “metal”, so that’s good. However, I would have liked to see better matches under the “Visually Similar Images” section. Again, this is probably due to the image classification process, and potentially the diversity of the image samples.
A Few Questions for Google
How often is the Classifier Modeling process executed (i.e. training the classifier), and the model tested? How are new images incorporated into the Classifier model? Are the user uploaded images now included in the Model (after model training is run again)? Is Google Search Image incorporating ALL Internet images into Classifier Model(s)? Is an alternate AI Image Recognition process used beyond Classifier Models?
I’m not sure whether the Cloud Vision API uses the same technology as Google’s Search Image Engine, but it’s worth noting. From the Cloud Vision API start page, go to the “Try the API” section and upload your image. I tried a number of samples, including my odd-shaped metal object, and I think it performed fairly well on the “labels” (i.e. image attributes).
Using the Google Cloud Vision API to determine whether there were any web matches for my odd-shaped metal object, the search came up with no results. In contrast, Google’s Search Image Engine produced some “similar” web results.
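For readers who want to go beyond the “Try the API” page, the JSON that Cloud Vision’s `images:annotate` endpoint returns can be picked apart with a few lines of code. The sample response below is illustrative (the scores and emptiness mirror my odd-shaped-metal experience); the field names follow the public REST API, but treat the exact shape as an assumption to verify against the docs.

```python
# Sketch: extracting labels and visually similar web matches from a
# Cloud Vision `images:annotate` JSON response (shape per the REST API).

def summarize(response: dict) -> dict:
    result = response["responses"][0]
    labels = [(lab["description"], lab["score"])
              for lab in result.get("labelAnnotations", [])]
    similar = [img["url"] for img in
               result.get("webDetection", {}).get("visuallySimilarImages", [])]
    return {"labels": labels, "similar_images": similar}

sample = {
    "responses": [{
        "labelAnnotations": [{"description": "metal", "score": 0.92}],
        "webDetection": {"visuallySimilarImages": []},  # no web matches found
    }]
}
print(summarize(sample))
# {'labels': [('metal', 0.92)], 'similar_images': []}
```

An empty `visuallySimilarImages` list is exactly the “no results” case described above.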
Finally, I tested the Google Cloud Vision API with a self portrait image. THIS was so cool.
The API brought back several image attributes specific to “Faces”. It attempts to identify complex facial attributes such as emotions, e.g. Joy and Sorrow.
The API also brought back the “standard” set of labels showing how the classifier identified this image as a “Person”, such as Forehead and Chin.
Finally, the Google Cloud Vision API brought back web references: for example, it identified me as a Project Manager, and found an obscure reference to Zurg in my Twitter bio.
The Google Cloud Vision API and Google’s own baked-in Search Image Engine are extremely enticing, yet both have a ways to go in terms of accuracy. Of course, I tried using my face in the Google Search Image Engine, and the “Visually Similar Images” section didn’t retrieve any images of me, or even of a distant cousin (maybe?).
Businesses already exist that have developed and sell virtual receptionists, which handle many caller needs (e.g. call routing).
However, AI digital assistants such as Alexa, Cortana, Google Now, and Siri have an opportunity to stretch their capabilities even further. Leveraging technologies such as Natural Language Processing (NLP) and Speech Recognition (SR), as well as APIs into the smartphone OS’s answering/calling capabilities, functionality can be expanded to include:
Call Screening – The digital executive assistant asks for the name of the caller, purpose of the call, and if the matter is “Urgent”
A generic “purpose” response or a list of caller purpose items can be supplied to the caller, e.g. 1) Schedule an Appointment
The smartphone user would receive the caller’s name and purpose as a message in the UI while the call sits in a ‘hold’ state.
The smartphone user may decide to accept the call, or reject the call and send the caller to voicemail.
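The screening flow above can be sketched as a tiny decision function. Everything here is hypothetical (function names, the purpose menu, the return values); a real assistant would sit on top of the phone OS’s call-handling APIs plus speech recognition.

```python
# Sketch of the call-screening flow: collect name, purpose, and urgency,
# notify the user while the call is held, then connect or route to
# voicemail based on the user's decision. All names are hypothetical.

PURPOSES = {"1": "Schedule an Appointment", "2": "General Inquiry"}

def screen_call(caller_name, purpose_choice, urgent, user_decision):
    purpose = PURPOSES.get(purpose_choice, "Unspecified")
    suffix = " (URGENT)" if urgent else ""
    # The caller is on hold while this notification reaches the user's UI.
    notification = f"{caller_name} is calling about: {purpose}{suffix}"
    action = "connect" if user_decision == "accept" else "voicemail"
    return notification, action

msg, action = screen_call("Alex", "1", urgent=True, user_decision="reject")
print(msg)     # Alex is calling about: Schedule an Appointment (URGENT)
print(action)  # voicemail
```

The “hold state” is the interesting design point: the assistant must keep the caller engaged while the human decides.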
Call / Digital Assistant Capabilities
The digital executive assistant may schedule a ‘tentative’ appointment within the user’s calendar. The caller may ask to schedule a meeting; the digital executive assistant would access the user’s calendar to determine availability. If the calendar indicates availability, a ‘tentative’ meeting will be entered. The smartphone user would receive a list of tasks from the assistant, one of which is to ‘affirm’ the availability of the scheduled meetings.
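The tentative-booking step above could be sketched as follows. The calendar structure and function names are simplified stand-ins for a real calendar API (e.g. Google Calendar), not an actual integration.

```python
# Sketch of 'tentative' scheduling: check the calendar for a conflict,
# book the slot if free, and queue a confirmation task for the user.
from datetime import datetime, timedelta

def book_tentative(calendar, start, duration, tasks):
    """calendar: list of (start, end) busy intervals. Returns True if booked."""
    end = start + duration
    for busy_start, busy_end in calendar:
        if start < busy_end and busy_start < end:  # intervals overlap
            return False
    calendar.append((start, end))  # the 'tentative' entry
    tasks.append(f"Confirm tentative meeting at {start.isoformat()}")
    return True

calendar = [(datetime(2017, 5, 1, 9), datetime(2017, 5, 1, 10))]
tasks = []
ok = book_tentative(calendar, datetime(2017, 5, 1, 10), timedelta(hours=1), tasks)
print(ok, len(tasks))  # True 1
```

The appended task is what surfaces in the user’s to-do list for the ‘affirm’ step.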
Allow recall of ‘generally available’ information. If a caller would like to know the address of the smartphone user’s office, the digital assistant may access a database of generally available information and provide it. For example, the smartphone user might use an application like Google Keep, where any notes tagged with an “Open Access” label would be accessible to any caller.
Handle requests to join the smartphone user’s social network, such as LinkedIn. If the caller knows the user’s phone number but is unable to find the user through the social network’s directory, the caller may request an invite.
Custom business workflows may also be triggered by the smartphone, such as “Pay by Phone”.
The Digital Executive Assistant capabilities:
Able to gain control of your Smartphone’s incoming phone calls
Able to interact with the 3rd-party dial-in caller through a set of business dialog workflows defined by you, the executive.
It seems that car manufacturers, among others, are building both the autonomous hardware (i.e. the vehicle and its sensors) and the software that governs its usage. Few companies, by contrast, are separating the hardware and software layers to explicitly carve out the autonomous software.
Yes, there are benefits to tightly coupling the autonomous hardware and software:
1. Proprietary implementations and intellectual property – Implementing autonomous vehicles within a single corporate entity may ‘fast track’ patents, and mitigate NDA challenges / risks
2. Synergies from two (or more) teams working in unison to implement functional goals. However, this may also be accomplished through two organizations with tightly coupled teams. Engaged, strong team leadership must be in place to help eliminate corp-to-corp BLOCKERS and ensure deliverables.
There are also advantages to two separate organizations: one building the software layer, and the other the vehicle hardware implementation (i.e. sensors):
1. Separating the autonomous vehicle hardware from the AI software enables multiple strong, alternate corporate perspectives. These perspectives allow for a stronger, yet balanced, approach to implementation.
2. The AI software for autonomous vehicles, if contractually allowed, may work with multiple vehicle brands, implementing similar capabilities. Vehicles would then share capabilities and innovations across the car industry. The AI software may even become a standard for implementing autonomous vehicles across the industry.
3. Working with multiple hardware / vehicle manufacturers may enable software APIs: a layer of implementation abstraction. These APIs may encourage similar approaches to implementation, reduce redundant work, and come to serve as ‘the gold standard’ in the industry.
4. We already see commercial adoption of autonomous vehicle features such as “Auto Lane Change” and “Automatic Emergency Braking”, so it makes sense to adopt standards through 3rd-party AI software integrators / vendors.
5. Incorporating checks and balances instills quality into both the product and the process that governs it.
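The abstraction layer described in point 3 could be sketched as a small interface that each manufacturer implements, with the AI software written once against it. All class and method names here are hypothetical.

```python
# Sketch of a hardware/software abstraction layer: the AI layer codes
# against a vendor-neutral sensor interface; each manufacturer supplies
# an adapter. Names and values are illustrative only.
from abc import ABC, abstractmethod

class SensorSuite(ABC):
    """Vendor-neutral view of the vehicle's sensors."""

    @abstractmethod
    def obstacle_distance_m(self) -> float:
        """Distance to the nearest forward obstacle, in meters."""

class VendorASensors(SensorSuite):
    """One manufacturer's adapter onto the shared interface."""
    def obstacle_distance_m(self) -> float:
        return 42.0  # would read this vendor's radar/lidar stack

def should_emergency_brake(sensors: SensorSuite, threshold_m: float = 5.0) -> bool:
    # The AI layer is written once, against the interface, for every brand.
    return sensors.obstacle_distance_m() < threshold_m

print(should_emergency_brake(VendorASensors()))  # False (obstacle 42.0 m away)
```

A second manufacturer only has to ship its own `SensorSuite` adapter; the braking logic is reused unchanged, which is exactly the redundancy reduction point 3 argues for.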
In summation, car parts are typically not built in one geographic location, but through global collaboration. Autonomous software for vehicles should be externalized so that it can satisfy unbiased safety and security requirements. A standards organization “with teeth” could orchestrate input from across the industry and collectively devise “best practices” for autonomous vehicles.
Do AI rules engines “deliberate” any differently between rules with moral weight and rules with none at all? Rhetorical..?
The ethics that will explicitly and implicitly be built into implementations of autonomous vehicles involve a full stack of technology, plus “business” input. In addition, implementations may vary between manufacturers and countries.
In the world of Kosher certification, several authorities provide oversight of the food preparation and delivery process, each with its own seal of approval. In the absence of analogous authorities for autonomous vehicles, who will play the morality, seal-of-approval role? Vehicle insurance companies? Car insurance will be rewritten when it comes to autonomous cars: some cars may carry a higher deductible, or a policy may cost more, based upon the autonomous implementation.
Conditions Under Consideration:
1. If the autonomous vehicle is in a position of saving a single life in the vehicle, and killing one or more people outside the vehicle, what will the autonomous vehicle do?
1.1 What happens if the passenger in the autonomous vehicle is a child/minor? Does the rule execution change?
1.2 What if the outside party is a procession, i.e. a condensed population of people? Will the decision change?
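To make the rhetorical question above concrete: this is not an answer to any of these conditions, just a sketch of why they are hard. In a typical rules engine, rules fire in priority order over the same inputs, so the “moral” outcome hinges entirely on how the rules are written and ordered by the manufacturer. All names and rules here are invented for illustration.

```python
# Sketch of a priority-ordered rules engine. The point is not the rules
# themselves (which are invented), but that reordering the SAME rules
# flips the decision for the same scenario.

def decide(context, rules):
    """Return the action of the first rule whose condition matches."""
    for condition, action in rules:
        if condition(context):
            return action
    return "no-rule-matched"

rules = [
    (lambda c: c["people_outside"] > 1, "protect-the-many"),
    (lambda c: c["passenger_is_minor"], "protect-passenger"),
]

scenario = {"people_outside": 3, "passenger_is_minor": True}
print(decide(scenario, rules))        # protect-the-many
print(decide(scenario, rules[::-1]))  # protect-passenger
```

Two manufacturers shipping the same conditions in a different priority order would behave differently in conditions 1.1 and 1.2, which is precisely why an external seal-of-approval authority matters.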
The more sensors, the more input to the decision process.