Tag Archives: AI

Applying Artificial Intelligence & Machine Learning to Data Warehousing

Protecting the Data Warehouse with Artificial Intelligence

Teleran is a middleware company whose software monitors and governs OLAP activity between the data warehouse and Business Intelligence tools like Business Objects and Cognos. Teleran’s suite of tools encompasses a comprehensive analytical and monitoring solution called iSight. In addition, Teleran has a product, iGuard, that leverages artificial intelligence and machine learning to impose real-time query and data access controls. The architecture also allows Teleran’s agent to run on a separate host from the database, for additional security and to avoid consuming resources on the database host.

Key Features of iGuard:
  • Policy engine prevents “bad” queries before they reach the database
  • Patented rule engine resides in-memory to evaluate queries at the database protocol layer on the TCP/IP network
  • Patented rule engine prevents inappropriate or long-running queries from reaching the data
70 Customizable Policy Templates
SQL Query Policies
  • Create policies using policy templates based on SQL syntax (see the sketch below):
    • Require JOIN to security table
    • Column combination restriction – e.g. prevents combining customer name and Social Security number
    • Table JOIN restriction – e.g. prevents joining two specified tables in the same query
    • Equi-literal compare requirement – tightly constrains a query, e.g. prevents hunting for sensitive data by requiring an ‘=‘ condition
    • DDL/DCL restrictions (Create, Alter, Drop, Grant)
    • DQL/DML restrictions (Select, Insert, Update, Delete)
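To make the syntax templates concrete, here is a minimal, hypothetical sketch of pre-database policy evaluation. Teleran’s patented engine works in-memory at the database protocol layer; this toy version uses simple string checks, and the table/column names are invented:

```python
import re

# Toy policy checks mirroring the templates above (illustrative only).
POLICIES = [
    ("Column combination restriction",
     lambda sql: not ("customer_name" in sql and "ssn" in sql)),
    ("Require JOIN to security table",
     lambda sql: "join" not in sql or "join security_table" in sql),
    ("DDL/DCL restriction",
     lambda sql: re.match(r"\s*(create|alter|drop|grant)\b", sql) is None),
]

def evaluate(sql: str):
    """Return the first violated policy name, or None if the query may proceed."""
    sql = sql.lower()
    for name, passes in POLICIES:
        if not passes(sql):
            return name  # block before the query reaches the database
    return None

print(evaluate("SELECT customer_name, ssn FROM accounts"))
# -> Column combination restriction
```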
Data Access Policies

Blocks access to sensitive database objects

  • By user or user group and time of day / shift (e.g. the ETL window)
    • Schemas
    • Tables/Views
    • Columns
    • Rows
    • Stored Procs/Functions
    • Packages (Oracle)
Connection Policies

Blocks connections to the database

  • Whitelist or blacklist by:
    • DB User Logins
    • OS User Logins
    • Applications (BI, Query Apps)
    • IP addresses
Rule Templates Contain Customizable Messages

Each of the “Policy Templates” can send the user querying the database a customized message based on the defined policy. The message back to the user from Teleran should be seamless to the application user’s experience.

iGuard Rules Messaging

 

Machine Learning: Curbing Inappropriate or Long-Running Queries

iGuard can analyze all of the historical SQL passed through to the data warehouse and suggest new, customized policies to cancel queries with certain SQL characteristics. The Teleran administrator sets parameters such as rows or bytes returned, and then runs the induction process. New rules are suggested for queries that exceed the defined parameters. The induction engine is “smart” enough to look at the repository of queries holistically and not make determinations based on a single query.
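A hedged sketch of what that induction pass might look like; the repository schema, thresholds, and the minimum-hit heuristic are my own illustration, not Teleran’s actual algorithm:

```python
from collections import defaultdict

def suggest_policies(history, max_rows=1_000_000, max_bytes=500_000_000, min_hits=5):
    """history: iterable of (sql_pattern, rows_returned, bytes_returned).

    A pattern is only suggested for cancellation when it repeatedly exceeds
    the administrator's thresholds -- i.e. the repository is read
    holistically, never a single query in isolation.
    """
    offenders = defaultdict(int)
    for pattern, rows, nbytes in history:
        if rows > max_rows or nbytes > max_bytes:
            offenders[pattern] += 1
    return [f"Suggest cancel-rule for: {p}" for p, hits in offenders.items()
            if hits >= min_hits]
```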

Finally, here is a high-level overview of the implementation architecture of iGuard. For sales or pre-sales technical questions, please visit www.teleran.com.

Teleran Logical Architecture

 

Currently Featured Clients
Teleran Featured Clients

 

Google Search Enables Users to Upload Images for Searching with Visual Recognition. Yahoo and Bing…Not Yet

The ultimate goal, in my mind, is a search engine with the capability to upload an image, have the engine analyze it, and find comparable images within some degree of variation, as dictated by the search properties. The search engine may also derive metadata from the uploaded image, such as attributes specific to the image object(s) types. For example, determine if a person [object] is “Joyful” or “Angry”.

As of this writing, the search engines Yahoo and Microsoft Bing do not have the capability to upload an image, perform image/pattern recognition, and return results. Behold, Google’s search engine has the ability to use some type of pattern matching and find instances of your image across the world wide web. From the Google Search home page, select “Images”, or after a text search, select the “Images” menu item. From there, an additional icon appears: a camera with the hint text “Search by Image”. Select the camera icon, and you are presented with options for how Google can acquire your image, e.g. an upload or an image URL.

Google Search Upload Images

Select the “Upload an Image” tab, choose a file, and upload. I used a fictional character, Max Headroom. The search results were very good (see below). I also attempted an uncommon shape, and it did not meet my expectations. The poor performance in matching this possibly “unique” shape is most likely due to how the Google Image Classifier Model was defined, and the training data used to test the classifier model. Then again, if the shape really is “unique”, the Google Search Image Engine did its job.

Google Image Search Results – Max Headroom
Max Headroom Google Search Results

 

Google Image Search Results – Odd Shaped Metal Object
Google Search Results – Odd Shaped Metal Object

The Google Search Image Engine was able to “classify” the image as “metal”, so that’s good. However, I would have liked to see better matches under the “Visually Similar Images” section. Again, this is probably due to the image classification process, and potentially the diversity of image samples.

A Few Questions for Google

How often is the Classifier Modeling process executed (i.e. training the classifier), and the model tested?  How are new images incorporated into the Classifier model?  Are the user uploaded images now included in the Model (after model training is run again)?    Is Google Search Image incorporating ALL Internet images into Classifier Model(s)?  Is an alternate AI Image Recognition process used beyond Classifier Models?

Behind the Scenes

In addition, Google has provided a Cloud Vision API as part of their Google Cloud Platform.

I’m not sure if the Cloud Vision API uses the same technology as Google’s Search Image Engine, but it’s worth noting. After reaching the Cloud Vision API starting page, go to the “Try the API” section and upload your image. I tried a number of samples, including my odd-shaped metal object, and I think it performed fairly well on the “labels” (i.e. image attributes).
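For reference, the same experiment can be scripted. This is a minimal sketch assuming the google-cloud-vision Python client and an illustrative file name; the “Try the API” page does the equivalent without code:

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("odd_shaped_metal.jpg", "rb") as f:  # illustrative file name
    image = vision.Image(content=f.read())

# Labels -- the "image attributes" discussed above.
for label in client.label_detection(image=image).label_annotations:
    print(label.description, label.score)

# Face attributes (joy, sorrow) and web matches, used later in this post.
for face in client.face_detection(image=image).face_annotations:
    print(face.joy_likelihood, face.sorrow_likelihood)

web = client.web_detection(image=image).web_detection
print([entity.description for entity in web.web_entities])
```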

Odd Shaped Metal Sample Image

Using the Google Cloud Vision API to determine whether there were any web matches with my odd-shaped metal object, the search came up with no results. In contrast, using Google’s Search Image Engine produced some “similar” web results.

Odd Shaped Metal Sample Image Web Results

Finally, I tested the Google Cloud Vision API with a self-portrait image. THIS was so cool.

Google Vision API – Face Attributes

The API brought back several image attributes specific to “Faces”. It attempts to identify complex facial attributes, such as emotions, e.g. joy and sorrow.

Google Vision API – Labels

The API brought back the “Standard” set of Labels which show how the Classifier identified this image as a “Person”, such as Forehead and Chin.

Google Vision API – Web

Finally, the Google Cloud Vision API brought back the web references: it identified me as a Project Manager, and surfaced an obscure reference to Zurg in my Twitter bio.

The Google Cloud Vision API and Google’s own baked-in Search Image Engine are extremely enticing, yet have a ways to go in terms of accuracy. Of course, I tried using my face in the Google Search Image Engine, and looking at the “Visually Similar Images” didn’t retrieve any images of me, or even a distant cousin (maybe?).

Google Image Search Engine: Ian Face Image

 

Smartphone AI Digital Assistant Encroaching on the Virtual Receptionist

Businesses already exist that develop and sell virtual receptionist software, which handles many caller needs (e.g. call routing).

However, AI digital assistants such as Alexa, Cortana, Google Now, and Siri have an opportunity to stretch their capabilities even further. Leveraging technologies such as natural language processing (NLP) and speech recognition (SR), as well as APIs into the smartphone OS’s answer/calling capabilities, functionality can be expanded to include the following (a sketch follows the list):

  • Call Screening – The digital assistant asks for the name of the caller, the purpose of the call, and whether the matter is “Urgent”.
    • A generic “purpose” response, or a list of caller purpose items, can be supplied to the caller, e.g. 1) Schedule an Appointment
    • The smartphone’s user would receive the caller’s name and purpose as a message back to the UI from the call, which is currently in a ‘hold’ state.
    • The smartphone user may decide to accept the call, or reject it and send the caller to voice mail.
  • Call / Digital Assistant Capabilities
    • The digital assistant may schedule a ‘tentative’ appointment within the user’s calendar. If the caller asks to schedule a meeting, the digital assistant would access the user’s calendar to determine availability. If the calendar indicates availability, a ‘tentative’ meeting is entered. The smartphone user would have a list of tasks from the assistant, one of which is to ‘affirm’ availability for the meetings scheduled.
    • Allow recall of ‘generally available’ information.  If a caller would like to know the address of the smartphone user’s office, the Digital Assistant may access a database of generally available information, and provide it.  The Smartphone user may use applications like Google Keep, and any note tagged with a label “Open Access” may be accessible to any caller.
    • Join the smartphone user’s social network, such as LinkedIn. If the caller knows the phone number of the person, but is unable to find the user through the social network directory, an invite may be requested by the caller.
    • Custom business workflows may also be triggered through the smartphone, such as “Pay by Phone”.
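A minimal sketch of the call-screening flow above. Everything here is hypothetical; a real implementation would sit behind the smartphone OS’s telephony APIs:

```python
from dataclasses import dataclass

@dataclass
class CallerInfo:
    name: str
    purpose: str
    urgent: bool

def screen_call(responses: dict) -> CallerInfo:
    """Collect name, purpose, and urgency while the call sits in a 'hold' state."""
    return CallerInfo(
        name=responses.get("name", "Unknown caller"),
        purpose=responses.get("purpose", "No purpose given"),
        urgent=bool(responses.get("urgent", False)),
    )

def route_call(info: CallerInfo, user_accepts: bool) -> str:
    # The user sees name/purpose in the UI and accepts, or rejects to voice mail.
    if user_accepts or info.urgent:
        return "connect"
    return "voicemail"

info = screen_call({"name": "Pat", "purpose": "Schedule an Appointment"})
print(route_call(info, user_accepts=False))  # -> voicemail
```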

Autonomous Software Layer for Vehicles through 3rd Party Integrators / Vendors

It seems that car manufacturers, among others, are building autonomous hardware (i.e. vehicle and other sensors) as well as the software to govern its usage. Few companies, for example, are separating the hardware and software layers to explicitly carve out the autonomous software.

Yes, there are benefits to tightly coupling the autonomous hardware and software:

1. Proprietary implementations and intellectual property – Implementing autonomous vehicles within a single corporate entity may ‘fast track’ patents, and mitigate NDA challenges / risks

2. Synergies from two (or more) teams working in unison to implement functional goals. However, this may also be accomplished through two organizations with tightly coupled teams. Engaged, strong team leadership must be in place to eliminate corp-to-corp blockers and ensure deliverables.

There are also advantages to two separate organizations, one providing the software layer and the other the vehicle hardware implementation (i.e. sensors):

1. Separating the autonomous vehicle hardware from the AI software enables multiple, strong, alternate corporate perspectives. These perspectives allow for a stronger, yet balanced, approach to implementation.

2. The AI software for autonomous vehicles, if contractually allowed, may work with multiple vehicle brands, implementing similar capabilities. Vehicles then gain capabilities / innovations shared across the car industry. The AI software may even become a standard for implementing autonomous vehicles across the industry.

3. Working with multiple hardware / vehicle manufacturers may enable software APIs, a layer of implementation abstraction. These APIs may encourage similar approaches to implementation, reduce redundancy, and serve as ‘the gold standard’ in the industry.

4. We already see commercial adoption of autonomous vehicle features such as “Auto Lane Change” and “Automatic Emergency Braking”, so it makes sense to adopt standards through 3rd-party AI software integrators / vendors.

5. Incorporating Checks and Balances to instill quality into the product and the process that governs it.

In summation, car parts are typically not built in one geographic location, but through global collaboration. Autonomous software for vehicles should be externalized in order to satisfy unbiased safety and security requirements. A standards organization “with teeth” could orchestrate input from the industry and collectively devise “best practices” for autonomous vehicles.

Uncommon Opportunity? R&D Conversational AI Engineer

I had to share this opportunity.  The Conversational AI Engineer role will continue to be in demand for some time.


Title: R&D Conversational AI Engineer
Location: Englewood Cliffs, NJ
Duration: 6+ months Contract(with Possible extension)

Responsibilities:

  • Create Alexa Skills, Google Home Actions, and chatbots for various direct client brands and initiatives.
  • Work with the Digital Enterprises group to create production-ready conversational agents to help the client emerge in the connected-life space.
  • Create additional add-ons to the conversational agents.
  • Work with new technologies that may not be fully documented yet.
  • Work with startups and their emerging technology in the connected-life space.

Quals:
The client is looking for a developer in conversational AI and bot development.

What is Media Labs? Media Labs is dedicated to driving a collaborative culture of innovation across all of the client’s brands. We serve as an internal incubator and accelerator for emerging technology and are leading the way with fresh ideas to ignite the future of media and storytelling. We are committed to partnering with another telecom giant, startups, research and academic groups, content creators, and brands to further innovation at the client. One of our main themes is connected life, and we are looking for an engineer to lead this development.

Requirements for R&D Engineer:

  • Bachelor’s in Computer Science, Engineering, or another related field
  • Experience working with new technologies that may not be fully documented yet
  • Experience communicating technology to non-technical people
  • Experience with AWS (Lambda, CloudWatch, S3, API Gateway, etc.)
  • Experience with JavaScript, Node.js
  • Some experience creating Alexa Skills, Google Home Actions, or chatbots

Optional Requirements:

  • Experience creating iOS or Android applications (native or non-native)
  • Experience with API.AI or another NLP engine (Lex, Watson Conversation)

Amazon’s Alexa vs. Google’s Assistant: Same Questions, Different Answers

Excellent article by  .

Amazon’s Echo and Google’s Home are the two most compelling products in the new smart-speaker market. It’s a fascinating space to watch, for it is of substantial strategic importance to both companies as well as several more that will enter the fray soon. Why is this? Whatever device you outfit your home with will influence many downstream purchasing decisions, from automation hardware to digital media and even to where you order dog food. Because of this strategic importance, the leading players are investing vast amounts of money to make their product the market leader.

These devices have a broad range of functionality, most of which is not discussed in this article. As such, it is a review not of the devices overall, but rather simply their function as answer engines. You can, on a whim, ask them almost any question and they will try to answer it. I have both devices on my desk, and almost immediately I noticed something very puzzling: They often give different answers to the same questions. Not opinion questions, you understand, but factual questions, the kinds of things you would expect them to be in full agreement on, such as the number of seconds in a year.

How can this be? Assuming they correctly understand the words in the question, how can they give different answers to the same straightforward questions? Upon inspection, it turns out there are ten reasons, each of which reveals an inherent limitation of artificial intelligence as we currently know it…


Addendum to the Article:

As someone who has worked with Artificial Intelligence in some shape or form for the last 20 years, I’d like to throw in my commentary on the article.

  1. Human utterances and their correlation to goal / intent recognition (see the sketch after this list). There are innumerable ways to ask for something you want. The ‘ask’ is a ‘human utterance’ which should trigger the ‘goal / intent’ of the knowledge the person is requesting. AI chatbots, digital agents, have a table of these utterances which all roll up to a single goal. Hundreds of utterances may be supplied per goal. In fact, Amazon has a service, Mechanical Turk, the Artificial Artificial Intelligence, where you may “Ask workers to complete HITs – Human Intelligence Tasks – and get results using Mechanical Turk”. They boast access to a global, on-demand, 24 x 7 workforce to get thousands of HITs completed in minutes. There are also ways in which the AI digital agent may ‘rephrase’ what the AI considers closely related utterances. Companies like IBM define human parity in recognition as comprehension of roughly 95% of the words in a given conversation. On March 7, IBM announced it had become the first to home in on that benchmark, having achieved a 5.5% error rate.
  2. Algorithmic ‘weighted’ selection versus curated content. It makes sense, based on how these two companies ‘grew up’, that Amazon relies on curated content acquisitions such as Evi, a technology company which specialises in knowledge base and semantic search engine software. Its first product was an answer engine that aimed to directly answer questions on any subject posed in plain English text, which is accomplished using a database of discrete facts. “Google, on the other hand, pulls many of its answers straight from the web. In fact, you know how sometimes you do a search in Google and the answer comes up in snippet form at the top of the results? Well, often Google Assistant simply reads those answers.” Truncated answers equate to incorrect answers.
  3. Instead of a direct Q&A-style approach, where a human utterance (question) triggers an intent/goal, there is a process by which ‘clarifying questions‘ may be asked by the AI digital agent. A dialog workflow may disambiguate the goal by narrowing down what the user is looking for. This disambiguation process is a common technique in human interaction, and is represented in a workflow diagram with logic decision paths. It seems this technique may require human guidance, and be prone to bias, error, and additional overhead for content curation.
  4. Who are the content curators for knowledge, providing ‘factual’ answers and/or opinions? Are curators ‘self-proclaimed’ Subject Matter Experts (SMEs), people with degrees in history, or IT / business analysts making the content decisions?
  5. Answers to questions requesting opinionated information may vary greatly between AI platforms, and between questions within the same AI knowledge base. Opinions may offend, be intentionally biased, and sour the AI / human experience.
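To illustrate point 1, a naive sketch of the utterance-to-goal table. Real agents use trained NLP models rather than string similarity, and these intents/utterances are invented:

```python
from difflib import SequenceMatcher

# Hundreds of utterances may roll up to each goal; a few samples suffice here.
INTENT_UTTERANCES = {
    "get_weather": ["what's the weather", "is it raining", "forecast for today"],
    "set_alarm": ["wake me at seven", "set an alarm", "alarm for 7 am"],
}

def match_intent(utterance: str, threshold: float = 0.6):
    """Return the best-matching goal/intent, or None below the threshold."""
    best, score = None, 0.0
    for intent, samples in INTENT_UTTERANCES.items():
        for sample in samples:
            ratio = SequenceMatcher(None, utterance.lower(), sample).ratio()
            if ratio > score:
                best, score = intent, ratio
    return best if score >= threshold else None

print(match_intent("what is the weather like"))  # -> get_weather
```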

Evaluating fobi.io Chatbot Powered By Google Forms: AI Digital Agent?

An interesting approach to an AI chatbot implementation: the business process owner creates one or more Google Forms containing questions and answers, and converts/deploys them to a chatbot using fobi.io. All the questions for [potential] customers/users are captured in a multitude of forms. Without any code, and within minutes, an interactive chatbot can be produced and deployed for client use.

The trade-off for rapid deployment without coding is a rigid approach to triggering the user’s desired goals/intents. It seems a single goal/intent is mapped to a single Google Form, as opposed to a digital agent, which leverages utterances to trigger the user’s intended goal/intent. Before starting the chat, the user must select the appropriate Google Form, with the guidance of the content curator.

Another trade-off is, it seems, no backend integration to execute a business process, which is essential to many chatbot workflows. For example, given an invoice ID, the chatbot might search a transactional database, then retrieve and display the full invoice. Actually, I may be incorrect: on the Google Forms side there is a Script Editor, which seems powerful and scary all at the same time.
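If backend integration were available, the invoice example might look like this minimal sketch; the database, table, and column names are invented:

```python
import sqlite3

def fetch_invoice(invoice_id: str):
    """Look up the full invoice a chatbot would display back to the user."""
    conn = sqlite3.connect("transactions.db")  # illustrative transactional store
    row = conn.execute(
        "SELECT id, customer, amount, status FROM invoices WHERE id = ?",
        (invoice_id,),
    ).fetchone()
    conn.close()
    if row is None:
        return None
    return {"id": row[0], "customer": row[1], "amount": row[2], "status": row[3]}
```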

Another apparent trade-off, more on the Google Forms side, is that you build just a form with a list of questions, not a consumer process workflow that lets the business provide an interactive dialog based on the answers users provide. For example, a yes/no or multiple-choice answer may lead to alternate sets of questions [and actions]. There doesn’t appear to be any workflow tool provided to structure the Google Forms / fobi.io chatbot Q&A.

However, there are still many business cases for the product, especially for small to mid-size organizations.

* Business Estimates – although there is no logic workflow to guide the Q&A sessions with [prospective] customers, the business may still derive the initial information it requires to make an initial assessment. A web form and this fobi.io / Google Forms solution seem very comparable in capability; it’s just a change in the medium through which the user interacts to provide the information.

One additional note: Google Forms is not a free product. It looks like it’s part of G Suite: a free two-week trial, then the basic plan is $5 per month, which comes with other products as well. Click here for pricing details.

Although this “chatbot” tries to quickly provide a mechanism to turn a form into a chatbot, it seems it’s still just a form at the end of the day. I’m interested to see more products from Zoi.ai soon.

Evaluating Amazon Lex – AI Digital Agent / Assistant Implementation

Evaluating AI chatbot solutions for:

  • Simple to configure – e.g. wizard walkthrough
  • Flexible and mature platform – e.g. executing backend processes
  • Cost-effective and competitive solutions
  • Rapid deployment to XYZ platforms

The idea is that almost anyone can build and deploy a chatbot for their business, from small to mid-size organizations.

Amazon Lex

Going through the Amazon Lex chatbot build process and configuring the digital assistant was a breeze. AWS employs a ‘wizard’-style interface to help the user build the chatbot / digital agent. The wizard guides you through defining Intents, Utterances, Slots, and Fulfillment (a fulfillment sketch follows the list below).

  • Intents – A particular goal that the user wants to achieve (e.g. book an airline reservation)
  • Utterances – Spoken or typed phrases that invoke your intent
  • Slots – Data the user must provide to fulfill the intent
  • Prompts – Questions that ask the user to input data
  • Fulfillment – The business logic required to fulfill the user’s intent (i.e. a backend call to another system, e.g. SAP)
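To make “Fulfillment” concrete: a minimal Lambda handler sketch using the Lex (V1) event/response shapes. The intent and slot names are illustrative:

```python
def lambda_handler(event, context):
    """Fulfillment hook invoked by Amazon Lex once all slots are filled."""
    intent = event["currentIntent"]["name"]
    slots = event["currentIntent"]["slots"]

    if intent == "BookFlight":  # illustrative intent
        city, date = slots.get("DestinationCity"), slots.get("TravelDate")
        # A backend call to another system (e.g. SAP) would go here.
        message = f"Your flight to {city} on {date} is booked."
    else:
        message = "Sorry, I can't fulfill that request."

    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText", "content": message},
        }
    }
```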
Amazon Lex Chatbot

The Amazon Lex Chatbot editor is also extremely easy to use, and to update / republish any changes.

Amazon Chat Bot Editor

The challenge with Amazon Lex appears to be a very limited ability for chatbot distribution / deployment. Your Amazon Lex chatbot is required to use one of three deployment methods: Facebook, Slack, or Twilio SMS. Facebook is limiting if you do not want to engage your customers on that platform. Slack is a ‘closed’ framework, whereby the user of the chatbot must belong to a Slack team in order to communicate. Finally, Twilio SMS implies use of your chatbot through mobile phone SMS.

Amazon Chatbot Channels

 

I’ve reached out to AWS Support regarding any other options for Amazon Lex chatbot deployment.  Just in case I missed something.

Amazon Chatbot Support

There is a “Test Bot” in the lower right corner of the Amazon Lex Intents menu. The author of the business process can, in real time, make changes to the bot and test them all on the same page.

Amazon Chatbot, Test Bot

 

Key Followups

  • Is there a way to leverage the “Test Bot” as a “no frills” chatbot UI, and embed it in an existing web page? Question to AWS Support.
  • One concern is for large volumes of utterances / intents and slots. Ideally, the user would be allowed a bulk upload through an Excel spreadsheet, for example (see the sketch after this list).
  • I’ve not been able to utilize AWS Lambda to trigger server-side processing.
  • Note: there seem to be several ‘quirky’ bugs in the Amazon Lex UI, so it may take one or two tries to work around a UI issue.
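On the bulk-upload concern: until (or unless) AWS offers one, something can be approximated with the Lex Model Building API via boto3. get_intent and put_intent are real calls; the CSV layout (intent, utterance columns) is my own convention:

```python
import csv
import boto3

lex = boto3.client("lex-models")

# Gather utterances per intent from a spreadsheet export.
utterances = {}
with open("utterances.csv", newline="") as f:
    for row in csv.DictReader(f):  # columns: intent, utterance
        utterances.setdefault(row["intent"], []).append(row["utterance"])

for intent_name, samples in utterances.items():
    current = lex.get_intent(name=intent_name, version="$LATEST")
    lex.put_intent(
        name=intent_name,
        checksum=current["checksum"],  # required when updating an existing intent
        sampleUtterances=sorted(set(current.get("sampleUtterances", []) + samples)),
    )
```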

IBM Watson Conversation also contends for this digital agent / assistant space, and has a very interesting offering, including dialog / workflow definition.

Both Amazon Lex and IBM Watson Conversation are FREE to try, and in minutes, you could have your bots created and deployed. Please see sites for pricing details.

Beyond Google Search of Personal Data – Proactive, AI Digital Assistant 

As per a previous post, “Google Searches Your Personal Data (Calendar, Gmail, Photos), and Produces Consolidated Results”, why can’t the Google Assistant take advantage of the same data sources?

Google may attempt to leapfrog their Digital Assistant competition by taking advantage of their ability to search against all Google products.  The more personal data a Digital Assistant may access, the greater the potential for increased value per conversation.

As a first step,  Google’s “Personal”  Search tab in their Search UI has access to Google Calendar, Photos, and your Gmail data.  No doubt other Google products are coming soon.

Big benefits come not just from letting the consumer search through their personal Google data, but from providing that consolidated view to the AI Assistant. Does the Google [Digital] Assistant already have access to Google Keep data, for example? Is providing Google’s “Personal” search results a dependency for broadening the Digital Assistant’s access and usage? If so, these interactions are most likely based on a reactive model, rather than proactive dialogs, i.e. the Assistant initiating the conversation with the human.

Note: The “Google App” for mobile platforms does:

“What you need, before you ask. Stay a step ahead with Now cards about traffic for your commute, news, birthdays, scores and more.”

I’m not sure how proactive the Google AI is built to be, but most likely it’s barely scratching the surface of what’s possible.

Modeling Personal, AI + Human Interactions

Starting from N accessible data sources, the assistant searches for actionable data points, correlates them with others, and then escalates to the human through a dynamic or predefined Assistant Consumer Workflow (ACW). A proactive AI digital assistant initiates human contact to engage in commerce without otherwise being triggered by the consumer.

Actionable data point correlations can trigger multiple goals in parallel. However, the execution of goal-based rules would need to be managed. The consumer doesn’t want to be bombarded with AI Assistant suggestions, but at the same time, “choice” opportunities may be appropriate, as the Google [mobile] App has implemented with ‘Cards’ of bite-size data, consumable from the UI at the user’s discretion.

As an ongoing ‘background’ AI / ML process, the Digital Assistant’s ‘server side’ agent may derive correlations between one or more data source records to get a deeper perspective of the person’s life, and potentially be proactive about providing input to the consumer’s decision-making process.

Bass Fishing Trip

For example (a sketch follows this list):

  • The proactive Google Assistant may suggest booking your annual fishing trip soon. Elevated interaction to the consumer / user.
  • The Assistant may search Gmail records referring to an annual fishing trip ‘last year’ in August. AI background server-side parameter / profile search. Predefined Assistant Consumer Workflow (ACW) – “Annual Events” category: workflows ‘predefined’ for a core set of goals/rules.
  • The AI Assistant may search the user’s photo archive on the server side. Any photo metadata could be garnered from the search, including date/time stamps, abstracted to include ‘season’ of year and other synonym tags.
  • Photos from around ‘August’ may be earmarked for Assistant use.
  • Photos may be geo-tagged, e.g. Lake Champlain, which is known for its fishing.
  • All objects in the image may be stored as image metadata. Using image object recognition against all photos in the consumer’s repository, goal / rule execution may occur against pictures from last August; the Assistant may identify the “fishing buddies” posing with a huge bass.
  • In addition to making the suggestion re: booking the trip, Google’s Assistant may bring up ‘highlighted’ photos from the last fishing trip to ‘encourage’ the person to take the trip.
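A sketch of how such an “Annual Events” ACW might correlate these sources. The data-access functions are stubs; none of this is a real Google API:

```python
from datetime import date

def search_gmail(query: str, month: int):
    # Stub: pretend we found last August's trip-planning thread.
    return [f"Re: {query} - August itinerary"]

def search_photos(tags, month: int):
    # Stub: pretend geo/object metadata matched (Lake Champlain, bass, buddies).
    return ["IMG_2041.jpg", "IMG_2044.jpg", "IMG_2052.jpg"]

def annual_event_workflow(today: date):
    emails = search_gmail("fishing trip", month=8)
    photos = search_photos(["bass", "lake"], month=8)
    # Correlate the sources; only elevate to the user with enough lead time.
    if emails and photos and today.month in (6, 7):
        return (f"Time to book your annual fishing trip? "
                f"Here are {len(photos)} highlights from last year.")
    return None  # stay quiet rather than bombard the user

print(annual_event_workflow(date(2017, 6, 15)))
```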

In this type of interaction, the Assistant has the ability to proactively ‘coerce’ and influence the human decision-making process. Building these interactive models of communication, and the ‘management’ process to govern the AI Assistant, is within reach.

Predefined Assistant Consumer / User Workflows (ACWs) may be created by third parties, such as travel agencies, or by industry groups, such as food: “low-hanging fruit” like an easy-to-implement “time to get more milk” workflow. Or, food may not be the best place to start, i.e. Amazon Dash.

 

Kosher ‘Like’ Certifications and Oversight of Autonomous Vehicle Implementations

Do AI rules engines “deliberate” any differently between rules with moral weight and rules with none at all? Rhetorical..?

The ethics that will explicitly and implicitly be built into implementations of autonomous vehicles involve a full stack of technology, and “business” input. In addition, implementations may vary between manufacturers and countries.

In the world of Kosher certification, there are several authorities that provide oversight into the process of food preparation and delivery. These authorities have their own seal of approval. In lieu of Kosher-style authorities, who will be playing the morality, seal-of-approval role? Vehicle insurance companies? Car insurance will be rewritten when it comes to autonomous cars. Some cars may carry a higher deductible, or the cost of the policy may rise, based upon the autonomous implementation.

Conditions Under Consideration:

1. If the autonomous vehicle is in a position of saving a single life in the vehicle, and killing one or more people outside the vehicle, what will the autonomous vehicle do?

1.1 What happens if the passenger in the autonomous vehicle is a child/minor? Does the rule execution change?

1.2 What if the outside party is a procession, a condensed population of people? Will the decision change?

The more sensors, the more input to the decision process.
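Purely as illustration of that last point, a toy sketch of the conditions above expressed as rule inputs. No one should read this as a workable ethics module; it only shows how each added sensor or attribute becomes another input to the rule engine, and how the base case remains undefined without an oversight authority:

```python
from dataclasses import dataclass

@dataclass
class Situation:  # each field corresponds to a sensor-derived input
    occupants: int
    occupant_is_minor: bool
    people_outside: int

def deliberate(s: Situation) -> str:
    if s.people_outside > s.occupants:     # condition 1.2: condensed population
        return "branch_1_2"
    if s.occupant_is_minor:                # condition 1.1: does a child change it?
        return "branch_1_1"
    return "undefined_requires_oversight"  # condition 1: who certifies this?

print(deliberate(Situation(occupants=1, occupant_is_minor=False, people_outside=3)))
```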