Category Archives: Business

People Turn Toward “Data Banks” to Monetize Their Purchase and User Behavior Profiles

If you are anti “Big Brother”, this may not be the article for you; in fact, skip it. 🙂

 

The Pendulum Swings Away from GDPR

In the not-so-distant future, “Data Bank” companies consisting of Subject Matter Experts (SMEs) across all verticals may process your data feeds collected from your purchase and user behavior profiles.  Consumers will be encouraged to submit their data profiles to a Data Bank, which will offer incentives ranging from reduced insurance premiums to cash-back rewards.

 

Everything from activity trackers and home automation to vehicular automation data may be captured and aggregated.  The data collected can then be sliced and diced to provide macro and micro views of the information.  At the abstract, macro level, the information may allow for demographic, statistical correlations, which may contribute to corporate strategy.  At the granular view, the data will provide “data banks” the opportunity to sift through data to perform analysis and correlations that lead to actionable information.

 

Is it secure?  Do you care if a hacker steals your weight-loss information?  It may not be an issue if the collected Purchase and User Behavior Profiles are aggregated into a Blockchain general ledger.  Data Curators and Aggregators work with SMEs to correlate the data into:

  • Canned, ‘intelligent’ reports targeted to a specific subject matter, or across silos of data types
  • ‘Universes’ (i.e. Business Objects) of data that may be ‘mined’ by consumer-approved, ‘trusted’ third-party companies, e.g. your insurance companies
  • Actionable information based on AI subject-matter rules engines, with consumer rule transparency

 

“Data Banks” may be required to show the customers who agreed to sell their data examples of the specific rows of data sold on a “Data Market”.

Consumers may have the option of sharing their personal data with specific companies by proxy, through a ‘data bank’, granular to the data point collected.  Sharing of Purchase and User Behavior Profiles:

  1. may lower [or raise] your insurance premiums
  2. may provide discounts on preventive health care products and services, e.g. vitamins to yoga classes
  3. may offer targeted, affordable medicine that redirects the doctor’s choice to an alternative; the MD would be contacted to validate the alternative
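Per-data-point consent managed by proxy could be modeled as a simple registry. The sketch below is purely illustrative; the `DataBank` class, its method names, and the example data points are invented, not any real service’s API:

```python
# Hypothetical sketch of a "data bank" consent registry, granular to the data point.
class DataBank:
    def __init__(self):
        # consent[(consumer, data_point)] = set of companies approved by proxy
        self.consent = {}

    def grant(self, consumer, data_point, company):
        self.consent.setdefault((consumer, data_point), set()).add(company)

    def revoke(self, consumer, data_point, company):
        self.consent.get((consumer, data_point), set()).discard(company)

    def is_shared(self, consumer, data_point, company):
        return company in self.consent.get((consumer, data_point), set())

bank = DataBank()
bank.grant("alice", "activity_tracker.heart_rate", "Acme Insurance")
print(bank.is_shared("alice", "activity_tracker.heart_rate", "Acme Insurance"))  # True
print(bank.is_shared("alice", "vehicle.telemetry", "Acme Insurance"))            # False
```

The key design point is that consent keys on the individual data point, so a consumer can share heart-rate data with an insurer without exposing vehicle telemetry.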

 

The curated data collected may be harnessed by thousands of affinity groups to offer highly specific products and services.  The information correlated from Purchase and User Behavior Profiles stretches beyond any consumer relationship experienced today.

 

At some point, health insurance companies may require you to wear a tracker to increase or slash premiums.  Auto insurance companies may offer discounts for access to smart-car data to make sure suggested maintenance guidelines for service are met.

 

You may approve your “data bank” to give access to specific soliciting government agencies or private firms looking to analyze data for their studies.  You may qualify based on the demographic, abstracted data points collected; the incentives provided may be tax credits or paid studies.

Purchase and User Behavior Profiles:  Adoption and Affordability

If ‘Data Banks’ are allowed to collect Internet of Things (IoT) device profiles but the devices themselves are cost-prohibitive, here are a few ways to increase their adoption:

  1. [US] tax coupons that enable the buyer to save money at the time of purchase.  For example, a 100 USD discount applied at the time of purchase of an Activity Tracker, with the stipulation that you agree, at some point, to participate in a study.
  2. Government subsidies: the cost of aggregating and archiving Purchase and Behavioral profiles offset through annual tax deductions.  Today, tax incentives may allow you to purchase an IoT device if the cost is an itemized medical tax deduction, such as an Activity Tracker that monitors your heart rate, if your medical condition requires it.
  3. Auto, Life, Homeowners, and Health policyholders may qualify for additional insurance deductions.
  4. Affinity-branded IoT devices: the American Lung Association, for example, may sell a logo-branded Activity Tracker.  People may sponsor the owner of the tracker to raise funds for the cause.

The World Bank has a repository of data, World DataBank, which stores a broad depth of information:

“World Bank Open Data: free and open access to data about development in countries around the globe.”

Here is the article that inspired this post:

http://www.marketwatch.com/story/you-might-be-wearing-a-health-tracker-at-work-one-day-2015-03-11

 

Privacy and Data Protection Creates Data Markets

Initiatives such as the General Data Protection Regulation (GDPR) and other privacy efforts, which restrict access to your data to you as the “owner”, create, as a byproduct, opportunities for you to sell your data.

 

Blockchain: Purchase and User Behavior Profiles

As your “vault”, “Data Banks” will collect and maintain your two primary datasets:

  1. As a consumer of goods and services, a Purchase Profile is established and evolves over time.  Online purchases are automatically collected, curated, appended with metadata, and stored in a data vault [Blockchain].  “Offline” purchases may, at some point, become hybrid [on/off]line purchases with advances in traditional monetary exchanges, and would follow the online transaction model.
  2. User Behavior (UB) profiles, both on- and offline, will be collected and stored for analytical purposes.  A user behavior “session” is a use case of activity where YOU are the prime actor.  Each session would create a single UB transaction, also stored in a “Data Vault”.  UB use cases may not lead to any purchases.

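A minimal sketch of how such purchase and user-behavior transactions could be chained into a tamper-evident ledger, assuming nothing more than a SHA-256 hash link between blocks (a toy stand-in for a real Blockchain implementation, with invented field names):

```python
import hashlib
import json

def add_block(chain, transaction):
    """Append a purchase or user-behavior transaction to a toy hash chain."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(transaction, sort_keys=True)
    block_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"prev": prev_hash, "data": transaction, "hash": block_hash})
    return chain

chain = []
add_block(chain, {"type": "purchase", "item": "activity tracker", "amount": 99.0})
add_block(chain, {"type": "user_behavior", "session": "morning run", "purchases": 0})

# Each block embeds the previous block's hash, so tampering with any
# transaction breaks the linkage of every later block.
assert chain[1]["prev"] == chain[0]["hash"]
```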
Not all Purchase and User Behavior profiles are created equal.  E.g., one person’s profile may show a higher monthly spend than another’s.  The consumer who purchases more may be entitled to more benefits.

These datasets, wholly owned by the consumer, are safely stored, propagated, and made immutable with a solution such as a Blockchain general ledger.

Popular Tweets from January and February 2018

Tweet Activity Analytics

Leveraging Twitter’s Analytics, I’ve extracted the Top Tweets from the last 57-day period (Jan 1 until today).  During that period, the tweets earned 46.8K impressions.

Summary:

  • 61 Link Clicks
  • 27 Retweets
  • 86 Likes
  • 34 Replies
Top Tweets for January and February 2018

Microsoft Productivity Suite – Content Creation, Ingestion, Curation, Search, and Repurpose

Auto Curation: AI Rules Engine Processing

There are, of course, 3rd party platforms that perform very well, are feature rich, and are agnostic to all file types.  For example, within a very short period of time, at low cost, and possibly with a few plugins, a WordPress site can be configured and deployed to suit your Digital Asset Management (DAM) needs.  The long-term goal is to incorporate techniques such as Auto Curation for any/all files, leveraging an ever-growing intelligent taxonomy: a taxonomy built on user-defined labels/tags as well as an AI rules engine with ML techniques.  OneDrive, as a cloud storage platform, may bridge the gap between JUST cloud storage and a DAM.

Ingestion and Curation Workflow

Content Creation Apps and Auto Curation

  • The ability for Content Creation applications, such as Microsoft Word, to capture not only the user-defined tags but also the context of the tags relating to the content.
    • When ingesting a Microsoft PowerPoint presentation, after consuming the file, an Auto Curation process can extract “reusable components” of the file, such as the slide header/name and the correlated content, such as a table, chart, or graphics.
    • Ingesting Microsoft Excel workbooks, Auto Curation may yield “reusable components” stored as metadata tags and their correlated content, such as chart and table names.
    • Ingesting and Auto Curation of Microsoft Word documents may build a classic index of the most frequently occurring words, and augment the manually user-defined tags in the file.
    • Ingestion of photos [and videos] into an Intelligent Cloud Storage Platform, during the Auto Curation process, may identify commonly identifiable objects, such as trees or people.  These objects would be automatically tagged through the Auto Curation process after ingestion.
  • The ability to extract the content file metadata, object and text tags, stored in a standard format to be extracted by DAMs or Intelligent Cloud Storage Platforms with file and metadata search capabilities.  Could OneDrive be that intelligent platform?
  • A user can search for a file title or throughout the manually and Auto Curated metadata associated with the file.  The DAM or Intelligent Cloud Storage Platform provides both search results.  “Reusable components” of files are also searchable.
    • For “reusable components” to be parsed out of the files as separate entities, a process needs to occur after Ingestion Auto Curation.
  • Content Creation application user-entry tag/text fields should have “drop-down” access to the search index populated with auto/manual created tags.
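As a concrete example of pulling user-defined tags out in a standard format: a .docx file is a ZIP package whose core properties, including the keywords entered via Word’s “Save As” dialog, live in docProps/core.xml (per the OOXML packaging conventions). The helper below is a hypothetical sketch, not part of any DAM product; it builds a minimal stand-in package only to demonstrate the extraction:

```python
import io
import zipfile
import xml.etree.ElementTree as ET

CP = "{http://schemas.openxmlformats.org/package/2006/metadata/core-properties}"

def read_docx_tags(docx_bytes):
    """Pull user-defined tags (cp:keywords) out of a .docx core-properties part."""
    with zipfile.ZipFile(io.BytesIO(docx_bytes)) as z:
        root = ET.fromstring(z.read("docProps/core.xml"))
    node = root.find(CP + "keywords")
    return (node.text or "").split(";") if node is not None and node.text else []

# Minimal stand-in package for demonstration; a real .docx stores
# its tags in the same docProps/core.xml part.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr(
        "docProps/core.xml",
        '<cp:coreProperties xmlns:cp='
        '"http://schemas.openxmlformats.org/package/2006/metadata/core-properties">'
        '<cp:keywords>budget;Q3;draft</cp:keywords></cp:coreProperties>')
print(read_docx_tags(buf.getvalue()))  # ['budget', 'Q3', 'draft']
```

A cloud platform that indexed this one XML part per file would already close the “OneDrive cannot search Word tags” gap noted below in the Current State.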

Auto Curation and Intelligent Cloud Storage

  • The intelligence of Auto Curation should be built into the Cloud Storage Platform, e.g. potentially OneDrive.
  • At a minimum, Auto Curation should update the cloud storage platform’s indexing engine to correlate files and metadata.
  • Auto Curation is the ‘secret sauce’ that “digests” the content to build the search engine index, which automatically contains identified objects (e.g. tag and text, or coordinates).
    • Auto Curation may leverage a rules engine (AI) and apply user-configurable rules such as “keyword density” thresholds.
    • Artificial Intelligence / Machine Learning rules may be applied to the content to derive additional labels/tags.
  • If leveraging version control of the intelligent cloud storage platform, each iteration should “re-index” the content and update the Auto Curation metadata tags.  User-created tags are untouched.
  • If no user-defined labels/tags exist upon ingestion, the user may be prompted for tags.
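A “keyword density” threshold rule of the kind described above might look like the following sketch. The stopword list, the threshold value, and the function name are all assumptions for illustration, not any platform’s actual rule engine:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "for", "on"}

def auto_tags(text, density_threshold=0.02, max_tags=5):
    """Suggest tags for words whose frequency exceeds a configurable
    'keyword density' threshold; user-created tags stay untouched."""
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    counts = Counter(words)
    total = len(words) or 1
    dense = [w for w, c in counts.most_common() if c / total >= density_threshold]
    return dense[:max_tags]

doc = ("Curation pipelines index content. Curation rules tag content "
       "so search finds content.")
print(auto_tags(doc))  # 'content' and 'curation' rank first
```

Each re-index pass (e.g. on a new file version) would simply re-run this rule and merge the result into the platform’s search index.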

Auto Curation and “3rd Party” Sources

In the context of sources such as a Twitter feed, no Intelligent Cloud Storage platform currently incorporates these feeds.  OneDrive, as an Intelligent Cloud Storage platform, may import feeds from 3rd party sources, and each Tweet would be defined as an object which is searchable along with its metadata (e.g. likes; tags).

Operating System, Intelligent Cloud Storage/DAM

The Intelligent Cloud Storage and DAM solutions should have integrated search capabilities, so on the OS (mobile or desktop) level, the discovery of content through the OS search of tagged metadata is possible.

Current State

  1. OneDrive has no ability to search Microsoft Word tags
  2. The UI for all Productivity Tools must have a comprehensive and simple design for leveraging an existing taxonomy for manual tagging, and the ability to add hints for auto curation
    1. Currently, Microsoft Word has two fields to collect metadata about the file.  They are obscurely found in the “Save As” dialog.
      1. The “Save As” dialogue box allows a user to add tags and authors, but only when using the MS Word desktop version.  The Online (Cloud) version of Word has no such option when saving to Microsoft OneDrive Cloud Storage.
  3. Auto Curation (Artificial Intelligence, AI) must inspect the MS Productivity Suite tools and extract tags automatically; this capability does not exist today.
  4. No manual tagging or Auto Curation/Facial Recognition exists.

Politics around Privacy: Implementing Facial and Object Recognition

This Article is Not…

about deconstructing existing functionality of entire Photo Archive and Sharing platforms.

It is…

to bring awareness to the masses about corporate decisions to omit the advanced capabilities of cataloguing photos, object recognition, and advanced metadata tagging.

Backstory: The Asks / Needs

Every day my family takes tons of pictures, and the pictures are bulk loaded up to The Cloud using Cloud Storage Services, such as DropBox, OneDrive,  Google Photos,  or iCloud.  A selected set of photos are uploaded to our favourite Social Networking platform (e.g. Facebook, Instagram, Snapchat,  and/or Twitter).

Every so often, I will take pause, and create either a Photobook or print out pictures from the last several months.  The kids may have a project for school to print out e.g. Family Portrait or just a picture of Mom and the kids.  In order to find these photos, I have to manually go through our collection of photographs from our Cloud Storage Services, or identify the photos from our Social Network libraries.

Social Networking Platform Facebook

As far as I can remember, the Social Networking platform Facebook has had the ability to tag faces in photos uploaded to the platform.  There are restrictions, such as whom you can tag from the privacy side, but the capability still exists.  The Facebook platform also automatically identifies faces within photos, i.e. places a box around faces in a photo to make the person-tagging capability easier.  So, in essence, there is an “intelligent capability” to identify faces in a photo.  It seems like the Facebook platform allows you to see “Photos of You”, but what seems to be missing is the ability to search for all photos of Fred Smith, a friend of yours, even if all his photos are public.  By design, it sounds fit for the purpose of the networking platform.

Auto Curation

  1. Automatically upload new images, in bulk or one at a time, to a Cloud Storage Service (with or without Online Printing Capabilities, e.g. Photobooks), and an automated curation process begins.
  2. The Auto Curation process scans photos for:
    1. “Commonly Identifiable Objects”, such as #Car, #Clock, #Fireworks, and #People
    2. Previously tagged objects and faces; newly uploaded photos will be automatically tagged based on them.
    3. Once Auto Curation runs several times, and people are manually #tagged, the process will “learn” faces.  Any new Auto Curation run should then be able to recognize tagged people in new pictures.
  3. The Auto Curation process emails / notifies the library owners of the ingestion results, e.g. “Jane Doe and John Smith photographed at Disney World on [date/time stamp]”, i.e. a report of the executed ingestion and Auto Curation process.

Manual Curation

After upload and the Auto Curation process, optionally, it’s time to manually tag people’s faces and any ‘objects’ you would like to track; a car aficionado, e.g., may #tag vehicle make/model with additional descriptive tags.  Using the photo curator function on the Cloud Storage Service, you can tag any “objects” in the photo using Rectangle or Lasso Select.

Curation to Take Action

Once photo libraries are curated, the library owner(s) can:

  • Automatically build albums based on one or more #tags
  • Smart Albums automatically update, e.g. after ingestion and Auto Curation.  Albums are tag-sensitive and update with new pics that contain certain people or objects.  The user/librarian may dictate the logic for tags.
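The tag-driven Smart Album behavior could be sketched as a saved query over the library’s tags, re-evaluated after each ingestion/Auto Curation pass. The photo records and tag names below are made up for illustration:

```python
# Illustrative photo library; in practice tags would come from
# auto curation plus manual tagging.
photos = [
    {"file": "IMG_001.jpg", "tags": {"#People", "#Fireworks"}},
    {"file": "IMG_002.jpg", "tags": {"#Car"}},
    {"file": "IMG_003.jpg", "tags": {"#People", "#Car"}},
]

def smart_album(photos, required_tags):
    """Return every photo carrying all of the album's required tags."""
    return [p["file"] for p in photos if required_tags <= p["tags"]]

print(smart_album(photos, {"#People"}))          # ['IMG_001.jpg', 'IMG_003.jpg']
print(smart_album(photos, {"#People", "#Car"}))  # ['IMG_003.jpg']
```

Because the album is a query rather than a fixed list, newly curated photos join matching albums automatically.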

Where is this Functionality??

Why are many major companies not implementing facial (and object) recognition?  Google and Microsoft seem to have the capability and the size to be able to produce the technology.

Is it possible Google and Microsoft are subject to more scrutiny than a Shutterfly?  Do privacy concerns at the moment, leave others to become trailblazers in this area?

Applying Artificial Intelligence & Machine Learning to Data Warehousing

Protecting the Data Warehouse with Artificial Intelligence

Teleran is a middleware company whose software monitors and governs OLAP activity between the Data Warehouse and Business Intelligence tools, like Business Objects and Cognos.  Teleran’s suite of tools encompasses a comprehensive analytical and monitoring solution called iSight.  In addition, Teleran has a product, iGuard, that leverages artificial intelligence and machine learning to impose real-time query and data access controls.  The architecture also allows Teleran’s agent to run on a host separate from the database, for additional security and to avoid consuming the database host’s resources.

Key Features of iGuard:
  • Policy engine prevents “bad” queries before reaching database
  • Patented rule engine resides in-memory to evaluate queries at database protocol layer on TCP/IP network
  • Patented rule engine prevents inappropriate or long-running queries from reaching the data
70 Customizable Policy Templates
SQL Query Policies
  • Create policies using policy templates based on SQL Syntax:
    • Require JOIN to Security Table
    • Column Combination Restriction – Ex. Prevents combining customer name and social security #
    • Table JOIN Restriction – Ex. Prevents joining two different tables in the same query
    • Equi-literal Compare Requirement – Tightly constrains the query.  Ex. Prevents hunting for sensitive data by requiring an ‘=’ condition
    • DDL/DCL restrictions (Create, Alter, Drop, Grant)
    • DQL/DML restrictions (Select, Insert, Update, Delete)
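To make two of the policy templates concrete (Require JOIN to Security Table, and Column Combination Restriction), here is a toy approximation. This is not Teleran’s engine, which is a patented in-memory rule engine operating at the database protocol layer; the regex checks, table names, and column names below are invented:

```python
import re

def check_query(sql):
    """Toy policy check: flag queries that combine restricted columns
    or skip the required JOIN to the security table."""
    violations = []
    s = sql.lower()
    # Column Combination Restriction: name + SSN may not appear together.
    if "customer_name" in s and "ssn" in s:
        violations.append("restricted column combination: customer_name + ssn")
    # Require JOIN to Security Table on any customer query.
    if "customers" in s and not re.search(r"join\s+security_table", s):
        violations.append("missing JOIN to security_table")
    return violations

print(check_query("SELECT customer_name, ssn FROM customers"))
print(check_query(
    "SELECT c.balance FROM customers c JOIN security_table s ON s.uid = c.uid"))
```

In the real product, a query tripping a rule never reaches the database; here the violation list simply stands in for that block decision.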
Data Access Policies

Blocks access to sensitive database objects

  • By user or user groups and time of day (shift) (e.g. ETL)
    • Schemas
    • Tables/Views
    • Columns
    • Rows
    • Stored Procs/Functions
    • Packages (Oracle)
Connection Policies

Blocks connections to the database

  • White list or black list by
    • DB User Logins
    • OS User Logins
    • Applications (BI, Query Apps)
    • IP addresses
Rule Templates Contain Customizable Messages

Each of the “Policy Templates” has the ability to send the user querying the database a customized message based on the defined policy.  The message back from Teleran should be seamless to the application user’s experience.

iGuard Rules Messaging

 

Machine Learning: Curbing Inappropriate, or Long Running Queries

iGuard has the ability to analyze all of the historical SQL passed through to the Data Warehouse and suggest new, customized policies to cancel queries with certain SQL characteristics.  The Teleran administrator sets parameters, such as rows or bytes returned, and then runs the induction process.  New rules will be suggested for queries which exceed these defined parameters.  The induction engine is “smart” enough to look at the repository of queries holistically and not make determinations based on a single query.
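The induction step might be sketched as follows: scan the query history and only suggest a cancel policy when a pattern exceeds the administrator’s threshold repeatedly, i.e. holistically rather than from a single query. The field names and the suggestion format are invented; iGuard’s actual induction engine is proprietary:

```python
from collections import Counter

def suggest_policies(history, max_rows=1_000_000, min_occurrences=3):
    """Suggest a cancel-query policy only when MANY historical queries
    matching a pattern exceeded the row threshold, never from one query."""
    offenders = Counter(
        q["pattern"] for q in history if q["rows_returned"] > max_rows)
    return [f"cancel queries matching '{p}' (exceeded {max_rows} rows {n}x)"
            for p, n in offenders.items() if n >= min_occurrences]

history = (
    [{"pattern": "SELECT * FROM sales", "rows_returned": 5_000_000}] * 4
    + [{"pattern": "SELECT * FROM hr", "rows_returned": 2_000_000}]  # once: ignored
)
for rule in suggest_policies(history):
    print(rule)
```

The `min_occurrences` guard is what keeps a one-off heavy query from generating a policy, mirroring the “holistic, not single query” behavior described above.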

Finally, here is a high-level overview of the implementation architecture of iGuard.  For sales or pre-sales technical questions, please contact www.teleran.com.

Teleran Logical Architecture

 

Currently Featured Clients
Teleran Featured Clients

 

Google Search Enables Users to Upload Images for Searching with Visual Recognition. Yahoo and Bing…Not Yet

The ultimate goal, in my mind, is the capability within a Search Engine to upload an image; the search engine then analyzes the image and finds comparable images within some degree of variation, as dictated in the search properties.  The search engine may also derive metadata from the uploaded image, such as attributes specific to the image object(s) types.  For example, determine whether a person [object] is “Joyful” or “Angry”.

As of the writing of this article, the search engines Yahoo and Microsoft Bing do not have the capability to upload an image, perform image/pattern recognition, and return results.  Behold, Google’s search engine has the ability to use some type of pattern matching and find instances of your image across the world wide web.  From the Google Search “home page”, select “Images”, or after a text search, select the “Images” menu item.  From there, an additional icon appears: a camera with the hint text “Search by Image”.  Select the camera icon, and you are presented with options on how Google can acquire your image, e.g. upload, or an image URL.

Google Search Upload Images

Select the “Upload an Image” tab, choose a file, and upload.  I used a fictional character, Max Headroom.  The search results were very good (see below).  I also attempted an uncommon shape, and it did not meet my expectations.  The poor performance in matching this possibly “unique” shape is most likely due to how the Google Image Classifier Model was defined and the training data that tested the classifier model.  If the shape is truly “unique”, the Google Search Image Engine did its job.

Google Image Search Results – Max Headroom
Max Headroom Google Search Results

 

Google Image Search Results – Odd Shaped Metal Object
Google Search Results - Odd Shaped Metal Object

The Google Search Image Engine was able to “classify” the image as “metal”, so that’s good.  However, I would have liked to see better matches under the “Visually Similar Images” section.  Again, this is probably due to the image classification process and, potentially, the diversity of image samples.

A Few Questions for Google

How often is the Classifier Modeling process executed (i.e. training the classifier), and the model tested?  How are new images incorporated into the Classifier Model?  Are user-uploaded images now included in the model (after model training is run again)?  Is Google Search Image incorporating ALL Internet images into Classifier Model(s)?  Is an alternate AI image recognition process used beyond Classifier Models?

Behind the Scenes

In addition, Google has provided a Cloud Vision API as part of their Google Cloud Platform.

I’m not sure if the Cloud Vision API uses the same technology as Google’s Search Image Engine, but it’s worth noting.  After reaching the Cloud Vision API starting page, go to the “Try the API” section, and upload your image.  I tried a number of samples, including my odd-shaped metal object.  I think it performed fairly well on the “labels” (i.e. image attributes).
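For programmatic use, a Cloud Vision annotate request is a JSON body POSTed to https://vision.googleapis.com/v1/images:annotate. The sketch below only constructs that body; the feature type names come from the public API, while authentication and the actual HTTP call are omitted (they require an API key or service-account credentials):

```python
import base64
import json

def build_vision_request(image_bytes):
    """Build the JSON body for Cloud Vision's images:annotate endpoint,
    asking for the label, face, and web detections discussed here."""
    return {
        "requests": [{
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            "features": [
                {"type": "LABEL_DETECTION", "maxResults": 10},
                {"type": "FACE_DETECTION"},
                {"type": "WEB_DETECTION"},
            ],
        }]
    }

# Placeholder bytes stand in for a real image file's contents.
body = build_vision_request(b"\x89PNG...fake image bytes...")
print(json.dumps(body)[:80])
```

The “Try the API” widget on the product page effectively sends this same payload on your behalf.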

Odd Shaped Metal Sample Image

Using the Google Cloud Vision API to determine if there were any WEB matches for my odd-shaped metal object, the search came up with no results.  In contrast, using Google’s Search Image Engine produced some “similar” web results.

Odd Shaped Metal Sample Image Web Results

Finally, I tested the Google Cloud Vision API with a self portrait image.  THIS was so cool.

Google Vision API - Face Attributes

The API brought back several image attributes specific to “Faces”.  It attempts to identify certain complex facial attributes, things like emotions, e.g. Joy and Sorrow.

Google Vision API - Labels

The API brought back the “Standard” set of Labels which show how the Classifier identified this image as a “Person”, such as Forehead and Chin.

Google Vision API - Web

Finally, the Google Cloud Vision API brought back the Web references; it identified me as a Project Manager, and found an obscure reference to Zurg in my Twitter bio.

The Google Cloud Vision API, and Google’s own baked-in Search Image Engine, are extremely enticing, but they have a ways to go in terms of accuracy.  Of course, I tried using my face in the Google Search Image Engine; looking at the “Visually Similar Images” didn’t retrieve any images of me, or even a distant cousin (maybe?).

Google Image Search Engine: Ian Face Image

 

Smartphone AI Digital Assistant Encroaching on the Virtual Receptionist

Businesses already exist which have developed, and sell, Virtual Receptionists that handle many caller needs (e.g. call routing).

However, AI Digital Assistants such as Alexa, Cortana, Google Now, and Siri have an opportunity to stretch their capabilities even further.  Leveraging technologies such as Natural Language Processing (NLP) and Speech Recognition (SR), as well as APIs into the Smartphone OS’s answer/calling capabilities, functionality can be expanded to include:

  • Call Screening – The digital executive assistant asks for the name of the caller, the purpose of the call, and whether the matter is “Urgent”.
    • A generic “purpose” response, or a list of caller purpose items, can be supplied to the caller, e.g. 1) Schedule an Appointment.
    • The smartphone’s user would receive the caller’s name and purpose as a message back to the UI from the call, currently in a ‘hold’ state.
    • The smartphone user may decide to accept the call, or reject the call and send the caller to voicemail.
  • Call / Digital Assistant Capabilities
    • The digital executive assistant may schedule a ‘tentative’ appointment within the user’s calendar.  The caller may ask to schedule a meeting; the digital executive assistant would access the user’s calendar to determine availability.  If the calendar indicates availability, a ‘tentative’ meeting will be entered.  The smartphone user would have a list of tasks from the assistant, one of which is to ‘affirm’ the availability of the meetings scheduled.
    • Allow recall of ‘generally available’ information.  If a caller would like to know the address of the smartphone user’s office, the Digital Assistant may access a database of generally available information and provide it.  The smartphone user may use applications like Google Keep, and any notes tagged with the label “Open Access” may be accessible to any caller.
    • Join the smartphone user’s social network, such as LinkedIn.  If the caller knows the person’s phone number but is unable to find the user through the social network directory, an invite may be requested by the caller.
    • Custom business workflows may also be triggered by the smartphone, such as “Pay by Phone”.
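The screening-and-scheduling flow above can be sketched as a small state holder. The class name, method names, and purpose list are hypothetical, not any vendor’s assistant API:

```python
# Illustrative call-screening flow for a digital executive assistant.
class CallScreener:
    PURPOSES = ["Schedule an Appointment", "General Inquiry", "Urgent Matter"]

    def __init__(self, busy_slots):
        self.busy_slots = set(busy_slots)   # calendar slots already taken
        self.tasks = []                     # follow-ups for the smartphone user

    def screen(self, caller, purpose_choice):
        """Collect caller name/purpose; message goes to the user's UI
        while the call sits in a 'hold' state."""
        purpose = self.PURPOSES[purpose_choice - 1]
        return {"caller": caller, "purpose": purpose, "state": "hold"}

    def tentative_meeting(self, caller, slot):
        """Enter a 'tentative' meeting if the calendar shows availability,
        and queue an 'affirm' task for the user."""
        if slot in self.busy_slots:
            return None                     # no availability
        self.busy_slots.add(slot)
        self.tasks.append(f"affirm tentative meeting with {caller} at {slot}")
        return slot

assistant = CallScreener(busy_slots={"Mon 10:00"})
print(assistant.screen("Jane Doe", 1))
print(assistant.tentative_meeting("Jane Doe", "Mon 11:00"))  # 'Mon 11:00'
print(assistant.tentative_meeting("John Roe", "Mon 10:00"))  # None (slot busy)
```

Accept/reject-to-voicemail and the other capabilities would hang off the same object, driven by the OS’s telephony APIs.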

Takeaways

The Digital Executive Assistant capabilities:

  • Able to gain control of your Smartphone’s incoming phone calls
  • Able to interact with the 3rd-party, dial-in caller on a set of business dialog workflows defined by you, the executive.

Small Business Innovation Research (SBIR) Grants Still Open Thru 2017

Entrepreneurs / Science Guys (and Gals),

Are you ready for a challenge, and 150,000 USD to begin to pursue it?

That’s just SBIR Phase I, Concept Development (~6 months).  The second phase, Prototype Development, may be funded up to 1 MM USD and last 24 months.

The Small Business Innovation Research (SBIR) program is a highly competitive program that encourages domestic small businesses to engage in Federal Research/Research and Development (R/R&D) that has the potential for commercialization. Through a competitive awards-based program, SBIR enables small businesses to explore their technological potential and provides the incentive to profit from its commercialization. By including qualified small businesses in the nation’s R&D arena, high-tech innovation is stimulated and the United States gains entrepreneurial spirit as it meets its specific research and development needs.

The program’s goals are four-fold:
  1. Stimulate technological innovation.
  2. Meet Federal research and development needs.
  3. Foster and encourage participation in innovation and entrepreneurship by socially and economically disadvantaged persons.
  4. Increase private-sector commercialization of innovations derived from Federal research and development funding.

For more information on the program, please click here to download the latest SBIR Overview, which should have everything you need to know about the initiative.

Time is quickly running out to 1) pick one of the Solicitation Topics provided by the US government, and 2) submit your Proposal.

For my query of the SBIR database of topics up for Contracts and Grants:  Phase I; Program = SBIR; Year = 2017

That query produced 18 Contract/Grant opportunities.  Here are a few I thought would be interesting:

PAS-17-022
PAR-17-108
RFA-ES-17-004
RFA-DA-17-010

Click Here for the current, complete list of topics by the SBIR.  

 

Autonomous Software Layer for Vehicles through 3rd Party Integrators / Vendors

It seems that car manufacturers, among others, are building autonomous hardware (i.e. vehicle and other sensors) as well as the software to govern its usage.  Few companies are separating the hardware and software layers to explicitly carve out the autonomous software.

Yes, there are benefits to tightly coupling the autonomous hardware and software:

1. Proprietary implementations and intellectual property – Implementing autonomous vehicles within a single corporate entity may ‘fast track’ patents and mitigate NDA challenges/risks.

2. Synergies with two (or more) teams working in unison to implement functional goals.  However, this may also be accomplished through two organizations with tightly coupled teams.  Engaged, strong team leadership must be in place to eliminate corp-to-corp blockers and ensure deliverables.

There are also advantages to two separate organizations: one building the software layer, the other the vehicle hardware implementation, i.e. sensors:

1. Separating the Autonomous Vehicle Hardware from the AI Software enables multiple, strong, alternate corporate perspectives.  These perspectives allow for a stronger, yet balanced, approach to implementation.

2. The AI Software for autonomous vehicles, if contractually allowed, may work with multiple vehicle brands, implementing similar capabilities.  Vehicles would then have capabilities/innovations shared across the car industry.  The AI Software may even become a standard for implementing autonomous vehicles across the industry.

3. Working with multiple hardware/vehicle manufacturers may enable Software APIs as a layer of implementation abstraction.  These APIs may encourage similar approaches to implementation, reduce redundancy, and be used as ‘the gold standard’ in the industry.

4. We already see commercial adoption of autonomous vehicle features such as “Auto Lane Change” and “Automatic Emergency Braking”, so it makes sense to adopt standards through 3rd party AI software Integrators/Vendors.

5. Incorporating checks and balances instills quality into both the product and the process that governs it.

In summation, car parts are typically not built in one geographic location but through a global collaboration.  Autonomous software for vehicles should likewise be externalized, in order to satisfy unbiased safety and security requirements.  A standards organization “with teeth” could orchestrate input from the industry and collectively devise “best practices” for autonomous vehicles.

Kosher ‘Like’ Certifications and Oversight of Autonomous Vehicle Implementations

Do AI Rules Engines “deliberate” any differently between rules with moral weight and those with none at all?  Rhetorical..?

The ethics that will explicitly and implicitly be built into implementations of autonomous vehicles involve a full stack of technology and “business” input.  In addition, implementations may vary between manufacturers and countries.

In the world of Kosher Certification, there are several authorities that provide oversight into the process of food preparation and delivery.  These authorities have their own seals of approval.  In lieu of Kosher authorities, who will be playing the morality, seal-of-approval role?  Vehicle insurance companies?  Car insurance will be rewritten when it comes to autonomous cars.  Some cars may have a higher deductible, or the cost of the policy may rise, based upon the autonomous implementation.

Conditions Under Consideration:

1. If the autonomous vehicle is in a position of saving a single life inside the vehicle by killing one or more people outside the vehicle, what will the autonomous vehicle do?

1.1 What happens if the passenger in the autonomous vehicle is a child/minor?  Does the rule execution change?

1.2 What if the outside party is a procession, a condensed population of people?  Will the decision change?

The more sensors, the more input to the decision process.
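To frame the question rather than answer it, here is a deliberately simplified sketch of how conditions 1, 1.1, and 1.2 above might enter a rules engine as weights. The weight values, field names, and outcomes are arbitrary assumptions, not a proposal for how such a decision should actually be made:

```python
# Toy moral-weighting sketch; every number here is an invented assumption.
def evaluate(scenario):
    occupant_risk = scenario["occupants"]
    outside_risk = scenario["outside_people"]
    # Condition 1.1: a child/minor occupant changes the rule's weighting.
    if scenario.get("occupant_is_minor"):
        occupant_risk *= 1.5
    # Condition 1.2: a procession / condensed crowd raises outside weighting.
    if scenario.get("condensed_crowd"):
        outside_risk *= 2.0
    return "protect_occupants" if occupant_risk >= outside_risk else "protect_outside"

print(evaluate({"occupants": 1, "outside_people": 1}))
print(evaluate({"occupants": 1, "outside_people": 3, "occupant_is_minor": True}))
print(evaluate({"occupants": 1, "outside_people": 1, "condensed_crowd": True}))
```

Whoever sets those multipliers is, in effect, the “Kosher-like” certifying authority the section asks about; more sensors only add inputs, they do not settle the weights.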