Artificial Intelligence: Tuning with a Content Index And Predictive Models

While reading an All Things Digital article, Artificial-Intelligence Professor Makes a Search App to Outsmart Siri, one statement stood out:

“We memorize the dictionary to read the Library of Congress,” he said. “Siri is trying to memorize the Library of Congress.”


Books commonly include a tool at the back: an index noting the pages on which each word appears. A classic rules engine is similar, 'data in a black box,' searchable within the context in which the data appears. The more content put into the black box, the more users can search on 'rules' or content; based on precedence, an action fires or the matching content is returned. Cross-referencing each line of data, or each rule, with an associated category or tag makes the Artificial Intelligence engine more efficient: correlating 'data to other data' through similar or identical tags makes the engine more intelligent. In theory, categorized or tagged content, indexed against references to the underlying data points, should fine-tune the engine.
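The book-index analogy above can be sketched in code. This is a minimal illustration, not a production design: the class and method names are my own, and it assumes documents are plain strings with author-assigned tags. One index maps terms to documents (the "page numbers"), and a second maps tags to documents so that data can be correlated to other data through shared tags.

```python
from collections import defaultdict

class ContentIndex:
    """A tagged content index: terms act like a book index,
    tags act like cross-references between entries."""

    def __init__(self):
        self.term_index = defaultdict(set)  # term -> doc ids (the book index)
        self.tag_index = defaultdict(set)   # tag  -> doc ids (cross-references)
        self.docs = {}

    def add(self, doc_id, text, tags):
        """Index a document's words and its category/tag labels."""
        self.docs[doc_id] = text
        for term in text.lower().split():
            self.term_index[term].add(doc_id)
        for tag in tags:
            self.tag_index[tag].add(doc_id)

    def search(self, term):
        """Return the documents in which a term appears."""
        return self.term_index.get(term.lower(), set())

    def related(self, doc_id):
        """Correlate data to other data via shared tags."""
        hits = set()
        for ids in self.tag_index.values():
            if doc_id in ids:
                hits |= ids
        hits.discard(doc_id)
        return hits
```

A term search returns every document containing the word, while `related` surfaces other documents carrying the same tags, the cross-referencing that, in theory, fine-tunes the engine.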

A related theory allows predictive models to produce refined searches, or rules. You can build a predictive model in which the intelligence of the user actually refines the engine: the user asks a question, and as they refine it, the model can refine the output in turn. If users are allowed to participate and tag the search output, the results can become more granular, like a refined Business Intelligence drill-down. The output of a search might contain a title, a brief summary, and tags that can be added or removed by the user, which allows for a more robust search and a better predictive model; however, you are relying on the user to a) not be malicious, and b) understand what information they are searching for within the data. If web crawlers index pages, or webmasters submit URLs with tags, the metadata tags of each page let the black box, the Artificial Intelligence rules engine, correlate the data, provided the pages are properly submitted or indexed. To most people, this is AI or Search Engine 101. Some people cheat and add pages with false metadata tags because they want their site to rank higher, where it may earn more advertising revenue.
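The user-refined output described above can be sketched as a small data structure. This is an illustrative assumption of how such results might be modeled, not any particular engine's API: each result carries a title, a brief summary, and an editable tag set, and a drill-down filter narrows the output by tag, BI-style.

```python
class TaggedResult:
    """One search result: title, brief summary, and user-editable tags."""

    def __init__(self, title, summary, tags):
        self.title = title
        self.summary = summary
        self.tags = set(tags)

    def add_tag(self, tag):
        """User adds a tag, enriching future correlation."""
        self.tags.add(tag)

    def remove_tag(self, tag):
        """User removes a tag they consider wrong or malicious."""
        self.tags.discard(tag)


def drill_down(results, tag):
    """Narrow the output by a tag, like a BI drill-down."""
    return [r for r in results if tag in r.tags]
```

Because users edit the tags directly, the granularity of `drill_down` improves as they participate, which is exactly where the trust assumptions (non-malicious users who understand what they are searching for) come in.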

There are multiple ways to counter attempts to cheat an Artificial Intelligence Content Index:

  • Hit ratio: as people search the same question over and over, the 'score' ratio accumulates, pushing the false results down the list or removing them entirely.
  • Enlist 'quality' users who are known quantities, much as Twitter 'certifies' certain users. People with reputations and certifications in the field, e.g. professors and statisticians, could apply for a relatively unbiased certification status qualifying them to 'enhance' tags and improve your result outputs.
  • Enlist users who derive revenue if their 'hit ratio' score delta increases by some factor N. Their tags are classified as unverified, but these users are monetarily motivated to increase other people's probability of finding what they are looking for; the tags become qualified as the results attract users to their content. If searchers are using a browser the search engine company owns, such as Chrome, a small plug-in can appear and ask, 'Was this what you were looking for, yes or no?'
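The first countermeasure, hit-ratio scoring fed by the yes/no plug-in prompt, can be sketched as follows. The class, the thresholds, and the feedback mechanism are illustrative assumptions, not a description of any real engine: each yes/no answer updates a per-result ratio, and results that are shown often but rarely confirmed are pushed down or dropped.

```python
class HitRatio:
    """Track yes/no feedback per result and rank by confirmation ratio."""

    def __init__(self):
        self.hits = {}   # result id -> 'yes' count
        self.shows = {}  # result id -> times shown

    def record(self, result_id, was_helpful):
        """One plug-in answer: 'was this what you were looking for?'"""
        self.shows[result_id] = self.shows.get(result_id, 0) + 1
        if was_helpful:
            self.hits[result_id] = self.hits.get(result_id, 0) + 1

    def score(self, result_id):
        """Confirmed hits divided by times shown."""
        shown = self.shows.get(result_id, 0)
        return self.hits.get(result_id, 0) / shown if shown else 0.0

    def rank(self, result_ids, min_score=0.1, min_shows=5):
        """Drop results shown often but rarely confirmed (likely false
        metadata), then order the survivors by score, best first."""
        keep = [r for r in result_ids
                if self.shows.get(r, 0) < min_shows
                or self.score(r) >= min_score]
        return sorted(keep, key=self.score, reverse=True)
```

New results get a grace period (`min_shows`) before the filter applies, so a page is only removed once enough searchers have answered 'no', which is the same repeated-question effect the hit-ratio bullet describes.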
