Machine learning, or AI induction, learns by correlating data points and then makes proactive decisions. Typically the AI engine needs the data — in this case web sites, blogs, etc. — to have consistent metadata: information that describes the information. The data is collected and processed.
An enforced metadata schema across the Internet (e.g. NewsML-G2) is difficult: browsers would have to enforce it, and a standards body with a large set of Internet stakeholders would have to agree on it and implement it. Instead, this technology seems to be able to collect Internet assets, parse them, create metadata on the fly, and then, where possible, correlate data points — in the exact format the AI engine needs.
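Wavii's actual pipeline is not public, so this is only a minimal sketch of "metadata on the fly": pulling whatever structured hints a page already exposes (its `<title>` and `<meta>` tags) into a dictionary an engine could correlate on. The tag names and the example document are illustrative.

```python
from html.parser import HTMLParser


class MetaExtractor(HTMLParser):
    """Collect <title> text and <meta name=... content=...> pairs from raw HTML."""

    def __init__(self):
        super().__init__()
        self.meta = {}
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and "name" in attrs and "content" in attrs:
            self.meta[attrs["name"]] = attrs["content"]
        elif tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.meta["title"] = data.strip()


def extract_metadata(html: str) -> dict:
    """Parse one asset and return its ad-hoc metadata record."""
    parser = MetaExtractor()
    parser.feed(html)
    return parser.meta


# Hypothetical asset: a blog post with partial metadata.
doc = '<html><head><title>Post</title><meta name="author" content="Sally"></head></html>'
print(extract_metadata(doc))  # → {'title': 'Post', 'author': 'Sally'}
```

A real system would go much further — extracting entities and facts from the body text, not just declared tags — but the output shape (uniform records per asset) is the point.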
This tech may be used for anything — and I mean anything — or anyone. A machine learning engine can be fed any subject matter: a database of images, audio, or text from Google Plus posts, profiles, Android objects, or any Google product. Once a schema is in place for the metadata, the process above begins.

This AI engine's processing is ongoing, continually refining the engine's predictiveness. Induction needs a large data set to be accurate; otherwise the engine's projections may include outlier behaviors. The induction engine needs to be able to filter out the outliers and use what is within the bell curve of behaviors, eliminating false-positive trends. Google wants to, at a minimum, project predicted trends and output them in Google Plus.
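The outlier filtering described above can be sketched as a simple standard-deviation cut: keep only observations within k standard deviations of the mean. The threshold k and the sample data are assumptions; a production engine would use something more robust.

```python
import statistics


def within_bell(values, k=2.0):
    """Keep only observations within k standard deviations of the mean,
    discarding outlier behaviors before projecting a trend."""
    mu = statistics.mean(values)
    sigma = statistics.stdev(values)
    return [v for v in values if abs(v - mu) <= k * sigma]


# Hypothetical signal: daily mentions of a topic, with one outlier spike.
daily_mentions = [12, 14, 13, 15, 11, 14, 97]
print(within_bell(daily_mentions))  # → [12, 14, 13, 15, 11, 14]
```

One caveat with this single pass: the outlier itself inflates the mean and standard deviation used to reject it. Real pipelines typically iterate, or use robust statistics (median and MAD) instead.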
Google may also skew the data by purposely picking items within the bell, but not at the top of the bell (the most common range), to project what they want as the trends — e.g. for advertising.
It can even be applied to computer-recognized objects in images. Perhaps you see a friend once a week, every Thursday at or around 3. If you use Google Glass and forget to see someone, your Android device might ask: are you going to see Sally today? It's not in your calendar, and she is not in your proximity at the time you 'normally' see her.
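The "every Thursday around 3" pattern above amounts to finding the dominant (weekday, hour) slot in a history of encounters. A toy sketch, with an entirely hypothetical meeting history:

```python
from collections import Counter
from datetime import datetime


def usual_slot(meetings, threshold=0.5):
    """Return the most common (weekday, hour) among past encounters,
    but only if it accounts for at least `threshold` of the history —
    otherwise there is no 'normal' time to remind the user about."""
    slots = Counter((t.weekday(), t.hour) for t in meetings)
    slot, count = slots.most_common(1)[0]
    return slot if count / len(meetings) >= threshold else None


# Hypothetical history: Sally is usually seen on Thursdays around 15:00.
history = [
    datetime(2013, 4, 4, 15, 5),
    datetime(2013, 4, 11, 14, 58),
    datetime(2013, 4, 18, 15, 10),
    datetime(2013, 4, 25, 15, 2),
]
print(usual_slot(history))  # → (3, 15): Thursday (weekday 3), 3 pm
```

If the current time matches the returned slot and no encounter has been logged, the device could surface the "are you going to see Sally today?" prompt.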
Another case is when images are posted to Google via Glass. Once the user publishes the post, AI could analyze the clothing or jewelry objects it 'sees', perform induction on every object in Google Plus public or private photos, and predict fashion trends.
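Given object labels already recognized in photos, the trend step could be as simple as comparing label frequencies period over period and ranking by relative growth. The labels and counts here are invented for illustration.

```python
from collections import Counter


def trending_objects(labels_last_week, labels_this_week, top=3):
    """Compare object-label frequencies week over week and return the
    labels with the biggest relative growth — a crude trend signal."""
    before = Counter(labels_last_week)
    after = Counter(labels_this_week)
    growth = {label: after[label] / max(before[label], 1) for label in after}
    return sorted(growth, key=growth.get, reverse=True)[:top]


# Hypothetical recognized objects across two weeks of Glass photos.
last_week = ["scarf", "scarf", "watch", "boots"]
this_week = ["scarf", "watch", "watch", "watch", "fedora", "fedora"]
print(trending_objects(last_week, this_week))  # → ['watch', 'fedora', 'scarf']
```

The hard part, of course, is the vision model producing the labels; the induction over them is comparatively cheap.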
Google has a privacy policy that may abstract the user-specific data, letting it classify users into groups or types of people. It can then proactively publish trends before they occur, or before they are noticed by the human mind. Trends may also be geo-specific, which doesn't seem to appear yet in G+.
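One common way to "abstract the user-specific data" into groups is to drop identities, coarsen attributes, and publish only cohorts above a minimum size — a k-anonymity-style threshold. Everything here (field names, bands, the k value) is an assumption for illustration, not Google's actual method.

```python
def generalize(user):
    """Strip identity and coarsen attributes so only a cohort key remains."""
    age_band = f"{user['age'] // 10 * 10}s"   # 27 -> '20s'
    return (age_band, user["region"])


def cohorts(users, k=2):
    """Group users into cohorts and publish only cohorts with at least
    k members, so no individual stands out in the released counts."""
    groups = {}
    for u in users:
        groups.setdefault(generalize(u), []).append(u)
    return {key: len(members) for key, members in groups.items() if len(members) >= k}


# Hypothetical users; the lone 50s/NY user is suppressed by the k threshold.
users = [
    {"name": "Sally", "age": 27, "region": "SF"},
    {"name": "Bob",   "age": 24, "region": "SF"},
    {"name": "Eve",   "age": 58, "region": "NY"},
]
print(cohorts(users))  # → {('20s', 'SF'): 2}
```

Trend projections could then be computed per cohort — including geo-specific ones, since region is part of the cohort key.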
http://www.wired.com/wiredenterprise/2013/04/google-acquires-wavii/