Our Mind’s Eye Stream for Sale: Who Will Own Those Portals?

As we approach a brave new world where our mind’s eye stream is for sale, who will own that portal, or the jump page to other views and perspectives?  It gets more and more expensive to see the world, and harder to travel.  It sounds like the movie Total Recall, and it’s not far off from the path we’re already on without even realizing it.  Portals to other people’s perspectives, such as Instagram, let us see life through other people’s interpretations of the world, and that is fascinating and alluring to us.

Once the genie is out of the bottle, it’s hard to turn back.  In all sincerity, a lightweight version of Google’s Android OS for Glass may even become downloadable, and free, as it is based on open source.  Glass is super stylish, but super expensive.  If you’re in the mainstream you can afford Glass; if not, you can build your own.  It’s not that difficult, relatively speaking: a kit from Texas Instruments, perhaps, such as we’ve seen in the PC world, where vendors now offer kits for building small computers running Android or Linux.  If you could build your own Google Glass, how fast would imitations appear?  Faster, I imagine, than you can blink an eye, pun intended.

Google will make it popular and sexy; after that, there could be a flood of imitations.  After all, today we could all build a knock-off Google Glass: a tiny web cam, a lightweight OS, and Bluetooth integration with your smartphone for two-way interaction, streaming, and communications.  The lightweight OS could be Linux, but who would champion the effort?  Red Hat?  No, they are a support and solutions company for their own distribution of Linux.  There are a few hurdles Google must clear, and has cleared in some cases, partnering with Verizon, who showed their own take on a HUD at the 2013 CES conference.  Today, we might mock and jeer people who wear glasses with a mini cam attached.  The device might be clunky at first; the idea is to make it alluring to the masses, and to iterate until it becomes an acceptable medium to the public.  Once Google, the trailblazer in this endeavor, burns through the problems, it will pave the way for a massive wave of alternate choices, and the technology will become a commodity.  It’s not just the UI: there are legal battles to be fought, privacy for example, and questions such as whether it is safe to drive with them on.  There need to be mainstream platforms so people take advantage, and are then lured to independent platforms.  Many other companies might follow, such as Amazon or other cloud-based companies, and maybe even independent web sites and mobile apps joining and integrating through APIs.

Google’s Acquisition of a Machine Learning Company Has a Big Impact on Your Future

Machine learning, or AI induction, learns by correlating data points, and then makes a proactive decision.  Typically, the AI engine needs the data, in this case web sites, blogs, etc., to have consistent metadata: information that describes the information.  The data is then collected and processed.

The alternative, an enforced metadata schema across the Internet, is difficult: it would need to be enforced by browsers, and a standards body with a large set of Internet stakeholders would have to decide on and implement it, e.g. NewsML-G2.

This technology seems able to collect Internet assets, parse them, create metadata on the fly, and then, where possible, correlate data points, in the exact format the AI engine needs.
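To make the idea concrete, here is a minimal sketch of deriving metadata on the fly from a raw page with no pre-agreed schema.  The keyword-frequency approach and the tiny stopword list are my own illustrative assumptions, not how the acquired technology actually works:

```python
import re
from collections import Counter

# Illustrative stopword list; a real system would use a much larger one.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "for", "on"}

def generate_metadata(html: str, top_n: int = 5) -> dict:
    """Derive crude metadata (top keywords) from a raw page on the fly."""
    text = re.sub(r"<[^>]+>", " ", html)        # strip markup
    words = re.findall(r"[a-z]+", text.lower())  # tokenize
    keywords = [w for w in words if w not in STOPWORDS and len(w) > 2]
    return {
        "keywords": [w for w, _ in Counter(keywords).most_common(top_n)],
        "word_count": len(words),
    }
```

A correlation engine could then match pages whose generated keyword sets overlap, without any site having published structured metadata itself.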

This tech may be applied to anything, and I mean anything, or anyone.   A machine learning engine can be fed any subject matter: a database of images, audio, or text from Google Plus posts, profiles, Android objects, or any Google product.  Once a schema is in place for the metadata, the process above begins.  This AI engine processing is ongoing, to keep refining the engine’s predictiveness.  The process of induction needs a large data set to be accurate, or else the engine’s projections may include outlier behaviors.  The induction engine needs to filter out the outliers and use what is within the bell curve of behaviors, thus eliminating false positive trends.  Google wants, at a minimum, to project predicted trends, output in Google Plus.
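The “within the bell curve” filtering step can be sketched with a simple z-score cutoff; the two-standard-deviation threshold is an illustrative assumption, not anything Google has published:

```python
from statistics import mean, stdev

def within_bell(values, z_max=2.0):
    """Keep observations within z_max standard deviations of the mean,
    discarding outliers so they don't skew projected trends."""
    if len(values) < 2:
        return list(values)
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return list(values)
    return [v for v in values if abs(v - mu) / sigma <= z_max]
```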

Google may also skew the data by purposely picking items within the bell, but not at the top of the bell (the most common range), to project what they want as the trends, e.g. for advertising.

It can even be applied to computer-recognized objects in images.  Perhaps you see a friend once a week, every Thursday at or around 3.  If you use Google Glass and forget to see that person, your Android might ask, “Are you going to see Sally today?  It is not in your calendar, and she is not in your proximity at the time you normally see her.”

Another case is when images are posted to Google via Glass: once the user publishes the post, AI could analyze the clothing or jewelry it ‘sees’, perform induction on every object in Google Plus public or private photos, and predict fashion trends.

Google has a privacy policy that may require abstracting away user-specific data, but it can still classify users into groups or types of people, and then proactively publish trends before they occur, or before they are noticed by the human mind.  Trends may also be geo-specific, which doesn’t seem to appear yet in G+.

http://www.wired.com/wiredenterprise/2013/04/google-acquires-wavii/

Soft Touch Mouse Brightens Up Your Life

I was looking for a unique mouse on the Internet, and it doesn’t seem to exist.  It’s probably something easy to manufacture, and it would brighten up your day for sure.  The top of the mouse, surrounding the entire body including the mouse buttons, everything except the base, is covered in a soft gel layer, soft to the touch, containing silicone.  Within the silicone are diffused LED lights, grouped into several colors, such as blue, red, green, and white or clear.  The underside of the mouse contains the movement sensor as well as a micro switch with six settings.  Five of the settings correspond to a color, and when selected, light up the gel in that color.  The sixth setting is custom, and hands control to the operating system driver for the USB or Bluetooth mouse.  On your computer, a lightweight application lets you select one of several designs or create your own; for example, you could select a pattern that is a rolling wave of lights, or a mixture of colors.  The application lets you turn each LED’s color on or off, or lasso multiple pixels for a quick design.  The user may create a static image, or create N static pictures to play as a slideshow or dynamic sequence, like the wave of colors I mentioned.  The other optimization I thought of is to have the lights listen to your sound output and act just like an equalizer, such as the one in iTunes, so the mouse reacts to the music, either in a random sequence or flashing up and down to the beat, using the custom setting.
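The “rolling wave” sequence the companion app would send to the mouse is easy to sketch.  This toy generator assumes a simple strip of LEDs and a four-color palette (my assumptions, since no such product exists yet); each frame shifts the color band by one LED, wrapping around:

```python
def wave_frames(num_leds: int, colors=("blue", "red", "green", "white")):
    """Generate one full cycle of a rolling color wave across an LED strip.
    Each frame is a list of color names, shifted one LED per frame."""
    frames = []
    for offset in range(num_leds):
        frames.append([colors[(i + offset) % len(colors)]
                       for i in range(num_leds)])
    return frames
```

A driver would simply stream these frames to the mouse over USB or Bluetooth at a fixed rate; the equalizer mode would pick frames based on audio amplitude instead of a fixed cycle.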

Someone build it, please; it sounds cool.   If you see something like it out there, please tell me, and I’d be happy to post a link for readers.

Opportunities for Viral Exposure of Amazon and Netflix Unique Content

It is completely unclear to me why Amazon and Netflix have not developed a widget that allows anyone with a blog or website to embed a video player.  Amazon Associates does have static images for movies that associates can place on their web sites, with the opportunity to earn revenue from click-throughs, as well as when a click-through turns into a sale.  Netflix could adopt a similar model; at the moment, it seems they advertise only with select partners.

What neither company has, to drive more revenue, is a video widget plugin that any blogger can embed in their site, playing a looped 30-second trailer clip.  Once clicked, the video links to a free viewing of the unique content hosted on the vendor’s site.  Viewers are obligated to rate the video if they want to qualify for a discount.  For a streaming service such as Netflix, the discount can be applied to a month of service; for a purchase of video on demand, the discount may be applied to another unique-content video.   The vendor should also track the revenue back to the blogger’s site and pay a nominal referral fee.

Hashtags Embedded in Sound Waves of Songs: Watermarks

Sound or video can be embedded with hashtag information, like a watermark in a song, and that hashtag can then serve as a link to further information: e.g. a fan site where the first 100 purchasers of the song get tickets, or backstage passes to the next concert in your area.

Hashtags embedded in the sound of a song may be used dynamically.  During a live broadcast, they could offer the listener a number to call or text to receive a prize.  Embedded in a purchased song, the information could be sent anonymously to a ratings association to improve the quality of Top 40 ratings.  The hashtags may also be used, optionally and as specified by the user, for dynamic posting of the song to your social network of preference, e.g. #NP, “now playing”.
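As a toy illustration of hiding a hashtag in audio, here is a least-significant-bit watermark over 16-bit PCM samples.  This is my own minimal sketch: real watermarking systems use robust spread-spectrum schemes that survive compression and broadcast, which LSB embedding does not:

```python
def embed_tag(samples, tag):
    """Hide an ASCII tag (e.g. '#NP') in the least-significant bits of
    16-bit PCM samples, one bit per sample, MSB of each byte first."""
    bits = [(byte >> i) & 1
            for byte in tag.encode("ascii")
            for i in range(7, -1, -1)]
    if len(bits) > len(samples):
        raise ValueError("audio too short for tag")
    out = list(samples)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite the sample's LSB
    return out

def extract_tag(samples, length):
    """Recover a length-character ASCII tag from the sample LSBs."""
    chars = []
    for c in range(length):
        byte = 0
        for b in range(8):
            byte = (byte << 1) | (samples[c * 8 + b] & 1)
        chars.append(chr(byte))
    return "".join(chars)
```

Since only the lowest bit of each sample changes, the watermark is inaudible, and a player that knows the convention can pull the tag back out and post it for the user.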

Additionally, if the consumer can automatically post the song’s unique code as a hashtag in a special music social network, fans may be able to identify and acquire unique recordings of their favorite songs, e.g. the acoustic or concert version instead of the studio version.

There is also the ability to embed watermarks of artist images in the song, such as JPEGs of artists’ signatures, concert pictures, or other special information.  Watermark signals in audio and video are not new, but they are typically used in an encrypted manner so the artist, music distributor, and production company each receive their appropriate share of the profit.  Used in this manner, information embedded in the sound waves lets users access additional fan content.


Google Apps Competes with Nvidia in the Game as a Service Market

I first saw Nvidia’s new GaaS offering at CES 2013.  It has tremendous potential for game developers and players alike.  I then had a look at the Google Apps Marketplace, and there seemed to be a hole in the product offerings: no gaming, which is a huge market.  At the moment, the Marketplace seems geared toward business and education.  Many of these applications can integrate into the Google Plus environment, such as Google Plus Hangouts, an amazing multi-user technology platform.

The integration of games seems like a logical step.  If the first product on the top-installs list has only ~600 reviews, we know it is a relatively new platform.  Also, from the trade papers, I understand Google designs its own servers, with a lot of mystery around the proprietary technology of its data centers.  That is one difference between Nvidia and Google; although if the output, resolution, and speed of the games for players, and the simplicity of the API, the programmer’s access to the high-performance hardware, are comparable, then both offerings may be competitive.  Time will tell.

Nvidia web site definition of GaaS:

NVIDIA GRID is the foundation for the ideal on-demand gaming as a service (GaaS), providing tremendous advantages over traditional console gaming systems.

  • Any-device gaming: High-quality, low-latency, multi device gaming on any PC, Mac, tablet, smartphone or TV.

  • Click-to-play simplicity: Anytime access to a library of gaming titles and saved games in the cloud. Play or continue games from any device, anywhere.

  • Less hassle: No new hardware. No complicated setup. No game discs. No digital downloads. No game installations. No game patches.

Google Glass Sportswear Improves Soccer Scoring Percentages

A Digital Eye to Watch Soccer’s Trouble Spots

This article in the New York Times had me thinking about soccer and cameras.  I then progressed to object identification with cameras, as in my previous post, and then thought about Google Glass.  If one were to fit Google Glass onto sports sunglasses, so it would be protected and stay affixed during play, that would be the first step in allowing a soccer player to predict the movement of an opposing goalie, and score.

The barriers are motion capture and per-frame focus as the player moves through the field and approaches the goal.  The scorer takes a moment’s pause, if that, and kicks or heads the ball, whatever it takes to get the score.  The idea is to be where the goalie is not.  There are so many variables for both the goalie and the opposing team: from the goalkeeper’s perspective, where are his defenders, where is the opposing force, how is the team advancing and positioning, and so on.  However, just as American teams watch old footage of their opposition’s moves and defensive weaknesses, so too can soccer players take advantage and practice, in real time, with Google Sportswear glasses.  These glasses could take hours and hours of footage, analyze it, and induce the defensive moves a goalkeeper might make against a scorer given the field positions.  There are many variables, but object and facial recognition, historical game footage, and these glasses may combine to make real-time suggestions to a soccer player in a practice session.  I am not suggesting soccer would allow these enhancements during game play, but in practice they may hone a player’s skills, just as flight simulators do for a pilot.

In addition, as future footage is incorporated, this technology may improve into a very useful tool.

Fujitsu Technology Turns Paper Into a Touchscreen

Fujitsu Develops Technology That Turns Paper Into a Touchscreen

This article is good, but the video, courtesy of YouTube/DigInfo, is extremely telling, and seems leaps and bounds ahead for the commercial sector as an advancement in interactive input technology.  This compact projection-and-camera input device is bold in its accomplishments for the consumer market.  The video shows capturing images and text by selecting the object with your finger, as well as projecting objects on a flat surface perceived in what looks like three dimensions: height, width, and depth.  The user can move an object left, right, forward, and back with a finger.  It’s a bit big, which would be my only criticism; however, it is right-sized for a first implementation.  Technologies already exist that allow the projection of and interaction with an object in three dimensions, but they are still flat interactively and project vertically, such as a device I saw at CES 2013 and mentioned in a previous post.  The goal of an input and projection technology of this nature would be to allow holographic, or circular, projection and manipulation of an object.  It sounds simple enough to expand upon this technology: create a thin-framed cube that can be expanded and retracted, where each corner intersection point contains one of these low-resolution cams, just as displayed in the Fujitsu video.  Capturing and projecting holograms is not a new technology, but it has yet to be commercialized.  Inches and goal.