Get me on a flight to Japan!
As a step to pacify all of the mocking around Google Glass, the current Governor of California, Jerry Brown, in conjunction with Arnold Schwarzenegger (a gag alluding to the Terminator movies), will reportedly announce later in the year that a motor unit, or police motorcycle, will use Google Glass with plate and face recognition systems to help officers identify and, if necessary, detain suspects with outstanding warrants. The specific city for this program has yet to be announced.
It is rumored that Google will offer a partner program with incentives that let a user look at an object, have Glass recognize it, and overlay or mock it up. This opens up amazing possibilities for partners and pushes sales for Google Glass and its partners.
As an example, if a user walks up to a Redbox and looks at the device, one of two things happens: if the third-party vendor is not a partner, Google will serve an advertisement for competing partner offerings, such as its own service. If breakfast cereals partner with Google Glass, then as you walk down the aisle in your local market you might see an overlay on some of the boxes: a piece of an invisible puzzle. Buy all the boxes and send in the proof of purchase and the puzzle pieces, and you are entered in a contest to win ….
Overlaying a real-world object with a virtual object has limitless potential across all types of retail. Even people walking down the street wearing Google Glass may see a tee shirt differently than you do, because a special print image and message is associated with the shirt. Advertising, contests, promotions, and goofy tee shirts are just the beginning.
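The partner-or-ad decision described above can be sketched in a few lines. This is purely illustrative: the names (PARTNERS, handle_recognized_object, the overlay strings) are assumptions, not any real Google Glass API.

```python
# Hypothetical sketch of the overlay decision: a recognized retail object
# either gets its partner's branded overlay, or a competing advertisement.
PARTNERS = {"acme_cereal": "puzzle_piece_overlay"}

def handle_recognized_object(object_id: str) -> str:
    """Return the overlay to display for a recognized retail object."""
    if object_id in PARTNERS:
        return PARTNERS[object_id]   # partner: show the branded overlay
    return "competitor_ad"           # non-partner: show a competing offer

print(handle_recognized_object("acme_cereal"))   # puzzle_piece_overlay
print(handle_recognized_object("redbox_kiosk"))  # competitor_ad
```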
Reading two unrelated posts in the Times, I thought about all the challenges we'll have with Glass and other consumer heads-up displays (HUDs) regarding privacy, theft, and other related crimes.
If someone is at the ATM behind you wearing Google Glass, can your hand gestures be recorded? How about the tones of a lock combination? What about a glimpse of a physical key, recorded and sent to a 3D printer? Imagine a Glass application that focuses on a target's eyeball: the HUD camera locks onto the eye, captures an image, then performs a reverse projection onto a blank contact lens. Or a heat- or UV-imaging HUD lens that views a smartphone or touchscreen for fingerprints as password breadcrumbs. The list goes on and on.
Some of this is future tech; some of it is not.
As we approach a brave new world where our mind's eye is for sale, who will own that portal, or jump page, to other views and other perspectives? It gets more and more expensive to see the world, and harder to travel. It sounds like Total Recall, the movie, and it is not far off from the path we're already on without even realizing it. Portals to other people's perspectives, such as Instagram, let us see life through other people's interpretations of the world; it is fascinating and alluring to us.
Once the genie is out of the bottle, it's hard to turn back. In all sincerity, a lightweight version of Google's Android OS for Glass may even be downloadable and free, since it is based on open source. Glass is super stylish, but super expensive. If you're in the mainstream you can afford it; if not, you can build your own. It's not that difficult, relatively speaking: a kit from Texas Instruments, perhaps, like the small computer kits we've already seen in the PC world for building Android and Linux machines. If you can build your own Google Glass, how fast will imitations appear? Faster, I imagine, than you can blink an eye, pun intended.
Google will make it popular and sexy; after that, there could be a flood of imitations. After all, today we can all build a knock-off Google Glass: a tiny webcam, a lightweight OS, and Bluetooth integration with your smartphone for two-way interaction, streaming, and communications. The lightweight OS could today be Linux, but who would champion the effort? Red Hat? No, they are a support and solutions group for a blend of Linux. There are hurdles Google must clear, and has in some cases, such as partnering with Verizon, who showed their own blend of HUD at the 2013 CES conference. Today we might mock and jeer people who wore glasses with a mini cam mounted on them; the device might be clunky. The idea is to make it alluring to the masses, and to iterate until it becomes an acceptable medium to the public. Once Google, the trailblazer in this endeavor, burns through the problems, it will pave the way for a massive wave of alternate choices, and the device will become a commodity. It's not just the UI: there are legal battles to be fought, privacy for example, and questions like whether it is safe to drive with them on. There need to be mainstream platforms so people take advantage and are then lured to independent platforms. Many other companies might follow, such as Amazon or other cloud-based companies, and maybe even independent web sites and mobile apps joining and integrating with APIs.
Machine learning, or AI induction, proactively learns by correlating data points and then makes a proactive decision. Typically, the AI engine needs the data (in this case web sites, blogs, etc.) to have consistent metadata, information that describes the information. The data is then collected and processed.
An enforced metadata schema across the Internet is difficult: it would have to be enforced by browsers, and a standards body with a large set of Internet stakeholders would need to agree on it and implement it, e.g. NewsML-G2. Instead, this technology seems able to collect Internet assets, parse them, create metadata on the fly, and then, where possible, correlate data points in exactly the format the AI engine needs.
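Creating metadata on the fly from pages that share no common schema can be sketched with just the standard library. This is a toy, not how a real crawler works: it simply pulls `<meta>` tags and the `<title>` into one normalized record ready for correlation.

```python
# Sketch: extract ad hoc metadata from an HTML page using only the stdlib.
from html.parser import HTMLParser

class MetaExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.record = {}          # normalized key/value metadata
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and "name" in attrs and "content" in attrs:
            self.record[attrs["name"]] = attrs["content"]
        elif tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.record["title"] = data

page = ('<html><head><title>Glass news</title>'
        '<meta name="keywords" content="glass,hud"></head>'
        '<body>...</body></html>')
p = MetaExtractor()
p.feed(page)
print(p.record)   # {'title': 'Glass news', 'keywords': 'glass,hud'}
```

Each page yields the same flat record shape regardless of its original markup, which is the point: the engine downstream sees consistent metadata even though the Internet never agreed on a schema.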
This tech may be applied to anything, and I mean anything, or anyone. A machine learning engine can be fed any subject matter: a database of images, audio, or text from Google Plus posts, profiles, Android objects, or any Google product. Once a schema is in place for the metadata, the process above begins. This AI engine processing is ongoing, continually refining the engine's predictiveness. Induction needs a large data set to be accurate, or else the engine's projections may include outlier behaviors. The induction engine needs to filter out the outliers and use what is within the bell curve of behaviors, eliminating false-positive trends. Google wants to, at a minimum, project predicted trends as output in Google Plus.
Google may also skew the data by purposely picking items within the bell but not on top of it, the most common range, to project the trends they want, e.g. for advertising.
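The bell-curve filtering described above can be sketched very simply: keep only observations within some number of standard deviations of the mean, so extreme behaviors don't skew the projected trend. The cutoff of two standard deviations is my assumption; a real engine would tune it.

```python
# Sketch of outlier filtering: drop samples outside the "bell".
from statistics import mean, stdev

def within_bell(samples, k=2.0):
    """Keep samples within k standard deviations of the mean."""
    m, s = mean(samples), stdev(samples)
    return [x for x in samples if abs(x - m) <= k * s]

data = [10, 11, 12, 10, 11, 95]   # 95 is an outlier behavior
print(within_bell(data))          # [10, 11, 12, 10, 11]
```

A second pass over the filtered list then recomputes the trend without the outlier's influence; deliberately sampling off-center of the bell, as speculated above, would just mean picking from a different slice of the same filtered range.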
It can even be applied to computer-recognized objects in images. Perhaps you see a friend once a week, every Thursday at or around 3. If you use Google Glass and forget to see that person, your Android might ask whether you are going to see Sally today, since it is not in your calendar and she is not in your proximity at the time you normally see her.
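Inferring the "every Thursday around 3" habit from a log of past encounters is straightforward to sketch. The data, the threshold, and the function name are all assumptions for illustration.

```python
# Sketch: infer a recurring (weekday, hour) slot from past sightings.
from datetime import datetime
from collections import Counter

sightings = [                       # hypothetical log: Thursdays around 3 pm
    datetime(2013, 5, 2, 15, 5),
    datetime(2013, 5, 9, 14, 55),
    datetime(2013, 5, 16, 15, 10),
    datetime(2013, 5, 23, 15, 0),
]

def usual_slot(times, min_count=3):
    """Return (weekday, hour) if enough sightings share one, else None."""
    slots = Counter((t.weekday(), t.hour) for t in times)
    slot, count = slots.most_common(1)[0]
    return slot if count >= min_count else None

print(usual_slot(sightings))        # (3, 15): Thursday, 3 pm hour
```

With the slot known, the assistant only has to check the calendar and current proximity at that hour before deciding to prompt.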
Another case: when images are posted to Google via Glass, once the user publishes the post, AI could analyze the clothing or jewelry it 'sees', perform induction on every object in Google Plus public or private photos, and predict fashion trends.
The article, although off topic, sparked an excellent exercise in interactive communications: requiring people to build to a specification using a phone and Legos. I performed a similar exercise in two executive classes. We needed to convey a small set of Lego instructions by describing each step verbally; showing the instruction picture was not an option. It was a test of how well people describe and follow instructions, an exercise in communications for both sides. In these exercises everyone was a native English speaker, and almost all of the ad hoc teams were unable to complete the objects as the final project specified.
This communications tool can be used as an icebreaker, with established teams to see how well they work together, or with ad hoc teams. Of course, the exercise has condensed time requirements and may be performed in either same-language or mixed-language teams.
In today's world of 3D model printing to exact specifications, human interactions seem less important. We are removing the human factor of communications, which may diminish our ability for cross-cultural interaction and problem solving, sociologically speaking. Ultimately, this could break down our ability to interact, and we as a species may become more xenophobic. Drastic? Maybe. It may also be an opportunity for government leaders, and for world peace. Maybe at the next United Nations summit? 🙂
This may also be an opportunity for AI language-learning engines for inductive, predictive communications, and a project to enhance Google Glass language translation, as per previous posts.
I go into classrooms at my children's schools, and they have amazing computer labs, magic boards, and so on. What I don't find is the simple program I used to understand the fundamentals of existence: giving spatial coordinates to a little turtle on an on-screen grid to make it move. It sounds simplistic to implement, and the ramifications minor, but it essentially shaped my mind in the world of logic and of objects at coordinates in a spatial grid. It was a cute, easy pilot program for first graders at an elementary school in a NYC neighborhood. Looking back, the learning theory was constructivism. I admit that constructivism is a bit more complex; it goes into "ways that people create meaning of the world through a series of individual constructs." Reflecting on my education and hobbies, I advanced in computers at a very early age, when computers were not generally available, and I believe that an application anyone could write in a day, if applied early enough in a child's development, may lay the foundations for logical spatial existence: an object that moves through spatial coordinates with a cute little turtle. Since my daughter wants a bunny rabbit for a pet, we'll replace the turtle with a bunny for her.
Now, with several companies adding computer glasses to their gadgets, one application, possibly a pilot program, may allow the next generation of first graders to be even more advanced than I was and use these glasses to move their turtles, or rabbits, in three dimensions of space instead of two. The trick is not just to physically move the object in space with your hand, which these glasses may allow; you must say the x, y, and z coordinates and plus or minus 1, 2, or 3 spaces. The object won't budge until the student says the correct spatial coordinates and moves to get the cute rabbit or turtle to the objective location. This would lay the foundation of a child's understanding of a construct in N-dimensional space. I don't know if these initial glasses will be tough enough to withstand the brute force of a first grader, but it would be absolutely necessary to introduce this type of construct recognition at an early age, so it prepares them for more advanced topics and ultimately their place in the universe, stretching all the way to college and their understanding of philosophy and existentialism. Go Google, and Microsoft, and we may have a generation that can compete globally once again.
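The exercise above can be sketched as a tiny program in the spirit of the Logo turtle: the object only moves when the child states an axis and a signed step of 1, 2, or 3. The spoken command format ("x +2") and the class name are my assumptions.

```python
# Sketch of the 3D bunny exercise: moves only on well-formed commands.
class Bunny:
    def __init__(self):
        self.pos = {"x": 0, "y": 0, "z": 0}

    def command(self, spoken: str) -> bool:
        """Parse a spoken move like 'y -1'; ignore anything malformed."""
        parts = spoken.split()
        if len(parts) == 2 and parts[0] in self.pos:
            try:
                step = int(parts[1])
            except ValueError:
                return False
            if abs(step) in (1, 2, 3):    # only +/- 1, 2, or 3 spaces
                self.pos[parts[0]] += step
                return True
        return False                       # the object won't budge

b = Bunny()
b.command("x +2")
b.command("z -1")
b.command("fly up")                        # rejected: wrong format
print(b.pos)                               # {'x': 2, 'y': 0, 'z': -1}
```

Swapping the dictionary for three axes or N would be a one-line change, which is exactly the point about constructs in N-dimensional space.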
If you have Google Project Glass with WiFi connectivity to a smartphone, plus a pair of headphones or a Bluetooth earpiece, you can have local language, dialect, and gesture translation delivered instantly to your ears. If the person you are looking at is articulating in any way, whether signing, using local gestures, or speaking in any dialect, an instant, fluent translation program reads the real-time video at its frame rate (50, 60, or 120 frames per second, FPS), applies object recognition to read lips or human movement, then plays a voice in your own local dialect through your headphones or Bluetooth.
Travel the world and experience cultures truly as the locals do, and empathize; or use the glasses in the workplace and truly eliminate discrimination against the deaf.
Object recognition may need to be applied to each video frame, or to a sampling of the real-time video feed from Google Glass.
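The sampling idea can be sketched simply: at 60 FPS, analyzing every frame is wasteful, so hand every Nth frame to the recognizer. The sample rate and the `recognize` placeholder are assumptions; a real lip- or gesture-reading model would sit behind that call.

```python
# Sketch: sample every Nth frame of a real-time feed for recognition.
FPS = 60
SAMPLE_HZ = 6                      # analyze 6 frames per second (assumption)
STRIDE = FPS // SAMPLE_HZ          # -> every 10th frame

def recognize(frame):
    return f"analyzed:{frame}"     # stand-in for lip/gesture recognition

def sample_stream(frames, stride=STRIDE):
    return [recognize(f) for i, f in enumerate(frames) if i % stride == 0]

one_second = list(range(FPS))      # stand-in for 60 captured frames
results = sample_stream(one_second)
print(len(results))                # 6 frames analyzed out of 60
```

Raising SAMPLE_HZ trades battery and bandwidth for translation latency, which is the real engineering decision hiding in that one sentence.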
Also, managers who employ people who are deaf might apply for a tax deduction, or even get the glasses for free as a tax write-off for their company. In part, thank the people who produced Mr. Holland's Opus; I watched the movie last night. Great movie! And in part, thanks to my mom, who is somewhere in the Middle East on a cruise.
Also, I can see a new wave of popular kids coming up with gestures that are instantly recognized using the glasses. Sorry, kids. Supposedly, these glasses will ship to Android Google developers for $1,500 USD by the end of the year; however, that is just rumor. If you know Java, a relatively easy programming language to pick up, the Android OS Java extensions are relatively easy as well. A small price to pay for a huge market, and maybe even a tax deduction for a small business under research and development costs. See your local small business government affairs office for more details.
It may have seemed obvious to anyone who has seen the Goggles app or heard about the Google Glass project, but with tens of thousands of Google Glasses in the marketplace, the Google Goggles app should be able to identify clothes and accessories and determine, from a fashion standpoint, what is really 'trending'. If it wasn't obvious before, the article "With Glass, Google Gives a Fashion Icon a New Toy" says Von Furstenberg (or DVF, for short) used them to document her days in the lead-up to New York Fashion Week. Is Twitter opening an office in NYC part of the beginning moves to be acquired by Google?