Category Archives: Fun

Netflix Is Testing A Way To Limit Password Sharing

Netflix is testing a way it can limit password sharing, in what could signal a notable shift in the streaming giant’s posture toward users. “Is this your account?” an on-screen notification asks some of those trying to log on with credentials from someone outside their household, according to users’ screenshots. “If you don’t live with the owner of this account, you need your own account to keep watching.” Users can then enter their own information and create an account, which comes with a 30-day free trial in certain territories. “This test is designed to help ensure that people using Netflix accounts are authorized to do so,” a company spokesperson said in a statement.

Source: Netflix Is Testing A Way To Limit Password Sharing – Deadline

Two-Factor Authentication versus Location-Based

This measure is an ineffective approach at best, and at worst a hindrance to legitimate Netflix users who travel often and take their streaming service on the road.  Many other Internet services, beyond content streaming, now implement a Two-Factor Authentication (2FA) approach.  With 2FA, a user logs into the Netflix app and is then sent an email or text message containing an authentication code.  The code is used to complete the login to the Software as a Service (SaaS).  This approach could be extended to VOD streaming services: each account “Profile” would have a defined mobile number and email address where the access code can be sent.  Only the default account profile can unlock the security details for profiles, allowing the assignment of mobile numbers and email addresses.
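The 2FA flow described above can be sketched in a few lines. This is a minimal, hypothetical sketch (none of it is a real Netflix or SaaS API); delivery of the code via the profile's SMS number or email address is assumed to happen out of band.

```python
import random
import time

class TwoFactorGate:
    """Issues and verifies short-lived login codes per account profile."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self._pending = {}  # profile_id -> (code, expiry timestamp)

    def issue_code(self, profile_id):
        """Generate a 6-digit code and record when it expires."""
        code = f"{random.randrange(0, 1_000_000):06d}"
        self._pending[profile_id] = (code, time.time() + self.ttl)
        return code  # in practice, sent to the profile's mobile number / email

    def verify(self, profile_id, submitted):
        """Accept the code once, and only before it expires."""
        entry = self._pending.get(profile_id)
        if entry is None:
            return False
        code, expiry = entry
        if time.time() > expiry or submitted != code:
            return False
        del self._pending[profile_id]  # single use
        return True
```

The key property is that only the profile's registered contact point ever sees the code, so sharing a password alone is no longer enough to log in.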

How Will Consumers React?

The initial pilot seems like a half measure at the moment. I’m not sure how Netflix will implement the location-based, “Outside Your Household” solution, because there is a legitimate use case it breaks: some subscribers actively travel, and those travelers will appear to be in various locations according to network topology. On the other hand, a multifactor authentication approach is bound to be more successful at inhibiting the “password sharing” issue, with Netflix defining and periodically reevaluating a maximum number of user profiles per account. Will this help generate more revenue for the “fledgling” streaming service, or anger an audience that may take flight to one of the many other services on offer? It’s not the cheapest streaming service in town. Let’s see.

Politics around Privacy: Implementing Facial and Object Recognition

This Article is Not…

about deconstructing the existing functionality of entire Photo Archive and Sharing platforms.

It is…

to bring awareness to the masses about corporate decisions to omit advanced capabilities for cataloguing photos, object recognition, and advanced metadata tagging.

Backstory: The Asks / Needs

Every day my family takes tons of pictures, and the pictures are bulk loaded up to the cloud using cloud storage services such as Dropbox, OneDrive, Google Photos, or iCloud.  A selected set of photos is uploaded to our favourite social networking platforms (e.g. Facebook, Instagram, Snapchat, and/or Twitter).

Every so often, I will take pause and create a photobook, or print out pictures from the last several months.  The kids may have a project for school that requires a print-out, e.g. a family portrait or just a picture of Mom and the kids.  In order to find these photos, I have to manually go through our collection of photographs in our cloud storage services, or identify the photos in our social network libraries.

Social Networking Platform Facebook

For as long as I can remember, the social networking platform Facebook has had the ability to tag faces in photos uploaded to the platform.  There are restrictions, such as whom you can tag, from the privacy side, but the capability exists. The Facebook platform also automatically identifies faces within photos, i.e. places a box around faces in a photo to make the person-tagging capability easier.  So, in essence, there is an “intelligent capability” to identify faces in a photo.  The Facebook platform lets you see “Photos of You”, but what seems to be missing is the ability to search for all photos of Fred Smith, a friend of yours, even if all his photos are public.  By design, it sounds fit for the purpose of the networking platform.

Auto Curation

  1. Automatically upload new images in bulk or one at a time to a Cloud Storage Service ( with or without Online Printing Capabilities, e.g. Photobooks) and an automated curation process begins.
  2. The Auto Curation process scans photos for:
    1. “Commonly Identifiable Objects”, such as #Car, #Clock,  #Fireworks, and #People
    2. New photos are auto-curated based on previously tagged objects and faces, so newly uploaded photos are automatically tagged.
    3. Once auto curation has run several times, and people have been manually #tagged, the auto curation process will “learn” faces. Subsequent auto curation runs should then recognize tagged people in new pictures.
  3. The Auto Curation process emails / notifies the library owners of the ingestion results, e.g. Jane Doe and John Smith photographed at Disney World on Date / Time stamp, i.e. a report of the executed ingestion and auto curation process.
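The auto curation steps above can be sketched roughly as follows. The actual object and face detection would come from a vision model; here `detect()` is a stub (an invented stand-in, not a real API) so that the tagging and face-"learning" bookkeeping can be shown end to end.

```python
def detect(photo):
    """Stand-in for a vision model: returns detected objects and face signatures."""
    return photo.get("objects", []), photo.get("faces", [])

class AutoCurator:
    def __init__(self):
        self.known_faces = {}  # face signature -> person name

    def learn(self, face_signature, name):
        """Manual #tagging teaches the curator a face."""
        self.known_faces[face_signature] = name

    def curate(self, photo):
        """Tag commonly identifiable objects and any recognized people."""
        objects, faces = detect(photo)
        tags = [f"#{obj}" for obj in objects]
        tags += [f"#{self.known_faces[f]}" for f in faces if f in self.known_faces]
        photo["tags"] = tags
        return tags
```

Unrecognized faces are simply skipped until someone manually tags them, at which point later runs pick them up.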

Manual Curation

After the upload and auto curation process, optionally, it’s time to manually tag people’s faces, and any ‘objects’ you would like to track; e.g. a car aficionado may #tag a vehicle’s make/model with additional descriptive tags.  Using the photo curator function on the Cloud Storage Service, any “objects” in the photo can be tagged using Rectangle or Lasso Select.

Curation to Take Action

Once photo libraries are curated, the library owner(s) can:

  • Automatically build albums based on one or more #tags
  • Smart Albums automatically update, e.g. after ingestion and Auto Curation.  Albums are tag sensitive and update with new pics that contain certain people or objects.  The user / librarian may dictate the logic for tags.
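A Smart Album of this kind is, at its core, a tag filter that is re-evaluated after each ingestion. A minimal sketch, assuming each photo carries the list of #tags produced by curation:

```python
def smart_album(photos, required_tags):
    """Return the photos carrying every tag the album rule demands."""
    required = set(required_tags)
    return [p for p in photos if required <= set(p["tags"])]
```

More elaborate librarian-defined logic (any-of, exclusions) would just swap the set comparison for a richer predicate.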

Where is this Functionality??

Why are many major companies not implementing facial (and object) recognition?  Google and Microsoft certainly have the capability and scale to produce the technology.

Is it possible Google and Microsoft are subject to more scrutiny than a Shutterfly?  Do privacy concerns of the moment leave others to become the trailblazers in this area?

Amazon’s Alexa vs. Google’s Assistant: Same Questions, Different Answers

Excellent article by  .

Amazon’s Echo and Google’s Home are the two most compelling products in the new smart-speaker market. It’s a fascinating space to watch, for it is of substantial strategic importance to both companies as well as several more that will enter the fray soon. Why is this? Whatever device you outfit your home with will influence many downstream purchasing decisions, from automation hardware to digital media and even to where you order dog food. Because of this strategic importance, the leading players are investing vast amounts of money to make their product the market leader.

These devices have a broad range of functionality, most of which is not discussed in this article. As such, it is a review not of the devices overall, but rather simply their function as answer engines. You can, on a whim, ask them almost any question and they will try to answer it. I have both devices on my desk, and almost immediately I noticed something very puzzling: They often give different answers to the same questions. Not opinion questions, you understand, but factual questions, the kinds of things you would expect them to be in full agreement on, such as the number of seconds in a year.

How can this be? Assuming they correctly understand the words in the question, how can they give different answers to the same straightforward questions? Upon inspection, it turns out there are ten reasons, each of which reveals an inherent limitation of artificial intelligence as we currently know it…


Addendum to the Article:

As someone who has worked with Artificial Intelligence in some shape or form for the last 20 years, I’d like to throw in my commentary on the article.

  1. Human Utterances and their Correlation to Goal / Intent Recognition.  There are innumerable ways to ask for something you want.  The ‘ask’ is a ‘human utterance’ which should trigger the ‘goal / intent’ of the knowledge the person is requesting.  AI chatbots, i.e. digital agents, maintain a table of these utterances, which all roll up to a single goal.  Hundreds of utterances may be supplied per goal.  In fact, Amazon has a service, Mechanical Turk, the “Artificial Artificial Intelligence”, through which you may “Ask workers to complete HITs – Human Intelligence Tasks – and get results using Mechanical Turk”.  It boasts access to a global, on-demand, 24 x 7 workforce that can get thousands of HITs completed in minutes.  There are also ways in which the AI digital agent may ‘rephrase’ what it considers closely related utterances.  Companies like IBM benchmark against human-level recognition, with accurate comprehension of roughly 95% of the words in a given conversation.  On March 7, IBM announced it had become the first to home in on that benchmark, having achieved a 5.5% word error rate.
  2. Algorithmic ‘weighted’ Selection versus Curated Content.  It makes sense, based on how these two companies ‘grew up’, that Amazon relies on curated content acquisitions such as Evi, a technology company which specialises in knowledge base and semantic search engine software.  Its first product was an answer engine that aimed to directly answer questions on any subject posed in plain English text, accomplished using a database of discrete facts.  “Google, on the other hand, pulls many of its answers straight from the web. In fact, you know how sometimes you do a search in Google and the answer comes up in snippet form at the top of the results? Well, often Google Assistant simply reads those answers.”  Truncated answers can equate to incorrect answers.
  3. Instead of a direct Q&A style approach, where a human utterance (question) triggers an intent / goal [answer], there is a process by which ‘clarifying questions’ may be asked by the AI digital agent.  A dialog workflow may disambiguate the goal by narrowing down what the user is looking for.  This disambiguation process is a common technique in human interaction, and is represented in a workflow diagram with logic decision paths.  The technique may require human guidance, and is prone to bias, error, and additional overhead for content curation.
  4. Who are the content curators for knowledge, providing ‘factual’ answers and/or opinions?  Are the curators ‘self-proclaimed’ Subject Matter Experts (SMEs), people credentialed with degrees in History, or IT / business analysts making the content decisions?
  5. Answers to questions requesting opinionated information may vary greatly between AI platforms, and even between questions within the same AI knowledge base.  Opinions may offend, be intentionally biased, and sour the AI / human experience.
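The utterance table from point 1 can be illustrated with a toy intent matcher. This is a sketch only: `difflib`'s string similarity stands in for the statistical matching a production NLU engine would use, and the utterances and goal names are invented for illustration.

```python
import difflib

# Many phrasings roll up to a single goal / intent.
UTTERANCES = {
    "how many seconds are in a year": "seconds_per_year",
    "number of seconds in a year": "seconds_per_year",
    "what time is it": "current_time",
}

def resolve_intent(utterance, threshold=0.6):
    """Map a human utterance to the closest registered goal, if close enough."""
    match = difflib.get_close_matches(
        utterance.lower(), UTTERANCES, n=1, cutoff=threshold
    )
    return UTTERANCES[match[0]] if match else None
```

When no utterance clears the threshold, a real agent would fall back to the clarifying-question workflow described in point 3 rather than guessing.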

2016 Olympics Ratings are Down? Don’t Blame Streaming!

The 2016 Olympic opening ceremonies had just started, and I thought briefly about events I wanted to see.  I’m not a huge fan of the Olympics mostly because of the time commitment.  However, if I happen to be in front of the TV when the events are on, depending upon the event, I’m happy to watch, and can get drawn in easily.

As the Olympics unfolded, I caught a few minutes of an event here and there, just by happening to be in front of a TV.  Searching for any particular event never crossed my mind, even with the ease and power of search engines like Bing and Google.  The widgets built into the search engines’ results, showing Olympic standings inline with other results, were a great time saver.

However, why oh why didn’t the broadcasting network NBC create a calendar of Olympic 2016 events that could easily be imported into either Google Calendar or Microsoft Outlook?  Even Star Trek fans are able to add a calendar of Star Dates to their Google Calendar.

Olympic ratings are hurting?  Any one of these organizations could have created a shared calendar for all or a subset of Olympic  events. Maybe you just want a calendar that shows all the aquatic events?

Olympic team sponsors from soda to fast food: why oh why did you paint your consumer goods with pictures of javelin throwers and swimmers, but not put a QR code on the side of your containers directing consumers to your sponsored team’s calendar schedule, importable into Google Calendar or Microsoft Outlook?

If sponsors, or the broadcasting network NBC, had created these shareable calendars, they would now have entered the personal calendars of the consumer.  A calendar-entry pop-up may not only display which competition is currently being fought; the body of the event may also contain [URL] links to stream the event live, along with links to each team player’s stats and other interesting facts relating to the event.
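A shareable calendar of this kind is just an iCalendar (.ics) file, which both Google Calendar and Outlook can import. A minimal sketch; the event below is illustrative, not a real 2016 schedule entry:

```python
def make_ics(events):
    """Render a list of (summary, start, end, url) tuples as an .ics feed."""
    lines = ["BEGIN:VCALENDAR", "VERSION:2.0", "PRODID:-//olympics-demo//EN"]
    for summary, start, end, url in events:
        lines += [
            "BEGIN:VEVENT",
            f"SUMMARY:{summary}",
            f"DTSTART:{start}",   # UTC timestamps, e.g. 20160810T140000Z
            f"DTEND:{end}",
            f"URL:{url}",         # could point at a live stream of the event
            "END:VEVENT",
        ]
    lines.append("END:VCALENDAR")
    return "\r\n".join(lines)
```

A sponsor could host the generated feed at a stable URL and print that URL (or a QR code for it) on packaging; subscribers' calendars would then refresh as the schedule changes.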

Also, if a team sponsor is the one creating the custom calendar for the Olympic events, like USA Swimming’s sponsor Marriott, the streaming live video events may now be controlled by the sponsor; yes, all advertising during the streaming session would be controlled by the sponsor.  All Marriott!  The links in the team sponsor’s calendar entries may not only point to their own streams of the live events, but include any feature-rich, relevant related content.

There is the small matter of broadcast licensing, Olympic Broadcasting Services (OBS), and broadcaster exclusivity, but hey, everything is negotiable.  I’m not sure traditional broadcasting rules should apply in a world of video streaming.

For all the millions sponsors spend, an IT project like this would cost a fraction of their advertising budget and add significant ROI; it boggles the mind why every sponsor isn’t out there doing this, or something similar, right now.  The tech is relatively inexpensive and readily available, so why not now?  If you know of any implementations, please drop me a note.

One noted exception: the “Google app” [for the iPhone] leverages alerts for all types of things, from a warning about traffic conditions for your ride home to the start of the Women’s Beam Gymnastics Olympic event.  Select the alert, and it opens a ‘micro’ portal with the people competing in the event and a detailed list of athlete profiles, including picture, country of origin, and medals won.  There is also a tab showing the event’s future schedule.

The Race Is On to Control Artificial Intelligence, and Tech’s Future

Amazon, Google, IBM and Microsoft are using high salaries and games pitting humans against computers to try to claim the standard on which all companies will build their A.I. technology.

In this fight — no doubt in its early stages — the big tech companies are engaged in tit-for-tat publicity stunts, circling the same start-ups that could provide the technology pieces they are missing and, perhaps most important, trying to hire the same brains.

For years, tech companies have used man-versus-machine competitions to show they are making progress on A.I. In 1997, an IBM computer beat the chess champion Garry Kasparov. Five years ago, IBM went even further when its Watson system won a three-day match on the television trivia show “Jeopardy!” Today, Watson is the centerpiece of IBM’s A.I. efforts.

Today, only about 1 percent of all software apps have A.I. features, IDC estimates. By 2018, IDC predicts, at least 50 percent of developers will include A.I. features in what they create.

Source: The Race Is On to Control Artificial Intelligence, and Tech’s Future – The New York Times

The next “tit-for-tat” publicity stunt should most definitely be a battle with robots, exactly like BattleBots, except…

  1. Use A.I. to consume vast amounts of video footage from previous bot battles, while identifying key elements of bot design that gave a bot the ‘upper hand’.  From a human cognition perspective, this exercise may be subjective. The BattleBot scoring process can play a factor in 1) conceiving designs, and 2) defining ‘rules’ of engagement.
  2. Use A.I. to produce BattleBot designs for humans to assemble.
  3. Autonomous battles, bot on bot, based on Artificial Intelligence battle ‘rules’ acquired from the input and analysis of video footage.

Aerial Photography Communities Aligned by Interest, Broadcast in Realtime

Although I fail to see the excitement and mass appeal of aerial drone use, the hobby has taken off on the tail end of military UAVs.  Just like the stationary 24/7 webcams, and the web sites that catalog those cams, drone networks or communities may spawn entirely new interest groups.

Do you have a drone with the ability to stream video in realtime?  You may drive a following to your stream for a multitude of reasons, e.g. location, or subject(s) of focus.  Once airborne, your drone may broadcast to a web site that tracks your drone’s latitude and longitude, as well as dynamically tagging the feed with relevant frame data.  Object recognition may scan each frame, or a sampling of frames, for ‘objects of interest’.  Objects of interest may appear to a community of enthusiasts as a ‘tag cloud’.  Users may select a tag and drill down to a list of active feeds.  Alternatively, users may bring up a map view showing the active drone flights.  The drones may also show ‘bread crumbs’ of a flight, maybe the last half hour of buffered video.  This could be just an extension of YouTube, or a new platform designed entirely around drone realtime streaming.
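The tag-cloud and drill-down ideas reduce to simple aggregation over per-frame recognition results. A sketch, where the object lists are stand-ins for real object-recognition output and the feed names are invented:

```python
from collections import Counter

def build_tag_cloud(feeds):
    """Count objects of interest across every feed's sampled frames."""
    cloud = Counter()
    for feed_id, frames in feeds.items():
        for frame_objects in frames:
            cloud.update(frame_objects)
    return cloud

def feeds_for_tag(feeds, tag):
    """Drill down: which active feeds currently show this object?"""
    return sorted(
        fid for fid, frames in feeds.items()
        if any(tag in objs for objs in frames)
    )
```

The tag cloud would be re-rendered as feeds come and go, with tag weight driven by the counts.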

Google Introduces their Cloud, Digital Asset Management (DAM) solution

Although this is a saturated space with many products, some highly recommended, I thought this idea might interest those involved in the Digital Asset Management space.  Based on the maturity of existing products, and cost, it’s up to you: build or buy.  The following may provide an opportunity for augmenting existing Google products and overlaying a custom solution.

Google products can be integrated across their suite of solutions to produce a cloud-based, secure Digital Asset Management (DAM) solution.  In this use case, the digital assets are media (e.g. videos, still images).

A Google DAM may be created by leveraging existing features of Google Plus, Google Drive, YouTube, and other Google products, as well as building / extending additional functionality, e.g. via the Google Plus API, to create a DAM solution.  An overarching custom framework weaves these products together to act as the DAM.

Google Digital Asset Management (New)

  1. A dashboard for Digital Asset Management should be created, which articulates, at a glance, where project media assets are in their life cycle, e.g. ingestion, transcoding, editing media, adding meta data, inclusion / editing of closed captions, workflow approvals, etc.
  2. Creation and maintenance of a project asset folder structure within storage, such as Google Drive for active projects and Google Cloud Storage for archived content.  Ingested content arrives in the project folders.
  3. Ability to use [Google YouTube] default encoding / transcoding functionality, or optionally leverage alternate cloud accessible transcoding solutions.
  4. A basic DAM UI may provide user interaction with the project and asset meta data.
  5. Components of the DAM should allow plug in integration with other components on the  market today, such as an ingestion solution.

Google Drive and Google Cloud Storage.  Cloud storage offers large quantities of storage, e.g. for media (video, audio), economically.

  1. Google Drive ingestion of assets may occur through an automated process, such as a drop folder within an FTP site.  The folder may be polled every N seconds by the Google DAM orchestration, or another 3rd-party orchestration product, and ingested into Google Drive.  The ingested files are placed into a project folder designated by the accompanying XML meta file.
  2. Version control of assets is implemented by Google Drive and the DAM to facilitate collaboration and approval.
  3. Distribution and publishing media to designated people and locations, such as to social media channels, may be automatically triggered by DAM orchestration polling Google Drive custom meta data changes.   On demand publishing is also achievable through the DAM.
  4. Archiving project assets to custom locations, such as Google Cloud solution, may be triggered by a project meta data status modification, or on demand through the DAM.
  5. Assets may be spawned into other assets, such as clips.  Derived child assets are correlated with the master, or parent, asset within the DAM asset meta data to trace back to origin.  This eliminates asset redundancy, enabling users to easily find related files and reuse all or a portion of an asset.
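The drop-folder ingestion in step 1 might look roughly like this. A single-pass sketch (a real orchestrator would run it every N seconds, and would write to Drive rather than a local library directory); the XML sidecar format shown is an assumption, not an established schema.

```python
import os
import shutil
import xml.etree.ElementTree as ET

def ingest_once(drop_dir, library_dir):
    """Move each media file into the project folder its XML sidecar names."""
    ingested = []
    for name in sorted(os.listdir(drop_dir)):
        if not name.endswith(".xml"):
            continue  # only sidecars drive ingestion
        meta = ET.parse(os.path.join(drop_dir, name)).getroot()
        project = meta.findtext("project")
        media = meta.findtext("media")
        dest_dir = os.path.join(library_dir, project)
        os.makedirs(dest_dir, exist_ok=True)
        shutil.move(os.path.join(drop_dir, media),
                    os.path.join(dest_dir, media))
        os.remove(os.path.join(drop_dir, name))  # sidecar consumed
        ingested.append((project, media))
    return ingested
```

The returned list is what the orchestration would report back, e.g. as the G+ ingestion notification described later.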

Google Docs

  1. Documents required to accompany each media project, such as production guidelines, may go through several iterations before they are complete.  Many of the components of a document may be static.  Google Docs may incorporate ‘Document Assembly’ technology for automation of document construction.

Google’s YouTube

  1. Editing media either using default YouTube functionality, or using third party software, e.g. Adobe suite
  2. Caption creation and editing may use YouTube or third-party software.
  3. Meta data may be added or modified according to the corporate taxonomy, through [custom] YouTube fields or directly in the Google DAM Db where the project data resides.

Google’s Google Plus +

  1. G+ project page may be used for project and asset collaboration
  2. Project team members may subscribe to the project page to receive notifications on changes, such as new sub clips
  3. Asset workflow notifications,  human and automated:
    1. Asset modification approvals (i.e. G+ API <- -> DAM Db) through custom fields in G + page
    2. Changes to assets (i.e. collaboration) notifications,
    3. [Automated] e.g. ingestion in progress, or completed updates.
    4. [Automated] Process notifications: e.g. ‘distribution to XYZ’ and ‘transcoding N workflow’.  G + may include links to assets.
  4. Google Plus for in-house, and outside org. team(s) collaboration
  5. G + UI may trigger actions, such as ingestion e.g.  by specifying a specific Google Drive link, and a configured workflow.

Google Custom Search

  1. Allows for the search of assets within a project, within all projects within a silo of business, and across entire organization of assets.
  2. Ability to find and share DAM motion pictures, still images, and text assets with individuals, groups, project teams in or outside the organization.  Google Plus to facilitate sharing.
  3. Asset meta data will, e.g., describe how the assets may be used for distribution, i.e. digital distribution rights.  Users and groups are implemented within G+; control of asset distribution may be implemented in Google Plus and/or custom Google Search.
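The rights-aware sharing in point 3 reduces to filtering search results by the asset's distribution meta data. A minimal sketch, with an invented meta data shape:

```python
def shareable_assets(assets, audience):
    """Return only assets whose rights meta data permits this audience."""
    return [a for a in assets
            if audience in a["rights"]["allowed_audiences"]]
```

In the proposed DAM, `audience` would come from the requesting user's G+ group membership, so the same search silently returns different result sets to internal staff and outside partners.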

Here is a list of DAM vendors.

Movies I’ve seen that premiered in 2014 on the Silver Screen or through digital media platforms

Here’s a list of movies I’ve seen in 2014, in no particular order.

  1. The LEGO Movie (2014)
  2. X-Men: Days of Future Past (2014) – Awesome
  3. Dawn Of The Planet Of The Apes (2014)
  4. Guardians of the Galaxy (2014) – Awesome
  5. Live Die Repeat: Edge of Tomorrow (2014) – Awesome
  6. Captain America: The Winter Soldier (2014) – Mostly Awesome
  7. I Frankenstein – just saw last night, thanks Netflix
  8. Monuments Men
  9. 300: Rise of an Empire – entertaining
  10. The Amazing Spiderman 2 – strayed from the mother ship
  11. Maleficent – well done
  12. Lucy – entertaining
  13. Teenage Mutant Ninja Turtles
  14. The Maze Runner – entertaining, and slightly thought provoking
  15. Interstellar
  16. Mockingjay Part 1 – entertaining
  17. Exodus Gods And Kings- very entertaining
  18. The Hobbit 3 – well done
  19. Night At The Museum 3
  20. The Imitation Game  –  entertaining
  21. Into the Woods – good cast
  22. Nightcrawler

There are tons of movies from this year I haven’t seen, and don’t even know exist.  Maybe, Amazon Instant, Netflix, etc. will help make me aware of the movies I missed.  Maybe even post these movies for viewing soon.

Source list provided by Rotten Tomatoes and Wild about Movies.

Any others you can recommend?