Popular Tweets from January and February 2018

Tweet Activity Analytics

Leveraging Twitter’s Analytics, I’ve extracted the Top Tweets from the last 57-day period (January 1 until today). During that period, the account earned 46.8K impressions.

Summary:

  • 61 Link Clicks
  • 27 Retweets
  • 86 Likes
  • 34 Replies
Top Tweets for January and February 2018

Information Architecture (IA): the Classification of Information (Part 2)

Karl Smith wrote this timeless post on Information Architecture, and it is still relevant today. Below is an excerpt of his article that I found relevant to the foundations of IA.


To Each His Own

Different groups of individuals have a very specific context of use when looking for content: the descriptions they use and understand to find it, and their underlying purpose in doing so. In this case, they will each require a separate structure around an entity and may require their own version of the taxonomy.

Atomic Unit of Information

Define ‘What is the smallest component of viable (useful) information?’ and use that to model the information system. I have worked with several huge education providers and universities, and the question I ask is ‘What is a course?’:

  • A course has a title
  • A course has duration, with a start and an end
  • A course has a subject
  • A course has a level
  • A course has prerequisites
  • A course has an outcome, which leads to options
  • A course has a delivery mechanism

I also ask, ‘Who is a student?’, ‘Who is a tutor?’, ‘What is an outcome?’, even ‘What is a college?’. If a course has a regular location, then this creates a second set of entities:

  • A location has an address, telephone number, email address
  • A location has facilities
  • A location has transportation links
  • A location has a community
  • A location has accommodation

And it goes on and on; this is Information Architecture 101.

Source: Information Architecture (IA) the classification of information Part 2 – Karl Smith
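
To make the entity modelling concrete, here is a minimal sketch of the Course and Location entities as plain data types. The field names simply mirror the bullet lists above; it is an illustration, not a schema Smith prescribes.

```python
# Illustrative data model for the entities in the excerpt above.
# Field names mirror the bullet lists; this is a sketch, not a
# prescribed IA schema.
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class Location:
    address: str
    telephone: str
    email: str
    facilities: List[str] = field(default_factory=list)
    transport_links: List[str] = field(default_factory=list)
    community: str = ""
    accommodation: str = ""

@dataclass
class Course:
    title: str
    subject: str
    level: str
    start: date                      # duration has a start and an end
    end: date
    prerequisites: List[str] = field(default_factory=list)
    outcomes: List[str] = field(default_factory=list)  # lead to options
    delivery_mechanism: str = "classroom"
    location: Optional[Location] = None  # the regular location, if any
```

Each group of users could then layer its own taxonomy over these shared entities without changing the underlying model.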

Microsoft Productivity Suite – Content Creation, Ingestion, Curation, Search, and Repurpose

Auto Curation: AI Rules Engine Processing

There are, of course, 3rd party platforms that perform very well, are feature-rich, and are agnostic to all file types. For example, within a very short period of time, at low cost, and possibly with a few plugins, a WordPress site can be configured and deployed to suit your Digital Asset Management (DAM) needs. The long-term goal is to apply techniques such as Auto Curation to any/all files, leveraging an ever-growing intelligent taxonomy: a taxonomy built on user-defined labels/tags, as well as an AI rules engine with ML techniques. OneDrive, as a cloud storage platform, may bridge the gap between JUST cloud storage and a DAM.

Ingestion and Curation Workflow

Content Creation Apps and Auto Curation

  • The ability for Content Creation applications, such as Microsoft Word, to capture not only the user-defined tags but also the context of the tags relating to the content.
    • When ingesting a Microsoft PowerPoint presentation, after consuming the file, an Auto Curation process can extract “reusable components” of the file, such as the slide header/name and the correlated content, such as a table, chart, or graphics (a sketch follows this list).
    • Ingesting Microsoft Excel and Auto Curation of Workbooks may yield “reusable components” stored as metadata tags, and their correlated content, such as chart and table names.
    • Ingesting and Auto Curation of Microsoft Word documents may build a classic index of the most frequently occurring words, augmenting the tags the user manually defined in the file.
    • Ingestion of Photos [and Videos] into an Intelligent Cloud Storage Platform, during the Auto Curation process, may identify commonly identifiable objects, such as trees or people.  These objects would be automatically tagged through the Auto Curation process after Ingestion.
  • The ability to extract a file’s metadata (objects and text tags) and store it in a standard format that DAMs, or Intelligent Cloud Storage Platforms with file and metadata search capabilities, can read.  Could OneDrive be that intelligent platform?
  • A user can search on a file title or across the manually and auto-curated metadata associated with the file.  The DAM or Intelligent Cloud Storage Platform returns both sets of search results.  “Reusable components” of files are also searchable.
    • For “reusable components” to be parsed out of the files as separate entities, a process needs to occur after Ingestion and Auto Curation.
  • In Content Creation applications, user-entry tag/text fields should have “drop-down” access to the search index populated with automatically and manually created tags.
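
As a rough illustration of the PowerPoint case above, the sketch below pulls slide titles and embedded tables/charts out of a .pptx file using the open-source python-pptx library. The Component record and auto_curate_pptx() helper are hypothetical names invented for this example; nothing here is an existing Microsoft or OneDrive API.

```python
# A minimal sketch of extracting "reusable components" from a
# PowerPoint file with python-pptx (pip install python-pptx).
from dataclasses import dataclass
from pptx import Presentation

@dataclass
class Component:
    slide_number: int
    slide_title: str
    kind: str  # "table" or "chart" in this toy example

def auto_curate_pptx(path: str) -> list:
    """Record each table/chart with its slide header as metadata."""
    components = []
    prs = Presentation(path)
    for number, slide in enumerate(prs.slides, start=1):
        title = slide.shapes.title.text if slide.shapes.title else ""
        for shape in slide.shapes:
            if shape.has_table:
                components.append(Component(number, title, "table"))
            elif shape.has_chart:
                components.append(Component(number, title, "chart"))
    return components
```

A real pipeline would push these records into the platform’s search index so the components are findable independently of the deck.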

Auto Curation and Intelligent Cloud Storage

  • The intelligence of Auto Curation should be built into the Cloud Storage Platform, e.g. potentially OneDrive.
  • At a minimum, auto curation should update the cloud storage platform indexing engine to correlate files and metadata.
  • Auto Curation is the ‘secret sauce’ that “digests” the content to automatically build the search engine index, which contains identified objects (e.g. tags and text, or coordinates).
    • Auto Curation may leverage a rules engine (AI) and apply user-configurable rules, such as “keyword density” thresholds (a toy example follows this list).
    • Artificial Intelligence, Machine Learning rules may be applied to the content to derive additional labels/tags.
  • If leveraging version control of the intelligent cloud storage platform, each iteration should “re-index” the content, and update the Auto Curation metadata tags.  User-created tags are untouched.
  • If no user-defined labels/tags exist upon ingestion, the user may be prompted for tags.
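
To make the “keyword density” rule concrete, here is a toy sketch that tags a document with any word whose relative frequency crosses a configurable threshold. The 1% default and the stop-word list are arbitrary assumptions, not a documented platform feature.

```python
# Toy "keyword density" rule: tag any word whose share of the
# document's words exceeds a user-configurable threshold.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "for"}

def keyword_density_tags(text: str, threshold: float = 0.01) -> list:
    """Return candidate tags: words above `threshold` of all words."""
    words = [w for w in re.findall(r"[a-z']+", text.lower())
             if w not in STOPWORDS]
    total = len(words) or 1
    return [word for word, count in Counter(words).items()
            if count / total >= threshold]
```

A real rules engine would chain many such rules and feed the resulting tags into the indexing step described above.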

Auto Curation and “3rd Party” Sources

Sources such as a Twitter feed are not currently incorporated into an Intelligent Cloud Storage platform. OneDrive, or another intelligent cloud storage offering, could import feeds from 3rd party sources, and each Tweet would be defined as an object that is searchable along with its metadata (e.g. likes, tags).
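
As a sketch of that idea, each imported Tweet could be stored as a small object whose text and metadata are both searchable. The Tweet type and search_tweets() helper below are hypothetical, invented for illustration.

```python
# Hypothetical model of an ingested Tweet as a searchable object.
from dataclasses import dataclass

@dataclass
class Tweet:
    tweet_id: str
    text: str
    likes: int
    tags: set

def search_tweets(feed: list, term: str) -> list:
    """Match a term against tweet text and its metadata tags."""
    term = term.lower()
    return [t for t in feed
            if term in t.text.lower()
            or any(term == tag.lower() for tag in t.tags)]
```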

Operating System, Intelligent Cloud Storage/DAM

Intelligent Cloud Storage and DAM solutions should have integrated search capabilities, so that at the OS level (mobile or desktop), content can be discovered through an OS search of the tagged metadata.

Current State

  1. OneDrive has no ability to search Microsoft Word tags (a sketch of reading those tags follows this list)
  2. The UI for all Productivity Tools must have a comprehensive and simple design for leveraging an existing taxonomy for manual tagging, and the ability to add hints for auto curation
    1. Currently, Microsoft Word has two fields to collect metadata about the file, tucked away in the “Save As” dialog.
      1. The “Save As” dialog allows a user to add tags and authors, but only in the MS Word desktop version.  The Online (Cloud) version of Word has no such option when saving to Microsoft OneDrive Cloud Storage.
  3. Auto Curation (Artificial Intelligence, AI) must inspect the MS Productivity Suite files and extract tags automatically, which does not happen today.
  4. No manual tagging or Auto Curation/Facial Recognition exists.
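
For what it’s worth, the tags a user enters in Word’s “Save As” dialog are stored inside the .docx file itself, as the `keywords` core property, so a cloud platform could index them. The sketch below reads them with the open-source python-docx library; the toy index is an assumption, not an existing OneDrive capability.

```python
# Read the "Tags" field Word stores in a .docx (the `keywords`
# core property), using python-docx (pip install python-docx).
from docx import Document

def word_tags(path: str) -> list:
    """Return the tags typed into Word's Save As dialog."""
    keywords = Document(path).core_properties.keywords or ""
    return [t.strip() for t in keywords.split(";") if t.strip()]

# A toy tag -> files index a cloud platform could maintain:
index = {}
for doc_path in ["report.docx"]:  # hypothetical file name
    for tag in word_tags(doc_path):
        index.setdefault(tag, []).append(doc_path)
```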

Politics around Privacy: Implementing Facial and Object Recognition

This Article is Not…

about deconstructing existing functionality of entire Photo Archive and Sharing platforms.

It is…

to bring awareness to the masses about corporate decisions to omit the advanced capabilities of cataloguing photos, object recognition, and advanced metadata tagging.

Backstory: The Asks / Needs

Every day my family takes tons of pictures, and the pictures are bulk loaded up to The Cloud using Cloud Storage Services, such as DropBox, OneDrive,  Google Photos,  or iCloud.  A selected set of photos are uploaded to our favourite Social Networking platform (e.g. Facebook, Instagram, Snapchat,  and/or Twitter).

Every so often, I will take pause and create either a Photobook or prints of pictures from the last several months.  The kids may have a project for school that requires printing a photo, e.g. a family portrait or just a picture of Mom and the kids.  In order to find these photos, I have to manually go through our collection of photographs from our Cloud Storage Services, or identify the photos from our Social Network libraries.

Social Networking Platform Facebook

As far as I can remember, the Social Networking platform Facebook has had the ability to tag faces in photos uploaded to the platform.  There are restrictions on the privacy side, such as whom you can tag, but the capability exists. The Facebook platform also automatically identifies faces within photos, i.e. it places a box around faces in a photo to make the person-tagging capability easier.  So, in essence, there is an “intelligent capability” to identify faces in a photo.  The Facebook platform lets you see “Photos of You”, but what seems to be missing is the ability to search for all photos of Fred Smith, a friend of yours, even if all his photos are public.  By design, it sounds fit for the purpose of the networking platform.

Auto Curation

  1. New images are automatically uploaded, in bulk or one at a time, to a Cloud Storage Service (with or without Online Printing Capabilities, e.g. Photobooks), and an automated curation process begins.
  2. The Auto Curation process scans photos for:
    1. “Commonly Identifiable Objects”, such as #Car, #Clock,  #Fireworks, and #People
    2. Based on previously tagged objects and faces, matching objects and faces in newly uploaded photos are automatically tagged.
    3. Once auto curation runs several times, and people are manually #tagged, the auto curation process will “learn” faces. Any new auto curation run should then recognize tagged people in new pictures (see the sketch after this list).
  3. The Auto Curation process emails/notifies the library owners of the ingestion results, e.g. “Jane Doe and John Smith photographed at Disney World on [date/time stamp]”; i.e. a report of the executed ingestion and auto curation process.
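
As an illustration of the face-“learning” step, the sketch below uses the open-source face_recognition library to match faces in a new photo against encodings taken from a photo the user already tagged. The known_faces store, the file names, and the tag_new_photo() helper are hypothetical.

```python
# Toy face "learning": match new photos against manually tagged ones,
# using face_recognition (pip install face_recognition).
import face_recognition

# Encodings from photos the user has already tagged manually
# (file name is hypothetical).
known_faces = {
    "Jane Doe": face_recognition.face_encodings(
        face_recognition.load_image_file("jane_tagged.jpg"))[0],
}

def tag_new_photo(path: str) -> list:
    """Return names of known people recognized in a new photo."""
    image = face_recognition.load_image_file(path)
    tags = []
    for encoding in face_recognition.face_encodings(image):
        for name, known_encoding in known_faces.items():
            if face_recognition.compare_faces([known_encoding],
                                              encoding)[0]:
                tags.append(name)
    return tags
```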

Manual Curation

After upload and the auto curation process, it’s optionally time to manually tag people’s faces and any ‘objects’ you would like to track; e.g. a car aficionado might #tag a vehicle make/model with additional descriptive tags.  Using the photo curator function on the Cloud Storage Service, the user can tag any “objects” in the photo using Rectangle or Lasso Select.

Curation to Take Action

Once photo libraries are curated, the library owner(s) can:

  • Automatically build albums based on one or more #tags
  • Smart Albums update automatically, e.g. after ingestion and Auto Curation.  Albums are tag-sensitive and update with new pics that contain certain people or objects; the user/librarian may dictate the logic for tags (a toy sketch follows).
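
A toy sketch of that tag logic: a Smart Album is just a saved filter that is re-run after each ingestion and Auto Curation pass. The Photo type and smart_album() helper are illustrative names, not a real service API.

```python
# Hypothetical Smart Album: a saved tag filter re-applied after
# each ingestion/Auto Curation pass.
from dataclasses import dataclass

@dataclass
class Photo:
    path: str
    tags: set

def smart_album(photos: list, required_tags: set) -> list:
    """Keep photos that carry every tag the librarian requires."""
    return [p for p in photos if required_tags <= p.tags]

library = [Photo("disney_01.jpg", {"Jane Doe", "Fireworks"}),
           Photo("commute.jpg", {"Car"})]
print(smart_album(library, {"Jane Doe"}))  # -> the Disney photo
```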

Where is this Functionality??

Why are many major companies not implementing facial (and object) recognition?  Google and Microsoft certainly seem to have the capability and the scale to produce the technology.

Is it possible Google and Microsoft are subject to more scrutiny than a Shutterfly?  Do privacy concerns, for the moment, leave room for others to become trailblazers in this area?

Akamai Cuts 5 Percent of Workforce as Q4 tops expectations | ZDNet

The company is cutting workers primarily in its media division as it aims to improve margins.

The media division, Akamai’s unit that speeds up Web pages (including Video streaming), saw fourth quarter revenue fall 3 percent.

Source: Akamai cuts 5 percent of workforce as Q4 tops expectations | ZDNet

Surprised?

Based on market conditions, i.e. the dissolution of Net Neutrality, companies like Akamai are primed to present attractive solutions to a bandwidth-constrained market.  Akamai has historically been a market leader in this space, along with Amazon’s CloudFront solution.  So these actions give me pause: although on the surface Akamai has market dominance in this growth area, there may be other potential impeding factors:

  • Akamai’s business operating plan needs to be retooled to compete with an ever-increasing number of competitors in a space it once dominated.
  • Projections (i.e. inside information) regarding FCC regulations that would put Akamai at a market disadvantage.  Lobbyists!