Tag Archives: Digital Asset Management

Information Architecture: An Afterthought for Content Creation Solutions

Maximizing Digital Asset Reuse

Many applications that enable users to create their own content, from word processing to graphics/image creation, have typically relied upon third-party Content Management System (CMS) / Digital Asset Management (DAM) platforms to collect metadata describing assets upon ingestion.  Many of these platforms have been “stood up” to support projects/teams, either for collaboration on an existing project or for reuse of assets in other projects.  As a person constantly creating content, where do you “park” your digital resources for archiving and reuse?  Your local drive, cloud storage, or not archived at all?

Average “Jane” / “Joe” Digital Authors

If I were asked for all the content I’ve created around a particular topic or group of topics, across all my collected/ingested digital assets, finding it could be a herculean search effort spanning multiple platforms.  As an independent content creator, I may have digital assets ranging from Microsoft Word documents, Google Sheets spreadsheets, and Twitter tweets to Paint.NET (.pdn) graphics, blog posts, etc.

Capturing Content from Microsoft Office Suite Products

Many MS Office content creation products, such as Microsoft Word, have minimal capacity to capture metadata, and where the ability exists, it’s subdued in the application.  In MS Word, for example, if a user selects “Save As”, they are able to add “Authors” and Tags.  In the latest version of Microsoft Excel, the author of a Workbook is able to add Properties such as Tags and Categories.  It’s not clear how this data is utilized outside the application, for example whether the tag data is searchable after the file is uploaded/ingested into OneDrive.

Blog Posts: High Visibility into Categorization and Tagging

A blogging platform such as WordPress places the Category and Tag selection fields directly alongside the content being posted.  This UI/UX encourages a specific mentality toward the creation, categorization, and tagging of content: the structure constantly reminds the author to classify the content so others may discover and consume it.  Blog post content is created to be consumed by a wide audience of interested readers based on the tags and categories selected.

Proactive Categorization and Tagging

Perpetuate content classification through drill-down navigation of a derived Information Architecture taxonomy.  As a lightweight example, in the Tags field when editing a WordPress Post, a user starts typing a few characters and an auto-complete dropdown list appears, allowing the user to select one or more previously used tags.  This is an excellent starting point for other content creation apps.

Users creating blog posts can define a parent/child hierarchy of categories, and the author may select one or more relevant categories to be associated with the Post.

Artificial Intelligence (AI) Derived Tags

It wouldn’t be a post without mentioning AI.  Applications that enable user content creation could integrate a tool that, at a minimum, automatically derives an “index” of words, or tags.  The way in which this “intelligent index” is derived may be based upon the following (a minimal sketch appears after the list):

  • the number of times a word occurs
  • mention of words in a particular context
  • references to the same word(s) or phrases in other content
    • defined by the same author, and/or across the platform.
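
As a rough illustration of how such an index might be derived, the sketch below counts word occurrences in a document and keeps the most frequent terms as candidate tags. The stop-word list and the top-10 cutoff are arbitrary assumptions, not a defined specification.

```python
import re
from collections import Counter

# Minimal sketch: derive candidate tags from word frequency.
# The stop-word list and the top_n cutoff are arbitrary assumptions.
STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "for", "on", "with"}

def derive_candidate_tags(text: str, top_n: int = 10) -> list[str]:
    words = re.findall(r"[a-zA-Z']+", text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS and len(w) > 2)
    return [word for word, _ in counts.most_common(top_n)]

if __name__ == "__main__":
    sample = "Digital asset management relies on metadata. Metadata tags make assets searchable."
    print(derive_candidate_tags(sample))  # 'metadata' ranks first
```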

This intelligently derived index should be made available to any platform that ingests content from OneDrive, SharePoint, Google Docs, etc.  These DAMs (or Intelligent Cloud Storage platforms) can leverage this information for searches across the platforms.

Easy to Retrieve the Desired Content, and Repurpose It

Many content creation applications heavily rely on “Recently Accessed Files” within the app.  If the Information Architecture/Taxonomy hierarchy were presented in the “File Open” section, and a user could drill down on select Categories/Subcategories (and/or tags), it might be easier to find the desired content.

All Eyes on Content Curation: Creation to Archive
  • Content creation products should all focus on the collection of metadata at the time of content creation.
  • Following the blog-posting methodology, the creation of content should happen alongside the metadata tagging.
  • Taxonomy searches (categories and tags with hierarchy) should be available from within the content creation applications and from the operating system level, the “original” Digital Asset Management solution (DAM), e.g. MS Windows, macOS.

 

Popular Tweets from January and February 2018

Tweet Activity Analytics

Leveraging Twitter’s Analytics, I’ve extracted the Top Tweets from the last 57-day period (Jan 1 until today).   During that period, 46.8K impressions were earned.

Summary:

  • 61 Link Clicks
  • 27 Retweets
  • 86 Likes
  • 34 Replies
Top Tweets for January and February 2018

Microsoft Productivity Suite – Content Creation, Ingestion, Curation, Search, and Repurpose

Auto Curation: AI Rules Engine Processing

There are, of course, third-party platforms that perform very well, are feature rich, and are agnostic to all file types.  For example, within a very short period of time, at low cost, and possibly with a few plugins, a WordPress site can be configured and deployed to suit your Digital Asset Management (DAM) needs.  The long-term goal is to apply techniques such as Auto Curation to any/all files, leveraging an ever-growing intelligent taxonomy built on user-defined labels/tags, as well as an AI rules engine with ML techniques.   OneDrive, as a cloud storage platform, may bridge the gap between JUST cloud storage and a DAM.

Ingestion and Curation Workflow

Content Creation Apps and Auto Curation

  • The ability for Content Creation applications, such as Microsoft Word, to capture not only the user-defined tags but also the context of the tags relating to the content.
    • When ingesting a Microsoft PowerPoint presentation, after consuming the file, an Auto Curation process can extract “reusable components” of the file, such as the slide header/name and the correlated content, such as a table, chart, or graphics (see the sketch after this list).
    • Ingesting Microsoft Excel and Auto Curation of Workbooks may yield “reusable components” stored as metadata tags, and their correlated content, such as chart and table names.
    • Ingestion and Auto Curation of Microsoft Word documents may build a classic index of the most frequently occurring words, and augment the manually user-defined tags in the file.
    • Ingestion of Photos [and Videos] into an Intelligent Cloud Storage Platform, during the Auto Curation process, may identify commonly identifiable objects, such as trees or people.  These objects would be automatically tagged through the Auto Curation process after Ingestion.
  • Ability to extract the content file metadata (objects and text tags) and store it in a standard format consumable by DAMs, or Intelligent Cloud Storage Platforms with file and metadata search capabilities.  Could OneDrive be that intelligent platform?
  • A user can search on a file title or across the manually and auto-curated metadata associated with the file.  The DAM or Intelligent Cloud Storage Platform provides both sets of search results.   “Reusable components” of files are also searchable.
    • For “Reusable Components” to be parsed out of files as separate entities, a process needs to occur after Ingestion and Auto Curation.
  • In Content Creation applications, user-entered tag/text fields should have “drop-down” access to the search index populated with automatically and manually created tags.
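
As one possible illustration of extracting “reusable components”, the sketch below uses the python-pptx library (an assumption on my part, not a feature of any Office product) to pull slide titles and table cell text out of a presentation as candidate metadata; the file name is hypothetical.

```python
# Minimal sketch: extract slide titles and table text from a .pptx file
# as candidate "reusable component" metadata. Assumes the python-pptx
# library; the file name below is hypothetical.
from pptx import Presentation

def extract_components(path: str) -> dict[str, list[str]]:
    prs = Presentation(path)
    components = {"slide_titles": [], "table_text": []}
    for slide in prs.slides:
        title = slide.shapes.title
        if title is not None and title.text:
            components["slide_titles"].append(title.text)
        for shape in slide.shapes:
            if shape.has_table:  # graphic frames holding tables
                for row in shape.table.rows:
                    for cell in row.cells:
                        if cell.text:
                            components["table_text"].append(cell.text)
    return components

if __name__ == "__main__":
    print(extract_components("quarterly_review.pptx"))  # hypothetical file
```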

Auto Curation and Intelligent Cloud Storage

  • The intelligence of Auto Curation should be built into the Cloud Storage Platform, e.g. potentially OneDrive.
  • At a minimum, auto curation should update the cloud storage platform indexing engine to correlate files and metadata.
  • Auto Curation is the ‘secret sauce’ that “digests” the content to automatically build the search engine index, which contains identified objects (e.g. tags, text, or coordinates)
    • Auto Curation may leverage a rules engine (AI) and apply user-configurable rules such as “keyword density” thresholds (a minimal sketch follows this list)
    • Artificial Intelligence, Machine Learning rules may be applied to the content to derive additional labels/tags.
  • If leveraging version control of the intelligent cloud storage platform, each iteration should “re-index” the content, and update the Auto Curation metadata tags.  User-created tags are untouched.
  • If no user-defined labels/tags exist, upon ingestion, the user may be prompted for tags
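
A minimal sketch of one such user-configurable rule, a “keyword density” threshold: a tag is applied only when a keyword accounts for at least a given share of the document’s words. The 2% default and the keyword list are illustrative assumptions.

```python
# Minimal sketch of a user-configurable "keyword density" rule.
# The 2% threshold and the keyword list are illustrative assumptions.
def keyword_density(text: str, keyword: str) -> float:
    words = text.lower().split()
    return words.count(keyword.lower()) / len(words) if words else 0.0

def auto_tags(text: str, keywords: list[str], threshold: float = 0.02) -> list[str]:
    return [kw for kw in keywords if keyword_density(text, kw) >= threshold]

# Example: the document is tagged "metadata" but not "taxonomy".
print(auto_tags("metadata drives search and metadata tags classify assets",
                ["taxonomy", "metadata"]))
```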

Auto Curation and “3rd Party” Sources

In the context of sources such as a Twitter feed, feeds are not currently incorporated into an Intelligent Cloud Storage platform.  OneDrive, as Intelligent Cloud Storage, may import feeds from third-party sources, and each Tweet would be defined as an object searchable along with its metadata (e.g. likes, tags).
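
For illustration, each imported Tweet might be stored as a small metadata object like the one sketched below; the field names are assumptions, not an existing OneDrive or Twitter schema.

```python
# Minimal sketch: represent an imported Tweet as a searchable metadata object.
# Field names are illustrative assumptions, not an OneDrive/Twitter schema.
from dataclasses import dataclass, field, asdict

@dataclass
class TweetAsset:
    tweet_id: str
    author: str
    text: str
    likes: int = 0
    tags: list[str] = field(default_factory=list)

    def to_index_record(self) -> dict:
        """Flatten the Tweet into a record a search index could ingest."""
        return asdict(self)

tweet = TweetAsset("123", "@example", "Digital Asset Management and metadata",
                   likes=5, tags=["DAM"])
print(tweet.to_index_record())
```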

Operating System, Intelligent Cloud Storage/DAM

The Intelligent Cloud Storage and DAM solutions should have integrated search capabilities, so that, at the OS (mobile or desktop) level, the discovery of content through OS search of tagged metadata is possible.

Current State

  1. OneDrive has no ability to search Microsoft Word tags
  2. The UI for all Productivity Tools must have a comprehensive and simple design for leveraging an existing taxonomy for manual tagging, and the ability to add hints for auto curation
    1. Currently, Microsoft Word has two fields to collect metadata about the file.  They are obscurely found in the “Save As” dialog.
      1. The “Save As” dialog box allows a user to add tags and authors, but only when using the MS Word desktop version.  The Online (Cloud) version of Word has no such option when saving to Microsoft OneDrive Cloud Storage.
  3. Auto Curation (Artificial Intelligence, AI) must inspect the MS Productivity Suite tools and extract tags automatically, which does not exist today.
  4. No manual tagging or Auto Curation/Facial Recognition exists.

Using Google to Search Personal Data: Calendar, Gmail, Photos, and …

On June 16th, 2017, this post was reviewed for relevant updates.

As reported by The Verge on May 26th: “Google adds new Personal tab to search results to show Gmail and Photos content.”

Google seems to be rolling out a new feature in search results that adds a “Personal” tab to show content from [personal] private sources, like your Gmail account and Google Photos library. The addition of the tab was first reported by Search Engine Roundtable, which spotted the change earlier today.

I’ve been very vocal about a Google Federated Search, specifically across the user’s data sources, such as Gmail, Calendar, and Keep. Although it doesn’t seem that Google has implemented Federated Search across all of a user’s Google data sources yet, they’ve picked a few data sources and started up the mountain.

It seems Google is rolling out this capability iteratively and, as with Agile/Scrum, the aim is to get user feedback and deliver in slices.

Search Engine Roundtable’s coverage didn’t seem to indicate that Google has publicly announced this effort; perhaps Google is waiting for more substance, and more stick time.

As initially reported by Search Engine Roundtable, the Gmail results appear in a single-column text output with links to the content, in this case email.

Google Personal Search Results – Gmail

It appears the sequence of the “Personal Search” output is:

  • Agenda (Calendar)
  • Photos
  • Gmail

Each of the three app data sources displayed in the “Personal” search enables the user to drill down into the records displayed, e.g. a specific email.

Google Personal Search Results – Calendar

Group Permissions – Searching

Providing users the ability to search across varied Google repositories (shared calendars, photos, etc.) will enable both business teams and families (e.g. Apple’s iCloud Family Sharing) to collaborate and share more seamlessly.  At present, Cloud Search, part of G Suite by Google Cloud, offers search across team/org digital assets:

Use the power of Google to search across your company’s content in G Suite. From Gmail and Drive to Docs, Sheets, Slides, Calendar, and more, Google Cloud Search answers your questions and delivers relevant suggestions to help you throughout the day.

 

Learn More? Google Help

Click here to learn more about “Search results from your Google products”.  At this time, according to this Google post:

You can search for information from other Google products like Gmail, Google Calendar, and Google+.


Dear Google [Search]  Product Owner,

I request that Google Docs and Google Keep be among the next data sources enabled for the Personal search tab.

Best Regards,

Ian

 

AI Email Workflows Eliminate Need for Manual Email Responses

When I read the article “How to use Gmail templates to answer emails faster,” I thought: wow, what a 1990s throwback!

Microsoft Outlook has had an email rules engine for years and years, from a simple wizard to an advanced rule-construction user interface. Oh, the things you can do. Based on a wide array of ‘out of the box’ identifiers and highly customizable conditions, MS Outlook may take action on the client side of the email transaction or on the server side. What types of actions? All kinds, ranging from ‘out of the box’ to a high degree of customization. And yes, Outlook (in conjunction with MS Exchange) may be identified as a digital asset management (DAM) tool.

Email comes into an inbox and, based on “from”, “subject”, the contents of the email, and a long list of other attributes, MS Outlook [optionally with MS Exchange] may push the email and any attached content to a server folder, perhaps to Amazon AWS S3, or somewhere as simple as an MS Exchange folder.

Then, optionally, a ‘backend’ workflow may be triggered, for example with the use of Microsoft Flow. Where you go from there has almost infinite potential.
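
As a minimal sketch of such a rule outside Outlook itself: match on sender and subject, then push the attachment to an S3 bucket. This assumes the boto3 library and hypothetical bucket/folder names; it stands in for, rather than reproduces, Outlook’s actual rules engine.

```python
# Minimal sketch of an email rule: if sender and subject match, push the
# attachment to S3. Assumes boto3; bucket and key names are hypothetical.
import boto3

s3 = boto3.client("s3")

def apply_rule(sender: str, subject: str, attachment_name: str,
               attachment_bytes: bytes) -> bool:
    if sender.endswith("@invoices.example.com") and "invoice" in subject.lower():
        s3.put_object(
            Bucket="incoming-email-assets",     # hypothetical bucket
            Key=f"invoices/{attachment_name}",  # hypothetical folder layout
            Body=attachment_bytes,
        )
        return True  # a backend workflow (e.g. Microsoft Flow) could pick it up here
    return False
```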

Analogously, Google Gmail’s new Inbox UI categorizes email based on ‘some set’ of rules, which is not something new to the industry, but now Google has the ability. For example, “Group By” in Google’s new Inbox could be a huge timesaver. Enabling the user to perform actions across predefined email categories, such as deleting all “promotional” emails, could be extremely successful. However, I’ve not yet seen the AI rules that identify particular emails as “promotional” versus “financial”. Google is implying that these ‘out of the box’ email categories, and the way users interact and take action, are extremely similar per category.

Google may continue to follow in the footsteps of Microsoft, possibly adding the initiation of workflows based on predetermined criteria. Maybe Google will expose its AI (Email) Rules Engine for users to customize their workflows, just as Microsoft did so many years ago.

Although Microsoft’s Outlook (and Exchange) may have been seen as a Digital Asset Management (DAM) tool in the past, the user’s email Inbox folder size could have been identified as one of the few inhibitors.  The workaround, of course, is using service accounts with a vastly higher folder quota/size.

My opinions do not reflect those of my employer.

Applying Gmail Labels Across All Google Assets: Docs, Photos, Contacts + Dashboard, Portal View

Google applications contain [types of] assets, either created within the application or imported into it.  In Gmail, you have objects (emails), and Gmail enables users to add metadata to an email in the form of tags, or “Labels”.  Labeling emails is a very easy way to organize these assets.  If you’re a bit more organized, you may even devise a logical taxonomy to classify your emails.

An email can also be put into a folder, which is completely different from what we are talking about with labels.  An email may be placed into a folder within a parent/child folder hierarchy; only the name of the folder and its position in the hierarchy provide this relational metadata.

For personal use, or for small to medium-size businesses, users may want to categorize all of the Google “objects” from each Google App, so why isn’t there the capability to apply labels across all Google App assets?  If you work at a law firm, for example, have documents in Google Docs, and use Google for email, it would be ideal to leverage a company-wide taxonomy and, upon any internal search, discover all objects logically grouped in a container by labels.

For each Google asset, such as an email in Gmail, users may apply any number (N) of labels.

A [Google] dashboard, or portal view, may be used to display and access Google assets across Google applications, grouped by Labels.  A Google Apps “Portal Search” may consist of queries that contain asset labels.  A relational Google object repository containing assets across all object types (e.g. Google Docs) may be leveraged to store metadata about each Google asset and its relationships.
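
A minimal sketch of such a relational repository, using SQLite purely for illustration; the table and column names are my assumptions, not an existing Google schema.

```python
# Minimal sketch of a relational label/asset repository using SQLite.
# Table and column names are illustrative assumptions, not a Google schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE assets (
    asset_id TEXT PRIMARY KEY,
    app      TEXT NOT NULL,        -- e.g. 'gmail', 'docs', 'photos'
    title    TEXT
);
CREATE TABLE labels (
    label_id INTEGER PRIMARY KEY,
    name     TEXT UNIQUE NOT NULL  -- e.g. 'Case-2018-042'
);
CREATE TABLE asset_labels (        -- N labels per asset, N assets per label
    asset_id TEXT REFERENCES assets(asset_id),
    label_id INTEGER REFERENCES labels(label_id),
    PRIMARY KEY (asset_id, label_id)
);
""")

# Portal-style query: every asset, in any Google app, carrying a given label.
rows = conn.execute("""
    SELECT a.app, a.title FROM assets a
    JOIN asset_labels al ON al.asset_id = a.asset_id
    JOIN labels l ON l.label_id = al.label_id
    WHERE l.name = ?
""", ("Case-2018-042",)).fetchall()
print(rows)
```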

A [Google] dashboard, or portal view may be organized around individuals (e.g. personal), teams, or an organization.  So, in a law firm, for example, a case number label could be applied to Google Docs,  Google Photos (i.e. Photos and Videos),  and of course, Gmail.

A relatively simple feature to implement, with a lot of value for Google’s clients, us.  So, why isn’t it implemented?

One better: when facial recognition is implemented in Photos (and Videos), applying Google labels to media assets may allow for correlation of emails to photos with a rule-based engine.

The Google Search has expanded into the mobile Google app.

Leveraging Google “Cards”, developers may create Cards for a single Google asset or a group of Google assets.   Grouping of Google assets may be applied using “Labels”.   As Google assets go through a business or personal user workflow, additional metadata, such as additional “Labels”, may be added to the asset.

Expanding upon this solution,  scripts may be created to “push” assets through a workflow, perhaps using Google Cloud Functions.  Google “Cards” may be leveraged as “the bit” that informs users when they have new items to process in a workflow.

Metadata, or Labels, such as “Document Ready for Legal Review” or “Legal Document Review Completed”, may be used to track workflow state.
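
A minimal sketch of a label-driven workflow step; the label names come from the example above, while the transition table and the notification print are illustrative assumptions rather than any real Google API.

```python
# Minimal sketch of a label-driven workflow transition. The transition
# table and the notification stand-in are illustrative assumptions.
WORKFLOW = {
    "Document Ready for Legal Review": "Legal Document Review Completed",
}

def advance(asset_labels: set[str]) -> set[str]:
    """Swap a 'ready' label for its 'completed' label when a step finishes."""
    updated = set(asset_labels)
    for current, nxt in WORKFLOW.items():
        if current in updated:
            updated.discard(current)
            updated.add(nxt)
            print(f"Notify via Card: {nxt}")  # stand-in for a Card/Cloud Function
    return updated

print(advance({"Case-2018-042", "Document Ready for Legal Review"}))
```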

Google Introduces their Cloud, Digital Asset Management (DAM) solution

Although this is a saturated space with many products, some highly recommended, I thought this idea might interest those involved in the Digital Asset Management space.  Based on the maturity of existing products, and cost, it’s up to you: build or buy.  The following may provide an opportunity for augmenting existing Google products and overlaying a custom solution.

Google products can be integrated across their suite of solutions to produce a cloud-based, secure Digital Asset Management (DAM) solution.   In this use case, the digital assets are media (e.g. videos, still images).

A Google DAM may be created by leveraging existing features of Google Plus, Google Drive, YouTube, and other Google products, as well as building/extending additional functionality, e.g. via the Google Plus API, to create a DAM solution.   An overarching custom framework weaves these products together to act as the DAM.

Google Digital Asset Management (New)

  1. A dashboard for Digital Asset Management should be created, which articulates, at a glance, where project media assets are in their life cycle, e.g. ingestion, transcoding, editing media, adding metadata, inclusion/editing of closed captions, workflow approvals, etc.
  2. Creation and maintenance of a project asset folder structure within storage, such as Google Drive for active projects and Google Cloud Storage for archived content.  Ingested content arrives in the project folders.
  3. Ability to use [Google YouTube] default encoding/transcoding functionality, or optionally leverage alternate cloud-accessible transcoding solutions.
  4. A basic DAM UI may provide user interaction with the project and asset metadata.
  5. Components of the DAM should allow plug-in integration with other components on the market today, such as an ingestion solution.

Google Drive and Google Cloud Storage.  Cloud storage offers large quantities of storage e.g. for Media (video, audio), economically.

  1. Google Drive ingestion of assets may occur through an automated process, such as a drop folder within an FTP site.  The folder may be polled every N seconds by the Google DAM orchestration, or another 3rd-party orchestration product, and ingested into Google Drive.  The ingested files are placed into a project folder designated by the accompanying XML meta file (a minimal polling sketch follows this list).
  2. Version control of assets is implemented by Google Drive and the DAM to facilitate collaboration and approval.
  3. Distribution and publishing of media to designated people and locations, such as social media channels, may be automatically triggered by the DAM orchestration polling Google Drive custom metadata changes.   On-demand publishing is also achievable through the DAM.
  4. Archiving project assets to custom locations, such as the Google Cloud solution, may be triggered by a project metadata status modification, or on demand through the DAM.
  5. Assets may be spawned into other assets, such as clips.  Derived child assets are correlated with the master, or parent, asset within the DAM asset metadata to trace back to origin.  This eliminates asset redundancy, enabling users to easily find related files and reuse all or a portion of an asset.
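
A minimal sketch of the drop-folder polling and XML meta-file handling described in item 1 above. The paths, the file-pairing convention, the `<project>` element, and the stubbed Drive upload are all assumptions for illustration.

```python
# Minimal sketch: poll a drop folder every N seconds, read each media file's
# accompanying XML meta file, and route the file to the project folder it
# names. Paths, the .mp4 pairing convention, the <project> element, and the
# upload stub are illustrative assumptions.
import time
import xml.etree.ElementTree as ET
from pathlib import Path

DROP_FOLDER = Path("/ftp/drop")   # hypothetical FTP drop location
POLL_SECONDS = 30

def upload_to_drive(media_file: Path, project: str) -> None:
    print(f"Would upload {media_file.name} to Drive project folder '{project}'")  # stub

def poll_once() -> None:
    for meta_file in DROP_FOLDER.glob("*.xml"):
        media_file = meta_file.with_suffix(".mp4")  # assumed pairing convention
        if not media_file.exists():
            continue
        project = ET.parse(meta_file).getroot().findtext("project", default="unassigned")
        upload_to_drive(media_file, project)

if __name__ == "__main__":
    while True:
        poll_once()
        time.sleep(POLL_SECONDS)
```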

Google Docs

  1. Documents required to accompany each media project, such as production guidelines, may go through several iterations before they are complete.  Many of the components of a document may be static.  Google Docs may incorporate ‘Document Assembly’ technology for automation of document construction.

Google’s YouTube

  1. Editing media either using default YouTube functionality, or using third party software, e.g. Adobe suite
  2. Caption creation and editing may use YouTube or third-party software.
  3. Metadata may be added or modified according to the corporate taxonomy through [custom] YouTube fields, or directly in the Google DAM DB where the project data resides.

Google’s Google Plus (G+)

  1. G+ project page may be used for project and asset collaboration
  2. Project team members may subscribe to the project page to receive notifications on changes, such as new sub-clips
  3. Asset workflow notifications, human and automated:
    1. Asset modification approvals (i.e. G+ API <-> DAM DB) through custom fields in the G+ page
    2. Changes to assets (i.e. collaboration) notifications
    3. [Automated] e.g. ingestion in progress, or completed updates
    4. [Automated] Process notifications: e.g. ‘distribution to XYZ’ and ‘transcoding N workflow’.  G+ may include links to assets.
  4. Google Plus for collaboration among in-house and outside-organization team(s)
  5. The G+ UI may trigger actions, such as ingestion, e.g. by specifying a specific Google Drive link and a configured workflow.

Google Custom Search

  1. Allows for the search of assets within a project, within all projects within a business silo, and across the entire organization’s assets.
  2. Ability to find and share DAM motion pictures, still images, and text assets with individuals, groups, and project teams in or outside the organization.  Google Plus facilitates sharing.
  3. Asset metadata will describe, e.g., how the assets may be used for distribution (digital distribution rights).   Users and groups are implemented within G+; control of asset distribution may be implemented in Google Plus and/or custom Google Search.

Here is a list of DAM vendors.