Tag Archives: Amazon Web Services

Time Lock Access: Seal Files in Cloud Storage

Is there value in providing users the ability to apply “Time Lock Access” to files in cloud storage?  Files are securely uploaded by their Owner.  After upload, no one, including the Owner, may access or open the file(s).  Only after the date and time set for the time lock passes will the files become available for access, at which point an action may be taken, e.g. automatically emailing a link to the files.  More complex actions may be attached to the time lock release, such as script execution governed by a simple set of rules defined by the file Owner.
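
As a rough illustration, here is a minimal sketch in Python of how the seal might be enforced server-side; the TimeLockedFile class, the storage URI, and the release action are all hypothetical, not any existing product’s API.

```python
from datetime import datetime, timezone

class TimeLockedFile:
    """Hypothetical server-side record for a sealed file (sketch only)."""

    def __init__(self, blob_uri, unlock_at, on_release=None):
        self.blob_uri = blob_uri      # opaque pointer into cloud storage
        self.unlock_at = unlock_at    # date/time chosen by the Owner at upload
        self.on_release = on_release  # optional action, e.g. email a link

    def open(self):
        now = datetime.now(timezone.utc)
        if now < self.unlock_at:
            # No one, including the Owner, may access before the unlock time.
            raise PermissionError(f"Sealed until {self.unlock_at.isoformat()}")
        if self.on_release:
            self.on_release(self.blob_uri)  # e.g. automatically email a link
        return self.blob_uri

# Example: seal a file until New Year's Day 2026, then email a link on release.
sealed = TimeLockedFile(
    "s3://example-bucket/will.pdf",
    unlock_at=datetime(2026, 1, 1, tzinfo=timezone.utc),
    on_release=lambda uri: print(f"emailing link to {uri}"),
)
```

The more complex rule-driven actions could be attached the same way, as callables registered against the release event.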

Does a solution already exist?  If so, please send me a link to the cloud integration product / plug-in.

As a Data Deluge Grows, Companies Rethink Storage

At Pure Storage, a device introduced on Monday holds five times as much data as a conventional unit.

  • IBM estimates that by 2020 we will have 44 zettabytes — the thousandfold number next up from exabytes — generated by all those devices. It is so much information that Big Blue is staking its future on so-called machine learning and artificial intelligence, two kinds of pattern-finding software built to cope with all that information.
  • Pure Storage’s chief executive, Scott Dietzen: “No one can look at all their data anymore; they need algorithms just to decide what to look at.”

Source: As a Data Deluge Grows, Companies Rethink Storage – The New York Times

Additional Editorial:

Pure Storage is looking to “compress” the amount of data that can be stored in a storage array using flash memory with its “FlashBlade” product.  They are also tuning the solution for higher I/O throughput and optimized, addressable storage.

Several companies with large and growing storage footprints have already begun to customize their storage solutions to fill the void in this space.

Building more storage arrays is a temporary measure while masses of people, and fleets of cars, turn on their IoT-enabled devices.

Data is flooding the Internet, and innumerable duplicate ‘objects’ of information, each requiring redundant storage, are now prevalent.  A registry, or set of public ‘records’, may be maintained.  Security measures, and the public’s appetite, would determine which “information objects” may be centrally located.  As intermediaries, registrars may build open-source repositories, for example on Google Drive or Microsoft Azure, based on the data types of the ‘information objects’.

  • Information object registrars may catalog many different types of objects, indicating where the data resides on the Internet.
    • vaguely similar to the domain name registrar hierarchy
    • the Domain Name System (DNS) is the best example of the registration process I am suggesting we clone and leverage for all types of data, ranging from entertainment to medical records
  • Medical “records”, or medical “information objects”
    • X-ray images, everything from dental to medical, correlated to other medical information object(s)
  • Official ‘education’ records from K-12 and beyond, e.g. degrees and certifications achieved
  • Secure, easy access to ‘public’ information objects by the owner and creator, with central portal(s) driving user traffic.  This enables the ‘owner’ of records to take ownership of their health, for example.

Note: there are already ‘open’ platforms being developed and used in several industries, including medical, with limited access.  However, the change I’m proposing imposes a ‘registrar’ process whereby portals of information are registered and interwoven, linking to one another.

It’s an issue of excess weight upon the Internet: not just the ‘weight’ of unnecessary storage, but also the throughput across a woven set of networks.

Think of it in terms of opportunity cost.  First, quantify what an ‘information object’, or ‘block of data’, equates to in cost.  There must already be a measurement in existence, an average amount to charge per “information object”.  Then, for each information object type, e.g. song, movie, news story, technical specification, identify how many times this exact object is duplicated across the Internet.
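
To make that concrete, a back-of-the-envelope calculation might look like the following; the per-gigabyte cost, object size, and duplication count are made-up figures for illustration only.

```python
# Back-of-the-envelope duplicate-storage cost (all figures hypothetical).
cost_per_gb_month = 0.03        # assumed storage cost, USD per GB per month
object_size_gb = 0.005          # e.g. a 5 MB song
copies_on_internet = 1_000_000  # times this exact object is duplicated

# Monthly cost of the redundant copies beyond one canonical copy.
wasted = cost_per_gb_month * object_size_gb * (copies_on_internet - 1)
print(f"~${wasted:,.0f} per month storing redundant copies of one song")
```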

Steps for reducing data waste:

  • Without exception, each ‘information object’ contains an (XML) metadata file.
  • The attributes describing information objects are built out as these assets are used, e.g. through proactive, auto-populated search, or an AI induction engine.
  • When X out of Y metadata types and values are equivalent (see the sketch after this list):
    • the more attributes that correlate across two or more objects, the more likely those objects are
      • related on some level, e.g. sibling, cousin
      • or identical, and in need of a metadata relationship update
    • the metadata encapsulates the ‘information object’
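
The ‘X out of Y’ test above could start as a plain attribute comparison.  A minimal sketch, with hypothetical field names standing in for attributes pulled from each object’s XML metadata file:

```python
def metadata_similarity(a: dict, b: dict) -> float:
    """Fraction of attributes whose values match across two metadata
    records: the 'X out of Y' test from the list above."""
    all_keys = set(a) | set(b)
    if not all_keys:
        return 0.0
    equal = sum(1 for k in all_keys if k in a and k in b and a[k] == b[k])
    return equal / len(all_keys)

# Hypothetical metadata derived from each object's XML file.
song_a = {"type": "song", "title": "Example", "duration": 215, "codec": "mp3"}
song_b = {"type": "song", "title": "Example", "duration": 215, "codec": "aac"}

score = metadata_similarity(song_a, song_b)   # 3 of 4 attributes match: 0.75
if score == 1.0:
    print("identical objects: candidates for de-duplication")
elif score >= 0.7:
    print("related on some level, e.g. sibling or cousin")
```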

Another opportunity to organize “information asset objects” would be to leverage the existing DNS platform to manage “information asset repositories”.  This additional Internet DNS structure would enable queries across information asset repositories.  Please see “So Much Streaming Music, Just Not in One Place” for more details.
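
If repositories really were registered in DNS, a lookup could be as plain as a TXT query.  A sketch using the dnspython library; the `_repo.example.org` naming convention is entirely my own invention for illustration:

```python
import dns.resolver  # pip install dnspython

def find_repositories(object_type: str):
    """Resolve a hypothetical registry name like 'songs._repo.example.org'
    to TXT records pointing at information asset repositories."""
    name = f"{object_type}._repo.example.org"
    answers = dns.resolver.resolve(name, "TXT")
    return [record.to_text().strip('"') for record in answers]

# e.g. find_repositories("medical-imaging") might return
# ["https://repo1.example.org", "https://repo2.example.net"]
```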

Bitcoin Exchange and Practical Usage: What Can I Get for a Bitcoin?

I was reading the article As Big Investors Emerge, Bitcoin Gets Ready for Its Close-Up, and was amazed at how far people are taking Bitcoin as a real currency.  The article notes that investors are paying substantial real money to acquire Bitcoins, and that they hope retail places like Starbucks or Amazon may accept the currency.

When I go to MarketWatch.com and compare currency exchange rates, GBP:USD for example, I’d like to see this currency listed so I understand its actual value, and the futures of this ‘foreign’ currency.  There are many economic questions regarding the creation of a currency, and the belief in that currency.  Look at Greece and the Euro, the Peso, the Loonie.  Speaking of which, it seems there may be a market for Bitcoins, but not in the traditional sense in which ‘physical’ goods are currently exchanged.

Bitcoins could be traded for the use of cloud resources and services, such as computation cycles and other cloud applications.  If grid computing takes hold, where users allow the utilization of their computation resources as I suggest in my post Grid and Cloud Computing Going Head to Head: Profit for You, then both cloud and grid computing can trade in Bitcoins, and what they buy is cloud resource utilization.  An exchange may exist so people can trade, and these coins would hold value, allowing for the ‘tangible’ purchase of computation resources, which may actually mean something.

This approach gets muddied when you can apply cloud printing resources that print 3D ‘physical’ goods.  I would have to see major cloud players, such as Amazon, allow for the acceptance of these coins.

For starters, I can see that if you acquire an Amazon Visa or Mastercard, instead of the points system where the cardholder earns reward points, they could be allocated Bitcoins.  Amazon would have to acquire real Bitcoins, and an exchange rate would have to be established, so that when Amazon’s clients are distributed Bitcoins they are given the proper allocation, e.g. 1 USD to N Bitcoins.  Anyway, Starbucks coffee for thought.
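
The allocation itself is simple arithmetic.  A sketch, where the reward rate and the USD-to-BTC quote are illustrative inputs a hypothetical Amazon-run exchange would supply at redemption time:

```python
def rewards_in_btc(purchases_usd, reward_rate, usd_per_btc):
    """Convert card rewards earned in USD into Bitcoin at the quoted rate.
    All inputs here are illustrative, not real quotes."""
    reward_usd = purchases_usd * reward_rate
    return reward_usd / usd_per_btc

# e.g. $2,000 of purchases at 1% back, assuming a $12-per-BTC quote
print(f"{rewards_in_btc(2000, 0.01, 12.0):.4f} BTC")  # 1.6667 BTC
```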

Here are a few other ideas for Bitcoin Applications:

  • Atlantic City, Las Vegas, or other physical casino slot machines that accept and pay out in Bitcoins
  • Online gaming, such as online poker or slots, that accepts and pays out Bitcoins
  • Affinity card programs that pay out in Bitcoins according to their own standards, anything from online stores like Amazon, to brick-and-mortar computer electronics stores, to credit cards
  • PayPal, or other intermediary transaction firms, allowing their customers to send and receive Bitcoins as payment.  The intermediary firm may keep an independent account specifically for Bitcoins, exclusive from other currencies.

Grid and Cloud Computing Going Head to Head: Profit for You

I was thinking about what was around before cloud computing.  I thought about mainframes and allocated computing cycles, then about the SETI@home project and its transformation to grid, or shared, computing with BOINC.  Why did this go by the wayside, and why was it never maximized to become a secure cloud hosted by servers throughout the world?  A chargeback model could have been created to allow users to receive monetary value for their compute cycles.  There are traditional answers that have halted its progress; however, there is a business model that allows anyone with a web host, shared or leased, to turn a profit, such as bloggers.

The world, from a personal computing standpoint, has progressed to laptops with a heavily used hibernate mode, which does not lend itself to leveraging spare compute cycles, because computers, and the human processes that use them, have become more efficient.  Laptops are just as powerful as our ‘old’ servers, and so servers for project use have been relegated largely to the world of academia.

There is, however, an opportunity I find extremely interesting, where grid computing can live once again: blog-hosted servers.  Blogs hosted on servers other than WordPress.com or Google’s Blogger have lower compute requirements for posting and serving up text and media than traditional apps hosted on web servers.  Hosted bloggers should be able to measure the utilization of their server and calculate their ability to ‘lend’ server time (see the sketch after the list below).  In addition, a WordPress plugin, for example, may be created as a user interface, as well as a BOINC application interface.  A web server version of BOINC and a deployment binary package would need to be created and deployed on your web server.  At that point, WordPress APIs crafted as a plugin could be used to invoke the processing.  Additional plugins or widgets for WordPress would allow for:

  • A widget on a blog sidebar to display the results of a project your site subscribed to for grid computing, such as dynamically refreshed charts and graphs
  • A plugin to embed shortcodes on blog pages to surface any information from the BOINC app client hosted on your web server
  • A widget that allows YOUR customers to sign up, and shortcodes to display your chargeback rates for allocation of your data streaming and CPU time
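
The first step for a hosted blogger, measuring what can be lent, might look like this; the headroom threshold and hourly utilization samples are assumptions, not output from any real BOINC interface:

```python
def lendable_cpu_hours(samples, headroom=0.25):
    """Estimate daily CPU-hours a blog server could lend to a grid project.
    `samples` are hourly CPU-utilization readings (0.0 to 1.0) over one day;
    `headroom` is capacity held back for serving the blog itself (assumed)."""
    lendable = 0.0
    for util in samples:
        spare = 1.0 - util - headroom
        if spare > 0:
            lendable += spare  # one hour at `spare` fraction of a core
    return lendable

# A typical low-traffic blog: ~10% busy overnight, ~30% during the day.
day = [0.1] * 8 + [0.3] * 16
print(f"{lendable_cpu_hours(day):.1f} CPU-hours/day available to lend")  # 12.4
```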

Any project listed on GridRepublic, or linked to by the BOINC client from Berkeley, is a potential client for your shared computing resource.  In fact, anyone looking to lease cloud computing and storage resources, such as a game developer, may be a client.

The BOINC client hosted on a web server may, if engineered to parallel process, integrate into a cooperative of web-hosted blog sites for faster computing and higher revenue margins.  This would be phase two of the project: dividing up computing requirements across multiple servers.  An open-source project for affiliate networking, and even Google Wallet or PayPal (an eBay company), may be used to collect and then allocate funds to ‘affiliate’ web-hosted blogs based on a chargeback formula, as sketched below.  And this has never been tried before because?  Comments welcome.
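
The chargeback formula for such a cooperative could start as a simple pro-rata split.  A sketch, with made-up member names, payment, and operator fee:

```python
def chargeback_split(payment_usd, cpu_hours, operator_fee=0.10):
    """Split a project's payment across affiliate blogs pro-rata by
    contributed CPU-hours, after a hypothetical operator fee."""
    pool = payment_usd * (1.0 - operator_fee)
    total = sum(cpu_hours.values())
    if total == 0:
        return {}
    return {blog: round(pool * hours / total, 2)
            for blog, hours in cpu_hours.items()}

# e.g. a $100 project payment split across three affiliate blogs
print(chargeback_split(100.0, {"blogA": 50.0, "blogB": 30.0, "blogC": 20.0}))
# {'blogA': 45.0, 'blogB': 27.0, 'blogC': 18.0}
```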

Google Takes On Cloud Computing Services; Adds Android APIs to SDK

Google Takes on Amazon and Microsoft for Cloud Computing Services – NYTimes.com.  This is the first bit of ‘news’ I’ve read all week with regard to technology.  The article holds no surprises, but it is a good read for the uninformed.  I am unable to disagree with, or apply additional insight to, this article.  Amazon has a strong lead, and as I mentioned last year, I saw Google getting into this space with its software and APIs available.  It may have needed the manpower and/or infrastructure to build the back end to support the extensibility of the front end.  Google may also offer new business models to complement its existing API offerings, expand those APIs, and provide user-friendly tools.  Based on this article, I’d expect Android API SDK extensibility as a play to grab market share from Amazon.  The article notes that Android applications are using AWS, so if Google adds Android APIs to its SDK, it would give developers an easy, plug-in option.

Solving the Corporate and Personal Data Dilemma for Mobile Devices

After reading this article in the New York Times, I.T. Managers Struggle to Contain Corporate Data in the Mobile Age – NYTimes.com, regarding employees using their mobile devices for both corporate and personal use, I pulled apart these challenges one by one.

One of the challenges mentioned is corporate and personal applications, and potentially malware, running on the same mobile OS; this is not a new problem.  Typically, companies provide the hardware and lock down the PCs, so that installing non-corporate software is a violation of corporate policy; at times there are even nightly corporate programs that go across the network, find these non-corporate applications, and remove them.  One solution for mobile devices in the corporate world is a similar approach, whereby the mobile OS vendors allow corporations to apply these controls if the corporation is providing the mobile hardware.

If the company is not providing the mobile device, an effective way of partitioning data in the PC world could also be applied to the mobile OS world: multiple boot partitions, just like VM images.  As mobile hardware gets more robust, with more processors and more, faster RAM, this solution should be entirely feasible and allow for the partitioning of corporate data.

There was also an implied concern about personal applications consuming corporate data bandwidth.  This is less of an issue, and it too can be solved with the traditional PC approach of a proxy and firewall, i.e. known, acceptable, published ports for approved applications.

In short: a multiple, dual (or even more) mobile OS boot, just like a virtual machine, whereby when you start up your mobile device you select the personal or corporate image (or corporate 1, 2, etc.).  The image of the mobile OS could even be housed in a cloud architecture, which I mentioned in an article I posted last year, Elastic Computing for Mobile Devices: Mobile OS Hosting Maximizing Computing Capacity.

 

EMC’s Documentum: Competition for Google Docs in the SaaS Space?

I was just curious whether we will see EMC’s Documentum positioned as Software as a Service to compete with the likes of Google Docs, or whether they will continue to position it for the enterprise-level private cloud model.  It would be great to hear your thoughts.  The document management suite was an amazing, full-featured workflow document system; why not bring it to the forefront of the consumer market as a public cloud SaaS targeting small to mid-sized markets, as well as individuals?  There are several profitable models in which they would achieve significant margins, even accounting for the price of entering the market.

***

Update:

When I first started commenting on Digital Asset Management solutions, I did not include the vast number of existing solutions by vendors in this space.  Below are lists of the DAM software vendors already in place today:

Digital Asset Management Vendors; Digital Asset Management Vendors Directory

I’ve had hands-on experience with several DAM products, including Documentum and SharePoint.  I have no idea why I did not include SharePoint in the DAM evaluation.

 

Tablet Developers Make Business Intelligence Tools Using Google as a Data Warehouse: Competing with Oracle, IBM, and Microsoft SQL Server

And he shoots, and scores.  I called it, sort of.  Google came out of the closet today as a data warehouse vendor; at the least, they need a community of developers to connect the dots and help build an amazing business intelligence suite.

Google came out with a Google Docs API today, with support for languages from Objective-C (iOS) and C# to Java, so you can use Google as your data warehouse for any size business.  All you need to do is write an ETL program that uploads and downloads tables between your local database and Google Docs, then create your own business intelligence user interface for creating and viewing charts and graphs.  It looks like they’ve changed strategies, or this was the plan all along.

Initially, I thought Google Fusion was going to be the table-editing tool used to manipulate data transferred from your transactional database via the Google Docs API.  Today they released the Google Docs API, and developers can create their own ETL drivers and a business intelligence user interface that can run on any platform: an Android tablet, iPad, or Windows tablet.

A few days ago, I wrote an article suggesting they were going to use a tool called Google Fusion, in beta at the time, to manipulate tabular data, and eventually extend it to create common BI components, such as graphs, charts, editable tables, etc.

A few gotchas: Google Docs on the Apple iPad is at version 1.1.1, released 9/28/12, so we are talking very early days, and the Google Docs API was released today.  I would imagine that since you can also use C#, someone could make a Windows desktop application to manipulate the data tables and create and view graphs, so a Windows tablet could be used.  The API also has Java compatibility, so from any Unix box, or any platform (Java is write once, run anywhere), wherever your transactional database lives, a developer is able to write a driver that transfers the data to Google Docs dynamically, and then use the Google Docs API for business intelligence.  You could even write an ETL driver that does nothing but rapidly transfer data, like an ODBC or JDBC driver, and use whatever business intelligence tools you have on your desktop, or run a nightly ETL.  Either way, I can see developers creating business intelligence tools on Android, iPad, or Windows tablets to modify tables, create and view charts, etc., using custom BI tool sets, with Google Docs as their data warehouse.
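
To show how small that nightly ETL could be, here is a sketch in Python; the `upload` callable stands in for the actual Google Docs API call, which I am deliberately not reproducing here, and the database path and table names are hypothetical:

```python
import csv
import io
import sqlite3

def export_table_as_csv(db_path, table):
    """Dump one transactional table to CSV, ready for upload to Google Docs.
    `table` must come from a trusted list, since it is interpolated into SQL."""
    conn = sqlite3.connect(db_path)
    cursor = conn.execute(f"SELECT * FROM {table}")
    headers = [col[0] for col in cursor.description]
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(headers)
    writer.writerows(cursor)
    conn.close()
    return buf.getvalue()

def nightly_etl(db_path, tables, upload):
    """`upload(filename, data)` is a placeholder for the real API call."""
    for table in tables:
        upload(f"{table}.csv", export_table_as_csv(db_path, table))

# e.g. nightly_etl("shop.db", ["orders", "customers"], upload=my_docs_upload)
```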

Please reference an article I wrote a few days back, “Google is Going to be the Next Public and Private Data Warehouse“.

At that time, 10/13/2012, Google Fusion was marked as beta.  Google has since stripped off the word beta, but it doesn’t matter; it’s even better with the Google API to Google Docs.  Google Fusion could be your starter user interface; however, if Android, iOS (Apple iPad), and Windows developers really embrace this API, the big database companies like IBM, Oracle, and Microsoft may have their market share eroded to some extent, if not a great extent.

Update 10/19:

Hey Gs (guys and gals), I forgot to mention: you could perhaps also make your own video or music streaming applications using the basic get and send file calls, as other companies such as AWS and Box are already doing.  It’s a simple get/send API, so I’m not sure it’s applicable to ‘streaming’ at this stage; it may be just another storage location in the ‘cloud’, which would be quite boring.  Although, thinking of it now, aren’t all the put/send cloud solutions potential data warehouses, using ETL and the APIs discussed above?  Also, isn’t it ironic that Google would be competing with itself if it offered file sharing and ‘streamed’ videos alongside YouTube?