Tag Archives: Memory

My Internal IRQ is Broken

An Interrupt ReQuest (IRQ) is a hardware interrupt on a PC. There are 16 IRQ lines used to signal the CPU that a peripheral event has started or terminated. Except for PCI devices, two devices cannot use the same line. If a new expansion card is preset to an IRQ already used by an existing board, one of them must be changed. This was an enormous headache in earlier machines.

Starting with machines built around the Intel 286 CPU (introduced in 1982), two 8259A controller chips were cascaded together, bumping the number of IRQs from 8 to 16. However, IRQ 2 is lost because it is used to connect the second chip. IRQ 9 may be available for general use, as most VGA cards do not require an IRQ.

PCI to the Rescue
The PCI bus solved the limited IRQ problem, as it allowed IRQs to be shared. For example, if there were only one IRQ left after ISA devices were assigned their required IRQs, all PCI devices could share it.
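On a modern Linux system you can see this sharing directly in `/proc/interrupts`, where a single line number can list several PCI devices. Below is a minimal sketch that picks out shared lines; the sample text is hypothetical so the example is self-contained, but the parsing matches the real file's layout.

```python
# Sketch: find IRQ lines that are shared by more than one device.
# The sample below imitates /proc/interrupts output; device names and
# counts are made up for illustration.
sample = """\
           CPU0
  0:        123   IO-APIC    2-edge      timer
 16:        456   IO-APIC   16-fasteoi   ehci_hcd:usb1, uhci_hcd:usb3
 23:        789   IO-APIC   23-fasteoi   ehci_hcd:usb2, snd_hda_intel
"""

def shared_irqs(text):
    """Return {irq: [device, ...]} for lines hosting more than one device."""
    shared = {}
    for line in text.splitlines():
        parts = line.split()
        if not parts or not parts[0].rstrip(":").isdigit():
            continue  # skip the CPU header row and any non-IRQ lines
        irq = int(parts[0].rstrip(":"))
        # Device names are the comma-separated tail of the line.
        devices = " ".join(parts[4:]).split(", ")
        if len(devices) > 1:
            shared[irq] = devices
    return shared

print(shared_irqs(sample))
```

On a real machine you would read `open("/proc/interrupts").read()` instead of the sample string; ISA-era lines (timer, keyboard) will always show a single device, while PCI lines may show several.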

As a Data Deluge Grows, Companies Rethink Storage

At Pure Storage, a device introduced on Monday holds five times as much data as a conventional unit.

  • IBM estimates that by 2020 we will have 44 zettabytes — the thousandfold number next up from exabytes — generated by all those devices. It is so much information that Big Blue is staking its future on so-called machine learning and artificial intelligence, two kinds of pattern-finding software built to cope with all that information.
  • Pure Storage chief executive Scott Dietzen: “No one can look at all their data anymore; they need algorithms just to decide what to look at.”

Source: As a Data Deluge Grows, Companies Rethink Storage – The New York Times

Additional Editorial:

Pure Storage is looking to “compress” the amount of data that can be stored in a storage array using flash memory with its “FlashBlade” product. They are also tuning the solution for higher I/O throughput and optimized, addressable storage.

Several companies with large and growing storage footprints have already begun to customize their storage solutions to fill the void in this space.

Building more storage arrays is only a temporary measure as masses of people, and fleets of cars, turn on their IoT-enabled devices.

Data is flooding the Internet, and innumerable duplicate ‘objects’ of information, each requiring redundant storage, are now the prevailing condition. A registry of public ‘records’ could be maintained; security measures and the public’s appetite would determine which “information objects” may be centrally located. As intermediaries, registrars could build open-source repositories, for example on Google Drive or Microsoft Azure, based on the data types of the ‘information objects’.

  • Information object registrars may contain all different types of objects, indicating where data resides on the Internet.
    • vaguely similar to the domain name registrar hierarchy
    • the Domain Name System (DNS) is the best example of the registration process I am suggesting we clone and leverage for all types of data, ranging from entertainment to medical records
  • Medical “Records”, or Medical “Information Objects”
    • X-ray images, everything from dental to medical, correlated to other medical information object(s)
  • Official ‘Education’ records from K-12 and beyond, e.g. degrees and certifications achieved;
  • Secure, easy access to ‘public’ ‘information objects’ by the owner and creator. Central portal(s) driving user traffic would enable the ‘owner’ of records to take ‘ownership’ of their health, for example.

Note: there are already ‘open’ platforms being developed and used in several industries, including medical, though with limited access. However, the changes I’m proposing impose a ‘registrar’ process whereby portals of information are registered and interwoven, linking to one another.

It’s an issue of excess weight upon the “Internet”: not just the ‘weight’ of unnecessary storage, but also the throughput within a woven set of networks.

Think of it in terms of opportunity cost. First, quantify what an ‘information object’, or ‘block of data’, equates to in cost. It seems there must already be a measurement in existence, a median amount to charge per “information object”. Finally, for each information object type, e.g. song, movie, news story, technical specification, identify how many times this exact object is duplicated across the Internet.
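The opportunity-cost arithmetic above can be sketched in a few lines. Every figure here is a hypothetical placeholder, not a real measurement; the point is only the shape of the calculation.

```python
# Back-of-the-envelope duplication cost. All inputs are assumed values.
OBJECT_SIZE_GB = 0.005           # e.g. a ~5 MB song file
COST_PER_GB_YEAR = 0.02          # assumed storage cost, dollars per GB per year
COPIES_ON_INTERNET = 1_000_000   # times the exact same object is duplicated

def duplication_cost(size_gb, cost_per_gb, copies):
    """Yearly cost of storing every copy versus one registered canonical copy."""
    total = size_gb * cost_per_gb * copies
    deduplicated = size_gb * cost_per_gb  # a single copy held in a registry
    return total - deduplicated          # the opportunity cost of duplication

waste = duplication_cost(OBJECT_SIZE_GB, COST_PER_GB_YEAR, COPIES_ON_INTERNET)
print(f"${waste:,.2f} wasted per year on this one object")
```

Multiply that by the number of distinct objects being duplicated and the aggregate figure motivates the registry idea that follows.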

Steps toward reducing data waste:

  • Without exception, each ‘information object’ contains an (XML) metadata file.
  • The attributes describing information objects are built out as these assets are used, e.g. via proactive auto-populated search and an AI induction engine.
  • X out of Y metadata types and values are equivalent:
    • the more attributes correlate across two or more objects, the more likely those objects are
      • related on some level, e.g. sibling or cousin,
      • or identical objects that may need a metadata relationship update
    • the metadata encapsulates the ‘information object’
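The "X out of Y metadata values are equivalent" test above can be sketched as a simple overlap score on (attribute, value) pairs. The thresholds and sample metadata below are illustrative assumptions, not tuned values.

```python
# Sketch: classify two information objects by metadata overlap.
def metadata_overlap(a, b):
    """Fraction of shared (attribute, value) pairs between two objects."""
    pa, pb = set(a.items()), set(b.items())
    return len(pa & pb) / len(pa | pb)

def classify(a, b, identical=0.9, related=0.5):
    """Map an overlap score to the relationship categories described above."""
    score = metadata_overlap(a, b)
    if score >= identical:
        return "likely identical"        # may need a metadata relationship update
    if score >= related:
        return "related (sibling/cousin)"
    return "unrelated"

# Hypothetical song objects: same recording, different encodings.
song_a = {"title": "Song X", "artist": "Band Y", "year": "1999", "format": "mp3"}
song_b = {"title": "Song X", "artist": "Band Y", "year": "1999", "format": "flac"}
print(classify(song_a, song_b))
```

Here three of the five distinct (attribute, value) pairs match, an overlap of 0.6, so the two files are flagged as related rather than identical; a real system would weight attributes rather than count them equally.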

Another opportunity to organize “Information Asset Objects” would be to leverage the existing DNS platform for managing “Information Asset Repositories”. This additional Internet DNS structure would enable queries across information asset repositories. Please see “So Much Streaming Music, Just Not in One Place” for more details.
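The DNS-style resolution proposed above can be sketched as a two-level lookup: a registrar zone per object type, resolving an object name to the repository that holds it. The zones, object IDs, and repository URLs below are entirely hypothetical.

```python
# Minimal sketch of a DNS-like registry for information asset repositories.
# Structure: {object_type: {object_id: repository_base_url}} -- all made up.
REGISTRY = {
    "medical": {"xray-2016-0042": "https://repo.hospital.example/objects/"},
    "music":   {"song-x":         "https://music.example/catalog/"},
}

def resolve(object_type, object_id):
    """Walk the registrar hierarchy, like a DNS lookup, to locate an object."""
    zone = REGISTRY.get(object_type)
    if zone is None or object_id not in zone:
        return None  # the NXDOMAIN equivalent: no registrar claims this object
    return zone[object_id] + object_id

print(resolve("music", "song-x"))
```

A production version would presumably ride on real DNS records (e.g. TXT or SRV) rather than an in-memory table, so that the same delegation, caching, and federation machinery DNS already provides could be reused.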

The Addiction, The Thinker: Compute Processing and the Human Condition

True story: I am in a therapy group for depression, and why am I depressed?  I think I am about to rediscover the major issue in past, present and hopefully future societies. I have a problem, an addiction, but not to speed, or any other kind of illegal narcotic. It’s raw processing power, compute cycles, random access memory, storage, that is my speed, and I constantly want more, but is this just my problem, or a human condition?

This morning, I was hungry, not for breakfast, but at first for a faster phone. Should I upgrade to the Samsung S4 before my contract is up? I wanted more, and I thought of ways beyond measure to get more computing speed: at first pure physics and mechanics. Then I looked toward nature, and wanted to draw from processes in nature able to compute at faster speeds; then I thought of integration. OK, pull back. I was hungry, and now beyond rational thought. I went back and thought of crunching the ways I could tune a computer, a mobile device, to maximize throughput, but that wasn’t enough. I needed to have the raw processing power locally; I needed to feel connected, right out of the first Star Trek movie, that connection. I was where the road met the rubber room, so to speak. It was nonsense, but I stopped and thought: what if we could do that, where would it end? Just like a mountain you could never quite climb high enough, and then, if the unattainable were attainable, what would we do with this power?

The question, that blinking cursor of the 1980s, and the movie War Games: brilliant. The artificially intelligent learning machine could do just about anything, but all it eventually wanted to do was its basic function, what made it fall in ‘love’ with thinking in the first place: it wanted to play a nice game of chess. It realized, through the pain of a simple game of tic-tac-toe, that there would never be enough wins or losses. Not to play would be the timesaver, and the only joy it had, what it would not waste its time on, is what it loved: its abstract connection, that leap of faith we all make, what we fall in love with for no logical reason we can fathom. And it’s that love of that game, the original game, that men fall in love with, and that, my friend, is women, children, the human connection.

So, go to bed and rest easy knowing there is only the climb, as they say in Game of Thrones: the arts, the abstract love that we all fall into, the understanding of the inexplicable human aspects, art, music, pleasure, pain, sorrow, and joy. Celebrate the arts, because the sciences are far too boring once it has all been done by the societies of the past.

Mobile Devices, Larger RAM, Multi-level caches, and Multi-core chips

As with everyone else in the market creating these devices, it occurs to me that as mobile devices gain more and more memory, e.g. 1 GB of RAM on the Samsung Galaxy S III and Apple iPhone 5, and add CPU cores, especially with touchscreen keys and gestures as well as ‘core or bundled applications’, it IS increasingly important to manage memory in mobile systems the way desktop or server systems manage memory. See Multi-level caches and Multi-core chips in the Wikipedia article CPU cache for two levels of complexity in working with expanding CPU and RAM sets. A person can already create delays in touch typing on these cool new devices when several to many applications are running, processing data in parallel. Beyond expanding the capacity of these devices, CPU and memory management has to be a key factor in maintaining their stability. Maybe this is already implemented, although it is not transparent in the specifications I have seen. Although at present it is not as important or glitzy in marketing literature for selling more devices, and currently negligible to the non-power user, it will become increasingly apparent. At this stage, we are just ‘throwing bodies’ at the problem, i.e. adding more CPU and memory capacity.
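A small demonstration of why cache behavior, not just raw capacity, matters: the same summation done with a cache-friendly (row-major) versus a cache-hostile (column-major) traversal. In Python the effect is muted because lists store pointers, so this is only a sketch of the access-pattern idea; in C-like languages on real hardware the gap is far larger.

```python
# Compare row-major vs column-major traversal of the same matrix.
import timeit

N = 500
matrix = [[1] * N for _ in range(N)]

def row_major():
    total = 0
    for i in range(N):        # walk each row contiguously
        for j in range(N):
            total += matrix[i][j]
    return total

def col_major():
    total = 0
    for j in range(N):        # jump to a different row on every access
        for i in range(N):
            total += matrix[i][j]
    return total

# Both orders compute the same sum; only the memory access pattern differs.
assert row_major() == col_major() == N * N
print("row-major:", timeit.timeit(row_major, number=10))
print("col-major:", timeit.timeit(col_major, number=10))
```

The point for mobile devices is the same as for servers: once RAM and core counts grow, how the software walks memory through the multi-level cache hierarchy starts to dominate perceived responsiveness.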