Increasing RAM on an Android OS to Limitless Computing Capacity

As I implied in other posts, it is possible to expand the computing power of a Google Android device dramatically, with few practical limits. Exploring why Android devices kept growing in CPU power but not in RAM, it seemed the Android model was progressing toward a cloud model: computations that once occurred on the device would instead run on an elastic compute cloud, such as Amazon's Elastic Compute Cloud (EC2), and Google is now expanding into that arena.

At the other end of the spectrum, Apple's iPhone business model made it clear that storage was the focus of their cloud strategy, with no indication of cloud computation. In fact, the initial pricing model suggested there was, philosophically, no road map for cloud computing at all, though that may have had to change to stay competitive, even if Apple would not admit it.

There have been several posts that describe how to hack the Android operating system and add effective RAM using the extendable, on-board microSD. The first step toward increasing your device's computing capacity is to partition that on-board memory, leveraging the Secure Digital card as swap space. The second, evolving step is to use elastic cloud computing, especially at HotSpots or on home WiFi when accessible, to extend your device's capacity to run applications at high performance.
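As a sketch of that first step: on a rooted device whose kernel supports swap, a spare microSD partition can be formatted and enabled as swap. The partition name below is an assumption and varies by device, so treat this as illustrative rather than a recipe.

```shell
# Assumes a rooted device, a swap-capable kernel, and a spare microSD
# partition (/dev/block/mmcblk1p2 here is a placeholder -- check your device).
su                                   # the steps below require a root shell
mkswap /dev/block/mmcblk1p2          # format the spare partition as swap
swapon /dev/block/mmcblk1p2          # enable it; the kernel can now page to the card
free                                 # swap total should now be non-zero
```

Note that microSD is far slower than real RAM, so swap on the card relieves memory pressure but does not match on-board RAM performance.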

There are opportunities to increase HotSpots through public access points, and it will be hard, maybe impossible, for retail to compete with the free expansion of publicly accessible HotSpots. Municipalities may decide to let taxpayers of a community enter a code and, by virtue of local residency, gain access to the municipality's HotSpots. That justifies the expansion, the expenditure, and the increase in revenues and local taxes for the municipality. Municipalities might even allow each local taxpayer a certain number of guest accounts, and charge a discounted fee for additional accounts for transient visitors, such as shoppers who patronize local shops. The question is whether expansion of municipal public-access WiFi would offset the WiFi income potential for retail shops. Many shops already offer free WiFi or partner with national or regional WiFi providers; municipal WiFi could use these third-party vendors to build out its infrastructure and offer such a plan.

The ability to scale up your device for both performance and storage is the sweet spot, and it may entice retail shoppers to shop in a community, bringing additional revenue to a municipality. In addition, local municipalities may offer tax breaks to registered secure WiFi HotSpots, which let local shoppers go through a municipal portal and use the WiFi access. The common proxy portal would let users register a code or pay for local access, just as hotels perform the same service today. Revenue for the municipality would come both from the WiFi access and from retail revenue, i.e. taxes.

The important part of cloud computing, whether for storage or real-time computation, is encryption: storage must be encrypted so the storage company does not hold the keys to the contents, and the processing of information (CPU/RAM) must likewise be protected in real time, with Just In Time (JIT) encryption in the cloud. People need to be able to trust both the containment and the processing of their information within the cloud, and this is one way to make that possible. Each device could use a mechanism like the one already in place at Google and other firms, where applications pass a Client ID and Client Secret to exercise their API. The one challenge with this is that it hands the company holding your information the keys to your kingdom, which requires a trusted party.
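One way to keep the provider out of the key loop is client-side encryption: the device encrypts data with a passphrase it alone holds before anything reaches the cloud. A minimal sketch with the standard openssl command-line tool; the passphrase, filename, and the -pbkdf2 key-derivation option (which requires OpenSSL 1.1.1+) are illustrative assumptions.

```shell
# Encrypt on the device before upload; the cloud only ever sees ciphertext.
# "pass:demo" stands in for a passphrase the user, not the provider, controls.
echo "secret data" | openssl enc -aes-256-cbc -pbkdf2 -salt \
    -pass pass:demo -base64 > blob.txt

# blob.txt is what gets uploaded; the storage provider cannot decrypt it.

# Decrypt locally after download, with the same user-held passphrase.
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:demo -base64 < blob.txt
```

The design point is custody: the provider stores and serves blob.txt but never sees "pass:demo", so trust shifts from the company to the user's own key management.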

An alternate approach might be to let an independent authority control the keys, much like the original structure of the internet, where a single source maintains and controls the allocation of domains (e.g. name.com). The authority that manages domain names under a hierarchy is headed by the Internet Assigned Numbers Authority (IANA), which manages the top of the tree by administering the data in the root nameservers. Many times governments administer the authority themselves; others delegate it. For more, please read the Domain Name Registration article.

Memory wall

The “memory wall” is the growing disparity of speed between CPU and memory outside the CPU chip. An important reason for this disparity is the limited communication bandwidth beyond chip boundaries. From 1986 to 2000, CPU speed improved at an annual rate of 55% while memory speed only improved at 10%. Given these trends, it was expected that memory latency would become an overwhelming bottleneck in computer performance.[5]

Currently, CPU speed improvements have slowed significantly, partly due to major physical barriers and partly because current CPU designs have already hit the memory wall in some sense. Intel summarized these causes in their Platform 2015 documentation (PDF):

“First of all, as chip geometries shrink and clock frequencies rise, the transistor leakage current increases, leading to excess power consumption and heat… Secondly, the advantages of higher clock speeds are in part negated by memory latency, since memory access times have not been able to keep pace with increasing clock frequencies. Third, for certain applications, traditional serial architectures are becoming less efficient as processors get faster (due to the so-called Von Neumann bottleneck), further undercutting any gains that frequency increases might otherwise buy. In addition, partly due to limitations in the means of producing inductance within solid state devices, resistance-capacitance (RC) delays in signal transmission are growing as feature sizes shrink, imposing an additional bottleneck that frequency increases don’t address.”

 
