Computer Organization Notes

Memory Hierarchy

In practice, a memory system is a hierarchy of storage devices with different capacities, costs, and access times. This hierarchical arrangement of storage in current computer architectures is called the memory hierarchy, and it is designed to take advantage of memory locality in computer programs. Because no single type of storage is superior in access speed, capacity, and cost at the same time, most computer systems use a hierarchy of storage technologies known as the storage hierarchy or memory hierarchy.
Each level of the hierarchy is faster, lower in latency, and smaller than the levels below it. Memory hierarchies work because well-written programs tend to access the storage at any particular level more frequently than they access the storage at the next lower level, so the storage at the next level can be slower, and thus larger and cheaper per bit. The overall effect is a large pool of memory that costs about as much per bit as the cheap storage near the bottom of the hierarchy, but that serves data to programs at roughly the rate of the fast storage near the top.
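The locality effect described above can be seen directly in code. The sketch below (illustrative only; the timing gap is much smaller in interpreted Python than in C, and the matrix size and function names are my own choices) sums the same square matrix in two traversal orders. The row-major order touches neighboring elements on consecutive accesses, so data fetched from the next, slower level of the hierarchy is reused; the column-major order jumps a full row ahead on each access and wastes those fetches.

```python
import time

def sum_row_major(matrix):
    # Rows first: consecutive accesses touch neighboring elements,
    # so data brought in from the next (slower) level is reused --
    # good spatial locality.
    total = 0
    for row in matrix:
        for value in row:
            total += value
    return total

def sum_column_major(matrix):
    # Columns first: each access jumps a whole row ahead, so data
    # brought in from the next level is rarely reused -- poor locality.
    n = len(matrix)
    total = 0
    for j in range(n):
        for i in range(n):
            total += matrix[i][j]
    return total

n = 500
matrix = [[1] * n for _ in range(n)]

start = time.perf_counter()
row_sum = sum_row_major(matrix)
row_time = time.perf_counter() - start

start = time.perf_counter()
col_sum = sum_column_major(matrix)
col_time = time.perf_counter() - start

# Both traversals compute the same result; the row-major one tends
# to run faster because it cooperates with the memory hierarchy.
assert row_sum == col_sum == n * n
```

In a compiled language the same experiment on a large matrix typically shows a severalfold speed difference, which is the memory hierarchy at work rather than any difference in the amount of arithmetic.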

The memory hierarchy in most computers is as follows:
* Processor registers – fastest possible access (usually 1 CPU cycle), only hundreds of bytes in size
* Level 1 (L1) cache – often accessed in just a few cycles, usually tens of kilobytes
* Level 2 (L2) cache – higher latency than L1 by 2× to 10×, often 512 KB or more
* Level 3 (L3) cache – (optional) higher latency than L2, often several megabytes
* Main memory (DRAM) – may take hundreds of cycles, but can be multiple gigabytes
* Disk storage – hundreds of thousands of cycles of latency, but very large
The memory hierarchy is often drawn as a pyramid that includes registers, cache, main memory, secondary storage, and offline storage. Moving up the pyramid, storage elements have faster access times, higher cost per bit, and smaller capacity; hence registers have the fastest access time. Cache memory comes next in the hierarchy; it is used to speed up processing by making the current programs and data available to the CPU at a rapid rate. Next is main memory, which holds the currently executing programs. Below this come secondary storage devices such as magnetic tape and magnetic disk, and offline storage devices such as compact discs fall at the bottom of the hierarchy. A memory hierarchy is a cost-efficient way of designing a computer system with a very large storage capacity.
As one goes down the hierarchy, the following occurs:
(i) Increasing access time
(ii) Increasing capacity
(iii) Decreasing cost per bit
(iv) Decreasing frequency of access of the memory by the CPU
Thus smaller, more expensive, faster memories are supplemented by larger, cheaper, and slower memories. The overall goal of the hierarchy is to obtain the highest possible average access speed while minimizing the total cost of the entire memory system.
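The "average access speed" goal can be made concrete with a common textbook model of a two-level hierarchy (the function name and the sample latencies below are my own illustrative choices): every reference probes the cache, and on a miss the slower main memory is accessed in addition.

```python
def average_access_time(hit_ratio, t_cache_ns, t_memory_ns):
    # Hierarchical-access model: every reference pays the cache
    # access time; the fraction that misses also pays the main
    # memory access time.
    return hit_ratio * t_cache_ns + (1 - hit_ratio) * (t_cache_ns + t_memory_ns)

# With a 95% hit ratio, a 1 ns cache, and 100 ns main memory:
t = average_access_time(0.95, 1, 100)
# t = 0.95 * 1 + 0.05 * (1 + 100) = 6.0 ns -- far closer to cache
# speed than to main-memory speed, even though the cache is a tiny
# fraction of the total capacity.
```

This is why a high hit ratio is the crucial parameter: the small, fast level at the top of the pyramid determines the effective speed of the whole system as long as most references hit it.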


