Cache mapping is the technique by which the contents of the main memory are brought into the cache memory. Three distinct types of mapping are used for cache memory.
In this article, we will take a look at the Cache Mapping according to the GATE Syllabus for CSE (Computer Science Engineering). Read ahead to learn more.
Table of Contents
- What is Cache Mapping?
- Process of Cache Mapping
- Techniques of Cache Mapping
- Frequently Asked Questions
What is Cache Mapping?
The cache memory bridges the speed mismatch between the main memory and the processor. Whenever a cache hit occurs,
- The required word is present in the cache memory.
- The required word is then delivered from the cache memory to the CPU.
And, whenever a cache miss occurs,
- The required word is not present in the cache memory.
- The block containing the required word must be mapped from the main memory into the cache.
- Such mapping is performed using one of several cache mapping techniques.
Let us discuss different techniques of cache mapping in this article.
Process of Cache Mapping
The process of cache mapping defines how a certain block present in the main memory gets mapped into the cache memory in the case of a cache miss.
In simpler words, cache mapping is the technique by which blocks of the main memory are brought into the cache memory. Here is a diagram that illustrates the process of mapping:
Now, before we proceed, it is crucial that we note these points:
Important Note:
- The main memory is divided into multiple partitions of equal size, known as blocks or frames.
- The cache memory is divided into partitions of the same size as the blocks, known as lines.
- During cache mapping, the main memory block is simply copied to the cache; the block is not removed from the main memory.
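The partitioning described above can be sketched in a few lines of Python. The 64-byte block size below is an assumed example value, not something fixed by the article:

```python
# Sketch: partitioning main memory into equal-sized blocks.
# BLOCK_SIZE is an assumed example value (64 bytes per block).
BLOCK_SIZE = 64

def block_number(byte_address: int) -> int:
    """Return the main-memory block that contains this byte address."""
    return byte_address // BLOCK_SIZE

def offset_in_block(byte_address: int) -> int:
    """Return the byte's position within its block."""
    return byte_address % BLOCK_SIZE

print(block_number(200))     # address 200 falls in block 3 (bytes 192..255)
print(offset_in_block(200))  # at offset 8 within that block
```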
Techniques of Cache Mapping
One can perform the process of cache mapping using the following three techniques:
1. Direct Mapping
2. Fully Associative Mapping
3. K-way Set Associative Mapping
1. Direct Mapping
In the case of direct mapping, a particular block of the main memory can map only to one particular line of the cache. The line number of the cache to which a given block can map is given by:
Cache line number = (Main Memory Block Address) mod (Total number of lines in the cache)
For example,
- Consider a cache memory that is divided into ‘n’ lines.
- Then, block ‘j’ of the main memory can map only to line number (j mod n) of the cache.
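The direct-mapping rule above can be sketched as a one-line function. The 4-line cache below is an assumed example size:

```python
# Sketch of the direct-mapping rule: block j of main memory
# maps to cache line (j mod n), where n is the number of lines.
def direct_mapped_line(block_address: int, num_lines: int) -> int:
    return block_address % num_lines

n = 4  # example cache with 4 lines (assumed value)
for j in range(8):
    print(f"main-memory block {j} -> cache line {direct_mapped_line(j, n)}")
```

Note that blocks 0 and 4 both map to line 0, which is why a new block must always evict the block already in that line.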
The Need for Replacement Algorithm
In the case of direct mapping,
- There is no need for a replacement algorithm.
- This is because a main memory block can map only to one particular line of the cache.
- Thus, the new incoming block always replaces the existing block (if any) in that particular line.
Division of Physical Address
In the case of direct mapping, the physical address is divided into three fields: the tag, the line number, and the block/word offset.
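As a sketch of how these address fields are extracted, the bit widths below are assumed example values (16 cache lines and 64-byte blocks):

```python
# Sketch: splitting a physical address for direct mapping into
# tag | line number | block offset fields.
# The bit widths are assumed example values.
OFFSET_BITS = 6   # 2**6 = 64-byte blocks (example)
LINE_BITS = 4     # 2**4 = 16 cache lines (example)

def split_address(addr: int):
    """Return (tag, line, offset) for a physical address."""
    offset = addr & ((1 << OFFSET_BITS) - 1)
    line = (addr >> OFFSET_BITS) & ((1 << LINE_BITS) - 1)
    tag = addr >> (OFFSET_BITS + LINE_BITS)
    return tag, line, offset

tag, line, offset = split_address(0xABCD)
print(tag, line, offset)  # the tag is compared; the line indexes the cache
```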
2. Fully Associative Mapping
In the case of fully associative mapping,
- A main memory block can map to any line of the cache that is freely available at that moment.
- This makes fully associative mapping more flexible than direct mapping.
For Example
Let us consider the scenario given as follows:
Here, we can see that,
- Every line of the cache is freely available.
- Thus, any block of the main memory can map to any line of the cache.
- If all the cache lines are occupied, one of the existing blocks has to be replaced.
The Need for Replacement Algorithm
In the case of fully associative mapping,
- A replacement algorithm is always required.
- The replacement algorithm selects the block to be replaced whenever all the cache lines are occupied.
- Replacement algorithms such as the LRU algorithm, the FCFS algorithm, etc., are employed.
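A minimal sketch of a fully associative cache with LRU replacement follows; the 3-line cache size and the reference string are assumed example values:

```python
from collections import OrderedDict

# Sketch: fully associative cache with LRU replacement.
# Any block may occupy any line; when all lines are full,
# the least recently used block is evicted.
class FullyAssociativeLRU:
    def __init__(self, num_lines: int):
        self.num_lines = num_lines
        self.lines = OrderedDict()  # block -> None, ordered by recency

    def access(self, block: int) -> bool:
        """Access a block; return True on a hit, False on a miss."""
        if block in self.lines:
            self.lines.move_to_end(block)   # refresh recency on a hit
            return True
        if len(self.lines) >= self.num_lines:
            self.lines.popitem(last=False)  # evict the LRU block
        self.lines[block] = None
        return False

cache = FullyAssociativeLRU(num_lines=3)
hits = [cache.access(b) for b in [1, 2, 3, 1, 4, 2]]
print(hits)  # [False, False, False, True, False, False]
```

The fourth access (block 1) hits because block 1 is still resident; the fifth access (block 4) finds all three lines full and evicts block 2, the least recently used.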
Division of Physical Address
In the case of fully associative mapping, the physical address is divided into two fields: the tag and the block/word offset.
3. K-way Set Associative Mapping
In the case of k-way set associative mapping,
- The cache lines are grouped into sets, where each set consists of k lines.
- Any given main memory block can map only to one particular cache set.
- However, within that set, the memory block can map to any cache line that is freely available.
- The cache set to which a particular main memory block can map is given by:
Cache set number = (Main Memory Block Address) mod (Total number of sets in the cache)
For Example
Let us consider the example given as follows of a two-way set-associative mapping:
In this case,
- k = 2 suggests that every set consists of two cache lines.
- Since the cache consists of 6 lines, the total number of sets in the cache = 6 / 2 = 3 sets.
- Block ‘j’ of the main memory can map only to set number (j mod 3) of the cache.
- Within that set, block ‘j’ can map to any cache line that is freely available at that moment.
- If all the cache lines of that set are occupied, then one of the existing blocks has to be replaced.
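The two-way example above can be sketched directly, using the same figures from the text (k = 2 lines per set, 6 lines, hence 3 sets):

```python
# Sketch of the example above: a 2-way set-associative cache
# with 6 lines, i.e. 6 / 2 = 3 sets.
K = 2                       # lines per set
NUM_LINES = 6
NUM_SETS = NUM_LINES // K   # = 3

def cache_set(block_address: int) -> int:
    """Set to which a main-memory block maps: (block address) mod (number of sets)."""
    return block_address % NUM_SETS

for j in range(6):
    print(f"main-memory block {j} -> set {cache_set(j)}")
```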
The Need for Replacement Algorithm
In the case of k-way set associative mapping,
- K-way set associative mapping is a combination of direct mapping and fully associative mapping.
- It uses fully associative mapping within each set.
- Therefore, k-way set associative mapping requires a replacement algorithm.
Division of Physical Address
In the case of k-way set associative mapping, the physical address is divided into three fields: the tag, the set number, and the block/word offset.
Special Cases
- If k = 1, k-way set associative mapping becomes direct mapping. Thus,
Direct Mapping = one-way set associative mapping
- If k = the total number of lines present in the cache, then k-way set associative mapping becomes fully associative mapping.
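Both special cases can be checked numerically: with n cache lines, the number of sets is n / k, so k = 1 gives n sets of one line each (direct mapping), and k = n gives a single set containing every line (fully associative). The 8-line cache below is an assumed example size:

```python
# Sketch: number of sets as a function of the associativity k,
# for an example cache of n = 8 lines (assumed value).
n = 8  # total cache lines

for k in [1, 2, 4, 8]:
    num_sets = n // k
    kind = ("direct mapping" if k == 1
            else "fully associative mapping" if k == n
            else f"{k}-way set associative mapping")
    print(f"k = {k}: {num_sets} set(s) -> {kind}")
```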
Frequently Asked Questions on Cache Mapping
What is cache mapping and its type?
Cache mapping is the technique by which the contents of the main memory are brought into the cache memory. Three distinct types of mapping are used for cache memory: direct, fully associative, and set-associative mapping.
Why do we need cache mapping?
Cache memory is a special, very high-speed memory used to speed up and synchronize with a high-speed CPU. The cache holds frequently requested data and instructions so that they are immediately available to the CPU when needed. Cache memory reduces the average time required to access data and information from the main memory.
What is cache in simple terms?
A cache, in simple terms, is a block of memory used for storing data that is likely to be used again. Hard drives and CPUs often make use of a cache, just as web servers and web browsers do. A cache is made up of numerous entries, collectively known as a pool.
What are the 3 types of cache memory?
The three types of general cache are:
- The L1 cache, also known as the primary cache, is very fast but relatively small. It is usually embedded in the processor chip as the CPU cache.
- The L2 cache, also known as the secondary cache, often has a larger capacity than the L1 cache.
- The Level 3 (L3) cache is a specialized memory developed to improve the performance of the L1 and L2 caches.