How Main Memory Maps to Cache Memory

Cache memory is much smaller than main memory, so only a small portion of main memory's contents can be held in the cache at once. A specific mapping technique is therefore used to decide where in the cache each block of main memory will be stored.

One common technique is called:


Direct-Mapped Cache

In direct-mapped cache, each block of main memory maps to exactly one location (one cache line) in cache memory.


Mapping Process – Step by Step

Let’s say the CPU wants to access an address in main memory (e.g., 0x000F_0824). The cache controller splits this memory address into three parts:

  1. Tag Field – identifies which block of main memory is stored in the cache line
  2. Set Index Field – tells which cache line this memory block goes to
  3. Data Index Field – points to the specific word/byte inside the cache line
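As a concrete sketch, assume a hypothetical direct-mapped cache with 32-byte lines and 256 sets (these parameters are not from the text; other geometries just change the bit widths). The three fields can then be extracted with shifts and masks:

```python
# Hypothetical cache geometry (assumed for illustration, not from the text):
LINE_SIZE = 32   # bytes per cache line -> 5 data-index (offset) bits
NUM_SETS = 256   # lines in the cache  -> 8 set-index bits

OFFSET_BITS = LINE_SIZE.bit_length() - 1   # 5
INDEX_BITS = NUM_SETS.bit_length() - 1     # 8

def split_address(addr):
    """Split an address into (tag, set_index, data_index)."""
    data_index = addr & (LINE_SIZE - 1)             # byte within the line
    set_index = (addr >> OFFSET_BITS) & (NUM_SETS - 1)
    tag = addr >> (OFFSET_BITS + INDEX_BITS)        # everything above
    return tag, set_index, data_index

tag, set_index, data_index = split_address(0x000F_0824)
print(hex(tag), set_index, data_index)   # -> 0x78 65 4
```

With this assumed geometry, address 0x000F_0824 lands in set 65 at byte 4, with tag 0x78; a different line size or set count would split the same address differently.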

Example: Address 0x824

Suppose address 0x824 is being accessed.

  • The Set Index chooses which cache line it will go to (say line 68).
  • The Data Index tells which word within that cache line is used.
  • The Tag is checked against the stored tag in that line to ensure it’s the correct data.

If the tag matches and the valid bit is set ➤ Cache Hit
If not, it’s a ➤ Cache Miss, and the data must be fetched from main memory into the cache.
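The hit/miss check can be sketched as follows, again assuming a hypothetical 256-set cache with 32-byte lines (parameters chosen for illustration). Each line stores a valid bit and a tag; a lookup compares the address's tag against the stored one:

```python
# Minimal direct-mapped lookup sketch (assumed: 256 sets, 32-byte lines).
NUM_SETS, OFFSET_BITS, INDEX_BITS = 256, 5, 8

# Each cache line holds a valid bit and a tag; all lines start invalid.
lines = [{"valid": False, "tag": None} for _ in range(NUM_SETS)]

def access(addr):
    set_index = (addr >> OFFSET_BITS) & (NUM_SETS - 1)
    tag = addr >> (OFFSET_BITS + INDEX_BITS)
    line = lines[set_index]
    if line["valid"] and line["tag"] == tag:
        return "hit"                       # valid bit set AND tag matches
    # Miss: cache line fill (evicting whatever was stored here before).
    line["valid"], line["tag"] = True, tag
    return "miss"

print(access(0x824))   # miss – the line was invalid, so it is filled
print(access(0x824))   # hit – valid bit set and tag now matches
```

The first access to any address misses (a cold miss) and fills the line; a repeat access to the same line then hits.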


What Happens on Cache Miss?

  • The entire cache line is loaded from main memory into the cache.
  • This is called a cache line fill.
  • If the existing line had valid but unrelated data ➤ it gets evicted.
  • This removal is called cache eviction.

Eviction = Remove current data from cache to make space for new memory block.


Data Streaming

While the full cache line is being loaded on a cache miss, the processor may begin using the first word as soon as it is fetched, even before the entire line is filled. This is called data streaming, and it improves performance by hiding part of the miss latency.


Thrashing

Thrashing happens when two or more blocks of main memory map to the same cache line and keep evicting each other repeatedly.


Example: Routine A and B Mapping to Same Line

  • Suppose Routine A and Routine B are at memory addresses that map to the same cache set index.
  • In a loop, if:
    • First A is called → goes into cache
    • Then B is called → evicts A
    • Next loop: A is called again → evicts B
    • Repeat…

This repeated loading and eviction is called thrashing. It results in:

  • Poor performance
  • More cache misses
  • Less benefit from cache

Thrashing = Two blocks fighting for the same cache space, causing repeated eviction.
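The A/B loop above can be simulated with the same sketch: in a hypothetical cache of 256 sets × 32-byte lines (8 KB total, assumed for illustration), two addresses exactly 8 KB apart fall into the same set, so alternating between them misses on every single access:

```python
# Thrashing sketch: addresses 0x0000 and 0x2000 are 8 KB apart, so in an
# assumed 8 KB direct-mapped cache (256 sets x 32 B) they share set 0.
NUM_SETS, OFFSET_BITS, INDEX_BITS = 256, 5, 8
lines = [None] * NUM_SETS   # stored tag per line (None = invalid)

def access(addr):
    set_index = (addr >> OFFSET_BITS) & (NUM_SETS - 1)
    tag = addr >> (OFFSET_BITS + INDEX_BITS)
    if lines[set_index] == tag:
        return "hit"
    lines[set_index] = tag   # eviction + cache line fill
    return "miss"

routine_a, routine_b = 0x0000, 0x2000    # hypothetical routine addresses
results = [access(addr) for addr in [routine_a, routine_b] * 4]
print(results.count("miss"), "misses out of", len(results))  # 8 out of 8
```

Every call evicts the other routine's line, so the cache never helps. Placing the two routines at addresses with different set indexes (or using a set-associative cache) breaks this pattern.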
