Discuss briefly the demand paging memory management scheme

In this post, we briefly discuss demand paging, a memory management scheme used by modern operating systems.

Demand paging is a memory management scheme used in modern operating systems to make better use of physical memory (RAM) and to manage each process's memory needs efficiently. With demand paging, the operating system loads only the portions of a program that are actually referenced into physical memory, bringing them in on demand rather than loading the entire program at once. Here are the key aspects of demand paging:

  • Lazy Loading: In demand paging, the OS employs a lazy loading strategy. When a program is launched, only a small portion of it, typically the initial code and data needed for startup, is loaded into memory. The rest of the program’s code and data are loaded from secondary storage (usually a hard disk) as they are accessed.
  • Pages and Frames: Physical memory is divided into fixed-size blocks called “frames,” and a process's logical address space is divided into blocks of the same size called “pages.” When a process references a page that is not currently in a frame of physical memory, a page fault occurs.
  • Page Fault: A page fault is an exception indicating that the requested page is not in physical memory and must be loaded from secondary storage. When a page fault occurs, the OS brings the required page into a free frame, evicting a page that is unlikely to be used soon (chosen by a page-replacement algorithm) if no frame is free; the sketch just after this list shows this fault-driven loading in action.
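To watch this lazy, fault-driven loading from user space, here is a minimal sketch assuming Linux/POSIX: `mmap()` maps a file without reading it, and each page is brought in from disk only when it is first touched. The fault counters reported by `getrusage()` grow as the pages are touched (as major faults if the file is not already in the page cache, as minor faults otherwise).

```c
/* Sketch: demand paging in action with mmap() (assumes Linux/POSIX).
 * mmap() only creates the mapping; each page of the file is read in
 * the first time it is touched, which shows up in the fault counters. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/resource.h>
#include <sys/stat.h>
#include <unistd.h>

static void report(const char *label)
{
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    printf("%-12s minor faults: %ld  major faults: %ld\n",
           label, ru.ru_minflt, ru.ru_majflt);
}

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* The mapping exists now, but no file data has been loaded yet. */
    char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }
    report("after mmap");

    /* Touch one byte per page: each first touch faults that page in. */
    long page = sysconf(_SC_PAGESIZE);
    unsigned long sum = 0;
    for (off_t off = 0; off < st.st_size; off += page)
        sum += (unsigned char)data[off];
    report("after touch");

    printf("checksum %lu over %lld bytes\n", sum, (long long)st.st_size);
    munmap(data, st.st_size);
    close(fd);
    return 0;
}
```

Compile it with any C compiler and run it against a reasonably large file to see the counters move between the two reports.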
Figure: Steps in handling a page fault

The procedure for handling a page fault is straightforward; a toy simulation of these steps follows the numbered list below:

  1. We check an internal table (usually kept with the process control block) for this process to determine whether the reference was a valid or an invalid memory access.
  2. If the reference was invalid, we terminate the process. If it was valid, but we have not yet brought in that page, we now page it in.
  3. We find a free frame (by taking one from the free-frame list, for example).
  4. We schedule a disk operation to read the desired page into the newly allocated frame.
  5. When the disk read is complete, we modify the internal table kept with the process and the page table to indicate that the page is now in memory.
  6. We restart the instruction that was interrupted by the trap. The process can now access the page as though it had always been in memory.
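The six steps can be mirrored in a toy user-space simulation. This is only an illustrative sketch: the page table is a plain array with a valid bit, the “disk read” is a copy from an in-memory `backing_store` array, the victim is chosen round-robin as a stand-in for a real replacement algorithm, and every name here (`handle_page_fault`, `NUM_FRAMES`, and so on) is invented for the example rather than taken from any real kernel.

```c
/* Toy simulation of the six steps above; all names and structures here
 * are invented for illustration and are not a real kernel's internals. */
#include <stdbool.h>
#include <stdio.h>

#define NUM_PAGES  8                      /* pages in the toy address space */
#define NUM_FRAMES 3                      /* frames of "physical memory"    */

typedef struct { bool valid; int frame; } pte_t;   /* page-table entry */

static pte_t page_table[NUM_PAGES];
static int   frame_holds[NUM_FRAMES];     /* which page each frame holds    */
static int   free_frames = NUM_FRAMES;    /* size of the free-frame list    */
static int   next_victim = 0;             /* round-robin victim pointer     */
static int   backing_store[NUM_PAGES] = {10, 11, 12, 13, 14, 15, 16, 17};
static int   memory[NUM_FRAMES];          /* simulated physical memory      */

static void handle_page_fault(int page)
{
    int frame;
    if (free_frames > 0) {                        /* step 3: take a free frame */
        frame = NUM_FRAMES - free_frames;
        free_frames--;
    } else {                                      /* no free frame: evict one  */
        frame = next_victim;                      /* stand-in replacement rule */
        next_victim = (next_victim + 1) % NUM_FRAMES;
        page_table[frame_holds[frame]].valid = false;
    }
    memory[frame] = backing_store[page];          /* step 4: the "disk read"   */
    frame_holds[frame] = page;
    page_table[page].frame = frame;               /* step 5: update page table */
    page_table[page].valid = true;
    printf("  page fault: loaded page %d into frame %d\n", page, frame);
}

static int access_page(int page)
{
    if (page < 0 || page >= NUM_PAGES) {          /* steps 1-2: validity check */
        printf("  invalid reference\n");
        return -1;
    }
    if (!page_table[page].valid)                  /* page not in memory        */
        handle_page_fault(page);
    return memory[page_table[page].frame];        /* step 6: redo the access   */
}

int main(void)
{
    int refs[] = {0, 1, 2, 0, 3, 4, 1};           /* sample reference string   */
    int n = sizeof refs / sizeof refs[0];
    for (int i = 0; i < n; i++) {
        printf("access page %d\n", refs[i]);
        printf("  value = %d\n", access_page(refs[i]));
    }
    return 0;
}
```

Because the toy memory has only three frames, the later references force evictions, and the evicted pages fault again when they are next accessed.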
  • Page Replacement: Page-replacement algorithms (e.g., Least Recently Used, FIFO, or Second Chance) decide which page to evict from physical memory when a page fault occurs and no frame is free. The goal is to minimize page faults and make the best use of memory; a small simulation at the end of this post counts the faults FIFO produces on a sample reference string.
  • Copy-On-Write: To optimize memory usage, many operating systems implement copy-on-write. When a process creates a child process, the child initially shares the parent's memory pages. If either process attempts to modify a shared page, a private copy of that page is made for the process performing the write; a short demonstration follows this list.
  • Backing Store (Swap Space): The OS uses secondary storage (often referred to as swap space or a backing store) to store pages that are not currently in physical memory. Swap space is typically a portion of the hard disk reserved for this purpose.
  • Page Table: The OS maintains a page table for each process, mapping virtual pages to physical frames. Entries for pages that are not currently in memory are marked invalid (see the figure below); when such a page is referenced, the resulting page fault tells the OS to locate the page on secondary storage and bring it in.
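As a quick check of the copy-on-write behaviour described above (a minimal sketch assuming Linux/POSIX; the 32 MiB buffer size is an arbitrary choice), the child's first write to each inherited page shows up as a minor page fault while the kernel copies that page, and the parent's data is left untouched:

```c
/* Sketch: observing copy-on-write after fork() (assumes Linux/POSIX). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/resource.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    size_t len = 32UL * 1024 * 1024;          /* 32 MiB, arbitrary size     */
    char *buf = malloc(len);
    if (!buf) return 1;
    memset(buf, 1, len);                      /* parent touches every page  */

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }
    if (pid == 0) {                           /* child shares pages (COW)   */
        struct rusage before, after;
        getrusage(RUSAGE_SELF, &before);
        memset(buf, 2, len);                  /* writes force per-page copies */
        getrusage(RUSAGE_SELF, &after);
        printf("child's write faults (roughly one per copied page): %ld\n",
               after.ru_minflt - before.ru_minflt);
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    printf("parent still sees %d (its pages were never modified)\n", buf[0]);
    free(buf);
    return 0;
}
```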
Figure: Page table when some pages are not in main memory
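Finally, to make the page-replacement point concrete, here is a small sketch that counts the page faults FIFO replacement produces for a sample reference string in three frames; both the reference string and the frame count are arbitrary choices for illustration, not taken from the post.

```c
/* Sketch: counting page faults under FIFO replacement for a
 * reference string (values and frame count chosen arbitrarily). */
#include <stdio.h>

int main(void)
{
    int refs[]   = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2};
    int nrefs    = sizeof refs / sizeof refs[0];
    int frames[] = {-1, -1, -1};          /* 3 frames, all empty (-1)      */
    int nframes  = 3;
    int oldest   = 0;                     /* FIFO: index of oldest frame   */
    int faults   = 0;

    for (int i = 0; i < nrefs; i++) {
        int page = refs[i], present = 0;

        for (int j = 0; j < nframes; j++)         /* is the page resident? */
            if (frames[j] == page) { present = 1; break; }

        if (!present) {                           /* page fault            */
            frames[oldest] = page;                /* fill empty slot or    */
            oldest = (oldest + 1) % nframes;      /* replace oldest page   */
            faults++;
        }
        printf("ref %d -> %s\n", page, present ? "hit" : "fault");
    }
    printf("total page faults: %d\n", faults);
    return 0;
}
```

Swapping the FIFO pointer for a least-recently-used bookkeeping scheme would typically reduce the fault count on the same reference string, which is exactly the trade-off the replacement algorithms above are designed around.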
