Virtual memory is a game-changer in memory management. It creates the illusion of a vast address space for processes, allowing programs to use more memory than is physically available. This clever trick uses secondary storage as an extension of main memory.

Paging is the secret sauce behind virtual memory's magic. It divides memory into fixed-size blocks called pages, enabling non-contiguous allocation and efficient memory sharing between processes. This system also supports fine-grained memory protection and clever techniques like copy-on-write.

Virtual memory and memory management

Abstraction and Illusion of Large Address Space

  • Virtual memory creates an abstraction of physical memory providing processes with the illusion of a large, contiguous address space
  • Programs utilize more memory than physically available by using secondary storage (hard drives, SSDs) as an extension of main memory
  • Enables efficient memory allocation and deallocation leading to better memory utilization and process isolation
  • Manages mapping between virtual addresses (used by processes) and physical addresses (in main memory)

Memory Protection and Demand Paging

  • Prevents processes from accessing memory outside their allocated space enhancing system security
  • Supports implementation of demand paging, where only required portions of a program are loaded into main memory
    • Example: Large application loads only currently used modules (text editor loads spelling check only when needed)
    • Example: Operating system loads device drivers on-demand rather than at boot time
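The on-demand loading described above can be sketched as a tiny simulation. Everything here is illustrative: the class name, the dict-based "disk", and the module names are made up for the example, not any real OS interface.

```python
# Hypothetical sketch of demand paging: pages live on "disk" and are
# loaded into memory only on first access, never up front.

class DemandPager:
    def __init__(self, program_pages):
        self.disk = dict(program_pages)   # all pages start on disk
        self.memory = {}                  # resident pages, empty at start
        self.page_faults = 0

    def access(self, vpn):
        if vpn not in self.memory:        # page fault: bring page in on demand
            self.page_faults += 1
            self.memory[vpn] = self.disk[vpn]
        return self.memory[vpn]

pager = DemandPager({0: "main", 1: "editor", 2: "spellcheck"})
pager.access(0)               # fault: load page 0
pager.access(0)               # hit: already resident
pager.access(2)               # fault: spellcheck loaded only when used
print(pager.page_faults)      # 2
print(len(pager.memory))      # 2 (page 1 was never needed, never loaded)
```

Note that the "editor" page is never touched, so it never consumes memory, which is exactly the benefit the examples above describe.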

Paging and its benefits

Paging Mechanism and Memory Management

  • Divides both physical and virtual memory into fixed-size blocks called pages
  • Virtual address space split into virtual pages while physical memory divided into page frames of the same size
  • Eliminates external fragmentation by allowing non-contiguous allocation of physical memory
  • Supports efficient memory allocation and deallocation managing only whole pages
  • Facilitates sharing of memory between processes allowing multiple virtual pages to map to the same physical page frame
    • Example: Shared libraries (libc) mapped to same physical pages for multiple processes
    • Example: Copy-on-write for efficient process forking (child process shares parent's pages until modification)
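The shared-mapping idea in the bullets above can be shown with two toy per-process page tables. The frame numbers here are arbitrary, chosen only to illustrate that frames need not be contiguous and that one frame can back pages in multiple processes.

```python
# Illustrative VPN -> PFN page tables for two processes sharing a
# library page (think libc). Frame numbers are made up.

page_table_a = {0: 7, 1: 3, 2: 12}   # process A: frames are non-contiguous
page_table_b = {0: 5, 1: 9, 2: 12}   # process B: VPN 2 hits the same frame

# Virtual page 2 in both processes resolves to physical frame 12,
# so the shared library occupies memory exactly once.
assert page_table_a[2] == page_table_b[2]
print(page_table_a[2])  # 12
```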

Memory Protection and Efficiency

  • Enables fine-grained memory protection by setting access rights at the page level
    • Example: Read-only pages for code segments, read-write for data segments
    • Example: No-execute (NX) bit to prevent code execution from data pages
  • Supports implementation of copy-on-write techniques improving memory efficiency for process forking
    • Initially shares all pages between parent and child processes
    • Creates separate copy of a page only when one process attempts to modify it
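The two copy-on-write bullets above can be sketched with a reference-counted frame. This is a minimal model, not how a kernel actually implements COW: the class, the `refs` field, and the `write` helper are all hypothetical.

```python
# Minimal copy-on-write sketch: parent and child share a frame until
# one of them writes, at which point a private copy is made.

class COWPage:
    def __init__(self, data):
        self.data = data
        self.refs = 1                    # page tables pointing at this frame

frame = COWPage("hello")
frame.refs = 2                           # fork: parent and child both map it
parent, child = frame, frame

def write(proc_frame, new_data):
    if proc_frame.refs > 1:              # shared frame: copy before writing
        proc_frame.refs -= 1
        proc_frame = COWPage(proc_frame.data)
    proc_frame.data = new_data
    return proc_frame

child = write(child, "world")            # only now does the child get a copy
print(parent.data, child.data)           # hello world
```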

Address translation with page tables

Page Table Structure and Address Translation Process

  • Page tables store mapping between virtual page numbers (VPNs) and physical page frame numbers (PFNs)
  • Virtual address typically divided into a virtual page number (VPN) and a page offset
  • VPN used as an index into the page table to retrieve the corresponding physical page frame number (PFN)
  • Page offset combined with PFN to form the complete physical address
    • Example: 32-bit virtual address with 4KB pages
      • Upper 20 bits form VPN, lower 12 bits form offset
      • Page table entry contains 20-bit PFN
      • Final physical address combines 20-bit PFN with 12-bit offset
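The 32-bit/4KB example above works out directly in code. The page table entry here is invented for illustration; only the bit arithmetic follows the example.

```python
# Worked translation for a 32-bit virtual address with 4KB pages:
# upper 20 bits are the VPN, lower 12 bits are the page offset.

PAGE_SHIFT = 12                               # 4KB pages -> 12 offset bits
page_table = {0x12345: 0x00042}               # VPN -> PFN (hypothetical entry)

def translate(vaddr):
    vpn = vaddr >> PAGE_SHIFT                 # upper 20 bits
    offset = vaddr & ((1 << PAGE_SHIFT) - 1)  # lower 12 bits
    pfn = page_table[vpn]                     # page table lookup
    return (pfn << PAGE_SHIFT) | offset       # PFN concatenated with offset

print(hex(translate(0x12345678)))  # 0x42678
```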

Translation Lookaside Buffers and Page Table Entries

  • Translation Lookaside Buffers (TLBs) cache recent address translations improving performance
    • Hardware cache storing recently used page table entries
    • Reduces number of memory accesses required for address translation
  • Page table entries often include additional metadata such as valid bits, dirty bits, and access rights
    • Valid bit indicates if page is currently in memory
    • Dirty bit shows if page has been modified since loaded
    • Access rights specify read, write, execute permissions for the page
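A toy TLB in front of a page table makes the caching behavior above concrete. The capacity, the dict-based structure, and the LRU eviction are illustrative choices, not any real CPU's design.

```python
from collections import OrderedDict

# Toy TLB: hits skip the page-table lookup; misses walk the table and
# fill the TLB, evicting the least recently used entry when full.

class TLB:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = OrderedDict()            # VPN -> PFN, in LRU order
        self.hits = self.misses = 0

    def lookup(self, vpn, page_table):
        if vpn in self.entries:
            self.hits += 1
            self.entries.move_to_end(vpn)       # refresh recency
            return self.entries[vpn]
        self.misses += 1                        # miss: consult the page table
        pfn = page_table[vpn]
        if len(self.entries) >= self.capacity:  # evict least recently used
            self.entries.popitem(last=False)
        self.entries[vpn] = pfn
        return pfn

tlb = TLB()
table = {v: v + 100 for v in range(10)}
for vpn in [1, 2, 1, 1, 3]:
    tlb.lookup(vpn, table)
print(tlb.hits, tlb.misses)  # 2 3
```

Repeated accesses to VPN 1 hit the TLB, which is the memory-access saving the bullets describe.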

Performance of page size vs page table structure

Page Size Considerations

  • Larger page sizes reduce number of entries in page table decreasing memory overhead and TLB misses
    • Example: 4KB pages vs 2MB pages in x86-64 systems
    • Fewer TLB entries needed to cover same amount of memory
  • Smaller page sizes provide finer-grained memory allocation and reduce internal fragmentation
    • Example: 4KB pages waste less space for small allocations compared to 2MB pages
  • Page size affects granularity of data transfer between main memory and secondary storage during paging operations
    • Larger pages may lead to unnecessary data transfer
    • Smaller pages increase number of I/O operations
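The trade-off above is easy to quantify with back-of-envelope arithmetic. The 1 GiB region size is an arbitrary example; the formulas are just "region / page size" and "worst-case internal fragmentation is one byte short of a page".

```python
# Entries needed to map a region, and worst-case internal waste,
# for the 4KB vs 2MB comparison above.

def page_stats(region_bytes, page_size):
    entries = region_bytes // page_size   # page-table entries to map region
    worst_waste = page_size - 1           # worst-case internal fragmentation
    return entries, worst_waste

GIB = 1 << 30
print("4KB:", page_stats(GIB, 4 << 10))   # (262144, 4095)
print("2MB:", page_stats(GIB, 2 << 20))   # (512, 2097151)
```

Large pages need 512x fewer entries (and TLB slots) to cover the same gigabyte, but a small allocation can waste up to ~2 MB instead of ~4 KB.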

Page Table Structures and Their Impact

  • Multi-level page tables reduce memory consumption for sparse address spaces but increase number of memory accesses for translation
    • Example: x86-64 uses 4-level page tables
    • Allows efficient representation of large, sparsely used address spaces
  • Inverted page tables save memory in systems with large virtual address spaces but may increase lookup time
    • Hash table-like structure indexed by physical frame number
    • Requires search operation to find matching virtual address
  • Choice of page table structure impacts time and space complexity of address translation operations
    • Tree-based structures (multi-level) vs hash-based structures (inverted)
  • Hardware support such as dedicated MMU circuits significantly improves address translation performance
    • Example: TLB implemented in hardware for fast translation
    • Example: Page walk accelerators in modern CPUs
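The 4-level x86-64 layout mentioned above slices a 48-bit virtual address into four 9-bit table indices plus a 12-bit offset. The sample address below is arbitrary; the bit positions follow the standard x86-64 scheme.

```python
# Slice a 48-bit x86-64 virtual address for a 4-level page walk:
# 9 bits each for PML4, PDPT, PD, and PT indices, 12 bits of offset.

def split_x86_64(vaddr):
    offset = vaddr & 0xFFF
    indices = [(vaddr >> shift) & 0x1FF          # 9 bits per level
               for shift in (39, 30, 21, 12)]    # PML4, PDPT, PD, PT
    return indices, offset

indices, offset = split_x86_64(0x0000_7F12_3456_789A)
print(indices, hex(offset))  # [254, 72, 418, 359] 0x89a
```

Each index selects one entry in a 512-entry table, which is why a sparsely used address space only needs the few table pages its live regions actually touch.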

Key Terms to Review (19)

Demand paging: Demand paging is a memory management scheme that loads pages into memory only when they are needed, rather than loading the entire program at once. This technique allows systems to use memory more efficiently by minimizing the amount of memory required for running processes and improving overall system performance. By delaying the loading of pages until they are accessed, demand paging enhances the use of virtual memory and works alongside paging techniques to optimize resource usage.
First-in-first-out (fifo): First-in-first-out (FIFO) is a method used to manage data structures or memory allocation where the first element added is the first one to be removed. This principle mirrors real-life scenarios like a queue, where people are served in the order they arrive. FIFO is vital for ensuring that processes are handled in a systematic way, particularly in memory management, page replacement strategies, and efficient virtual memory systems.
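The FIFO policy defined above can be sketched as a page-replacement fault counter. The reference string and frame count are illustrative.

```python
from collections import deque

# Toy FIFO page replacement: evict pages in arrival order, regardless
# of how recently they were used.

def fifo_faults(references, num_frames):
    frames = deque()
    faults = 0
    for page in references:
        if page not in frames:
            faults += 1
            if len(frames) >= num_frames:
                frames.popleft()          # evict the oldest resident page
            frames.append(page)
    return faults

print(fifo_faults([1, 2, 3, 1, 4, 1], 3))  # 5
```

Note the last fault: page 1 was recently used but still evicted first because it arrived first, which is FIFO's main weakness compared to LRU.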
Frame allocation: Frame allocation refers to the process of assigning physical memory frames to virtual pages in a system that uses paging as a memory management scheme. This allocation is crucial for implementing virtual memory, as it determines how many frames a process can use, and directly impacts performance and efficiency when a program accesses memory. Proper frame allocation helps manage limited physical memory resources while ensuring that processes can run smoothly by utilizing page replacement algorithms to handle situations when memory is full.
Hit Ratio: Hit ratio is a performance metric used to evaluate the efficiency of memory systems, particularly in the context of caching and virtual memory. It represents the proportion of memory access requests that are successfully fulfilled from the cache or main memory as opposed to having to fetch data from slower storage. A high hit ratio indicates an efficient memory usage, while a low hit ratio suggests that the system is struggling to keep frequently accessed data readily available, which can lead to increased latency and reduced performance.
Least Recently Used (LRU): Least Recently Used (LRU) is a cache replacement policy that removes the least recently accessed item when space is needed for new data. This approach is based on the assumption that data which has been used recently will likely be used again soon, while data that hasn’t been accessed for a while is less likely to be needed. LRU is widely implemented in various systems, playing a critical role in managing memory efficiently, optimizing cache usage, and enhancing performance.
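The LRU policy defined above can be sketched with a small ordered cache. This is a teaching model, not a kernel implementation; real systems approximate LRU with reference bits.

```python
from collections import OrderedDict

# Toy LRU page cache: on overflow, evict the page whose most recent
# access is oldest. Capacity is arbitrary.

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()           # insertion order = recency order

    def access(self, page):
        if page in self.pages:
            self.pages.move_to_end(page)     # now the most recently used
        elif len(self.pages) >= self.capacity:
            self.pages.popitem(last=False)   # evict least recently used
        self.pages[page] = True

cache = LRUCache(3)
for p in [1, 2, 3, 1, 4]:    # accessing 4 evicts 2, the least recently used
    cache.access(p)
print(list(cache.pages))     # [3, 1, 4]
```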
Logical address: A logical address is a reference to a memory location independent of the current assignment of data to physical memory. It allows programs to access memory locations through a set of addresses that the operating system maps to actual physical addresses in RAM. This concept is crucial for managing memory efficiently and enabling features like virtual memory and paging, as it provides an abstraction layer between the program and the hardware.
Memory fragmentation: Memory fragmentation is a phenomenon that occurs when free memory is split into small, non-contiguous blocks over time, making it difficult to allocate larger contiguous memory blocks. This can lead to inefficient use of memory and potentially result in allocation failures even when there is enough total free memory available. It often arises from various memory allocation techniques and impacts the effectiveness of virtual memory systems, especially in managing paging.
Memory Management Unit (MMU): The Memory Management Unit (MMU) is a crucial hardware component in a computer that handles the mapping of virtual addresses to physical addresses. It plays a key role in managing memory, enabling the system to utilize virtual memory effectively and facilitate paging. By translating addresses and managing access permissions, the MMU supports efficient memory allocation and is essential for implementing page replacement algorithms.
Page Fault: A page fault occurs when a program tries to access a block of memory that is not currently loaded in the main memory (RAM), requiring the operating system to fetch the required data from secondary storage, usually a hard drive. This process is critical for virtual memory management, as it enables efficient use of memory and allows more processes to run simultaneously than would fit in physical memory alone. When a page fault happens, the operating system must decide which page to replace if the memory is full, leading into the realm of page replacement algorithms.
Page replacement algorithm: A page replacement algorithm is a method used by operating systems to manage memory in systems that utilize virtual memory. When a program needs to access data that is not currently in physical memory, the operating system must decide which page to remove from memory to make room for the new page. This decision is crucial because it affects system performance and resource utilization.
Page Table: A page table is a data structure used in computer operating systems to manage virtual memory. It maps virtual addresses to physical addresses, allowing the system to track which pages are currently in memory and where they are stored in physical RAM. The page table is essential for implementing virtual memory and paging, as it helps the system efficiently manage memory allocation and retrieval.
Paging: Paging is a memory management scheme that eliminates the need for contiguous allocation of physical memory, allowing processes to be broken into fixed-size blocks called pages. This technique simplifies memory allocation and increases flexibility, enabling an operating system to efficiently utilize physical memory by loading pages from secondary storage as needed, which is crucial for effective memory hierarchy, allocation strategies, virtual memory management, and free space management.
Physical address: A physical address is the actual location in computer memory where data or instructions reside, represented as a specific numerical value that corresponds to a byte in RAM. This address is crucial for the operating system and hardware to access the correct data when executing programs. In systems using virtual memory, the relationship between physical addresses and virtual addresses becomes important for memory management and data retrieval.
Segmentation: Segmentation is a memory management technique that divides the memory into variable-sized segments based on the logical divisions of a program, such as functions, arrays, or objects. This method enhances the flexibility of memory allocation and access, allowing programs to be loaded into different memory locations without needing contiguous space. It also relates to how an operating system structures its components and manages memory hierarchy, allocation techniques, and virtual memory through paging.
Swapping: Swapping is a memory management technique used in operating systems to temporarily move inactive processes from main memory to disk storage, allowing for more efficient use of RAM. This process enables the system to free up memory for active processes while still maintaining the ability to resume the swapped-out processes when needed. Swapping plays a crucial role in implementing virtual memory, allowing systems to run larger applications or multiple applications simultaneously, even if the total memory requirement exceeds the available physical memory.
Thrashing: Thrashing occurs when a system spends more time swapping pages in and out of memory than executing actual processes, leading to significant performance degradation. This situation arises primarily when there is insufficient physical memory available, causing excessive paging and resource contention. As a result, the system becomes inefficient, resulting in longer wait times and reduced throughput, ultimately hindering effective resource allocation and scheduling.
TLB (Translation Lookaside Buffer): The translation lookaside buffer (TLB) is a cache used to improve the speed of virtual address translation in a computer's memory management system. It stores recent translations of virtual memory addresses to physical addresses, allowing the system to quickly access data without having to repeatedly consult the page table, which can be slower. The TLB plays a crucial role in enhancing the performance of virtual memory and paging by minimizing latency in address resolution.
Virtual Memory: Virtual memory is a memory management capability that allows an operating system to use hardware and software to compensate for physical memory shortages by temporarily transferring data from random access memory (RAM) to disk storage. This process enables a system to run larger applications than what the physical memory can accommodate, enhancing multitasking and overall performance.
Working Set: The working set is a concept in operating systems that represents the set of pages in memory that a process is currently using or will need in the near future. It helps manage memory efficiently by determining which pages should be kept in physical memory to minimize page faults and ensure optimal performance during program execution.
© 2024 Fiveable Inc. All rights reserved.