Operating Systems Unit 3 – Memory Management

Memory management is a crucial aspect of operating systems, optimizing system performance and resource utilization. It involves allocating and deallocating memory, managing virtual memory, and ensuring memory protection. These processes are essential for efficient multitasking and system stability. Memory management techniques include static and dynamic allocation, paging, and segmentation. Virtual memory allows processes to use more memory than physically available, while protection strategies prevent unauthorized access. Advanced concepts like copy-on-write and memory compression further enhance system efficiency and performance.

What's Memory Management?

  • Involves managing computer memory resources to optimize system performance and ensure efficient utilization
  • Includes allocating memory to processes, deallocating memory when no longer needed, and managing memory hierarchy
  • Handles memory requests from processes and decides how to allocate available memory among them
  • Tracks which parts of memory are currently in use and which are free to be allocated
  • Manages virtual memory, allowing processes to use more memory than physically available by swapping data between RAM and disk
  • Ensures memory protection by preventing unauthorized access to memory and isolating process memory spaces
  • Aims to minimize memory fragmentation, which occurs when there are many small, non-contiguous free memory blocks
  • Plays a crucial role in the overall performance and stability of an operating system

Types of Memory

  • RAM (Random Access Memory) provides fast, temporary storage for active processes and data
    • Volatile memory loses its contents when power is turned off
    • Divided into physical frames, each typically 4KB in size
  • ROM (Read-Only Memory) stores permanent, non-volatile data such as firmware and boot instructions
  • Cache memory is a high-speed memory located close to the CPU, used to store frequently accessed data and instructions
    • Levels of cache: L1 (fastest, smallest), L2, and L3 (largest, slowest)
  • Registers are small, high-speed memory units within the CPU used for temporary storage during instruction execution
  • Virtual memory is a technique that allows processes to use more memory than physically available by swapping data between RAM and disk
  • Shared memory is a memory region that can be accessed by multiple processes for inter-process communication
  • Non-volatile memory retains data even when power is turned off (ROM, flash memory, hard disks)

Memory Allocation Techniques

  • Static memory allocation assigns memory to processes at compile-time, before the program execution begins
    • Memory size is fixed and cannot be changed during runtime
    • Suitable for systems with predictable memory requirements
  • Dynamic memory allocation assigns memory to processes at runtime, on request
    • Memory is allocated from a heap, a large pool of free memory
    • Allows flexible memory usage and efficient utilization of available memory
  • Contiguous memory allocation assigns a single, contiguous block of memory to each process
    • Simplifies memory management but can lead to fragmentation
  • Non-contiguous memory allocation allows a process to be allocated memory in non-contiguous blocks
    • Reduces fragmentation but requires more complex memory management
  • Buddy system is a memory allocation technique that manages memory in blocks whose sizes are powers of 2, satisfying requests by recursively splitting larger blocks and coalescing freed "buddy" blocks back together
  • Slab allocation is a technique used in kernel memory allocation to efficiently manage memory for frequently allocated objects
  • Stack allocation is used for local variables and function call frames, with memory allocated and deallocated in a last-in-first-out (LIFO) order

Virtual Memory Explained

  • Virtual memory is a memory management technique that allows processes to use more memory than physically available in RAM
  • Provides a virtual address space to each process, which is mapped to physical memory by the operating system
  • Enables the execution of processes that require more memory than available RAM by swapping data between RAM and disk
  • Divided into fixed-size pages (typically 4KB) that are mapped to physical memory frames
  • Address translation hardware (MMU) translates virtual addresses to physical addresses using page tables
  • Page tables store the mapping between virtual pages and physical frames
  • When a process accesses a virtual address not present in RAM, a page fault occurs, and the operating system loads the required page from disk into memory
  • Demand paging loads pages into memory only when they are accessed, reducing memory usage and startup time
  • Swapping involves moving entire processes between RAM and disk when memory is scarce
  • Thrashing occurs when a system spends more time swapping pages than executing useful work due to insufficient RAM

Paging and Segmentation

  • Paging is a memory management scheme that divides virtual memory into fixed-size pages and physical memory into frames
    • Each process has its own page table, which maps virtual pages to physical frames
    • Simplifies memory allocation and enables efficient utilization of memory
  • Segmentation is a memory management scheme that divides a process's virtual address space into variable-size segments
    • Each segment represents a logical unit of the program, such as code, data, or stack
    • Provides a more flexible and logical view of memory compared to paging
  • Segmentation with paging combines the advantages of both techniques
    • Virtual address space is divided into segments, which are further divided into pages
    • Provides both logical separation and efficient memory management
  • Hierarchical paging uses multiple levels of page tables to reduce the size of page tables and improve memory efficiency
  • Inverted page tables store the mapping of physical frames to virtual pages, reducing the memory overhead of traditional page tables
  • Translation Lookaside Buffer (TLB) is a hardware cache that stores recently used page table entries to speed up address translation

Memory Protection Strategies

  • Memory protection is essential to ensure the integrity and security of a system by preventing unauthorized access to memory
  • Process isolation ensures that each process has its own private memory space and cannot access memory belonging to other processes
    • Achieved through virtual memory and address space separation
  • Memory access control restricts access to memory based on the type of access (read, write, execute) and the privilege level of the accessing entity
  • Base and limit registers define the valid range of memory addresses a process can access, preventing out-of-bounds memory accesses
  • Virtual memory provides a separate virtual address space for each process, preventing direct access to physical memory
  • Memory segmentation can enforce access control by assigning different protection levels to each segment (e.g., read-only code segment)
  • Memory protection units (MPUs) are hardware components that enforce memory access control rules set by the operating system
  • Kernel memory is protected from user-mode processes to prevent unauthorized modification of critical system data and code
  • Secure memory management techniques, such as address space layout randomization (ASLR), help mitigate memory-based attacks by randomizing the location of memory regions

Common Memory Issues

  • Memory leaks occur when allocated memory is not properly deallocated, leading to a gradual depletion of available memory
    • Can cause performance degradation and eventual system failure
    • Often caused by programming errors, such as forgetting to free dynamically allocated memory
  • Memory fragmentation occurs when there are many small, non-contiguous free memory blocks, making it difficult to allocate large contiguous blocks
    • External fragmentation happens when there is enough total free memory but no single contiguous block large enough to satisfy an allocation request
    • Internal fragmentation occurs when allocated memory is larger than the actual requested size, wasting memory within the allocated block
  • Insufficient memory can cause thrashing, where the system spends more time swapping pages than executing useful work
    • Occurs when the total memory demand of running processes exceeds the available physical memory
  • Memory corruption happens when the contents of memory are unintentionally modified, leading to program crashes or undefined behavior
    • Can be caused by buffer overflows, invalid pointer operations, or accessing uninitialized memory
  • Memory-related security vulnerabilities, such as buffer overflow attacks, can allow attackers to execute arbitrary code or gain unauthorized access to the system
  • Out-of-memory (OOM) errors occur when the system is unable to allocate memory due to insufficient free memory, often leading to process termination or system instability
  • Shared memory synchronization issues can arise when multiple processes access shared memory concurrently without proper synchronization, leading to data races and inconsistencies

Advanced Memory Management Concepts

  • Copy-on-write (COW) is a technique used to optimize memory usage by allowing multiple processes to share the same memory pages initially
    • When a process attempts to modify a shared page, a private copy of the page is created for that process
    • Reduces memory usage and improves performance by deferring memory copying until necessary
  • Memory-mapped files allow files to be accessed through memory, enabling efficient file I/O and sharing of memory between processes
    • File contents are mapped to a region of the process's virtual address space
    • Modifications to the mapped memory are reflected in the underlying file
  • Kernel same-page merging (KSM) is a memory deduplication technique used in virtualized environments to reduce memory usage
    • Identifies identical memory pages across virtual machines and merges them into a single copy
    • Reduces memory footprint and improves memory utilization in virtualized systems
  • Non-uniform memory access (NUMA) is a memory architecture where memory access times depend on the memory location relative to the processor
    • Optimizing memory allocation and thread scheduling based on NUMA topology can improve performance
  • Transparent huge pages (THP) is a technique that automatically backs memory with larger pages (e.g., 2MB or 1GB) to reduce the overhead of page table management
    • Improves performance by reducing TLB misses and the number of page table entries that must be maintained
  • Memory compression is a technique used to reduce memory usage by compressing infrequently accessed or inactive memory pages
    • Compressed pages are stored in memory and decompressed when accessed
    • Allows more data to be stored in memory, reducing the need for swapping
  • Non-volatile memory (NVM) technologies, such as Intel Optane DC persistent memory, blur the line between memory and storage
    • Provide high-capacity, persistent memory that can be accessed using load/store instructions
    • Enable new memory management techniques and data persistence models


© 2024 Fiveable Inc. All rights reserved.
AP® and SAT® are trademarks registered by the College Board, which is not affiliated with, and does not endorse this website.
