5    Virtual Memory


5.1    Overview

The virtual memory (VM) subsystem was completely rewritten by Digital in V2.0 to improve upon the Mach design adopted by the OSF. Specifically, Digital added functionality to improve performance and maintainability, and V4.0 added several enhancements in these areas.

The following sections discuss these improvements to VM.


5.2    Lazy Allocation Policy

With lazy allocation, swap space is reserved dynamically, as the system needs to reclaim physical memory, rather than allocated in advance for every page of anonymous memory (that is, memory devoted to the stack and heap of a process and to data that is not file-backed).

However, a problem arises in the OSF design when the number of free pages falls below the preconfigured limit and the memory manager attempts to reclaim pages for the free list by paging out virtual pages: if the available swap space has already been exhausted, the OSF memory manager does not back off the page-out and instead simply discards the page. When the process whose page was discarded later takes a page fault and attempts to reactivate the missing page, unpredictable behavior results, including system hangs, panics, and, at times, data corruption.

To correct this problem, Digital reworked the page-out algorithm to ensure that pages are not lost if the memory manager is unable to allocate swap space for a virtual page.

As swap space decreases, the Digital UNIX Version 4.0 memory manager logs warning messages on the console. If the memory manager is ultimately unable to allocate swap space for a page-out, it selects the oldest idle process and kills it, thereby freeing swap space and returning virtual pages to the free list.
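
From an application's point of view, lazy allocation means that a large anonymous allocation can succeed even though no swap space has been reserved for it; the cost is discovered only later, under memory pressure. The following sketch uses the standard mmap interface; the allocation size is illustrative.

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 64UL * 1024 * 1024;   /* 64 MB of anonymous memory (illustrative size) */

        /* Under lazy allocation, this call reserves no swap space; it simply
         * creates anonymous virtual memory, so it can succeed even when swap
         * is nearly full. */
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        /* Touching the pages forces physical memory (and, eventually, swap)
         * to be found for them; this is where a lazily allocated system can
         * run out of swap long after the allocation itself succeeded. */
        memset(p, 0xAB, len);

        printf("allocated and touched %zu bytes\n", len);
        munmap(p, len);
        return 0;
    }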


5.3    Eager Reservation Policy

Unlike lazy allocation, the eager reservation policy reserves a page of swap space for every page of anonymous memory that is allocated. Although this policy is expensive in terms of reserved disk space, it eliminates the chance that the memory manager will have to kill a process to reclaim virtual pages and free up swap space.

The eager reservation policy is the default, although either the lazy or the eager policy can be configured. For more information, see the System Tuning and Performance Management guide.
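
Under eager reservation, by contrast, the same kind of allocation fails at allocation time if swap cannot be reserved, so a program can check the return value and recover instead of being killed later. A minimal sketch using the standard interfaces (the size is again illustrative):

    #include <errno.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <sys/mman.h>

    /* Try to allocate anonymous memory; under eager swap reservation the
     * failure, if any, shows up here as ENOMEM rather than as a process
     * kill under later memory pressure. */
    static void *alloc_anon(size_t len)
    {
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            if (errno == ENOMEM)
                fprintf(stderr, "swap or address space could not be reserved\n");
            return NULL;
        }
        return p;
    }

    int main(void)
    {
        void *p = alloc_anon(64UL * 1024 * 1024);   /* illustrative size */
        printf(p ? "reservation succeeded\n" : "reservation failed\n");
        return 0;
    }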


5.4    Unified Buffer Cache

To increase file system performance, Digital implemented a Unified Buffer Cache (UBC) that is fully integrated with the file system, caches file system data, and can grow or shrink on demand. Unlike the conventional Buffer Cache, which is configured and allocated at boot time and which relies on bcopy routines to move data in and out of memory, the UBC references the same physical pages as virtual memory and can use map operations rather than bcopy routines to access data, thereby increasing system performance. In addition, because the UBC contains only file system data, the Buffer Cache needs to cache only metadata, requiring only 3% of physical memory rather than the 25% required by previous versions of the operating system.
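
Because the UBC caches file data in the same physical pages used by virtual memory, file data can be made visible to a process by mapping pages rather than copying them. The following sketch contrasts the two access styles using the standard read and mmap interfaces; the file name and buffer size are illustrative.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        const char *path = "/tmp/example.dat";   /* illustrative file name */
        int fd = open(path, O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0 || st.st_size == 0) { close(fd); return 1; }

        /* Copy style: read() moves the data from the cache into a private
         * buffer, an extra copy of every byte. */
        char buf[8192];
        ssize_t n = read(fd, buf, sizeof(buf));
        if (n > 0)
            printf("read() copied %zd bytes\n", n);

        /* Map style: mmap() makes the cached pages directly visible in the
         * process address space; no bulk copy is needed to look at them. */
        char *p = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (p != MAP_FAILED) {
            printf("first mapped byte: 0x%02x\n", (unsigned char)p[0]);
            munmap(p, (size_t)st.st_size);
        }

        close(fd);
        return 0;
    }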

By default, the UBC can grow to consume all of physical memory, so the system can dynamically determine the percentage of memory allocated to the UBC. However, the maximum percentage of memory that the UBC can consume is configurable and can be set in the system configuration file by defining the ubc-maxpercent variable.


5.5    Round-Robin Swapping

To improve performance, Digital changed the OSF paging algorithm to support simultaneous paging to multiple swap partitions. By contrast, OSF pages to one swap partition at a time, waiting until the first swap partition is filled before moving to the next. Because disk transfer rates are several thousand times slower than the speed of memory and the Alpha CPU, system administrators can reduce the impact of this disparity by spreading swap partitions across different disks and different controllers. By supporting simultaneous paging to multiple swap partitions, Digital UNIX Version 4.0 also allows multiple tasks to take page faults at the same time, thereby further increasing performance.
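
The round-robin idea itself is simple: successive page-out requests cycle through the configured swap partitions so that the I/O load is spread across disks and controllers rather than filling one partition before touching the next. The following is an illustrative user-space model of that selection, not the Digital UNIX kernel code.

    #include <stdio.h>

    #define NUM_SWAP_PARTITIONS 3   /* illustrative: three swap partitions on separate disks */

    /* Pick the partition for the next page-out by cycling through them. */
    static int next_swap_partition(void)
    {
        static int next = 0;
        int chosen = next;
        next = (next + 1) % NUM_SWAP_PARTITIONS;
        return chosen;
    }

    int main(void)
    {
        /* Ten page-outs land on partitions 0, 1, 2, 0, 1, 2, ... keeping all
         * spindles and controllers busy in parallel. */
        for (int i = 0; i < 10; i++)
            printf("page-out %d -> swap partition %d\n", i, next_swap_partition());
        return 0;
    }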


5.6    Page In and Page Out Clustering

Whereas the OSF/1 paging algorithm moves pages in and out one at a time, Digital UNIX adds support for page-in and page-out clustering to improve performance. In most cases it is more efficient to perform a few multipage DMA operations than many single-page DMA operations; clustering is nonetheless a configurable option.
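
The clustering idea can be sketched as follows: instead of issuing one I/O per page, adjacent dirty pages are collected into a run and written with a single multipage request. The page counts, cluster limit, and function names below are illustrative only.

    #include <stdbool.h>
    #include <stdio.h>

    #define NPAGES       16   /* illustrative number of pages to scan */
    #define MAX_CLUSTER   8   /* illustrative cap on pages per I/O */

    /* Pretend dirty-page map: pages 2-6 and 10-11 need to be written out. */
    static const bool dirty[NPAGES] = {
        0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0
    };

    /* Stand-in for issuing one DMA transfer covering 'count' contiguous pages. */
    static void write_cluster(int first, int count)
    {
        printf("one I/O: pages %d..%d (%d pages)\n", first, first + count - 1, count);
    }

    int main(void)
    {
        int i = 0;
        while (i < NPAGES) {
            if (!dirty[i]) { i++; continue; }
            int start = i, count = 0;
            /* Grow the cluster over contiguous dirty pages, up to the cap. */
            while (i < NPAGES && dirty[i] && count < MAX_CLUSTER) {
                i++;
                count++;
            }
            write_cluster(start, count);   /* single multipage operation */
        }
        return 0;
    }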


5.7    Memory-Mapped Device Interface

Rather than mapping the I/O space into memory, the memory-mapped device interface points to a data structure that describes the I/O space. As a result, large amounts of kernel virtual address space, as well as physical memory in general, are saved.
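
One way to picture the difference is that the kernel can keep a small descriptor for each device region instead of building address-space mappings for the entire I/O space up front. The structure below is purely hypothetical and is meant only to illustrate the descriptor approach, not the actual Digital UNIX interface.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical descriptor: a few words that define an I/O region,
     * instead of mapping entries spanning the entire region. */
    struct io_region_desc {
        uint64_t base_phys;   /* physical base address of the region */
        uint64_t length;      /* size of the region in bytes */
        uint32_t flags;       /* access attributes (illustrative) */
    };

    int main(void)
    {
        /* Describing a 16 MB region costs a handful of bytes; mapping it
         * eagerly would cost an address-space entry for every page. */
        struct io_region_desc frame_buffer = { 0x200000000ULL, 16UL << 20, 0x1 };

        printf("descriptor size: %zu bytes for a %llu-byte region\n",
               sizeof(frame_buffer), (unsigned long long)frame_buffer.length);
        return 0;
    }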


5.8    Mach mmap MAP_PRIVATE Semantics and System V Release 4.0

Since the OSF departed from the Sun Microsystems and System V Release 4.0 mmap semantics, Digital rewrote mmap so that Sun and System V applications could compile and run on Digital UNIX Version 4.0.
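
The semantics in question are those of MAP_PRIVATE, which in the Sun and System V Release 4.0 model gives the caller a copy-on-write view of the file: stores are visible only within the mapping process and are never written back to the underlying file. The following example uses the standard mmap interface; the file name is illustrative.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        const char *path = "/tmp/example.dat";   /* illustrative file name */
        int fd = open(path, O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0 || st.st_size == 0) { close(fd); return 1; }

        /* MAP_PRIVATE: the mapping is copy-on-write with respect to the file. */
        char *p = mmap(NULL, (size_t)st.st_size, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

        char before = p[0];
        p[0] = '!';               /* modifies only this process's private copy */

        char on_disk;
        pread(fd, &on_disk, 1, 0);
        printf("mapping sees '%c', file still holds '%c' (was '%c')\n",
               p[0], on_disk, before);

        munmap(p, (size_t)st.st_size);
        close(fd);
        return 0;
    }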


5.9    Secure Shared Memory Segments

Using the same kind of permission bits that the file system employs, shared memory segments can be read- and write-protected to prevent unwanted access.
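
For example, the System V shared memory interface lets the creator of a segment restrict it with file-style mode bits; here a segment is created readable and writable only by its owner (the key and size are illustrative).

    #include <stdio.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    int main(void)
    {
        /* 0600: the owner may read and write the segment; others get no
         * access, just as with file permission bits. Key and size are
         * illustrative. */
        int shmid = shmget((key_t)0x1234, 4096, IPC_CREAT | 0600);
        if (shmid < 0) {
            perror("shmget");
            return 1;
        }

        char *p = shmat(shmid, NULL, 0);   /* attach for the owning process */
        if (p == (char *)-1) {
            perror("shmat");
            return 1;
        }
        p[0] = 'x';                        /* the owner can write */

        shmdt(p);
        shmctl(shmid, IPC_RMID, NULL);     /* clean up the segment */
        return 0;
    }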


5.10    Shared Text Segments

Shared text segments allow multiple processes to share the same page tables that map shared text. When one process takes a page fault on a shared text page, the page becomes resident for every process that shares the segment; in addition, less memory is needed for page tables and the performance of fork is improved.


5.11    Page Coloring

The Alpha EV4 CPU has a direct-mapped, physical, off-chip secondary cache, which is organized so that if the secondary cache size is N pages, every Nth physical page of memory maps to the same cache page. Digital UNIX VM manages the physical pages of memory in such a way that, if the entire resident working set of a process can fit into the secondary cache, VM places it there. Because VM strives to keep a process's entire working set in the secondary cache, the number of physical memory accesses is greatly reduced as the process executes.
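
In a direct-mapped cache, the cache page (or "color") that a physical page occupies is simply its page frame number modulo the number of pages the cache holds, so two pages whose frame numbers differ by a multiple of N evict each other. The cache and page sizes below are illustrative and are not the EV4's actual configuration.

    #include <stdio.h>

    #define PAGE_SIZE   8192UL                       /* illustrative page size */
    #define CACHE_SIZE  (2UL * 1024 * 1024)          /* illustrative 2 MB direct-mapped cache */
    #define CACHE_PAGES (CACHE_SIZE / PAGE_SIZE)     /* N = 256 cache pages */

    /* The "color" of a physical page: which cache page it maps to. */
    static unsigned long page_color(unsigned long pfn)
    {
        return pfn % CACHE_PAGES;
    }

    int main(void)
    {
        /* Frames 5, 261, and 517 differ by multiples of N, so they share a
         * color and would evict one another; frame 6 does not. */
        unsigned long frames[] = { 5, 261, 517, 6 };
        for (int i = 0; i < 4; i++)
            printf("pfn %lu -> color %lu\n", frames[i], page_color(frames[i]));
        return 0;
    }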


5.12    Caches

The Alpha EV5 CPU can contain three levels of cache: internal, secondary, and tertiary. Digital UNIX manages physical memory in such a way that the most active subset of a process's working set remains in the fastest cache.


5.13    Kernel Memory Allocator

A new kernel memory allocator (kernel malloc) was added to Digital UNIX to use kernel-wired memory more efficiently. All calls to the mbuf allocator are now mapped to the new kernel memory allocator, and several components of the I/O subsystems can use the kernel memory allocator directly rather than managing memory on their own, so the memory those private allocators previously reserved is saved. The new allocator also handles allocation in interrupt context better than the kalloc allocator and has a garbage collection thread that frees unused memory.
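
Allocators of this kind commonly carve wired memory into power-of-two size classes and satisfy requests from per-class free lists, which is what keeps allocation cheap enough for interrupt context. The following is a minimal user-space sketch of that size-class scheme; the names and sizes are illustrative and are not the Digital UNIX kernel interfaces.

    #include <stdio.h>
    #include <stdlib.h>

    #define MIN_SHIFT 5          /* smallest bucket: 32 bytes (illustrative) */
    #define NBUCKETS  8          /* buckets of 32, 64, ..., 4096 bytes */

    /* One free list per power-of-two size class. */
    struct free_block { struct free_block *next; };
    static struct free_block *freelist[NBUCKETS];

    /* Round a request up to its size class. */
    static int bucket_index(size_t size)
    {
        int idx = 0;
        while (((size_t)1 << (MIN_SHIFT + idx)) < size && idx < NBUCKETS - 1)
            idx++;
        return idx;
    }

    static void *bucket_alloc(size_t size)
    {
        int idx = bucket_index(size);
        if (freelist[idx]) {                     /* fast path: pop a free block */
            struct free_block *b = freelist[idx];
            freelist[idx] = b->next;
            return b;
        }
        /* Slow path: get fresh memory for this size class (malloc stands in
         * for carving up a page of wired kernel memory). */
        return malloc((size_t)1 << (MIN_SHIFT + idx));
    }

    static void bucket_free(void *p, size_t size)
    {
        int idx = bucket_index(size);
        struct free_block *b = p;
        b->next = freelist[idx];                 /* push back on the free list */
        freelist[idx] = b;
    }

    int main(void)
    {
        void *a = bucket_alloc(100);             /* lands in the 128-byte bucket */
        bucket_free(a, 100);
        void *b = bucket_alloc(120);             /* reuses the same 128-byte block */
        printf("reused block: %s\n", a == b ? "yes" : "no");
        free(b);
        return 0;
    }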


5.14    External Pager

Although the OSF provides guidelines for writing an external pager, these guidelines are (at best) provisional. Digital believes that the complexity and efficiency of the Digital UNIX Version 4.0 memory management system make it impractical at this time to provide a sensible interface for an external pager.


5.15    Improved Memory Reclamation Policy

The memory reclamation policy was enhanced in Digital UNIX Version 4.0 to improve system performance. In previous releases of the operating system, once the system began running out of physical memory, global paging would begin to reclaim memory, attempting to select pages fairly between the Unified Buffer Cache (UBC) and VM. However, as the system runs and files are opened and closed, a large percentage of the memory in the UBC references old, closed files because of the UBC's file caching algorithms. Attempting to select some pages from VM and some from the UBC is therefore unnecessary at first: in theory only 10% of the pages in the UBC are dirty, whereas almost all the pages in VM are dirty (in practice the proportion of dirty pages in the UBC is much smaller than 10%, since most of the pages in the UBC are not in use). Because the UBC, unlike VM, contains so few dirty pages, it is much more efficient to reclaim pages from the UBC first, down to some configurable level, than to begin global paging immediately.

In Digital UNIX Version 4.0, the page reclamation policy was further enhanced to take advantage of the many unreferenced pages in both VM and the UBC. When a system begins to exhaust its physical memory, memory is first reclaimed from the UBC down to a threshold defined by the parameter ubc-borrowpercent. By default, this parameter is set to 10% of the memory in the UBC, and it is configurable. If, after all unused memory is reclaimed from the UBC, the system still requires more physical memory, global paging is then invoked.
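
The two-stage decision can be summarized in a short sketch: shrink the UBC down to its borrow threshold first, and fall back to global paging only if that is not enough. The thresholds, page counts, and function names below are illustrative, not actual kernel symbols.

    #include <stdio.h>

    /* Illustrative state, expressed in pages. */
    static long free_pages       = 500;    /* pages currently free */
    static long free_target      = 2000;   /* where the system wants free memory to be */
    static long ubc_pages        = 12000;  /* pages currently held by the UBC */
    static long ubc_borrow_floor = 1200;   /* e.g. ubc-borrowpercent = 10% of the UBC */

    /* Stage 1: reclaim UBC pages down to the borrow threshold. */
    static void shrink_ubc(long wanted)
    {
        long reclaimable = ubc_pages - ubc_borrow_floor;
        long take = wanted < reclaimable ? wanted : reclaimable;
        if (take > 0) {
            ubc_pages  -= take;
            free_pages += take;
            printf("reclaimed %ld pages from the UBC\n", take);
        }
    }

    /* Stage 2: only reached if the UBC alone could not satisfy the demand. */
    static void global_paging(long wanted)
    {
        printf("global paging invoked for %ld pages\n", wanted);
    }

    int main(void)
    {
        long shortfall = free_target - free_pages;
        if (shortfall > 0) {
            shrink_ubc(shortfall);                 /* stage 1: borrow back from the UBC */
            if (free_pages < free_target)          /* stage 2: still short? page globally */
                global_paging(free_target - free_pages);
        }
        return 0;
    }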

In effect, this policy can double the load that can be placed on a system before demands for memory begin to noticeably degrade performance.


5.16    Rewrote Swap Allocation Mechanism

In Digital UNIX Version 4.0, the swap allocation mechanism was restructured to reduce the fragmentation of swap space that could occur over time. Swap space is now allocated and deallocated in contiguous units, which greatly reduces seek time and thereby improves performance.

For information on how to tune and configure the virtual memory subsystem, see the System Tuning and Performance Management guide and the System Administration guide.