
Virtual memory management in VMware: Transparent memory sharing

This is a continuation of a series of blog posts on VMware memory management. The previous post in the series is here.

In this installment, I will discuss the impact and effectiveness of transparent memory sharing, using the performance data that was gathered during a benchmark that stressed VMware's virtual memory management capabilities.


Transparent memory sharing

Transparent memory sharing is one of the key memory management mechanisms that supports aggressive server consolidation. VMware dynamically detects memory pages that are identical within or across guest machine images. When identical pages are detected, VMware maps them to a single page in machine memory. When guest machines are largely idle, transparent memory sharing enables VMware to pack guest machine images efficiently into a single hardware platform, especially for machines running the same OS, the same OS version, and the same applications. However, when guest machines are active, the benefits of transparent memory sharing are greatly reduced, as will soon be apparent.

VMware uses a background thread that scans guest machine pages continuously, looking for duplicates. This process is illustrated in Figure 5. Candidates for memory sharing are found by calculating a hash value from the contents of each page and looking for a collision in a hash table built from the hash values of other current pages. If a collision is found, the candidate for sharing is compared to the base page byte by byte. If the contents of the candidate and the base page match, VMware points the PTE of the copy to the same page of machine memory that backs the base page.
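To make the flow concrete, here is a minimal sketch of that scan-and-share logic in Python. It is not VMware's implementation: page contents are modeled as bytes objects, the names (hash_table, shared_map, scan_page) are invented for illustration, and the "remap the PTE" step is reduced to simple bookkeeping.

```python
import hashlib

hash_table = {}     # hash digest -> (base page id, base page contents)
shared_map = {}     # guest page id -> base page it now shares

def scan_page(page_id, contents):
    """Hash a candidate page, check for a collision, and share it on a full match."""
    digest = hashlib.sha1(contents).digest()
    base = hash_table.get(digest)
    if base is None:
        hash_table[digest] = (page_id, contents)    # first page seen with this hash
        return False
    base_id, base_contents = base
    if contents == base_contents:                   # byte-by-byte comparison on collision
        shared_map[page_id] = base_id               # point the copy's PTE at the base page
        return True
    return False                                    # hash collision, but contents differ

# Example: two identical zero-filled 4 KB pages collapse to one backing page.
scan_page("guest1:0x1000", bytes(4096))
print(scan_page("guest2:0x2000", bytes(4096)))      # True: the second page is now shared
```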

Memory sharing is provisional. VMware uses a Copy on Write mechanism to handle the case where a shared page is modified and can no longer be shared. This is accomplished by flagging the shared page's PTE as Read Only. When an instruction executes that attempts to store data into the page, the hardware generates an addressing exception. VMware handles the exception by creating a private duplicate of the page and re-executing the Store instruction that failed against that duplicate.
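A correspondingly minimal sketch of the Copy on Write step, again with illustrative names rather than VMware's internal structures, might look like this:

```python
# A shared page is marked read-only; the first attempted write "faults", breaks the
# sharing by duplicating the page, and then retries the store against the private copy.
class Page:
    def __init__(self, contents, read_only):
        self.contents = bytearray(contents)
        self.read_only = read_only

def store_byte(mappings, page_id, offset, value):
    page = mappings[page_id]
    if page.read_only:
        # Simulated fault handler: give this guest page its own private copy.
        page = Page(page.contents, read_only=False)
        mappings[page_id] = page
    page.contents[offset] = value    # the retried store now succeeds

# Example: a shared zero page becomes private on the first write.
mappings = {"guest2:0x2000": Page(bytes(4096), read_only=True)}
store_byte(mappings, "guest2:0x2000", 0, 0xFF)
print(mappings["guest2:0x2000"].read_only)   # False: sharing was broken
```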
Transparent memory sharing has great potential benefits, but there is some overhead necessary to support the feature. One source of overhead is the processing performed by the background scanning thread. There are tuning parameters to control the rate at which these background memory scans run, but, unfortunately, there are no associated performance counters reported that would help the system administrator adjust these parameters. The other source of overhead results from the Copy on Write mechanism, which entails handling the additional hardware interrupts associated with these soft page faults. There is no metric that reports the rate at which these additional soft page faults occur either.
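To see why the scan rate matters, here is a back-of-envelope sketch of how long one full pass over the guest pages in this case study would take. The pages-per-second figure is a purely hypothetical assumption, not a measured or documented VMware value (and, as noted above, VMware does not report the actual rate).

```python
PAGE_SIZE_KB = 4

def full_scan_minutes(guest_mem_gb, pages_per_sec):
    """Minutes needed to hash every guest page once at the given scan rate."""
    total_pages = guest_mem_gb * 1024 * 1024 // PAGE_SIZE_KB
    return total_pages / pages_per_sec / 60

# Four 8 GB guests scanned at an assumed 1,000 pages per second:
print(f"{full_scan_minutes(4 * 8, 1_000):.0f} minutes per pass")   # roughly 140
```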


Figure 5. Transparent memory sharing uses a background thread to scan memory pages, compute a hash code value from their contents, and compare it to the hash codes that have already been computed. In the case of a collision, the contents of the page that is a candidate for sharing are compared byte by byte to the collided page. If the pages have identical contents, VMware points both guest pages to the same machine memory location.

In the case study, transparent memory sharing is initially extremely effective – when the guest machines are largely idle. Figure 6 renders the Memory Shared performance counter from each of the guest machines as a stacked area chart. At 9 AM, when the guest machines are still idle, almost all of the 8 GB granted to three of the machines (ESXAS12C, ESXAS12D, and ESXAS12E) is being shared by pointing those pages to the machine memory pages assigned to the 4th guest machine (ESXAS12B). Together, these three guest machines have about 22 GB of shared memory: of the 4 x 8 = 32 GB granted in total, roughly 22 GB is shared, which allows VMware to pack the four 8-GB OS images into a machine footprint of about 10-12 GB.

However, once the benchmark programs start to execute, the amount of shared memory dwindles to near zero. This is an interesting result. With this workload of identically configured virtual machines, there should still be significant opportunities to share identical code pages even while the benchmark programs are active. But VMware is apparently unable to capitalize much on this opportunity once the guest machines become active. A likely explanation for the diminished returns from memory sharing is that the virtual memory management performed by each of the active guest Windows machines causes the contents of too many virtual memory pages to change too frequently, which overwhelms the duplicate detection and sharing mechanism.[1]




[1] Since the benchmark programs are also consuming CPU resources, another possible explanation for the lack of memory sharing is severe processor contention that prevented the memory scanning thread from being dispatched while the benchmark programs were active. However, the VMware Host reported overall processor utilization of only about 40-60% throughout most of the active benchmarking period, so this hypothesis was rejected. Here is where some resource accounting that could report the memory scan rate or the amount of time the scan thread was active would be quite helpful.
Figure 6. The impact of transparent memory sharing dwindles to near zero once the benchmarking workloads become active.
In the next post in this series, we will dig into VMware's use of ballooning.
