Memory Ballooning in Hyper-V

The previous post in this series discussed the various Hyper-V Dynamic Memory configuration options.

Ballooning

Removing memory from a guest machine while it is running is more complicated than adding memory to it, which makes use of the memory hot-add hardware interface that the Windows OS supports. One factor that makes removing memory difficult is that the Hyper-V hypervisor does not gather the kind of memory usage data that would enable it to select guest machine pages that are good candidates for removal. The hypervisor’s virtual memory capabilities are limited to maintaining the second-level page tables needed to translate Guest Virtual addresses to valid machine memory addresses. It maintains no usage information that could identify, for example, which of a guest machine’s physical memory pages have been accessed recently. Consequently, when Guest Physical memory needs to be removed from a partition, Hyper-V uses ballooning, which transfers the decision about which pages to remove to the guest OS, which can then execute its normal page replacement policy.

Ballooning was pioneered in VMware ESX, and the Hyper-V implementation is similar, but with some key differences. One key difference is that the Hyper-V hypervisor never removes guest physical memory arbitrarily and swaps it to a disk file, as VMware ESX does when it faces an acute shortage of machine memory. ESX swapping selects pages for removal at random, and without any knowledge of how guest machine pages are used, the hypervisor can easily choose badly. The Microsoft Hyper-V developers chose not to implement any form of hypervisor swapping of machine memory to disk. For page replacement when machine memory is short, Hyper-V relies solely on the virtual memory management capabilities of the guest OS, which is usually Windows. Frankly, performance suffers under either approach when there is an extreme machine memory shortage – overloading machine memory is something to be avoided on both virtualization platforms. Hyper-V does have the virtue that machine memory management is simpler, relying on a single mechanism to relieve a machine memory shortage.

In both virtualization approaches, it is important to be able to recognize the signs that the VM Host machine’s memory is over-committed. In Hyper-V, these include the following (a counter-polling sketch follows the list):

  • a shortage of Hyper-V Dynamic Memory\Available Memory
  • sustained periods where the Hyper-V Dynamic Memory\Average Memory Pressure measurements for one or more guest machines hover near 100
  • internal guest machine measurements showing high paging rates to disk (Memory\Pages/sec, Memory\Pages Input/sec)
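These counters can be watched interactively in Performance Monitor or polled programmatically. Below is a minimal sketch using the Windows PDH API. The counter paths are assumptions to verify on your own Host (for example, with typeperf -q): “System Balancer” is assumed to be the balancer instance name, and “TESTVM01” is a hypothetical stand-in for a real guest machine name.

```c
/* Minimal sketch: poll Hyper-V Dynamic Memory counters from the Root
 * partition using the PDH API. The counter paths below are assumptions;
 * verify the exact object/instance names with "typeperf -q".
 * Build: cl dmwatch.c pdh.lib */
#include <windows.h>
#include <pdh.h>
#include <stdio.h>

#pragma comment(lib, "pdh.lib")

int main(void)
{
    PDH_HQUERY query;
    PDH_HCOUNTER availMem, avgPressure;
    PDH_FMT_COUNTERVALUE v;

    if (PdhOpenQuery(NULL, 0, &query) != ERROR_SUCCESS)
        return 1;

    /* Host-wide memory the Dynamic Memory Balancer has left to grant */
    PdhAddEnglishCounterW(query,
        L"\\Hyper-V Dynamic Memory Balancer(System Balancer)\\Available Memory",
        0, &availMem);
    /* Per-guest pressure: Committed Bytes as a percentage of visible RAM;
     * "TESTVM01" is a hypothetical guest machine name */
    PdhAddEnglishCounterW(query,
        L"\\Hyper-V Dynamic Memory VM(TESTVM01)\\Average Pressure",
        0, &avgPressure);

    for (;;) {
        PdhCollectQueryData(query);
        if (PdhGetFormattedCounterValue(availMem, PDH_FMT_LARGE,
                                        NULL, &v) == ERROR_SUCCESS)
            wprintf(L"Available Memory: %lld MB\n", v.largeValue);
        if (PdhGetFormattedCounterValue(avgPressure, PDH_FMT_LONG,
                                        NULL, &v) == ERROR_SUCCESS)
            /* sustained readings near 100 signal over-commitment */
            wprintf(L"Average Pressure: %ld\n", v.longValue);
        Sleep(15000);           /* sample every 15 seconds */
    }
}
```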

As ballooning transfers the decision about which pages to remove from guest physical memory to the guest OS, we need to revisit virtual memory concepts briefly in this new context. One goal of virtual memory management is to utilize physical memory efficiently, essentially filling physical memory completely, aside from a small buffer of unallocated physical pages that are kept in reserve. Memory over-commitment works because processes frequently allocate more virtual memory than they need at any one moment. Consequently, it is usually not necessary to back every allocated virtual memory page with guest physical memory. Consider a guest machine that reports a Memory Pressure reading of 100 – in other words, its Committed Bytes = Visible Physical Memory. Typically, 10-20% of the machine’s committed pages are likely to be relatively inactive, which allows the OS to remove them from physical memory without much performance impact.
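Because Memory Pressure is simply guest Committed Bytes expressed as a percentage of guest visible physical memory, a rough version of the reading can be computed from inside a Windows guest. The sketch below approximates the commit charge from GlobalMemoryStatusEx as the commit limit minus the available commit; it is an approximation for illustration, not the Dynamic Memory enlightenment itself.

```c
/* Approximate the guest's Memory Pressure reading from inside a
 * Windows guest: committed bytes as a percentage of visible RAM.
 * GlobalMemoryStatusEx reports the commit limit and available commit,
 * so commit charge ~= ullTotalPageFile - ullAvailPageFile. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORYSTATUSEX ms = { sizeof(ms) };
    if (!GlobalMemoryStatusEx(&ms))
        return 1;

    ULONGLONG committed = ms.ullTotalPageFile - ms.ullAvailPageFile;
    double pressure = 100.0 * (double)committed / (double)ms.ullTotalPhys;

    /* A pressure near 100 means Committed Bytes ~= visible RAM: the
     * guest has little slack left for the balloon to inflate into. */
    printf("Committed: %llu MB  Visible RAM: %llu MB  Pressure: %.0f\n",
           committed >> 20, ms.ullTotalPhys >> 20, pressure);
    return 0;
}
```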

Because virtual memory management tends to fill up physical memory, the OS must, from time to time, displace a currently resident virtual page to make room for a new or non-resident page that a process has just referenced. Windows implements an LRU-style page replacement policy, trimming older pages from process working sets when physical memory is in short supply. Windows and Linux guest machines manage virtual memory dynamically, keeping track of which of an application’s virtual pages are currently being accessed, and the OS’s page replacement policy ages allocated virtual memory pages that have not been referenced in the current interval. The pages of a process that have not been referenced recently are usually better candidates for removal than pages in current use.
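To make the aging idea concrete, here is a schematic sketch of reference-bit aging, the bookkeeping behind LRU-style page replacement. It illustrates the policy only; it is not Windows’s actual working-set trimming code.

```c
/* Schematic sketch of reference-bit aging for LRU-style page
 * replacement (not Windows's actual working-set trimmer). Each page
 * keeps an 8-bit age; on every trim pass the age shifts right and the
 * referenced bit is ORed into the high bit. Pages with the smallest
 * age were referenced least recently -- the best trim candidates. */
#include <stdint.h>
#include <stddef.h>

#define NPAGES 1024

struct page {
    uint8_t age;         /* history of referenced bits, newest in bit 7 */
    uint8_t referenced;  /* set by "hardware" on access, cleared by scan */
};

static struct page frames[NPAGES];

/* One aging pass, run on a timer or when free pages run short. */
static size_t age_and_pick_victim(void)
{
    size_t victim = 0;
    for (size_t i = 0; i < NPAGES; i++) {
        frames[i].age = (uint8_t)((frames[i].age >> 1) |
                                  (frames[i].referenced << 7));
        frames[i].referenced = 0;
        if (frames[i].age < frames[victim].age)
            victim = i;  /* not referenced for the most passes */
    }
    return victim;       /* candidate to trim from the working set */
}

int main(void)
{
    frames[3].referenced = 1;         /* simulate an access to page 3 */
    return (int)age_and_pick_victim(); /* any page other than 3 */
}
```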

The ballooning technique used in Hyper-V – and in VMware ESX, as well – pushes the decision about which specific pages to remove down to the guest machine, which is in a far better position to select candidate pages for removal because the guest OS does maintain memory usage data. The term “ballooning” refers to a management thread running inside the guest machine that acquires empty physical memory buffers when the hypervisor signals that it wants to remove physical memory from the partition. This action can be thought of as the memory balloon inflating. Later, when Hyper-V decides to add memory back to the child partition, it deflates the balloon, freeing the balloon memory that was previously acquired.
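The effect of inflating a balloon can be mimicked from user mode, which is handy for experimenting with how a guest responds to memory pressure. The sketch below pins a block of committed memory with VirtualAlloc and VirtualLock; this is only an analogy for testing, not the mechanism the Dynamic Memory VSC actually uses.

```c
/* User-mode analogy of balloon inflation, for experimenting with
 * guest-OS page replacement: commit and pin BALLOON_MB of RAM.
 * The real Hyper-V balloon lives in a kernel driver (see Figure 15);
 * this is only a test harness, not the Dynamic Memory VSC. */
#include <windows.h>
#include <stdio.h>

#define BALLOON_MB 512

int main(void)
{
    SIZE_T bytes = (SIZE_T)BALLOON_MB << 20;

    /* Raise the working-set quota so VirtualLock can pin this much. */
    SetProcessWorkingSetSize(GetCurrentProcess(),
                             bytes + (64 << 20), bytes + (128 << 20));

    void *balloon = VirtualAlloc(NULL, bytes, MEM_COMMIT | MEM_RESERVE,
                                 PAGE_READWRITE);
    if (balloon == NULL || !VirtualLock(balloon, bytes)) {
        fprintf(stderr, "inflate failed: %lu\n", GetLastError());
        return 1;
    }
    printf("balloon inflated: %d MB pinned; watch Memory\\Pages/sec\n",
           BALLOON_MB);
    Sleep(60 * 60 * 1000);          /* hold about an hour, as in Fig. 16 */

    VirtualUnlock(balloon, bytes);  /* deflate */
    VirtualFree(balloon, 0, MEM_RELEASE);
    return 0;
}
```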

In Hyper-V, ballooning is initiated by the Dynamic Memory Balancer, a task hosted inside the Root partition’s Virtual Machine Management Service (VMMS) component. Whenever the Dynamic Memory Balancer decides to adjust the amount of guest physical memory allotted to a guest machine, it communicates with the specific VM worker process running in the Root partition that maintains the state of that guest machine. If the decision is to remove memory, the VM worker process issues a remove-memory request that is communicated to the child partition across the VMBus.

The memory ballooning process used to reduce the size of guest physical memory is depicted in Figure 15. Inside the child partition, the Dynamic Memory VSC – which is also responsible for implementing the guest OS enlightenment that reports the number of guest OS committed bytes – responds to the remove-memory request by calling the MmAllocatePagesForMdlEx API, which acquires memory from the nonpaged pool. This pool of allocated physical memory, normally used by drivers for DMA devices that need access to physical addresses, is the “balloon” that inflates when Hyper-V determines it is appropriate to remove guest physical memory from the guest machine. The Dynamic Memory VSC then returns to the Root partition – via another VMBus message – a list of the Guest Physical addresses of the balloon pages it has just acquired. The Root partition then signals the hypervisor that these pages are available to be added to a different partition.
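A kernel-mode sketch of that inflate step might look like the following. MmAllocatePagesForMdlEx and the MDL accessor macros are documented WDK calls; ReportPfnsToHost is a hypothetical placeholder for the VMBus message the actual Dynamic Memory VSC sends back to the Root partition.

```c
/* Sketch of balloon inflation in a WDK driver. MmAllocatePagesForMdlEx
 * pins physical pages; MmGetMdlPfnArray exposes their guest physical
 * frame numbers. ReportPfnsToHost is hypothetical - it stands in for
 * the VMBus reply the real Dynamic Memory VSC sends. */
#include <ntddk.h>

/* hypothetical VMBus messaging helper, not a real API */
extern NTSTATUS ReportPfnsToHost(PPFN_NUMBER Pfns, ULONG Count);

static PMDL g_BalloonMdl;       /* the "balloon": pinned, never touched */

NTSTATUS InflateBalloon(SIZE_T Bytes)
{
    PHYSICAL_ADDRESS low, high, skip;
    low.QuadPart  = 0;
    high.QuadPart = -1;         /* any physical page will do */
    skip.QuadPart = 0;

    /* Pin pages so the guest OS cannot page or reuse them; note the
     * call may return an MDL describing fewer bytes than requested. */
    g_BalloonMdl = MmAllocatePagesForMdlEx(low, high, skip, Bytes,
                                           MmCached,
                                           MM_DONT_ZERO_ALLOCATION);
    if (g_BalloonMdl == NULL)
        return STATUS_INSUFFICIENT_RESOURCES;

    /* Guest Physical frame numbers now owned by the balloon */
    PPFN_NUMBER pfns = MmGetMdlPfnArray(g_BalloonMdl);
    ULONG count = (ULONG)BYTES_TO_PAGES(MmGetMdlByteCount(g_BalloonMdl));

    /* Hand the PFN list to the Root partition so the hypervisor can
     * grant the backing machine memory to another guest machine. */
    return ReportPfnsToHost(pfns, count);
}

VOID DeflateBalloon(VOID)
{
    if (g_BalloonMdl != NULL) {
        MmFreePagesFromMdl(g_BalloonMdl);  /* pages return to the guest */
        ExFreePool(g_BalloonMdl);          /* release the MDL itself */
        g_BalloonMdl = NULL;
    }
}
```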

Figure 15. The balloon driver is a Dynamic Memory VSC that responds to a VMBus request to remove memory by acquiring memory from the nonpaged pool. The balloon driver then returns a list of physical memory pages that the hypervisor can immediately grant to a different virtual machine.

Since the balloon driver in the guest machine pins the balloon pages in nonpaged physical memory until further notice, the pages in the guest machine balloon are the exception to the rule that a machine memory location can be occupied by only one guest machine at a time. The pages in the balloon are set aside, remaining addressable from inside the guest machine; however, the balloon driver ensures that they are never actually accessed. This allows Hyper-V to grant the machine memory these balloon pages occupy to another guest machine.

From inside the guest Windows machine, inflating the balloon increases the amount of nonpaged pool memory that is allocated, as illustrated in Figure 16, which reports the size of the nonpaged pool in a Windows guest during a period when the balloon inflates (shortly after 5 PM) and then deflates about an hour later.

Figure 16. Inside the guest Windows machine, the balloon inflating corresponds to an increase in the amount of nonpaged pool memory that is allocated. In this example, the balloon deflates about 1 hour later.

As in VMware, ballooning itself has no guaranteed immediate impact on physical memory contention inside the Windows guest machine. So long as the guest machine has a sufficient supply of available pages, the impact remains minimal. Over time, however, ballooning can pin enough guest OS pages in physical memory to force the guest machine to execute its page replacement policy. In the case of Windows, the OS will also signal the LowMemoryResourceNotification event, which triggers garbage collection in a .NET Framework application and a similar buffer manager trimming operation in SQL Server. On the other hand, if ballooning does not cause the guest machine to experience memory contention – that is, if the balloon request can be satisfied without triggering the guest machine’s page replacement policy – there will be no visible impact inside the guest machine other than an increase in the size of the nonpaged pool.
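Applications that want to cooperate with the guest OS the way the .NET CLR and SQL Server do can wait on that same signal through the Win32 memory resource notification API, as in this sketch:

```c
/* Sketch: observe the low-memory signal mentioned above from user
 * mode. CreateMemoryResourceNotification returns a waitable object
 * the kernel signals while physical memory is scarce -- the same
 * condition that prompts .NET garbage collection and SQL Server
 * buffer pool trimming. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE lowMem = CreateMemoryResourceNotification(
                        LowMemoryResourceNotification);
    if (lowMem == NULL)
        return 1;

    for (;;) {
        /* Blocks until the OS reports physical memory is running low. */
        if (WaitForSingleObject(lowMem, INFINITE) == WAIT_OBJECT_0) {
            printf("low-memory notification: trim caches now\n");
            Sleep(5000);   /* avoid spinning while the condition holds */
        }
    }
}
```

Inside a guest under balloon-induced pressure, this object becomes signaled once the guest OS begins running short of available pages, giving the application a chance to release its own caches before hard page fault rates climb.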

Next: a Dynamic Memory management case study – what happens when machine memory on the Hyper-V Host is over-committed.
