
Hyper-V Dynamic Memory: a Case Study


Let’s look at an example of what happens when Hyper-V machine memory is over-committed and the Dynamic Memory Balancer attempts to adjust to that condition. The scenario begins with five Windows guest machines running with the following dynamic memory settings:


Guest Machine    Minimum    Maximum
WIN81TEST1       2 GB       6 GB
WIN81TEST2       2 GB       6 GB
WIN81TEST3       512 MB     6 GB
WIN81TEST4       512 MB     6 GB
WIN81TEST5       512 MB     8 GB

In the scenario, only one of the five guest machines is doing actual work: guest machine 5 is active, as illustrated in the screenshot of the Hyper-V Manager console shown in Figure 17. The Hyper-V Host used in the test contains 12 GB of RAM. The remaining guest machines are idle initially, which allows Hyper-V to reduce the amount of machine memory allocated to guests 3 and 4 to near their minimum values, using the ballooning technique we discussed.

Figure 17. The Hyper-V Manager console showing the five guest machines running at the beginning of the Dynamic Memory case study.
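Before looking at the measurements, it is worth doing the over-commitment arithmetic implied by the settings table. A minimal sketch, using the values from the table above and the 12 GB of host RAM:

```python
# Back-of-the-envelope arithmetic for the scenario: the dynamic memory
# settings from the table above (in MB) versus the 12 GB of host RAM.
guests = {
    "WIN81TEST1": {"min": 2048, "max": 6144},
    "WIN81TEST2": {"min": 2048, "max": 6144},
    "WIN81TEST3": {"min": 512,  "max": 6144},
    "WIN81TEST4": {"min": 512,  "max": 6144},
    "WIN81TEST5": {"min": 512,  "max": 8192},
}
host_ram_mb = 12 * 1024

total_min = sum(g["min"] for g in guests.values())  # 5,632 MB (5.5 GB) - fits comfortably
total_max = sum(g["max"] for g in guests.values())  # 32,768 MB (32 GB) - far more than 12 GB

print(f"minimums: {total_min} MB, maximums: {total_max} MB, host RAM: {host_ram_mb} MB")
```

The sum of the minimum settings fits easily in the 12 GB host, but the sum of the maximums is well beyond it, so machine memory is over-committed as soon as the guests try to grow toward their maximums.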

The workload executed on guest machine 5 is a memory soaker program that is severely constrained in an 8 GB virtual machine, so its machine memory allotment quickly increases. Notice, however, that guest machines 1 and 2 retain all 2 GB of their machine memory allotments, which reflects their Dynamic Memory Minimum settings. The Physical Memory allocations for a two-hour window at the beginning of the test, shown in Figure 18, indicate that the Assigned Memory values shown in the console window are representative ones.

Figure 18. Physical Memory allocations over a two-hour window starting at the beginning of the test scenario. Test machines 1 & 2 are running at their minimum dynamic memory setting of 2 GB.

The Memory Pressure indicators for this initial configuration are shown in Figure 19. The Memory Pressure measurements for guest machines 1 and 2 are much lower than those of the remaining guest machines, which are subject to memory adjustments by the Dynamic Memory Balancer. There is a sustained period beginning around 1:20 pm where the Memory Pressure measurements for machines 3-5 exceed the Memory Pressure threshold value of 100. Because of the minimum memory settings in effect for guest machines 1 & 2, their Memory Pressure readings remain below 40.

Figure 19. Memory Pressure measurements for machines 3-5 exceed the threshold value of 100. Because of the minimum memory settings in effect for guest machines 1 & 2, their Memory Pressure readings are less than 40.

At this point, you can work backwards from the Memory Pressure measurements and the amount of physical memory visible to the partition to calculate the number of committed bytes reported by the guest machines. Alternatively, you can gather performance measurements from inside the guest Windows machines.
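A minimal sketch of that back-calculation, assuming Memory Pressure is reported as the percentage ratio of guest Committed Bytes to the physical memory currently visible to the partition (the definition discussed in the Discussion section below):

```python
def committed_bytes_estimate(memory_pressure, visible_physical_mb):
    """Estimate guest Committed Bytes (MB) from hypervisor-side counters,
    assuming Memory Pressure ~= 100 * Committed Bytes / visible physical memory."""
    return memory_pressure / 100.0 * visible_physical_mb

# e.g. a guest with 4,608 MB of visible physical memory at a Memory Pressure
# of 120 implies roughly 5.4 GB of committed bytes.
print(committed_bytes_estimate(120, 4608))  # ~5529.6 MB
```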

Figure 20 shows a view of Committed Bytes and the guest machine’s Commit Limit taken from inside guest machine 5 during the same measurement interval as Figure 19. Prior to 1:20 pm, the guest machine Commit Limit is about 4.5 GB. Sometime around 1:20 pm, Hyper-V added machine memory to the guest machine, which boosted the Commit Limit to about 5.5 GB. At that point, the guest machine started paging excessively. Beginning around 1:20 pm, the VM Host was so bottlenecked by this excessive disk paging that there are gaps in the guest machine performance counter measurements that are available, indicating that the guest machine was being dispatched erratically.

Figure 20.  A view from inside guest machine 5 of Committed Bytes and the guest machine’s Commit Limit during a period where the guest faced a severe physical memory constraint. Note several gaps in the measurement data. These gaps reflect intervals in which performance data collection was delayed or deferred because of virtual processor dispatching delays and deferred timer interrupts due to contention for the VM Host machine’s physical processors.

To alleviate a machine memory shortage like this, you can shut down one or more of the executing guest machines or migrate them to another Hyper-V host to free up some machine memory. Around 4:30 pm, I manually shut down guest machines 1 & 2, which quickly freed up the 4 GB of machine memory that they were holding onto. As this machine memory became available, Hyper-V began to increase the memory allotment for guest machine 5, which remained under severe memory pressure. As shown in Figure 21, the guest machine 5 Commit Limit quickly increased to 8 GB, which was sustained for about one hour as the memory soaker program continued to execute, while the number of committed bytes began to approach the partition’s 8-GB allocation limit. After a sustained period in which Committed Bytes reached 80% of the Commit Limit, the Windows guest extended the size of its paging file, increasing the Commit Limit to about 12 GB around 5:40 pm.
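To make the paging file arithmetic concrete, here is a minimal sketch of the behavior observed above; the 80% trigger and the 4 GB extension are taken from the measurements in this scenario, not from any documented Windows policy:

```python
def maybe_extend_pagefile(committed_gb, commit_limit_gb, extension_gb=4):
    """If Committed Bytes has reached 80% of the Commit Limit, model the guest
    extending its paging file, which raises the Commit Limit."""
    if committed_gb >= 0.8 * commit_limit_gb:
        return commit_limit_gb + extension_gb   # e.g. 8 GB -> 12 GB, as in Figure 21
    return commit_limit_gb

print(maybe_extend_pagefile(committed_gb=7.5, commit_limit_gb=8))  # 12
print(maybe_extend_pagefile(committed_gb=5.0, commit_limit_gb=8))  # 8
```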

Note that there are several gaps in the guest machine performance data shown in Figure 21. These gaps reflect intervals in which performance data collection was unavailable due to delays in virtual processor scheduling and synthetic timer interrupts that were deferred by the hypervisor. Excessive contention for the VM Host machine’s physical processors, which, as we saw, were all quite busy, causes these delays at the level of the guest machine. The gaps in the measurement data are fairly strong evidence that the physical processors on the VM Host machine are over-committed.

Figure 21. A later view from inside guest machine 5 of Committed Bytes and the guest machine’s Commit Limit after 4 GB of physical memory were freed up. Committed Bytes increases to almost 8 GB, the maximum dynamic memory setting. The Commit Limit is boosted to 8 GB and then to 12 GB.

Following this adjustment, the configuration reaches a steady state with Guest Machine 5 running with its maximum dynamic memory allotment of 8 GB. The Commit Limit remains at 12 GB. While Committed Bytes fluctuates due to periodic .NET Framework garbage collection, you can see in Figure 22 that it averages close to the 8 GB physical memory allotment, but with peaks as high as 10 GB. 

Figure 22. A final view from inside guest machine #5, looking at Committed Bytes and the Commit Limit during a period of memory stability. Committed Bytes hovers near the 8 GB mark, while the Commit Limit remains 12 GB.

Figure 23 reverts to the Hyper-V statistics on machine memory allocations, reporting the physical memory allocated to the three active guest machines. Guest machine 5 is allocated close to 8 GB, while between them guest machines 3 and 4 have acquired an additional 1.5 GB of physical memory.

Figure 23. Physical memory allocated to guest machine #5 approaches the 8 GB Maximum Memory setting. The Hyper-V Dynamic Memory Balancer makes minor memory adjustments continuously, even when the workloads are relatively stable.

Figure 24 revisits the hypervisor's view of Memory Pressure for the three guest machines that remained running, each of which is subject to Dynamic Memory management. Compared to Figure 19, where the guest machines were feeling the memory pressure, the Memory Pressure readings now oscillate around the break-even value of 100.

Figure 24. Memory Pressure readings for the three remaining active guest machines remain high, which triggers minor Memory Add and Memory Remove operations every measurement interval.

During this measurement interval, guest machine 5 is running with a physical memory allocation at or near its dynamic memory maximum setting. Hyper-V continues to make minor dynamic memory adjustments and the Memory Pressure for all three machines remains high and continues to fluctuate. 

Discussion

As the case study illustrates, Hyper-V adjusts machine memory allocation by attempting to balance a measurement called Memory Pressure across all guest machines running at the same memory priority, adding machine memory to a partition running at a higher Memory Pressure value and removing machine memory, using the ballooning technique, from a partition running at a lower Memory Pressure value. Memory Pressure is a memory contention index calculated as the ratio of the guest machine’s Committed Bytes to its current machine memory allocation, expressed as a percentage. When Memory Pressure increases beyond 100, a guest machine is likely to experience an increased rate of paging to disk, so adding machine memory to the guest machine to prevent that from happening is often an appropriate action for Hyper-V to take.
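A simplified sketch of that contention index and the add/remove decision it drives; this is an illustration of the idea, not Hyper-V's actual implementation:

```python
def memory_pressure(committed_mb, allocated_mb):
    """Contention index: Committed Bytes as a percentage of the guest's
    current machine memory allocation; 100 is the break-even value."""
    return 100.0 * committed_mb / allocated_mb

def balancer_action(pressure, threshold=100.0):
    if pressure > threshold:
        return "add machine memory (guest is likely to start paging to disk)"
    if pressure < threshold:
        return "candidate for memory removal via ballooning"
    return "leave the allocation unchanged"

p = memory_pressure(committed_mb=5500, allocated_mb=4500)  # ~122
print(round(p), balancer_action(p))
```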

Memory Pressure is less viable as a memory contention index when applications such as Microsoft SQL Server are allowed to allocate virtual memory up to the physical memory limits of the guest machine. Applications like SQL Server also listen for low memory notifications from the Windows OS when the supply of physical memory is depleted due to ballooning. Upon receiving a low memory notification from Windows, SQL Server will release some of the virtual memory allocated in its process address space by releasing some of its buffers. Similarly, the .NET Framework runtime will trigger a garbage collection to release unused space in any of the process address space managed heaps whenever a low memory notification from Windows is received. The combination of Hyper-V dynamic memory adjustments, ballooning inside the guest OS to drive its page replacement algorithm, and dynamic memory management at the process address space level makes machine memory management in Hyper-V very, very dynamic!
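For reference, here is a minimal sketch, using ctypes against the Win32 memory resource notification API, of how a Windows process can detect the low memory condition that SQL Server and the .NET runtime respond to; error handling is kept to a minimum:

```python
import ctypes
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.CreateMemoryResourceNotification.restype = wintypes.HANDLE

LOW_MEMORY_RESOURCE_NOTIFICATION = 0  # LowMemoryResourceNotification

# Create the notification object and poll its current state.
handle = kernel32.CreateMemoryResourceNotification(LOW_MEMORY_RESOURCE_NOTIFICATION)
if not handle:
    raise ctypes.WinError(ctypes.get_last_error())

state = wintypes.BOOL()
if kernel32.QueryMemoryResourceNotification(handle, ctypes.byref(state)):
    if state.value:
        print("Low memory signaled: release buffers / trigger a garbage collection")
    else:
        print("No low memory condition at the moment")

kernel32.CloseHandle(handle)
```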

With dynamic memory, it becomes possible to stuff more virtual machines into the machine memory footprint of the Hyper-V Host machine, making Hyper-V more competitive with VMware ESX in that crucial area. The minimum physical memory setting of 32 MB grants Hyper-V considerable latitude in reducing the physical memory footprint of an inactive guest machine to a base minimum amount, less than the amount of physical memory a guest machine would need to get revved up again after a period of inactivity.

It is also easy to stuff more virtual machines into machine memory than can safely coexist, with the potential to create performance problems of a serious magnitude. Any guest machine that is attempting to run when the machine memory of the Hyper-V Host is over-committed is apt to face serious consequences. In an earlier discussion of Windows virtual memory management, we saw that determining the memory requirements of the guest machine workload is a difficult problem, complicated by the fact that the memory requirements may themselves vary based on the time of day, a particular mix of requests, or other factors that influence the execution of the workload.

While acknowledging that memory capacity planning is consistently challenging, I would like to suggest that the Hyper-V dynamic memory capability does open up unique opportunities to deal with the difficulties. With virtualization, you gain the flexibility to size a guest machine to execute in a Guest Physical memory footprint that is difficult to configure on native hardware. When you are able to determine the memory requirements of the workload, with Hyper-V you can configure a maximum dynamic memory footprint for a guest machine that might not exist in your inventory of actual hardware. In the example shown in Figure 4, the dynamic memory maximum is set to 6 GB. If the physical machines available are all configured with 8 GB or more of machine memory, then you are already seeing a practical advantage from running that workload as a Hyper-V guest.

One recommended approach to memory capacity planning is to systematically vary the memory footprint of the machine and observe the paging rates, for example, in a load test. This is an iterative, trial-and-error method that is much easier to apply with virtualization.
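A sketch of that iterative procedure; run_load_test is a hypothetical stand-in for whatever load-test harness and Perfmon collection you use:

```python
def size_memory(footprints_mb, max_pages_per_sec, run_load_test):
    """Return the smallest memory footprint whose observed paging rate
    stays within max_pages_per_sec, trying larger footprints first."""
    best = None
    for mb in sorted(footprints_mb, reverse=True):
        pages_per_sec = run_load_test(memory_mb=mb)  # e.g. Memory\Pages/sec inside the guest
        if pages_per_sec <= max_pages_per_sec:
            best = mb      # still acceptable; try a smaller footprint next
        else:
            break          # paging became excessive; stop shrinking
    return best
```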

As we have seen, Dynamic Memory provides the ability to specify a flexible range of machine memory for a guest machine to operate within and then gives the hypervisor the freedom to experiment with memory settings within that range. The Dynamic Memory adjustment mechanism that makes decisions in real-time based on the Memory Pressure exerted by the guest machine is an excellent way to approach sizing physical memory. What’s more, since memory requirements can be expected to vary over time, the Hyper-V dynamic memory capability can also provide the flexibility to deal with this variability effectively. 

To be clear, the Dynamic Memory Balancer does not attempt to settle on a physical memory configuration that is optimal for the guest machine to run in. Optimization based on determining what the physical memory requirements of a workload are remains something the performance analyst still must do. What Hyper-V does instead is balance the amount of physical memory allocated to each partition across all guest machines running at the same memory priority, relative to their physical memory usage; it attempts to equalize the Memory Pressure readings for all the guest machines running in a memory priority band. If the Hyper-V Host machine does not contain enough RAM to service each of the guest machines adequately, the Dynamic Memory Balancer will distribute physical memory in a manner that may cause every guest machine to suffer from a physical memory shortage. Moreover, a guest machine that has the freedom to extend the amount of physical memory it is using to its maximum setting can create a physical memory shortage on the Host machine that will impact other resident guest machines.
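To see why an under-provisioned host leaves every guest short, consider a sketch (again, not Hyper-V's implementation) that divides the available machine memory in proportion to each guest's Committed Bytes, so that all guests end up at the same Memory Pressure:

```python
def balance(committed_mb_by_guest, host_available_mb):
    """Divide available machine memory in proportion to Committed Bytes,
    which leaves every guest at the same Memory Pressure."""
    total_committed = sum(committed_mb_by_guest.values())
    shared_pressure = 100.0 * total_committed / host_available_mb
    allocations = {
        name: committed * host_available_mb / total_committed
        for name, committed in committed_mb_by_guest.items()
    }
    return shared_pressure, allocations

# Three guests committing about 14 GB between them, with only 10 GB to divide:
# every guest lands at a Memory Pressure of roughly 137 -- all short of memory at once.
pressure, alloc = balance({"VM3": 3000, "VM4": 3000, "VM5": 8000}, 10 * 1024)
print(round(pressure), {name: round(mb) for name, mb in alloc.items()})
```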

Performance risks aside, data centers derive significant benefits from virtualization, including those that arise in activities closely allied to performance, namely provisioning, scalability, and capacity planning. Virtualization brings additional flexibility to both provisioning and capacity planning. In planning for CPU capacity, for example, it is not possible to purchase a machine with three CPUs, but if that is the capacity a workload requires, it certainly is possible to configure a guest partition that way. Similarly, being able to specify a start-up memory value and a much higher maximum value for a guest machine provides configuration flexibility that is very desirable in the absence of good intelligence about how much memory the guest machine really needs.

As noted above, by default, Hyper-V seeks to balance the physical memory allocations across a set of guest machines that are configured to run at the same memory priority. It wisely bases memory adjustments on measurements sent from the guest Windows machine reflecting how much memory the guest currently has committed. But the Hyper-V approach to balancing memory allocations across guest machines lacks a goal-seeking mechanism that attempts to find an optimal memory footprint for the workloads.

