Hyper-V Performance expectations

Before drilling deeper into the Hyper-V architecture and discussing the performance impact of virtualization in detail, it helps to put that impact in perspective. Let's begin with some very basic performance expectations.

Of the many factors that persuade IT organizations to configure Windows to run as a virtual machine guest under either VMware or Microsoft's Hyper-V, very few pertain to performance. To be sure, many of the activities virtualization system administrators perform are associated with the scalability of an application, specifically, provisioning a clustered application tier so that a sufficient number of guest machine images are active concurrently. Of course, the Hyper-V Host machine and the shared storage and networking infrastructure it uses must be adequately provisioned to support the planned number of guests. Provisioning the Hyper-V Host machine also extends to ensuring that the guest machines configured to run on it don't overload it. Finally, virtualization administrators must ensure that guest machines are configured properly to utilize the physical resources available on the host machine. Under Hyper-V, this entails making sure that an adequate number of virtual processors and an appropriate machine memory footprint are available to the guest machine. This aspect of provisioning guest machines is perhaps more aptly characterized as capacity planning. It is certainly critical to the performance of the applications executing inside those guest machines that the guests are configured properly and that the Hyper-V host machine where they reside is adequately provisioned.
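
To make the guest machine configuration discussion concrete, here is a minimal sketch of how those settings might be applied. It drives the Hyper-V PowerShell cmdlets (Set-VMProcessor, Set-VMMemory, Get-VM) from Python via subprocess; the guest name "web01" and the specific sizing values are hypothetical illustrations, not recommendations.

```python
# Minimal sketch: sizing a Hyper-V guest's virtual processors and machine
# memory footprint via the Hyper-V PowerShell cmdlets. The VM name "web01"
# and the sizing values below are hypothetical examples.
import subprocess

def run_ps(command: str) -> str:
    """Run a PowerShell command on the Hyper-V Host and return its output."""
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

vm = "web01"  # hypothetical guest machine name

# Grant the guest 4 virtual processors (the VM must be powered off first).
run_ps(f"Set-VMProcessor -VMName {vm} -Count 4")

# Configure a Dynamic Memory range: 1 GB floor, 2 GB at startup, 8 GB cap.
run_ps(
    f"Set-VMMemory -VMName {vm} -DynamicMemoryEnabled $true "
    "-MinimumBytes 1GB -StartupBytes 2GB -MaximumBytes 8GB"
)

# Verify the resulting configuration.
print(run_ps(f"Get-VM -Name {vm} | Format-List Name, ProcessorCount, MemoryAssigned"))
```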

Far from being under-provisioned, the server hardware that IT organizations target for virtualization is often massively over-provisioned with respect to any individual guest machine workload. Virtualization allows IT to consolidate multiple servers and workstations on a single piece of high-end hardware, with the object of making significantly more efficient and effective use of that hardware. Data center server hardware provisioned with high-end network adapters and connections to high-speed SAN disks (either flash memory or conventional drives) is expensive on a grand scale, so the cost benefit of packing guest machines more tightly into a physical footprint is considerable. Meanwhile, there is always the temptation to overload the VM Host, which runs the risk that the virtualization host machine becomes under-provisioned. Under-provisioning has serious implications for the virtualization environment: whenever the Hyper-V Host machine is under-provisioned for the workload presented by the guest machines it is hosting, any performance problems that result are only too likely to impact all of the guest machines resident on that host.
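
A quick way to keep that temptation in check is to track simple overcommit ratios for each Host: total virtual processors granted to guests versus the Host's logical processors, and total configured guest memory versus machine memory. The inventory numbers and the 2:1 alert threshold in this sketch are hypothetical illustrations, not Hyper-V limits.

```python
# Back-of-the-envelope overcommit check for a single Hyper-V Host.
# All inventory figures below are hypothetical.
host_logical_processors = 32
host_machine_memory_gb = 256

# (guest name, virtual processors, configured memory in GB)
guests = [
    ("web01", 4, 8),
    ("web02", 4, 8),
    ("sql01", 8, 64),
    ("build01", 8, 32),
]

total_vcpus = sum(vcpus for _, vcpus, _ in guests)
total_mem_gb = sum(mem_gb for _, _, mem_gb in guests)

cpu_ratio = total_vcpus / host_logical_processors
print(f"vCPU overcommit: {total_vcpus}:{host_logical_processors} = {cpu_ratio:.2f}:1")
print(f"Guest memory configured: {total_mem_gb} GB of {host_machine_memory_gb} GB")

# Illustrative policy: flag the Host for review beyond a 2:1 vCPU overcommit.
if cpu_ratio > 2.0:
    print("Warning: this Host may be under-provisioned for its guest workloads.")
```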

The best way to think about the performance of applications running under virtualization is that you have traded off some, hopefully minimal, amount of performance for the benefit of utilizing data center hardware more efficiently. Let's be crystal clear: compared to a native machine running Windows, you can count on the fact that virtualization will always have some impact on the performance of your applications running inside a Windows virtual machine (VM) guest. The impact may in fact be minor, in extreme cases barely detectable, especially when the VM Host machine is over-provisioned with respect to the physical resources available to be granted to its guest VMs. In other circumstances, where the guest machine does not have access to sufficient resources on the underlying Host, the performance impact of virtualization can be severe!

In the next set of blog posts, we are going to look at specific examples of this trade-off, comparing the performance of a benchmarking application

  1. on a native machine, 
  2. on an amply provisioned Hyper-V Host, and 
  3. on a Hyper-V Host machine that is significantly under-provisioned, relative to the guest machine workloads it is configured to support.

The key skill performance analysts need to cultivate is the ability to recognize an under-provisioned Hyper-V Host or, better yet, to anticipate the problem proactively and re-balance the workload across Hosts before performance problems surface.

Given the capacity of today's data center hardware, under-provisioning is often not a big concern. But when performance problems due to lack of capacity on the physical machine do occur, diagnosing and fixing them is challenging work. Typically, diagnosing this type of performance problem requires visibility into both the Hyper-V performance counters and performance measurements from the guest machines. When there is contention for storage or network resources, you may also need access to performance data from those subsystems, many of which are shared across multiple Hyper-V or VMware Hosts.
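
The Hyper-V counter sets are exposed through the standard Windows performance counter interface on the Root partition, so they can be sampled with a stock tool like typeperf. The sketch below polls two of the processor counters typically consulted in this kind of diagnosis; the 5-second interval and 10-sample count are arbitrary choices.

```python
# Minimal sketch: sampling Hyper-V processor counters with typeperf, run
# from the Root partition. The sampling interval and count are arbitrary.
import subprocess

counters = [
    # Utilization of the Host's physical (logical) processors overall.
    r"\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time",
    # CPU time consumed by the guest machines' virtual processors.
    r"\Hyper-V Hypervisor Virtual Processor(_Total)\% Guest Run Time",
]

# Take 10 samples at 5-second intervals and print the raw CSV output.
output = subprocess.run(
    ["typeperf", *counters, "-si", "5", "-sc", "10"],
    capture_output=True, text=True, check=True,
).stdout
print(output)
```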

Figuring out why a guest machine does not have access to adequate resources in a virtualized environment can also be quite difficult. Some of this difficulty is due to sheer complexity: it is not unusual for the Host to be executing a dozen or more guest machines, any one of which might be contributing to overloading a shared resource, e.g., a disk array attached to the Host machine or one of its network interface cards. The guest VM might also be facing a configuration constraint, where the virtual machine's settings restrict its access to the physical processor and memory resources it requires. Another source of difficulty is that the virtualization environment distorts the measurements produced inside the guest machine, the very measurements that would normally be used to understand the hardware requirements of the workload.

Fortunately, the configuration flexibility that is one of the main benefits of virtualization technology also often provides the means to deal rapidly with many performance problems that arise when guest machines are configuration-constrained or otherwise severely under-provisioned. With virtualization, you have the ability to spin up a new guest machine quickly and then add it non-disruptively to an existing cluster of front-end web servers, for example. Or you might be able to use live migration to relieve a capacity constraint in the configuration by moving one or more workloads to a different Hyper-V Host machine.
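
Both remedies can be scripted. As a minimal sketch of the second one, the snippet below live-migrates a guest with the Move-VM cmdlet, again driven from Python; the guest and destination Host names are hypothetical, and it assumes live migration has already been enabled between the two Hosts.

```python
# Minimal sketch: relieving a capacity constraint by live-migrating one
# guest to a less heavily loaded Hyper-V Host via the Move-VM cmdlet.
# The guest and Host names are hypothetical; live migration must already
# be configured and enabled between the source and destination Hosts.
import subprocess

vm = "web01"              # hypothetical guest machine to move
destination = "HVHOST02"  # hypothetical destination Hyper-V Host

subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     f"Move-VM -Name {vm} -DestinationHost {destination}"],
    check=True,
)
print(f"{vm} is now running on {destination}.")
```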

Before I show some of these benchmark results, you need to know a little more about how Hyper-V works, which is the subject of the next blog post in this series.

