
Understanding Guest Machine Performance under Hyper-V: Introduction

This is a continuation of a series of articles on Microsoft's Hyper-V virtualization technology. If you want to start at the beginning, click here.

The last few posts in this series examined the Hyper-V Dynamic Memory feature, which, IMHO, is probably the most significant technical difference between Hyper-V and VMware ESX.

Most of the guest machine performance considerations that are discussed beginning in this post are common to every virtualization technology I have worked with, including VMware and Hyper-V.

Here goes...

Understanding Guest Machine Performance under Hyper-V

Virtualization technology requires the execution of several additional layers of systems software that add overhead to many functional areas of a guest Windows machine, including
  • processor scheduling,
  • intercepting and emulating certain guest machine instructions that could violate the integrity of the virtualization scheme, 
  • machine memory management,
  • initiating and completing IO operations, and
  • synthetic device interrupt handling.
The effect of virtualization in each of these areas is to impose a performance penalty, and this applies equally to VMware, Xen, and the other flavors of virtualization that are available. Windows guest machine enlightenments that are exclusive to Hyper-V guest machines reduce some of the performance penalties associated with virtualization, but they cannot eliminate them entirely.
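Hyper-V does expose Host-side performance counters that track how often the hypervisor is invoked on a guest's behalf, which is one rough way to observe these overhead sources directly. The sketch below samples a couple of them with the built-in typeperf utility; the counter paths are assumptions recalled from memory, so verify the exact names on your own Host before relying on them.

```python
import subprocess

# Counter paths are assumptions; list the real names on your Host first with:
#   typeperf -q "Hyper-V Hypervisor Virtual Processor"
COUNTERS = [
    r"\Hyper-V Hypervisor Virtual Processor(*)\% Hypervisor Run Time",
    r"\Hyper-V Hypervisor Virtual Processor(*)\Total Intercepts/sec",
]

def sample(counters, samples=6, interval_sec=10):
    """Collect a few samples with the built-in typeperf utility (run this on
    the Hyper-V Host, not inside a guest) and return the raw CSV text."""
    cmd = ["typeperf", *counters, "-sc", str(samples), "-si", str(interval_sec)]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    print(sample(COUNTERS))
```

A guest whose workload drives these hypervisor counters hard is exactly the kind of workload where the penalties discussed below become visible.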

This is the foundation for at least some of the evidence that can be marshalled to support the statement made at the beginning of this post that virtualization always impacts the performance of a Windows application negatively, particularly its responsiveness. Individually, each of these extra layers of software adds a small amount of overhead every time one of the functional areas listed above is exercised. Added together, these overhead factors are significant enough to take notice of. But the real question is whether they are substantial enough to actively discourage data centers from adopting virtualization technology, given its benefits in many operational areas. I have already suggested a preliminary answer, which is "No, in many cases the operational benefits of virtualization far outweigh the performance risks." Still, there are many machines that remain better off being configured to run on native hardware. Whenever maximum responsiveness and/or throughput is required, native Windows machines reliably outperform Windows guest machines executing the same workload.
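To make the point about small overheads adding up concrete, here is a back-of-envelope sketch. The per-area penalties are purely illustrative numbers, not measurements; the point is only that several individually modest penalties compound into a noticeable stretch in response time.

```python
# Purely illustrative per-area penalties, expressed as a fraction of the native
# service time; substitute figures from your own measurements.
penalties = {
    "processor scheduling": 0.02,
    "instruction intercepts and emulation": 0.03,
    "machine memory management": 0.04,
    "I/O initiation and completion": 0.05,
    "synthetic interrupt handling": 0.02,
}

# If each layer stretches the work that passes through it, the individual
# penalties compound multiplicatively rather than simply adding.
inflation = 1.0
for area, penalty in penalties.items():
    inflation *= 1.0 + penalty

print(f"native response time inflated by a factor of {inflation:.2f}")  # ~1.17
```

Even with each layer costing only a few percent in this example, the combined inflation works out to roughly 17 percent.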

With its ability to clone new guest machines quickly, virtualization technology is often used to enhance the scalability and performance of an application that requires a cluster of Windows machines to process its workload. Virtualization can make scaling such an application up and out easier. However, there are other ways to cluster machines that achieve the same scale-up and scale-out improvements without incurring the overhead of virtualization.

Performance risks. 

The configuration flexibility that virtualization provides is accompanied by a set of risk factors that expose virtual machines to potential performance problems that are much more serious than the additional overhead considerations enumerated above. These performance risks need to be understood by IT professionals charged with managing the data center infrastructure. The most serious risk you will encounter is the ever-present danger of overloading the Hyper-V Host machine, which leads to more serious performance degradation than any of the virtualization "overheads" mentioned above. Shared processors, shared memory and shared devices introduce opportunities for contention for those physical resources among guest machines that would not otherwise be sharing those components if allowed to run on native hardware. The added complexity of administering the virtualization infrastructure, with its more pervasive resource sharing, is a related risk factor.

When a Hyper-V Host machine is overloaded, or over-committed, all its resident guest machines are apt to suffer, but isolating them so they share fewer resources, particularly disk drives and network adaptors, certainly helps. However, shared CPUs and shared memory are inherent in virtualization, so achieving the same degree of isolation with regard to those resources is more difficult, to say the least. This aspect of resource sharing is the reason Hyper-V has virtual processor scheduling and dynamic memory management priority settings, and we will need to understand when to use these settings and how effective they are. In general, priority schemes are only useful when a resource is over-committed, essentially an out-of-capacity situation. This creates a backlog of work – a work queue – that is blocked from executing and is not making forward progress. Priority sorts the work queue, allowing more of the higher priority work to get done, at the expense of lower priority workloads, which are penalized, potentially in such an extreme fashion that they are subject to outright starvation. Like any other out-of-capacity situation, the ultimate remedy is not priority, but finding a way to relieve the capacity constraint. With a properly provisioned virtualization infrastructure, there should be a way to move guest machines from an over-committed VM Host to one that has spare capacity.
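The claim that priority only helps once a resource is over-committed, and that it helps precisely by penalizing low-priority work, is easy to demonstrate with a toy queueing simulation. The sketch below models a single processor serving two classes of work under strict priority; it is not a model of the Hyper-V scheduler, just an illustration of the queueing behavior.

```python
import heapq
import random
from statistics import mean

def simulate(utilization, n_jobs=50_000, service=1.0, seed=1):
    """One processor, two classes of work (half high priority, half low),
    served non-preemptively in strict priority order. A toy queueing sketch,
    not a model of Hyper-V's scheduler."""
    random.seed(seed)
    arrival_rate = utilization / service          # offered load, jobs per unit time
    t, arrivals = 0.0, []
    for i in range(n_jobs):
        t += random.expovariate(arrival_rate)
        arrivals.append((t, i % 2))               # priority 0 = high, 1 = low

    ready, waits = [], {0: [], 1: []}
    clock, i = 0.0, 0
    while i < len(arrivals) or ready:
        # Admit everything that has arrived by now into the priority queue.
        while i < len(arrivals) and arrivals[i][0] <= clock:
            arrived, prio = arrivals[i]
            heapq.heappush(ready, (prio, arrived))
            i += 1
        if not ready:                             # processor idle: jump ahead
            clock = arrivals[i][0]
            continue
        prio, arrived = heapq.heappop(ready)
        waits[prio].append(clock - arrived)       # time the job sat in the queue
        clock += service                          # run it to completion
    return mean(waits[0]), mean(waits[1])

for u in (0.60, 0.95, 1.05):                      # under, near, and over capacity
    hi, lo = simulate(u)
    print(f"utilization {u:.2f}: mean wait high={hi:8.1f}  low={lo:10.1f}")
```

At low utilization both classes wait hardly at all, so the priority setting is irrelevant; past saturation the high-priority work still gets through, while the low-priority backlog and its wait times keep growing for as long as the overload persists.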

Somewhere between over-provisioned and under-provisioned is the range where the Hyper-V Host is efficiently provisioned to support the guest machine workloads it is configured to run. Finding that balance can be difficult, given constant change in the requirements of the various guest machines.

Finally, there are serious performance risks associated with guest machine under-provisioning, where the VM Host machine has ample capacity, but one or more child partitions are constrained by their virtual machine settings from accessing the processor and memory resources they require from the Hyper-V Host machine.
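One quick way to check for this condition is to look at the guest's virtual processor settings on the Host. The sketch below shells out to the Hyper-V PowerShell module from Python; the cmdlet and property names are assumptions to confirm with Get-Help Get-VMProcessor, and the guest machine name is of course hypothetical.

```python
import json
import subprocess

def guest_processor_settings(vm_name: str) -> dict:
    """Read a guest's virtual processor settings by shelling out to the
    Hyper-V PowerShell module. Cmdlet and property names are assumptions;
    confirm them with Get-Help Get-VMProcessor on your Host."""
    ps = (
        f"Get-VMProcessor -VMName '{vm_name}' | "
        "Select-Object Count, Reserve, Maximum, RelativeWeight | ConvertTo-Json"
    )
    result = subprocess.run(
        ["powershell.exe", "-NoProfile", "-Command", ps],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

# 'SQLGUEST01' is a hypothetical guest machine name.
s = guest_processor_settings("SQLGUEST01")
if s["Maximum"] < 100:           # Maximum is a percentage of the allotted VPs
    print(f"Guest capped at {s['Maximum']}% of its {s['Count']} virtual processors")
```

A similar check of the guest's memory settings will reveal a Maximum RAM value set too low for the workload.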

Table 2 summarizes the four kinds of Hyper-V configurations that need to be understood from a cost/performance perspective, focusing on the major performance penalties that can occur.


Table 2. Performance consequences of over- or under-provisioning the VM Host and its resident guest machines.

Condition | Who suffers a performance penalty
Over-committed VM Host | All resident guest machines suffer
Efficiently provisioned VM Host | No resident guest machines suffer
Over-provisioned VM Host | No guest machines suffer, but hardware cost is higher than necessary
Under-provisioned Guest | Guest machine suffers
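As a rough triage against the conditions in Table 2, Host-level counters such as the hypervisor's logical processor busy time and the Host's available memory can be folded into a simple classification. The sketch below assumes those two inputs have already been collected (for example, with typeperf as sketched earlier); the thresholds are illustrative, not Microsoft guidance, and the fourth condition, an under-provisioned guest, has to be diagnosed from the guest's own settings and counters rather than from the Host's.

```python
def classify_host(cpu_busy_pct: float, available_mb: float,
                  cpu_hot: float = 90.0, cpu_cold: float = 30.0,
                  mem_floor_mb: float = 2048.0) -> str:
    """Rough triage of a Hyper-V Host against the first three conditions in
    Table 2, from sustained averages of Host-level counters (for example,
    '% Total Run Time' under Hyper-V Hypervisor Logical Processor and the
    Host's Available MBytes). Thresholds are illustrative only."""
    if cpu_busy_pct >= cpu_hot or available_mb <= mem_floor_mb:
        return "over-committed: all resident guest machines are likely to suffer"
    if cpu_busy_pct <= cpu_cold:
        return "over-provisioned: guests run fine, but hardware sits idle"
    return "efficiently provisioned: no guest should suffer on the Host's account"

print(classify_host(cpu_busy_pct=94.0, available_mb=1500.0))
```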

In the next set of posts, I will try to characterize the performance profile of each configuration, beginning with over-provisioned Host machines, which is by far the most common scenario I encounter.

