
Hyper-V Processor performance monitoring

Virtual Processor performance monitoring

When Hyper-V is running and controlling the use of the machine's CPU resources, processor utilization by the hypervisor, the Root partition, and the guest machines running in child partitions must be monitored using the counters associated with the Hyper-V Hypervisor. These can be gathered by running a performance monitoring application in the Root partition. The Hyper-V processor usage statistics need to be used instead of the usual Processor and Processor Information performance counters available at the OS level.
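
For example, here is a minimal sketch of gathering these counters from the Root partition using the typeperf command-line utility that ships with Windows, driven from Python. The counter paths below assume an English-language system; the (*) wildcard selects every instance of each set.

```python
import subprocess

# Sample the three Hyper-V Hypervisor processor counter sets from the
# Root partition. typeperf ships with Windows: -si sets the sample
# interval in seconds, -sc the number of samples to collect.
counter_paths = [
    r"\Hyper-V Hypervisor Logical Processor(*)\% Total Run Time",
    r"\Hyper-V Hypervisor Virtual Processor(*)\% Guest Run Time",
    r"\Hyper-V Hypervisor Root Virtual Processor(*)\% Total Run Time",
]

result = subprocess.run(
    ["typeperf", *counter_paths, "-si", "1", "-sc", "5"],
    capture_output=True, text=True, check=True,
)
# typeperf writes CSV: one timestamped row per sample, one column per instance.
print(result.stdout)
```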

There are three sets of Hyper-V Hypervisor processor utilization counters, and the key to using them properly is understanding what entity each instance of a counter set represents. The three counter sets are:

  • Hyper-V Hypervisor Logical Processor

There is one instance of the HVH (Hyper-V Hypervisor) Logical Processor counter set for each hardware logical processor present on the machine. The instances are identified as VP 0, VP 1, …, VP n-1, where n is the number of Logical Processors available at the hardware level. The counter set is similar to the Processor Information set available at the OS level (which also contains an instance for each Logical Processor). The main metrics of interest are % Total Run Time and % Guest Run Time. In addition, the % Hypervisor Run Time counter records the amount of CPU time, per Logical Processor, consumed by the hypervisor itself.

  • Hyper-V Hypervisor Virtual Processor

There is one instance of the HVH Virtual Processor counter set for each child partition Virtual Processor that is configured. The guest machine Virtual Processor is the abstraction used in Hyper-V dispatching. The Virtual Processor instances are identified using the format guestname: Hv VP 0, guestname: Hv VP 1, etc., up to the number of Virtual Processors defined for each partition. The % Total Run Time and the % Guest Run Time counters are the most important measurements available at the guest machine Virtual Processor level.

CPU Wait Time Per Dispatch is another potentially useful measurement, indicating the amount of time a guest machine Virtual Processor is delayed in the hypervisor Dispatching queue, which is comparable to the OS Scheduler's Ready Queue. Unfortunately, it is not clear how to interpret this measurement. Not only are the units undefined – although 100-nanosecond timer units are plausible – but the counter reports values that are inexplicably discrete.

The counter set also includes a large number of counters that reflect the guest machine's use of various Hyper-V virtualization services, including the rates at which intercepts, interrupts, and Hypercalls are being processed for the guest machine. (These virtualization services are discussed in more detail in the next section.) These are extremely interesting counters, but be warned: they are often of little use in diagnosing the bulk of capacity-related performance problems, where either (1) a guest machine is under-provisioned with respect to access to the machine's Logical Processors or (2) the Hyper-V Host's processor capacity is severely over-committed. This set of counters is useful for understanding which Hyper-V services a guest machine consumes, especially if over-use of those services leads to degraded performance. They are also useful for understanding the virtualization impact on a guest machine that is not running any OS enlightenments. (A sketch showing how to sample these rate counters follows Table 1 below.)
  • Hyper-V Hypervisor Root Virtual Processor

The HVH Root Virtual Processor counter set is identical in content to the counter set reported at the guest machine Virtual Processor level. Hyper-V automatically configures one Virtual Processor for each Logical Processor for use by the Root partition. The instances of these counters are identified as Root VP 0, Root VP 1, etc., up to Root VP n-1. The sketch below enumerates the instances of all three counter sets and samples the key run-time counter for each guest Virtual Processor.
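
To make the instance naming concrete, here is a minimal sketch that lists the instances of all three counter sets and then samples % Total Run Time for each guest Virtual Processor. It assumes the pywin32 package, whose win32pdh module wraps the Windows PDH performance counter API; rate counters require two collections, hence the one-second sleep.

```python
import time
import win32pdh  # pywin32: wraps the Windows PDH performance counter API

SETS = (
    "Hyper-V Hypervisor Logical Processor",
    "Hyper-V Hypervisor Virtual Processor",
    "Hyper-V Hypervisor Root Virtual Processor",
)

# Show the instance naming conventions described above:
# VP n, guestname: Hv VP n, and Root VP n, respectively.
for counter_set in SETS:
    _counters, instances = win32pdh.EnumObjectItems(
        None, None, counter_set, win32pdh.PERF_DETAIL_WIZARD)
    print(counter_set, "->", instances)

# Sample % Total Run Time for every guest machine Virtual Processor.
query = win32pdh.OpenQuery()
_counters, instances = win32pdh.EnumObjectItems(
    None, None, "Hyper-V Hypervisor Virtual Processor",
    win32pdh.PERF_DETAIL_WIZARD)
handles = {
    instance: win32pdh.AddCounter(
        query,
        win32pdh.MakeCounterPath((
            None, "Hyper-V Hypervisor Virtual Processor",
            instance, None, -1, "% Total Run Time")))
    for instance in instances
}
win32pdh.CollectQueryData(query)  # rate counters need two samples...
time.sleep(1)
win32pdh.CollectQueryData(query)  # ...taken one second apart here
for instance, handle in handles.items():
    _type, value = win32pdh.GetFormattedCounterValue(
        handle, win32pdh.PDH_FMT_DOUBLE)
    print(f"{instance}: {value:.1f}% Total Run Time")
win32pdh.CloseQuery(query)
```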

Table 1. Hyper-V Processor usage measurements are organized into distinct counter sets for child partitions, the Root partition, and the hypervisor.

| Counter Group | # of instances | Instance names | Key counters |
| --- | --- | --- | --- |
| HVH Logical Processor | # of hardware logical processors | VP 0, VP 1, etc. | % Total Run Time; % Guest Run Time |
| HVH Virtual Processor | # of guest machines × # of virtual processors per guest | guestname: Hv VP 0, etc. | % Total Run Time; % Guest Run Time; CPU Wait Time Per Dispatch; Hypercalls/sec; Total Intercepts/sec; Pending Interrupts/sec |
| HVH Root Virtual Processor | # of hardware logical processors | Root VP 0, Root VP 1, etc. | % Total Run Time |
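
The per-guest virtualization-service rates listed in Table 1 can be sampled the same way. In this sketch the guest name SQL01 and its instance string are hypothetical; use the enumeration sketch above to confirm the exact instance names on your own system.

```python
import subprocess

# "SQL01" is a hypothetical guest name; the instance string follows the
# guestname: Hv VP n convention described above.
instance = "SQL01: Hv VP 0"
paths = [
    rf"\Hyper-V Hypervisor Virtual Processor({instance})\Hypercalls/sec",
    rf"\Hyper-V Hypervisor Virtual Processor({instance})\Total Intercepts/sec",
    rf"\Hyper-V Hypervisor Virtual Processor({instance})\CPU Wait Time Per Dispatch",
]
# Note: CPU Wait Time Per Dispatch is reported in undocumented units; if the
# plausible 100-nanosecond interpretation is right, divide by 10,000 for ms.
out = subprocess.run(
    ["typeperf", *paths, "-si", "1", "-sc", "10"],
    capture_output=True, text=True, check=True,
).stdout
print(out)
```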

