CPU Priority scheduling options.
In a final set of benchmark results on guest machine performance when the physical CPUs on the Hyper-V Host are over-committed, we will now look at how effective the Hyper-V processor scheduling priority settings are at insulating preferred guest machines from the performance impact of an under-provisioned (or over-committed) Hyper-V Host machine. The results of two test scenarios in which CPU Priority scheduling options were used, compared to the over-committed baseline (and the original native Windows baseline), are reported in the following table:
| Configuration | # guest machines | CPUs per machine | Best case elapsed time (minutes) | Stretch factor |
| --- | --- | --- | --- | --- |
| Native machine | … | 4 | 90 | … |
| 4 Guest machines (no priority) | 4 | 2 | 370 | 4.08 |
| 4 Guests using Relative Weights | 4 | 2 | 230 | 2.56 |
| 4 Guests using Reservations | 4 | 2 | 270 | 3.00 |
Table 4. Test results when Virtual Processor Scheduling Priority settings are used.
As discussed earlier in this series of blog posts, Hyper-V virtual processor scheduling options allow you to prioritize the workloads of guest machines that are resident on the same Hyper-V Host. To test the effectiveness of these priority scheduling options, I re-ran the under-provisioned 4 X 2-way guest machine scenario with two of the guest machines set to run at a higher priority, while the other two guests were set to run at a lower priority. I ran separate tests to evaluate the virtual processor Reservation settings in one scenario and the use of Relative Weights in a second scenario.
CPU Scheduling with Reservations.
For the Reservation scenario, the two high priority guest machines reserved 50% of the virtual processor capacity they were configured with. The two low priority guest machines reserved 0% of their virtual processor capacity. Figure 34 shows the Hyper-V Manager’s view of the situation – the higher priority machines 1 & 2 clearly have favored access to the Hyper-V logical processors. The two higher priority guests are responsible for 64% of the CPU usage, while the two low priority machines are consuming just 30% of the processor resources.
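To put those Reservation percentages in host terms, here is a rough sketch (my own back-of-the-envelope illustration, assuming the 4 logical processor host and 2-way guests used throughout these tests) of how a per-guest reserve percentage translates into the "percent of total system resources" figure that Hyper-V Manager displays:

```python
# Rough sketch: how a per-VM "virtual machine reserve" percentage maps onto a
# share of total host CPU capacity. Assumes a 4 logical processor host and
# 2-vCPU guests, as in these tests; mirrors the "Percent of total system
# resources" value that Hyper-V Manager derives from the reserve setting.

HOST_LOGICAL_PROCESSORS = 4  # assumption: the over-committed host used in this series

def percent_of_total_system(reserve_pct: float, vcpus: int,
                            host_lps: int = HOST_LOGICAL_PROCESSORS) -> float:
    # The reserve applies to each of the guest's virtual processors,
    # then scales by the fraction of host logical processors the guest has.
    return reserve_pct * vcpus / host_lps

high_guest = percent_of_total_system(50, 2)  # 25.0% of the host per high priority guest
low_guest = percent_of_total_system(0, 2)    #  0.0% of the host per low priority guest

print(f"Each high priority guest reserves {high_guest:.0f}% of total host CPU")
print(f"Both high priority guests together reserve {2 * high_guest:.0f}% of the host")
print(f"Each low priority guest reserves {low_guest:.0f}%")
```

Under these assumptions, the two high priority guests together have roughly half of the host's CPU capacity guaranteed to them, which lines up with the favored access visible in Figure 34.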
The guest machines configured with high priority settings executed to completion in about 270 minutes (or 4 ½ hours). This was about 27% faster than the equally weighted guest machines in the baseline scenario where four guest machines executed the benchmark program without any priority settings in force.
Figure 35 reports on the distribution of the Virtual Processor utilization for the four guest machines executing in this Reservation scenario during a one-hour period. Guest machines 1 & 2 are running with the 50% Reservation setting, while machines 3 & 4 are running with the 0% Reservation setting. Instead of the view in Figure 32 where each guest machine has equal access to virtual processors, the high priority guest machines clearly have favored access to virtual processors. Together, the 4 higher priority virtual processors consumed about 250% out of a total of 400% virtual processor capacity, almost twice the amount of residual processor capacity available to the lower priority guest machines.
Figure 35. Virtual Processor utilization for the four guest machines executing in the Reservation scenario.
Hours later, when the two high priority guest machines finished executing the benchmark workload, those guest machines went idle and the low priority guests were able to consume more virtual processor capacity. Figure 36 shows these higher priority guest machines executing the benchmark workload until about 10:50 pm, at which point the Test 1 & 2 machines go idle and machines 3 & 4 quickly expand their processor usage.
Figure 36. The higher priority Test 1 & 2 machines go idle at about 10:50 pm, at which point machines 3 & 4 quickly expand their processor usage.
As Figure 36 indicates, even though the high priority Test machines 1 & 2 are idle, their virtual processors still get scheduled to execute on the Hyper-V physical CPUs. When guest machines do not consume all of the virtual processor capacity requested by a Reservation setting, that excess capacity evidently does become available for lower priority guest machines to use.
Figures 37 and 38 show the view of processor utilization available from inside one of the high priority guest machines. Figure 37 shows the view of the virtual hardware that the Windows CPU accounting function provides, plus it shows the instantaneous Processor Ready Queue measurements. These internal measurements indicate that the virtual processors are utilized near 100% and there is a significant backlog of Ready worker threads from the benchmark workload queued for the two virtual CPUs.
Figure 37 shows the % Processor Time counter from the guest machine Processor object, while Figure 38 (below) shows processor utilization for the top 5 most active processes, with the ThreadContentionGenerator.exe – the benchmark program – dominating, as expected.
Figure 38. The benchmark program ThreadContentionGenerator.exe consumes all the processor cycles available to the guest machine.
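For anyone who wants to reproduce this kind of guest-internal view, the sketch below (my own illustration, not the tooling used for these tests) samples the relevant counters from inside a guest by shelling out to the standard Windows typeperf utility; the counter paths are standard Windows performance counters, but the sampling interval, sample count, and process instance name are assumptions on my part:

```python
# Illustrative sketch: sample the guest-internal counters shown in Figures 37
# and 38 using the built-in Windows typeperf utility. The counter paths are
# standard Windows performance counters; the 5-second interval, 12-sample run,
# and the process instance name are my own assumptions.
import subprocess

COUNTERS = [
    r"\Processor(_Total)\% Processor Time",    # overall virtual processor utilization
    r"\System\Processor Queue Length",         # the Ready queue backlog inside the guest
    r"\Process(ThreadContentionGenerator)\% Processor Time",  # the benchmark process
]

# typeperf writes comma-separated samples to stdout; -si is the sample
# interval in seconds and -sc is the number of samples to collect.
result = subprocess.run(
    ["typeperf", *COUNTERS, "-si", "5", "-sc", "12"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```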
CPU Scheduling with Relative Weights.
A second test scenario used Relative Weights to prioritize the guest machines involved in the test, leading to results very similar to the Reservation scenario. Two guest machines were given high priority scheduling weights of 200, while the other two guest machines were given low priority scheduling weights of 50. This is the identical weighting scheme discussed in an earlier post that described setting up CPU weights. Mathematically, the proportion of processor capacity allocated to the higher priority guest machines was 80%, with 20% allocated to the lower priority guests. In actuality, Figure 39 reports each high priority virtual processor consuming about 75% of a physical CPU, while the four lower priority virtual processors each consumed slightly more than 20% of a physical CPU.
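The arithmetic behind that 80/20 split is simple; here is a small sketch (an illustration only, with placeholder guest names) showing how the relative weights translate into expected shares when every virtual processor is busy:

```python
# Sketch: expected CPU shares under Hyper-V relative weights when every
# virtual processor is busy. Guest names are placeholders; the weights
# (200 for high priority, 50 for low priority) are the values used in the test.

weights = {"Test 1": 200, "Test 2": 200, "Test 3": 50, "Test 4": 50}
total = sum(weights.values())

for guest, weight in weights.items():
    print(f"{guest}: weight {weight} -> expected {100.0 * weight / total:.0f}% share")

high_group = 100.0 * (weights["Test 1"] + weights["Test 2"]) / total
print(f"High priority guests combined: {high_group:.0f}%")       # 80%
print(f"Low priority guests combined:  {100 - high_group:.0f}%")  # 20%
```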
Since the higher priority guest machines were able to consume more processor time than in the Reservation scenario, they completed the benchmark task in 230 minutes, faster than the best case in the Reservation scenario and about 38% faster than the baseline scenario where all four guests ran at the same Hyper-V scheduling priority.
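As a quick check on the percentages quoted here and in the Reservation discussion, both are improvements relative to the 370-minute no-priority baseline from Table 4; a minimal sketch:

```python
# Sketch: the percentage improvements quoted in the text, computed from the
# best case elapsed times (in minutes) reported in Table 4.
baseline_no_priority = 370
reservations_best = 270
relative_weights_best = 230

def pct_faster(new_minutes: int, old_minutes: int = baseline_no_priority) -> float:
    return 100.0 * (old_minutes - new_minutes) / old_minutes

print(f"Reservations:     {pct_faster(reservations_best):.0f}% faster than no priority")      # ~27%
print(f"Relative Weights: {pct_faster(relative_weights_best):.0f}% faster than no priority")  # ~38%
```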
The CPU usage pattern in Figure 40, which shows this shift taking place during the Relative Weights scenario, bears some similarity to the Reservation scenario shown in Figure 36, with one crucial difference. With Reservations, Hyper-V still schedules the high priority virtual processors that are not active to execute, so they continue to consume some virtual processor execution time. Using Relative Weights, the virtual processors that are idle are not even scheduled to execute, so the lower priority guest machines get more juice. Comparing the best case execution times for the higher priority machines in the two priority scheduling scenarios, the Relative Weights scheme also proved superior.
Figure 40. When the higher priority virtual processors for guest machines 1 & 2 finish processing at about 1:40 pm, the processor usage by the lower priority virtual processors accelerates.
All this is consistent with the way capacity reservation schemes tend to operate: so long as the high priority workload does not consume all of the capacity reserved for its use, some of that excess reserved capacity is simply going to be wasted. But also consider that the measure of performance that matters in these tests is throughput-oriented. If performance requirements are oriented instead around responsiveness, the CPU Reservation scheme in Hyper-V should yield superior results.
Next: Guest Machine performance monitoring
Even though the performance counters associated with virtual processor usage gathered by Windows guest machines running under Hyper-V (or VMware ESX, for that matter) are supplanted by the usage data reported by the hypervisor, you may have noticed that I have shown several places where guest machine performance counters proved useful in understanding what is going on in the virtualized infrastructure. In the next series of blog posts, we will step back and see how guest machine performance counters are impacted by virtualization in more general terms. I will discuss which of the performance measurements guest machines produce remain viable for diagnosing performance problems and understanding capacity issues. Depending on the type of performance counter, we will see that the impact of the virtualization environment varies considerably.