Over-provisioned Hyper-V Hosts
It is easy to recognize a generously over-provisioned Hyper-V Host machine – its processors are underutilized and machine memory is not fully allocated.
When the machine’s logical CPUs are seldom observed running in excess of 25-40% busy, there is ample CPU capacity for all its resident guest machines, especially considering that most Hyper-V Host machines are multiprocessors.
(Note: the mathematics of this is straightforward. The dispatching of a guest machine virtual processor is delayed only when all of the Host's logical CPUs are busy, forcing it to wait. In a symmetric multiprocessor, assuming each processor's utilization is independent of the others, the probability that all CPUs are busy simultaneously is the product of their individual utilizations. For example, if there are four CPUs and each CPU is busy 25% of the time, the joint probability of all four being busy at once is 0.25 * 0.25 * 0.25 * 0.25, or about 0.004. Thus, the probability that all CPUs are simultaneously busy is only ~0.4%.)
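To make the arithmetic concrete, here is a minimal Python sketch that generalizes the four-CPU example to any number of logical processors. It assumes every processor runs at the same average utilization and that the processors are busy independently of one another:

```python
def prob_all_busy(utilization: float, n_cpus: int) -> float:
    """Joint probability that all n_cpus are busy at once, assuming
    each runs at the same average utilization and the processors
    are busy independently of one another."""
    return utilization ** n_cpus

# The example from the text: four CPUs, each 25% busy.
print(prob_all_busy(0.25, 4))   # 0.00390625, i.e., ~0.4%
```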
Machine memory can safely be regarded as underutilized when more than 40% of it remains available for allocation by the hypervisor and no guest machine is running at its maximum Dynamic Memory setting.
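As an illustration of how this rule of thumb might be applied, the following sketch checks both conditions against a set of measurements. The function name and the sample numbers are hypothetical; in practice, the inputs would come from the hypervisor's available memory measurement and the per-guest Dynamic Memory counters:

```python
def memory_underutilized(available_mb: float, total_mb: float,
                         vm_current_mb: dict, vm_maximum_mb: dict) -> bool:
    """Apply the rule of thumb above: machine memory is underutilized
    when more than 40% remains available to the hypervisor and no
    guest is running at its Dynamic Memory maximum."""
    ample_headroom = available_mb / total_mb > 0.40
    no_vm_at_max = all(vm_current_mb[vm] < vm_maximum_mb[vm]
                       for vm in vm_current_mb)
    return ample_headroom and no_vm_at_max

# Hypothetical measurements for a 64 GB Host running two guests:
print(memory_underutilized(
    available_mb=30_000, total_mb=65_536,
    vm_current_mb={"web01": 8_192, "sql01": 16_384},
    vm_maximum_mb={"web01": 16_384, "sql01": 32_768}))  # True
```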
When Hyper-V Host machines are over-provisioned, the performance of guest machine applications approaches the level of native hardware.
The only problem with over-provisioned VM Host machines is that they are not economical. Over-provisioning on a wide enough scale often prompts an initiative to increase the degree of server consolidation by packing more guest machines into the existing virtualization infrastructure.
Over-committed Hyper-V Host machines and under-provisioned guest machines are much more interesting to discuss because that is when serious performance problems can occur. The first set of benchmarking runs reported below shows how these two conditions can be characterized using various Hyper-V and Windows OS performance measurements.
Several of the benchmarking runs discussed here and in subsequent posts deliberately overload the Hyper-V Host processors or the Host machine's memory footprint and then examine the Hyper-V and guest machine performance counters that characterize those overloaded conditions. As we have seen, you can implement virtual processor and dynamic memory priority settings to try to protect higher priority workloads from this degradation. Additional benchmarking runs in which physical resources are over-committed reveal how effective these virtual processor and machine memory priority settings are at shielding higher priority workloads from the performance impact of an overloaded Hyper-V Host.
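As a rough model of what the virtual processor weight settings are intended to accomplish, the sketch below assumes a simple proportional-share scheduler: under sustained contention, each guest receives CPU time in proportion to its relative weight. The guest names and weight values are hypothetical, and the actual Hyper-V scheduler is more elaborate, honoring reserves and caps as well as weights:

```python
def expected_cpu_share(weights: dict) -> dict:
    """Under sustained contention, a proportional-share scheduler
    tends to grant each guest CPU time in proportion to its
    relative weight. Guest names and weights here are hypothetical."""
    total = sum(weights.values())
    return {vm: w / total for vm, w in weights.items()}

# A high-priority guest weighted 200 against two defaults of 100:
print(expected_cpu_share({"prod01": 200, "test01": 100, "test02": 100}))
# {'prod01': 0.5, 'test01': 0.25, 'test02': 0.25}
```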
Scaling up and scaling out
Virtualization hardware and software technology has evolved to the point where it provides compelling benefits in modern data center operations, although improving application performance is not among them. For example, one of the clear trends in hardware manufacturing that favors virtualization is building more powerful multi-core processors rather than faster individual processors. CPU manufacturers have resorted to packaging more processor cores on a chip instead of making individual processors faster because increases in clock speed lead to disproportionate increases in power consumption, which in turn ramps up the amount of heat that has to be dissipated. In semiconductor fabrication, manufacturers have encountered a "power wall" that resists other engineering solutions.
A second factor that promotes virtualization is the industry Best Practice of building and deploying Windows machines dedicated to performing a single role, whether they are explicit Server roles or more general purpose desktop and portable workstations handling diverse personal computing tasks. A related practice is for the Technical Support group within the IT organization to build, and then certify for distribution after a lengthy period of comprehensive Acceptance Testing, one or more stable images of the operating system and the application software installed on top of it. This stable image is then cloned each time there is an organizational need to support another copy of the application. Virtualization software that can deploy new copies of these system images through rapid cloning of virtual machines, a process that can also be automated, adds valuable flexibility to data center operations.
Most of the virtual machines configured to handle a single server role are clearly not well matched against the powerful capabilities of the data center machines they would be deployed to. Without virtualization, these data center machines would often be massively over-provisioned if they were only capable of running an individual Windows Server workload. Virtualization technology offers relief from this conundrum, a convenient way to consolidate many of these individual workloads on a single piece of equipment. Essentially, virtualization technology provides a flexible, software-based mechanism that allows system administrators to utilize current hardware more effectively while retaining all the administrative advantages of isolating workloads on dedicated servers.
Still, spinning up a new guest machine from the standard server or workstation Build is not the only possible response to each new request for IT services. There are viable alternatives, including allowing a single instance of IIS, for example, to host multiple application web sites or installing multiple instances of SQL Server on a production or test machine. IT professionals are sometimes reluctant to choose these configuration alternatives because they are concerned about the performance risks associated with multiple web servers sharing a single machine image, for example. Of course, these performance risks do not magically disappear when multiple guest machines are provisioned instead. The problem of over-committing shared computer resources is merely elevated to the level associated with Hyper-V administration.
Finally, having firmly established itself as an integral part of large scale data center operations, virtualization technology continues to evolve, adding virtual machine management capabilities such as replication, live migration, dynamic load balancing, and automatic failover and recovery. The flexibility to provision a new machine quickly can also benefit workloads that are running up against capacity limits in their current configuration and need to scale out across multiple machines in an application cluster to achieve higher levels of throughput.
Next: Benchmark results