
Virtual memory management in VMware: Swapping

This is a continuation of a series of blog posts on VMware memory management. The previous post in the series is here.


Swapping

To relieve a serious shortage of machine memory, VMware can steal physical memory pages granted to a guest OS at random, a mechanism VMware terms swapping. Swapping is triggered when free machine memory drops below a 4% threshold.
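
To make the trigger condition concrete, here is a small Python sketch of a memory state classifier built around the free-memory thresholds described in this series (ballooning below roughly 6% free, swapping below roughly 4%). The function and exact cutoffs are my own illustration of the idea, not VMware's actual implementation.

    # Illustrative sketch only, not VMware code: classify machine memory pressure
    # using the free-memory thresholds described in this series of posts
    # (ballooning below about 6% free, swapping below about 4% free).

    def memory_state(free_mb, total_mb):
        pct_free = 100.0 * free_mb / total_mb
        if pct_free >= 6.0:
            return "High"   # enough free machine memory; no reclamation needed
        elif pct_free >= 4.0:
            return "Soft"   # start inflating the guest balloon drivers
        else:
            return "Hard"   # resort to swapping (random page replacement)

    # Example: about 3.5% free puts the host in the "Hard" state, so swapping begins.
    print(memory_state(free_mb=3.5, total_mb=100.0))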

During the case study, VMware resorted to swapping beginning around 9:10 AM, when the Memory State counter reported a transition to the “Hard” memory state, as shown in Figure 19. Initially, VMware swapped out almost 600 MB of machine memory granted to the four guest machines. Also note that swapping is very biased: the ESXAS12B guest machine was barely touched, while at one point 400 MB of machine memory from the ESXAS12E machine was swapped out.
Figure 19. VMware resorted to random page replacement – or swapping – to relieve a critical shortage of machine memory when usage of machine memory exceeded 96%. Swapping was biased – not all guest machines were penalized equally.

Given how infrequently random page replacement policies are implemented, it is surprising to discover that they often perform reasonably well in simulations, although they still perform much worse than stack algorithms that order candidates for page replacement based on Least Recently Used (LRU) criteria. Because VMware selects pages from a guest machine’s allotted machine memory for swapping at random, it is entirely possible for VMware to swap out truly awful candidates, including pages that belong to the guest machine’s current working set. With random page replacement, some worst-case scenarios are entirely possible. For example, VMware might choose to swap out a frequently referenced page containing operating system kernel code or Page Table entries, pages that the guest OS itself would be among the least likely to select for page replacement.
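
The claim that random replacement is serviceable but inferior to LRU is easy to see in a toy simulation. The sketch below uses an invented reference stream that concentrates 90% of references on a small hot set; the workload, frame count, and fault counts are purely illustrative and depend entirely on the parameters chosen.

    # Toy comparison of random page replacement vs. LRU on a synthetic workload.
    import random
    from collections import OrderedDict

    def run(policy, refs, frames):
        faults, resident = 0, OrderedDict()
        for page in refs:
            if page in resident:
                resident.move_to_end(page)      # refresh recency so LRU works
                continue
            faults += 1
            if len(resident) >= frames:
                if policy == "lru":
                    resident.popitem(last=False)            # evict least recently used page
                else:
                    victim = random.choice(list(resident))  # evict a page chosen at random
                    del resident[victim]
            resident[page] = True
        return faults

    random.seed(1)
    # 90% of references hit a small hot set of 8 pages; the rest scatter over 64 pages.
    refs = [random.randrange(8) if random.random() < 0.9 else random.randrange(64)
            for _ in range(20000)]
    print("LRU faults:   ", run("lru", refs, 16))
    print("Random faults:", run("random", refs, 16))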

To see how effective VMware’s random page replacement policy was, the rate of pages swapped out was compared to the swap-in rate. This comparison is shown in Figure 20. There were two large bursts of swap-out activity, the first taking place at 9:10 AM when the swap-out rate was reported at about 8 MB/sec. The swap-in rate never exceeded 1 MB/sec, but a small amount of swap-in activity continued to be necessary over the next 90 minutes of the benchmark run, until the guest machines were shut down and machine memory was no longer over-committed. In clustered VMware environments, the vMotion facility can be invoked automatically to migrate a guest machine from an over-committed ESX Host to another machine in the cluster that is not currently experiencing memory contention. This action may relieve the immediate memory over-commitment, but it may also simply shift the problem to another VM Host.
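
As a rough illustration of how the comparison in Figure 20 can be quantified, the fragment below totals swap-out and swap-in volumes from per-minute rate samples and reports a swap-in ratio. The sample values are hypothetical, chosen only to resemble the shape of the measured data: a low swap-in ratio suggests the randomly chosen victim pages were mostly not needed again.

    # Hypothetical per-minute samples of (swap-out MB/sec, swap-in MB/sec),
    # made up to resemble Figure 20: large swap-out bursts, little swap-in.
    samples = [(8.0, 0.3), (0.5, 0.8), (0.0, 0.4), (6.0, 0.6), (0.0, 0.2)]

    mb_out = sum(out for out, _ in samples) * 60   # MB swapped out over the window
    mb_in  = sum(inn for _, inn in samples) * 60   # MB swapped back in

    print("Swapped out: %.0f MB, swapped back in: %.0f MB" % (mb_out, mb_in))
    print("Swap-in ratio: %.1f%%" % (100.0 * mb_in / mb_out))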

As noted in the previous blog entry, the benchmark program took three times longer to execute when there was memory contention from all four active guest machines, compared to running in a standalone guest machine. Delays due to VMware swapping were certainly one of the important factors contributing to elongated program run-times.

Figure 20. Comparing pages swapped out to pages swapped in.
This entry on VMware swapping concludes the presentation of the results of the case study that stressed the virtual memory management facilities of a VMware ESX host machine. Based on an analysis of the performance data on memory usage gathered at the level of both the VMware Host and internally in the Windows guest machines, it was possible to observe very clearly the virtual memory management mechanisms VMware uses in operation.

With this clearer understanding of VMware memory management in mind, I'll discuss some of the broader implications for performance and capacity planning of large-scale virtualized computing infrastructures in the next (and last) post in this series.
