
Presenting two sessions at the upcoming UKCMG meeting in Oxford, England on May 14-15.

Some news that regular readers of this blog might be interested in hearing about...

I plan to present two sessions at the upcoming UKCMG annual conference, which is being held this year on May 14 & 15 at the Oxford Belfry on the outskirts of Oxford, England.
The first presentation is a repeat performance of the one I gave at the US CMG in December, a paper entitled “Measuring Processor Utilization in Windows and Windows applications.” It essentially pulls together the series of blog entries I have been posting here, beginning with the first installment, but with a good deal more material than I have gotten around to posting to the blog.
For instance, the last blog post, discussing the high resolution clocks and timer facilities in Windows, leads directly to a consideration of what happens to the various CPU utilization measurements when Windows is running as a virtual guest under VMware or Hyper-V. That discussion is in the paper but, unfortunately, hasn’t made it to the blog yet.
But you can download the full paper from my company’s web site here.
I am ashamed to admit that the full paper has been available since December and that, inept as I am at blogging, I never alerted you blog readers to its availability. Unfortunately, and it will be forever thus, or at least until I retire from my day job, self-publishing on this blog takes a back seat to the work that actually pays the bills around here.
(I will resist the temptation to go off on a rant here about the idiotic and naïve notion, expounded by fanatical proponents of Open Source technology, that information should be free. That’s a wonderful ideal, of course, but it flies in the face of the economics of information gathering, production, storage and dissemination, all of which have real costs associated with them. Even in the digital age, which has revolutionized the costs of information storage and dissemination, those costs remain, and they are considerable. My contrarian view is that no one, other than gods and saints, in possession of potentially valuable information is apt to give it away for free under our system of capitalism, but that is another topic entirely.)
Workshop on Web Application Performance and Tuning.
The second session is an extended workshop on web application performance. It is focused on Windows technology (IIS, ASP.NET, AJAX, etc.), but many of the tools and techniques discussed are directly applicable to other web hosting platforms.
The workshop is based on a course that I used to give in-house at Microsoft to the developers working on various Microsoft web-based applications. While I have published very little on this topic over the years, it has actually been the focus of much of my software development work over the past five years or so. I do expect to start publishing something soon on the subject, especially as I am in the late stages of developing a new software tool aimed squarely at Microsoft web application performance.
Reading between the lines of some of my recent blog postings that are ETW-oriented, including the CPU measurement series, you would be correct in guessing that the new tool attempts to leverage ETW trace events, specifically, in this case, the events that instrument the Microsoft IIS web server and the TCP/IP networking stack. This new trace analysis tool also correlates these system-oriented trace events from various Windows components with events issued from inside application scenarios instrumented using the Scenario class library (a free component, currently posted in the MSDN Archive here).
Instrumenting your application for performance monitoring is a crucial step, and that is where the Scenario class library comes in. Originally, I conceived of the Scenario instrumentation library as a .NET flavor of the open source Application Response Measurement (ARM) initiative that was championed by both HP and IBM (and supported by the US CMG, where I was the ARM Committee liaison for many years). Soon after I arrived at Microsoft, it became apparent that I needed to adapt my original conception to leverage ETW tracing technology, which had the considerable weight of the Windows Fundamentals organization behind it.
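To make that style of instrumentation concrete, here is a minimal sketch of the begin/end pattern the workshop builds on. Note that it is written against the standard .NET EventSource class rather than the Scenario library’s own API, and the provider name, event IDs, and the “Checkout” scenario are all hypothetical, so treat it as an illustration of the approach, not as documentation for the library.

    using System;
    using System.Diagnostics;
    using System.Diagnostics.Tracing;

    // Hypothetical ETW provider for application scenario events. The Scenario
    // class library wraps this kind of plumbing; this sketch just shows the pattern.
    [EventSource(Name = "Example-WebApp-Scenarios")]
    sealed class ScenarioEventSource : EventSource
    {
        public static readonly ScenarioEventSource Log = new ScenarioEventSource();

        [Event(1)]
        public void ScenarioBegin(string scenarioName)
        {
            WriteEvent(1, scenarioName);
        }

        [Event(2)]
        public void ScenarioEnd(string scenarioName, long elapsedMilliseconds)
        {
            WriteEvent(2, scenarioName, elapsedMilliseconds);
        }
    }

    class CheckoutService
    {
        // Bracket a key application scenario with begin and end trace events so that
        // a trace analysis tool can line them up on the same timeline as the IIS and
        // TCP/IP events fired underneath the same request.
        public void ProcessOrder()
        {
            ScenarioEventSource.Log.ScenarioBegin("Checkout");
            var timer = Stopwatch.StartNew();
            try
            {
                // ... the application work that makes up the Checkout scenario ...
            }
            finally
            {
                timer.Stop();
                ScenarioEventSource.Log.ScenarioEnd("Checkout", timer.ElapsedMilliseconds);
            }
        }
    }

Because the begin and end events carry the scenario name and elapsed time, any ETW consumer that enables this provider can report response times for the scenario alongside the system-level providers.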
In the workshop I explain how to use this application-oriented instrumentation as part of integrating software performance engineering best practices into the software development life cycle. This involves first setting performance goals around the key application scenarios you’ve identified, and then instrumenting those scenarios to determine whether the application as delivered for testing is actually capable of meeting those goals. The instrumentation can also safely be embedded in the application when it is ultimately deployed in production, which is fundamentally necessary to enable service level reporting and to verify, for example, that the app is meeting its stated performance objectives. Most ARM advocates concentrate on monitoring application performance in production, but tend to neglect the crucial earlier stages of application development, where it is important to bake goal-oriented performance monitoring in at the outset.
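By way of illustration only, here is a sketch of the kind of goal check you might run against an instrumented scenario during testing. The 500 ms goal and the CheckoutService class are carried over from the hypothetical example above; in a real test harness the response time measurements would more likely come out of the ETW trace than from an in-process stopwatch.

    using System;
    using System.Diagnostics;

    class ResponseTimeGoalCheck
    {
        // Hypothetical performance goal for the Checkout scenario.
        const long CheckoutGoalMs = 500;

        static void Main()
        {
            var service = new CheckoutService();   // instrumented class from the sketch above

            var timer = Stopwatch.StartNew();
            service.ProcessOrder();
            timer.Stop();

            bool goalMet = timer.ElapsedMilliseconds <= CheckoutGoalMs;
            Console.WriteLine("Checkout took {0} ms; the {1} ms goal was {2}.",
                timer.ElapsedMilliseconds, CheckoutGoalMs, goalMet ? "met" : "missed");
        }
    }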
The new Windows performance tool is currently in a very limited beta release, and contrary to the negative views I expressed in my earlier aside -- not a rant -- about information being free, we are looking at some sort of freebie distribution of the initial “commercial” version of the tool to allow you guys to explore the technology and see what it can do for you.
So, if you happen to be in the neighborhood of Oxford, England next month, you can hear & see more about this initiative. In the meantime, stay tuned to this space, where I will try to do a better job keeping you posted as we make progress in this area.
