
Plug-and-Play devices on Windows Tablets


In the last post on Windows 8 and the new Windows Runtime libraries for Windows Store apps, I mentioned that the key deliverable in the new version of the Windows OS is the port to the ARM platform. In this post, I will discuss the implications of Windows running on ARM, emphasizing the impact of “plug-and-play” device driver technology. In porting the core of the OS to the ARM platform, Microsoft was careful to preserve the interfaces used by device driver developers, ensuring a smooth transition. Microsoft wanted customers to be able to attach most of the peripherals they use today on a Windows 7 machine to any ARM-based tablet running Windows 8.

What is ARM?

In discussing the Windows 8 port to the ARM platform with some folks, I noticed that not everyone is familiar with the underlying hardware – that it runs a different instruction set than Intel-based computers and is not Intel-compatible. So, let’s start with a little bit about the ARM hardware itself.

ARM – the acronym originally stood for Advanced RISC Machine – is a processor architecture specification that is designed by the ARM consortium and then licensed to its members, who build processors based on it. Members of the consortium work together to devise the ARM standard and move it forward. By any measure, ARM’s reach in the marketplace today is impressive. According to the ARM web site, at least 95% of all mobile phones – not just smartphones – are powered by ARM microprocessors. In 2010, six billion microprocessors based on ARM designs were built. If you own a recent model coffee maker that sports a programmable, electronic interface, you are probably talking to an ARM microprocessor.

So, ARM refers to the processor architecture, an “open” standard of sorts – open, at least, to any hardware manufacturing company willing to pay to license the ARM IP and designs from the consortium, which runs you several million dollars, plus royalties on every unit you build. The ARM processor specification, which is based on RISC principles, is distinct from the manufacturing of ARM chips. Overall, there are currently about 20 manufacturers that build ARM-based computers, with companies like Qualcomm and NVIDIA leading the charge.

Another term associated with devices like the NVIDIA Tegra that powers the Surface is System on a Chip (SoC). In the case of the NVIDIA chip, that entails embedding the ARM microprocessor on a single silicon die that contains pretty much everything a mobile computer might need – a graphics processor (NVIDIA’s specialty), audio, video, imaging, etc. Or, if you prefer an integrated SoC design optimized for telephony, you might decide to go with the Qualcomm version. The key is that the software you build for the phone can also run on an ARM tablet because the underlying processor instruction set is compatible.

I blogged last year that ARM technology and the consortium of manufacturers that have adopted ARM designs have emerged as the first credible challenge to the Wintel hegemony that has dominated mainstream computing for the last twenty years. A year later, that prediction looks better and better. From almost every perspective today, ARM looks like it is winning.

ARM’s recent success is reflected in the relative financial results of both Microsoft and Intel, compared to Apple and Qualcomm, for example. Microsoft recently reported revenues slipped by 8% in its latest quarter, while Intel sales were down about 5%. The forecast for PC sales is down, as I mentioned in an earlier post, as more people are opting to buy tablets instead. Meanwhile, Apple posted “disappointing” financial results for the quarter because sales of iPads “only” increased by 26%. Overall, revenue at Apple increased by 27% in its most recent quarterly earnings report. Sales of iPhones were up 58%, compared to last year, with Apple apparently having some difficulty keeping up with the demand.  

All of which makes Windows 8 a very important release for Microsoft. Windows 8 needs to offer a credible alternative to Apple and Android phones and tablets, blunting their drive to dominate this market. It is an open issue whether Windows 8 is good enough to do that. My guess is “yes” for tablets, but “no” for phones. Windows OEMs like Lenovo, HP and Dell are rushing to bring machines that exploit the Windows 8 touch screen interface to market. Microsoft is hoping that Windows’ long-term policy of being open to all sorts of hardware peripherals – devices that “plug and play” when plugged into Windows PCs – will provide a major advantage in the emerging market for tablets.

Plug and Play devices

As I discussed in the last blog entry, you can buy an ARM-based tablet like the new Microsoft Surface, but it is only capable of running applications built on top of Windows RT. Picture the architecture of Windows 8, for example, which looks like the block diagram in Figure 1:
 
 

Figure 1. The Windows Runtime (aka Windows RT) is a new API layer on top of existing Win32 OS interfaces that developers must target in order to build a Windows Store app that can run on Windows 8 ARM-based tablets, which are limited to supporting Windows RT. As illustrated, a Windows Store app can also call into a limited subset of existing Win32 interfaces that have not been fully converted in Windows 8.
 

The set of OS changes associated with Windows 8 is highlighted in the upper right corner of the block diagram in Figure 1: the new Windows Runtime API layer that was added, spanning a significant subset of the existing Win32 API that Windows applications call into to use OS functions. Examples of Win32 APIs that Windows applications ordinarily need to call include those for accessing the keyboard, mouse, display, and touch screen, and for operating the audio components of the machine. Windows 8 Store apps that can run on ARM processors must limit themselves to calling into the Windows Runtime APIs, except for a small number of selected Win32 APIs, like the COM APIs, that are permitted.
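
To make the COM connection concrete, here is a minimal sketch, in plain C++ against the documented Windows Runtime base APIs (RoInitialize, RoActivateInstance), of how a WinRT class is activated through COM-style plumbing. Windows.Globalization.Calendar is just a convenient example class; a real Store app would normally let the language projection generate this boilerplate for you:

    #include <windows.h>
    #include <roapi.h>        // RoInitialize, RoActivateInstance
    #include <winstring.h>    // WindowsCreateString, WindowsDeleteString
    #include <inspectable.h>  // IInspectable
    #pragma comment(lib, "runtimeobject.lib")

    int main()
    {
        // WinRT is layered on COM: every Windows Runtime class is
        // activated through COM-style plumbing, which is one reason the
        // COM APIs are among the few Win32 interfaces still permitted.
        RoInitialize(RO_INIT_MULTITHREADED);

        const wchar_t kClass[] = L"Windows.Globalization.Calendar";
        HSTRING className = nullptr;
        WindowsCreateString(kClass, ARRAYSIZE(kClass) - 1, &className);

        IInspectable* calendar = nullptr;   // every WinRT object is IInspectable
        HRESULT hr = RoActivateInstance(className, &calendar);
        if (SUCCEEDED(hr))
            calendar->Release();            // standard COM reference counting

        WindowsDeleteString(className);
        RoUninitialize();
        return 0;
    }

The point of the sketch is that WinRT objects are COM objects underneath – IInspectable extends IUnknown – which helps explain why the COM APIs made the short list of permitted Win32 calls.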
 
Figure 1 is modeled on the diagrams used in chapter 2 of Mark Russinovich’s most recent Windows Internals book, which I have updated to reflect the new Windows Runtime layer. (Windows Internals is essential reading for anyone interested in developing a device driver for Windows, or who just wants to understand how this stuff works.) It is a conventional view of how the Windows OS is structured. It shows the core components of the OS, generally associated with the Windows Executive, the OS kernel, and the HAL. The OS kernel, for example, manages process address spaces, thread creation, and thread dispatching. The OS kernel is also responsible for managing system memory, both physical memory and the virtual memory address space built for each executing process. At the heart of the OS kernel is a set of synchronization primitives that are used to ensure that, for instance, the same block of physical memory is only allocated to one process address space at a time.
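
As a concrete illustration of the kind of synchronization primitive involved, here is a minimal, hedged sketch of a kernel spin lock guarding a shared counter, using the documented KeAcquireSpinLock/KeReleaseSpinLock routines. The counter is hypothetical, standing in for whatever shared structure (a page frame database entry, say) the kernel actually needs to protect:

    #include <ntddk.h>

    KSPIN_LOCK g_Lock;       // protects the shared state below
    ULONG g_SharedCounter;   // hypothetical shared data, for illustration

    VOID InitSharedState()
    {
        KeInitializeSpinLock(&g_Lock);
        g_SharedCounter = 0;
    }

    VOID IncrementShared()
    {
        KIRQL oldIrql;
        // Raises IRQL to DISPATCH_LEVEL and busy-waits until the lock is
        // free, so only one CPU at a time touches the shared data.
        KeAcquireSpinLock(&g_Lock, &oldIrql);
        g_SharedCounter++;
        KeReleaseSpinLock(&g_Lock, oldIrql);  // restores the previous IRQL
    }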
 
Kernel mode is associated with a hardware level that allows privileged mode instructions to be executed. An example of a privileged mode instruction is one reserved for the OS to use to switch the processor from executing code inside one thread to code in another. An essential core service of an OS is to function as a traffic cop, managing shared resources such as the machine’s CPUs and its memory on behalf of the consumers – threads and processes, respectively – of those resources.
 
Before moving on to the next set of OS components, I should mention the HAL, or Hardware Abstraction Layer, a unique feature of Windows designed to insulate the rest of the OS from specific processor architecture dependencies. It hides hardware-specific interfaces: the way the processor hardware processes interrupts from attached devices, handles faults like a thread accessing a memory location in a page that doesn’t belong to it, or performs context switching. These are all functions that processors handle, but different hardware platforms tend to do them in slightly different ways. Consolidating the hardware-dependent code that has to be written in the machine’s assembly language in the HAL makes it relatively easy to port Windows to a new processor architecture. To port Windows to the ARM processor, for example, Microsoft first needed to develop a version of the HAL specific to the ARM architecture, and then build a cross-compiler that knows how to translate native C code into valid ARM instructions to generate the rest of the OS. I am making the port to ARM sound a whole lot easier than I am sure it was, but over the years the HAL has enabled Windows to be ported relatively easily to run on a wide range of hardware, including the Digital Alpha, the PowerPC, Intel IA-64 (the Itanium), and the AMD64 (which Intel calls x64).
 
Figure 1 also illustrates the device drivers in Windows. I mentioned that the Microsoft strategy for Windows 8 on tablets is designed to leverage the extensive ecosystem of hardware manufacturers that Microsoft has built over the years through the ability for anyone to extend the OS by writing a device driver to support a new piece of hardware. Windows “Plug and Play” facilities for attaching devices have grown into a very sophisticated set of services, including ways for device driver software to tap into Windows Error Reporting, for example.
 
In general, device drivers are modules that also run in kernel mode and effectively serve as extensions to the OS (a minimal sketch of a driver skeleton follows the list below). Their main purpose is managing hardware resources other than the CPU and memory. Windows device drivers are installed to manage any and all of the following devices:
  • disks, CD, and DVD players/recorders that are attached using IDE, SCSI, SATA, or Fibre Channel adaptors
  • network interface adaptors, both wired and wireless
  • input devices such as the mouse, the keyboard, the touch screen, the video camera, and the microphone(s)
  • graphical output devices such as the video monitor
  • audio devices for sound output
  • memory cards and thumb drives
as well as pretty much any device that plugs into a USB port on your machine. In Windows 8, the list of device drivers expands to include the GPS receiver and the accelerometer.
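
Here is the minimal driver skeleton promised above – a hedged sketch using the Kernel-Mode Driver Framework (KMDF), in which the framework supplies default Plug-and-Play and power handling for anything the driver does not override. A real driver would add I/O queues and hardware access:

    #include <ntddk.h>
    #include <wdf.h>

    DRIVER_INITIALIZE DriverEntry;
    EVT_WDF_DRIVER_DEVICE_ADD EvtDeviceAdd;

    // Called by the framework each time the PnP manager enumerates a
    // device this driver is registered to handle.
    NTSTATUS EvtDeviceAdd(WDFDRIVER Driver, PWDFDEVICE_INIT DeviceInit)
    {
        WDFDEVICE device;
        UNREFERENCED_PARAMETER(Driver);
        // KMDF supplies default Plug-and-Play and power behavior for
        // anything the driver does not explicitly override.
        return WdfDeviceCreate(&DeviceInit, WDF_NO_OBJECT_ATTRIBUTES, &device);
    }

    NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
    {
        WDF_DRIVER_CONFIG config;
        WDF_DRIVER_CONFIG_INIT(&config, EvtDeviceAdd);
        return WdfDriverCreate(DriverObject, RegistryPath,
                               WDF_NO_OBJECT_ATTRIBUTES, &config,
                               WDF_NO_HANDLE);
    }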
 
 Windows currently provides an open “Plug-and-Play” model that permits virtually anyone to develop and install a device driver that extends the operating system. Figure 2 is a screen shot from a portable PC of mine showing the Device Manager applet in the Control Panel that tells you what Plug-and-Play hardware – and the device driver associated with that hardware – is installed. As you can see, it is quite a long list. This flexibility of the Windows platform is a major virtue.


Figure 2. The Device Manager applet in the Control Panel tells you what Plug-and-Play hardware is installed, along with information about the device driver software associated with that hardware.

For the sake of security, you want to ensure that any OS function that doesn’t absolutely need to run in kernel mode doesn’t. But, by their very nature, because they need to deal directly with hardware device dependencies, device drivers need to run in kernel mode. Device drivers in Windows don’t actually interface with the hardware directly – they use services from the HAL and the Windows IO Manager to do that. This mechanism allows device drivers to be written so that they, too, are portable across hardware platforms. The importance of this is that, once Windows is ported to ARM-based SoC machines, you ought to be able to plug in virtually any device that you could plug into an Intel-architecture PC, and it will run.
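
A brief sketch of what that indirection looks like in practice, with a hypothetical device context and register layout: the driver maps the register block the PnP manager assigned to it and reads it through a HAL-style accessor, rather than dereferencing a hard-coded hardware address:

    #include <ntddk.h>

    // Hypothetical per-device state, for illustration only.
    typedef struct _DEVICE_CONTEXT {
        PULONG Registers;    // mapped device register block
    } DEVICE_CONTEXT;

    NTSTATUS MapDeviceRegisters(DEVICE_CONTEXT* Ctx, PHYSICAL_ADDRESS RegBase)
    {
        // The physical address comes from the translated resource list the
        // PnP manager hands the driver; the driver never hard-codes it.
        Ctx->Registers = (PULONG)MmMapIoSpace(RegBase, PAGE_SIZE, MmNonCached);
        return (Ctx->Registers != NULL) ? STATUS_SUCCESS
                                        : STATUS_INSUFFICIENT_RESOURCES;
    }

    ULONG ReadDeviceStatus(DEVICE_CONTEXT* Ctx)
    {
        // READ_REGISTER_ULONG is the portable accessor; the system supplies
        // whatever instruction forms and barriers the platform requires.
        return READ_REGISTER_ULONG(&Ctx->Registers[0]);
    }

Because the accessor resolves to whatever instruction sequence the platform requires, the same driver source can be rebuilt for x86, x64, or ARM.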
 
As a practical matter, Windows has a device driver certification process that the major manufacturers of peripheral hardware use. So, not every piece of hardware you can attach to a Windows 7 PC, like the one illustrated in Figure 2, will have immediate support for the Windows RT environment on ARM. Microsoft also wants hardware manufacturers to take the extra step of packaging their drivers into Windows Store apps.
 
The open, plug-and-play device driver model Windows uses permits an almost unlimited variety of peripherals to be plugged in to extend your Windows machine. Consider printer drivers in Windows. Manufacturers like HP have developed very elaborate printer drivers that let you know when you’ve run out of ink and then try to nudge you into buying expensive ink cartridges from them online. In contrast, try to print a document using your iPad. Can’t do it, no device drivers.
 
This great virtue of the Windows OS can also be a curse. The disadvantage of the “open” model is that it is open to anyone to plug into and start running code with kernel mode privileges. Historically, whenever your program needed a function that required kernel mode privileges, you could develop a device driver module (a .sys module) and drop that into the OS, too.
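
To illustrate how low the historical bar was, here is a hedged sketch of the user-mode side of loading a kernel driver through the service control manager. The driver name and .sys path are hypothetical, and on modern 64-bit Windows the image would also have to be signed:

    #include <windows.h>

    int main()
    {
        SC_HANDLE scm = OpenSCManagerW(NULL, NULL, SC_MANAGER_CREATE_SERVICE);
        if (!scm) return 1;

        // Registers a (hypothetical) kernel-mode driver with the service
        // control manager; the .sys image will run with kernel privileges.
        SC_HANDLE svc = CreateServiceW(scm, L"MyDriver", L"My Sample Driver",
                                       SERVICE_ALL_ACCESS,
                                       SERVICE_KERNEL_DRIVER,
                                       SERVICE_DEMAND_START,
                                       SERVICE_ERROR_NORMAL,
                                       L"C:\\Drivers\\mydriver.sys",
                                       NULL, NULL, NULL, NULL, NULL);
        if (svc) {
            StartServiceW(svc, 0, NULL);   // loads the module into the kernel
            CloseServiceHandle(svc);
        }
        CloseServiceHandle(scm);
        return 0;
    }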
 
Being open leads to problems with drivers of less than stellar quality and also creates a potential security exposure. The fact is that 3rd party device driver code, running in kernel mode, is a major source of the problems that all too frequently cause Windows to hang, crash, or blue screen. It is often not Microsoft code that fails, so there isn’t much Microsoft can do about this – other than take the steps it already has, like the certification program, to try to improve the quality of 3rd party driver software. The fact that my device driver can be deployed on machines configured with such a wide variety of other hardware – hardware my software may need to interact with – greatly complicates the development and testing process. The diversity leads to complexity, and that directly impacts the quality of the software. Bugs inevitably arise whenever my software encounters some new and unexpected set of circumstances.
 

Both a blessing and a curse

 
A good way to illustrate the advantages and the disadvantages of the Windows open hardware policy is to look at graphics cards for video monitors. The lightweight portable PC I am typing on at the moment has a 14” display, powered by a graphics chip made by Intel that is integrated on the motherboard. When I use this portable PC at my desk, I slide it into a docking station where two additional video monitors are attached, powered by a separate, higher end NVIDIA graphics card. (The docking station actually supports up to four external monitors, but I am pretty much out of desk space the way things are at the moment, so I will have to get back to you on that.)
 
One of the external flat panel displays is 1920 x 1200; the other is only 1920 x 1080. I have one positioned on the left of the portable and the other on the right. In addition, I have a 3rd party port replicator plugged into a USB port on the back of the PC. This device has additional video ports that I am currently not using. If you look at the Screen Resolution applet in the Control Panel, my configuration looks like I have four video monitors available, not three.

See the screen shot in Figure 3.
 
Figure 3. The Screen Resolution applet on my portable PC when I plug into a docking station with additional video monitors attached. It shows four video monitors attached when, physically, there are only three. The 4th is a phantom device that is detected on the additional port replicator (attached via a USB port) that supports additional video connections.


This desktop configuration has multiple external monitors augmenting the built-in portable display, which is “only” 1600 x 900. When you are doing software development, take my word for it, it helps to have as much screen real estate as possible. Visual Studio also has pretty good support for multiple monitors, and I have really come to rely on this feature. When coding or debugging, I can have multiple windows displaying code inside the VS editor open and arrayed across these monitors at any one time. Having multiple monitors is a tremendous aid to developer productivity. One reason I purchased this portable PC was that it is lightweight, for when I need to pack it and go. But, in fact, the primary reason I purchased this specific model was that it came with the high end NVIDIA graphics adaptor, so I could plug in two or more external monitors when I am working at my desk.
 

I am very satisfied with the graphics configuration I have, but it is not exactly trouble-free, and I have had to learn to live with a few annoying glitches. For instance, when I swing the mouse across an arc from the monitor on the left to the monitor on the right, Windows will let the mouse go off the deep end and enter the “display” of the phantom 4th monitor, where I can no longer see where it is. When I first drag a Visual Studio panel or window onto either one of the external monitors, there is evidently a bug in the graphics adaptor code that stripes solid black rectangles across portions of the window. This bug is apparently WPF-related, because it doesn’t show up in standard Windows applications like Office or IE. (One of the features of Windows Presentation Foundation is that it provides direct access to high resolution rendering services on the graphics card, and this is supposed to be a good thing. For one thing, these higher end graphics cards are like high speed supercomputers when it comes to vector processing.) Fortunately, re-sizing the window immediately corrects the problem, so I have learned to live with that minor annoyance, too.
 
Occasionally, the graphics card has a hiccup, the screens all black out, and I have to wait a few seconds while the graphics card recovers and re-paints all the screens. Very infrequently, the graphics card does not recover; there is a blue screen of death, which Windows 7 hides, followed by a re-boot.
 
Overall, as I said, I am pretty happy with this configuration, but it is certainly not free of minor glitches and occasionally succumbs to a major one. Understanding that my particular combination of PC, graphics adaptors, docking station, and external monitors is practically unique, I am resigned to the fact that NVIDIA is unlikely to ever fix my peculiar set of problems.
 
Windows has a remarkable automated problem reporting system that will go out on the web following a graphics card meltdown and try to match the “signature” of my catastrophic error against the fixes NVIDIA has recently made available in the “latest and greatest” version of its driver code, to see if there is a solution to my problem that I can download and install. But, realistically, I don’t expect to ever see a fix for this set of problems. They are associated with a combination of hardware and software (adding Visual Studio’s use of WPF to the mix) that, if not exactly unique, is still pretty rare. Inside NVIDIA, any developer working to fix this set of bug reports would have difficulty reproducing them because their configurations won’t match mine. That, and the fact that there aren’t too many other customers reporting similar problems – again, because of the unusual environment – means the bug report will be consigned to a low priority “No-Repro” bin where no one will ever work on it.
 
There is another way to go about this, which is Apple’s closed model. On Apple computers and devices, with few exceptions, the only peripherals that can be attached to a Mac are those branded by Apple and supported by device drivers that Apple itself supplies. To be fair, Apple is more open than it used to be. Since Apple switched over to Intel processors, the company has opened up the OS a little to 3rd party hardware, but it has not opened it up a whole lot. I can buy a MacBook Pro, for example, which is equipped with a middling NVIDIA graphics card, and attach an external Apple Thunderbolt 27” display to it. The Thunderbolt is a beautiful video display, mind you, 2560 by 1440 pixels, but it costs $900. I can’t configure a 2nd external monitor without moving to one of the Apple desktop models.
 
However – and this is the key take-away from this rambling discussion – limiting the kinds of monitors and the array of video configurations that the MacBook can support does lead to standardized configurations that Apple can ensure are rigorously tested. And this leads to extremely high quality, which means customers running a Mac do not have to endure the kinds of glitches and hiccups that Windows customers grow accustomed to. On Windows, there is support for a significantly broader array of configuration options, but Microsoft cannot deliver quite the same level of uniformly high quality to that support. Using an open model that permits virtually any third party hardware manufacturer to plug their devices into Windows effectively means that Microsoft has farmed out some of the most rigorous requirements for quality control in Windows to third parties.

Open vs. Closed hardware models

The flexibility of the open model used in Windows certainly has its virtues, as I have discussed. In its Windows 8 challenge to the iPad, it makes good business sense for Microsoft executives to try to take advantage of the flexibility of the Windows platform and leverage the range and types of hardware that Windows can support, compared to an Apple PC or tablet.
 
The Windows organization in Microsoft is certainly aware that the high level of quality control that Apple maintains by restricting the options available to the consumer can be a significant, strategic advantage. Each release of Windows features improvements to the device driver development process to help 3rd party developers. The Windows organization performs extensive testing using popular 3rd party hardware and software in its own labs. Microsoft also provides most of the driver software you need in Windows when you first install it. However, a good deal of this responsibility for quality is farmed out to its OEM customers – the PC manufacturers – who need to ensure you have up-to-date video drivers and other drivers for the specific hardware they include in the box.
 
Microsoft has also made an enormous investment in automated error reporting and fix tracking associated with the Windows Update facility, which is very impressive. IT organizations often disable Windows Update because they fear the unknown, but its capabilities are actually quite remarkable. (There is a good description of Windows automated error reporting and the Windows Update facility in an article published last year in the Communications of the ACM.) Windows gives third parties access to its bug databases, and the Windows organization will proactively pursue getting a fix out to third party software if it is affecting an appreciable number of customers. A staggering number of customers run Windows, however – well over a billion licensed copies exist – so that still leaves customers like me, with relatively minor glitches associated with relatively unusual configurations, with little hope of relief. I am not saying I will never see a version of the NVIDIA driver that fixes the problems I experience, but I am not holding my breath.
 
Battery life on portables is a good example where, despite considerable efforts from Microsoft to support the device driver community, Apple has a distinct technical advantage. Now that Macs run the same Intel hardware as Windows PCs, Apple hardware has no inherent advantage when it comes to battery life. Yet, running on similar sets of hardware, Apple machines typically run about 25% longer on the same battery charge. Most of this advantage is due to the control that Apple exercises over all aspects of the quality of the OS, the hardware, and the hardware driver software that it delivers. (Some of it is due to shortcomings in Windows software, specifically system and driver routines that wake up periodically to look around for work. One of the culprits is the CPU accounting routine that wakes up 64 times a second to sample the state of the processor. Hopefully, this behavior has been removed in Windows 8, but I suspect it hasn’t.) In contrast, Microsoft has to periodically orchestrate battery life-saving initiatives across a broad range of 3rd party device driver developers, which is akin to herding cats.
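
You can actually observe the clock tick being described here from user mode. This is a small sketch using the documented GetSystemTimeAdjustment call, which reports the clock interrupt interval in 100-nanosecond units; the default value of 156,250 works out to 15.625 ms, or 64 ticks per second:

    #include <windows.h>
    #include <stdio.h>

    int main()
    {
        DWORD adjustment = 0, interval = 0;
        BOOL adjustmentDisabled = FALSE;
        // interval is the clock interrupt period in 100ns units;
        // 156250 corresponds to the default 15.625 ms tick, 64 per second.
        if (GetSystemTimeAdjustment(&adjustment, &interval, &adjustmentDisabled))
            printf("clock tick every %.3f ms (%.1f ticks/second)\n",
                   interval / 10000.0, 10000000.0 / interval);
        return 0;
    }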
 
Microsoft’s decision to build and distribute its own branded tablet, the new Surface, does reflect an understanding at the highest levels of the company that the Apple products Microsoft must compete with have a distinct edge in quality compared to the products from many of its major Windows OEM suppliers. I have heard Steve Ballmer in department-level meetings discuss his reluctance to abandon the “open” and cooperative business model that has served the company so well for so long. It is a business model that definitely leads to more choice among products across the OEM suppliers and lower prices to consumers because of the competition among those suppliers.
 
 
It is also a business model that has forced Microsoft’s Windows OEM customers to live for years with meager profit margins in a cutthroat business – high volume, low margin, capital-intensive, with little room for error. Meanwhile, Microsoft has consistently raked in most of the cream right off the top of that market in software license fees for Windows and Office that it collects directly from those same OEMs. Microsoft’s high-handed behavior led IBM to exit the PC hardware market long ago. HP, which has struggled for years to make a profit in the same line of business, would also like to exit the business, but its management still has the albatross of the Compaq acquisition around its neck, constraining its ability to shed an asset that cost the company dearly to acquire. The problems Microsoft’s OEM partners face are obvious – a MacBook Air configured like an Intel “Ultrabook” runs $1200 this Christmas, while nearly identical hardware from HP that runs Windows retails for 40% less. The margins Apple is able to command for its hardware products are the envy of the tech industry.
 
By getting its support for tablets into consumer-oriented sales channels in time for the Christmas rush, Microsoft is hoping Win 8 can make a dent in the huge lead Apple has fashioned in the emerging market for tablets. Meanwhile, at least in the short term, sales of the new Microsoft Surface are restricted to Microsoft’s direct sales outlets, currently numbering only about 60 stores. (Plus, you can order it direct from the Microsoft Store over the web. Currently, Microsoft is forecasting about a 3-week delay before it can ship you one.) With Windows OEMs primarily pushing a variety of Windows 8 machines running AMD and Intel processors, Christmas shoppers are bound to be confused by all the choices available: AMD vs. Intel, Intel Core vs. Intel Pentium, and the Microsoft Surface on ARM. It is all a little overwhelming to the average consumer, who just wants something little Timmy can use for school.
 
 

Back to the future

 
All of which brings us full circle back to Windows RT because the new Surface tablet can only run applications that use Windows RT. In brief, Windows RT is a new API layer in the OS that ships with every version of Windows 8, including Windows Server 2012. (“RT” stands for “run-time.”) If you buy one of the new ARM-based tablets (or phones when Windows 8 phones start to ship), these devices come with RT installed, omitting many of the older pieces of Windows that Microsoft figures you won’t ever need on a tablet or a phone.
 
As Figure 1 illustrates, this new API layer sits atop the existing Win32 APIs, which, I have heard Windows developers say, consist of some 300,000 different methods. As illustrated, Windows RT does not come close to encompassing the full range of OS and related services that are available to the Windows developer. Microsoft understood that it could not attempt to re-write 300,000 methods in the scope of a single release, so Windows RT should be considered a work in progress. What Microsoft tried to accomplish for Windows 8 was to provide enough coverage in the first release of Win RT that developers would be capable of quickly producing the kinds of apps that have proved popular on the iPhone and the iPad. As shown in the drawing, Windows Store apps can also make certain specific Win32 API calls that were not fully retrofitted into the new Windows Runtime.

Summing up

 
In general, I am certain that porting the Windows OS to the ARM platform for Windows 8 was an excellent decision that should breathe some new life into the Microsoft PC business. ARM processors have evolved into extremely powerful computing devices – quad-core is already here, and 64-bit ARM is on the way, for example. Portable, touch-screen tablets are a very desirable form factor. I have never seen a happier bunch of computer users than iPhone stalwarts chatting up Siri. Windows needed to try to catch up and perhaps even leapfrog Apple before its lead in portable computing became insurmountable.
 
When Windows 8 was in the planning stages, the Windows Phone OS, which was adapted from Windows CE, was already running on ARM. At the time, there were at least two other major R&D efforts inside Microsoft that were also targeting the ARM platform. The Windows organization, led by Steve Sinofsky, effectively steamrollered those competing visions of the future of the OS when it started to build Windows 8 in earnest. And, for the record, I don’t have a problem with Sinofsky’s autocratic approach to crafting software. Design by committee slowly and inevitably takes its toll, weakening the power and scope of a truly visionary architect’s design breakthrough.
 
One of the crucial areas to watch as Windows 8 takes hold and Microsoft begins development of the next version of Windows is whether or not Windows on devices can keep up with rapidly evolving hardware. Microsoft needs to figure out how to rev Windows on devices much more frequently than it does the rest of the OS. That will be an interesting challenge for an extremely complicated piece of software that needs to support such a wide range of computers, from handhelds to rack-mounted, multi-core blade servers.
 
As delivered, I also believe the vision for Windows 8 suffers from serious flaws. The most noticeable one is the decision to make the new touch screen-oriented UI primary even on machines that don’t have touch-enabled screens. This “one size fits all” strategy condemns many, many Windows customers to struggle to adapt to an inappropriate user interface.
 
Moreover, from the standpoint of a Windows application developer, I am less than enamored with some of the architectural decisions associated with the new Windows Runtime API. These were based on a profound misunderstanding inside the Windows organization about why software developers chose to target Windows development in the first place (going back 20 years or so in the life of the company) and why these same developers are targeting Apple iPhones and iPads today.
 
I will defer the bulk of that discussion to the next blog entry on Windows 8.


Comments

  1. I'm sorry but you dilute your essential points by getting many of your details about Apple wrong. These details matter because they provide evidence that there is more than one way to do something, that MS does not HAVE to do things the way it has always done them.

    In particular
    - You have been able to print from iPad (and iPhone) since iOS 4.2 in 2010

    - Retina Macbook Pro supports both miniDisplayPort and HDMI output (and the miniDisplayPort does not require an Apple monitor, and it can easily be routed to DVI or even, god forbid, VGA). If you ARE willing to pay for a Thunderbolt display, you can hook up simultaneously 3 monitors (plus the internal screen).
    With OSX 10.9 there'll be even a 4th choice, with the ability to AirPlay a window to a monitor hooked up to an Apple TV or some other AirPlay receiver.


