
Performance By Design: Welcome

Welcome to a blog devoted to Windows performance, application responsiveness and scalability, software performance engineering (SPE), and related topics in computer performance.
My name is Mark B. Friedman. I am a professional software developer, author of several popular software products over the years, many of them tools used in computer performance analysis and capacity planning.
I have chosen “Performance By Design” as a title for this blog. This is partially an homage to one of the best books I know on software performance engineering, “Performance By Design,” written by Daniel Menascé and his colleagues. If you follow the link provided, you will see that I have given the book a well-deserved five-star review on Amazon.
I admire Dr. Menascé’s book. I aspire to write as succinctly and thoughtfully on the same topics. I also thoroughly like the title. The clear implication of the phrase “performance by design” is that acceptable levels of application performance don’t just happen; they come about only through conscious decision-making and intentional engineering that begins in the design phase of application development and proceeds through development and testing, QA and stress testing, and ultimately into production.
The phrase “performance by design” also reminds me of something my old colleague Dave Halbig, one of the best performance engineers I’ve ever met, used to tell our IT executives back when we worked together at MCI Telecommunications in the 1980s. Dave, who grew up in the Detroit area and is steeped in car culture, would say, “Performance isn’t a coat of paint I can slap on the application at the end of its development phase to make it go faster.” No, indeed. Performance is achieved only as a by-product of a conscious engineering practice that
- Sets performance targets for key scenarios early in the design phase,
- Creates a verifiable responsiveness and scalability model of the application under design,
- Instruments the application so that its responsiveness can be measured,
- Builds and runs performance tests to gather results throughout all phases of the development and testing process, which also serves to verify that the assumptions of the underlying scalability model are correct, and
- Continues to measure the performance of the application in production once it is deployed.
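The instrumentation practice in the list above can be sketched with a minimal, hypothetical response-time recorder. The `timed` decorator and the `RESPONSE_TIMES` store are illustrative names I am inventing here, not part of any real monitoring product; a production implementation would ship samples to a telemetry pipeline rather than an in-process dictionary.

```python
import time
from functools import wraps

# In-process store of measured response times, keyed by scenario name.
# (Illustrative only; a real system would export these samples.)
RESPONSE_TIMES = {}

def timed(scenario):
    """Decorator that records the elapsed wall-clock time of each call
    as a response-time sample for the named scenario."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                elapsed = time.perf_counter() - start
                RESPONSE_TIMES.setdefault(scenario, []).append(elapsed)
        return wrapper
    return decorator

@timed("checkout")
def checkout(order_lines):
    # Stand-in for real application work in a key scenario.
    return sum(order_lines)
```

Every call to `checkout` now leaves behind a response-time sample that can be compared against the targets set in the design phase.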
These are all pillars of an intentional engineering approach known as software performance engineering (SPE), a term originally coined by Dr. Connie Smith (and I might as well plug her excellent book “Performance Solutions: A Practical Guide to Creating Responsive, Scalable Software” while I am at it).
My perspective on the discipline of software performance engineering is informed by a long career as a software developer, working principally on performance measurement and analysis tools. Early in my career I focused on enterprise-scale applications running on IBM mainframes, but I began to switch over to the Microsoft Windows NT platform in the early 90s. About five years ago, I was recruited to join a relatively new performance engineering team being formed in the Developer Division at Microsoft. The team’s mission was to incorporate performance engineering best practices into the processes used internally to develop the products the Developer Division builds for customers (mainly Visual Studio and the .NET Framework). The ultimate goal was to integrate those best practices into the Visual Studio products themselves so that our customers building applications to run on the Windows platform would also benefit.
Alas, the management commitment to do this work was noticeably absent, and I recently left Microsoft after 4+ years in the Developer Division, disappointed by how little progress I was able to make toward either goal. I continue to believe that Microsoft and its Developer Division have assembled most of the pieces necessary to incorporate performance engineering into their software development life cycle tools. Something along the lines of Murray Woodside’s fine article that lays out an ambitious agenda for “The Future of Performance Engineering” was eminently achievable, IMHO. Back in 2008, while I was at Microsoft, I started a team blog that is located at http://blogs.msdn.com/b/ddperf/. I described an ambitious performance engineering agenda in my very first Microsoft blog entry.
Well, I tried, and while I did not succeed there, I am not sorry that I made the effort.
I look on this new blog as both a continuation of the older one and an enlargement. I expect you will see many of the same sentiments expressed in my Microsoft blog being echoed here in this new blog. (I am nothing, if not consistent.) I may be able to range a little further afield at times than I felt was consistent with my official position in the Developer Division at Microsoft. (I should clarify that I found Microsoft’s policy regarding blogging in public to be remarkably open. Nothing that I wrote for public consumption was ever subject to review or censorship in any form.) For instance, I feel obliged to comment on a recent article in CACM entitled “Thinking clearly about performance, part 2” by Cary Millsap that discusses whether or not there is a “knee” in the typical response time curve under the impact of queuing. (I will add my two cents to the discussion in an upcoming post.)
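As a small preview of why the “knee” question is interesting, the textbook M/M/1 queueing formula, R = S / (1 − ρ), relates response time R to service time S and server utilization ρ. This is the standard open-queue model offered purely as an illustration here, not a summary of Millsap’s argument: the curve steepens continuously as utilization climbs, with no single discrete break point.

```python
def mm1_response_time(service_time, utilization):
    """Average response time of an open M/M/1 queue: R = S / (1 - rho)."""
    if not 0.0 <= utilization < 1.0:
        raise ValueError("utilization must be in [0, 1)")
    return service_time / (1.0 - utilization)

# With a 1-second service time, response time climbs steadily as the
# server gets busier -- the curve accelerates but never turns a sharp corner.
for rho in (0.50, 0.80, 0.90, 0.95):
    print(f"utilization {rho:.2f}: response time {mm1_response_time(1.0, rho):.1f} s")
```

At 50% utilization the response time is only double the service time; at 95% it is twenty times the service time, which is why where (or whether) to draw a “knee” on this smooth curve is a matter of genuine debate.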
Thank you for coming along on the next leg of my professional journey. I will try to make the ride as entertaining and informative as possible.
– Mark Friedman
