
Performance By Design: Welcome

Welcome to a blog devoted to Windows performance, application responsiveness and scalability, software performance engineering (SPE), and related topics in computer performance.
My name is Mark B. Friedman. I am a professional software developer and the author of several popular software products over the years, many of them tools used in computer performance analysis and capacity planning.
I have chosen “Performance By Design” as a title for this blog. This is partially an homage to one of the best books I know on software performance engineering, “Performance By Design,” written by Daniel Menascé and his colleagues. If you follow the link provided, you will see that I have given the book a well-deserved five-star review on Amazon.
I admire Dr. Menascé’s book. I aspire to write as succinctly and thoughtfully on the same topics. I also thoroughly like the title. The clear implication of the phrase “performance by design” is that acceptable application performance doesn’t just happen; it comes about only through conscious decision-making and intentional engineering that begins in the design phase of application development and proceeds through development and unit testing, QA and stress testing, and ultimately into production.
The phrase “performance by design” also reminds me of something my old colleague, Dave Halbig, one of the best performance engineers I’ve ever met, used to tell our IT executives back when we worked together at MCI Telecommunications in the 1980s. Dave, who grew up in the Detroit area and is steeped in car culture, would say, “Performance isn’t a coat of paint I can slap on the application at the end of its development phase to make it go faster.” No, indeed. Performance is only achieved as a by-product of conscious engineering policies and practices that:
- set performance targets for key scenarios early in the design phase,
- create a verifiable responsiveness and scalability model of the application under design,
- instrument the application so that its responsiveness can be measured,
- build and run performance tests to gather results throughout all phases of development and testing, which also serve to verify that the assumptions of the underlying scalability model are correct, and
- continue to measure the performance of the application in production once it is deployed.
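To make the instrumentation practice in the list above concrete, here is a minimal sketch in Python (the decorator name and the toy request handler are my own invention, not from any particular tool): wrap each key application scenario so that its elapsed time is captured on every call.

```python
import time
from functools import wraps

def measure_response_time(func):
    """Wrap an application request handler so its elapsed time is recorded.

    time.perf_counter() reads the highest-resolution monotonic clock the
    platform provides (QueryPerformanceCounter on Windows).
    """
    timings = []

    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            timings.append(time.perf_counter() - start)

    wrapper.timings = timings  # exposed so a monitor or test can read them
    return wrapper

@measure_response_time
def handle_request(n):
    # Stand-in for a real application scenario being instrumented.
    return sum(range(n))

handle_request(100_000)
print(f"requests measured: {len(handle_request.timings)}")
```

In a real application the recorded timings would flow to a log or performance counter rather than an in-memory list, but the principle is the same: the measurement is built into the scenario itself, not bolted on afterward.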
These are all pillars of an intentional engineering approach known as software performance engineering (SPE), a term originally coined by Dr. Connie Smith (and, I might as well plug her excellent book “Performance Solutions: A Practical Guide to Creating Responsive, Scalable Software” while I am at it).
My perspective on the discipline of software performance engineering is informed by a long career as a software developer, working principally on performance measurement and analysis tools. Early in my career I focused on enterprise-scale applications running on IBM mainframes, but I began to switch over to the Microsoft Windows NT platform in the early 90s. About five years ago, I was recruited to join a relatively new performance engineering team being formed in the Developer Division at Microsoft. The team’s mission was to incorporate performance engineering best practices into the processes used internally to develop the products the Developer Division builds for customers (mainly Visual Studio and the .NET Framework). The ultimate goal was to integrate best practices in performance engineering into the Visual Studio products themselves so that our customers building applications to run on the Windows platform would also benefit.
Alas, the management commitment to do this work was noticeably absent, and I recently left Microsoft after 4+ years in the Developer Division, disappointed at how little progress I was able to make toward either goal. I continue to believe that Microsoft and its Developer Division have assembled most of the pieces necessary to incorporate performance engineering into their software development life cycle tools. Something along the lines of Murray Woodside’s fine article that lays out an ambitious agenda for “The Future of Performance Engineering” was eminently achievable, IMHO. Back in 2008 while I was at Microsoft, I started a team blog that is located at http://blogs.msdn.com/b/ddperf/. I described an ambitious performance engineering agenda in my very first Microsoft blog entry.
Well, I tried, and while I did not succeed there, I am not sorry that I made the effort.
I look on this new blog as both a continuation of the older one and an enlargement. I expect you will see many of the same sentiments expressed in my Microsoft blog being echoed here in this new blog. (I am nothing if not consistent.) I may be able to range a little further afield sometimes than I felt was consistent with my official position in the Developer Division at Microsoft. (I should clarify that I found Microsoft’s policy regarding blogging in public to be remarkably open. Nothing that I wrote for public consumption was ever subject to review or censorship in any form.) For instance, I feel obliged to comment on a recent article in CACM entitled “Thinking clearly about performance, part 2” by Cary Millsap that discusses whether or not there is a “knee” to the typical response time curve under the impact of queuing. (I will add my two cents to the discussion in an upcoming post.)
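For readers unfamiliar with the debate about the “knee,” the curve in question is the classic single-server queueing result that response time grows in proportion to 1/(1 − utilization). A quick sketch in plain Python (the service time of 10 ms is an illustrative value I chose, not from the article) shows how smoothly, yet steeply, the curve climbs:

```python
# Classic M/M/1 response-time formula: R = S / (1 - u), where S is the
# service time and u the server utilization. The debate is whether this
# smooth hyperbola really has a distinct "knee" or just looks like it does.
def mm1_response_time(service_time, utilization):
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return service_time / (1.0 - utilization)

S = 0.010  # 10 ms service time (illustrative)
for u in (0.50, 0.80, 0.90, 0.95, 0.99):
    print(f"u={u:.2f}  R={mm1_response_time(S, u) * 1000:6.1f} ms")
```

At 50% utilization the response time merely doubles, but between 90% and 99% it grows tenfold, which is why practitioners argue about where, or whether, a knee exists.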
Thank you for coming along on the next leg of my professional journey. I will try to make the ride as entertaining and informative as possible.
-- Mark Friedman

Comments

  1. Congratulations, Mark! Looking forward to following your blog!


