Welcome to a blog devoted to Windows performance, application responsiveness and scalability, software performance engineering (SPE), and related topics in computer performance.
My name is Mark B. Friedman. I am a professional software developer, author of several popular software products over the years, many of them tools used in computer performance analysis and capacity planning.
I have chosen “Performance By Design” as a title for this blog. This is partially an homage to one of the best books I know on software performance engineering, “Performance By Design,” written by Daniel Menascé and his colleagues. If you follow the link provided, you will see that I have given the book a well-deserved five-star review on Amazon.
I admire Dr. Menascé’s book, and I aspire to write as succinctly and thoughtfully on the same topics. I also thoroughly like the title. The clear implication of the phrase “performance by design” is that acceptable levels of application performance don’t just happen; they come about only through conscious decision-making and intentional engineering that begins in the design phase of application development and proceeds through development and testing, QA and stress testing, and ultimately into production.
The phrase “performance by design” also reminds me of something my old colleague, Dave Halbig, one of the best performance engineers I’ve ever met, used to tell our IT executives back when we worked together at MCI Telecommunications in the 1980s. Dave, who grew up in the Detroit area and is steeped in car culture, would say, “Performance isn’t a coat of paint I can slap on the application at the end of its development phase to make it go faster.” No, indeed. Performance is only achieved as a by-product of conscious engineering policies and practices that
· Set performance targets for key scenarios early in the design phase,
· Create a verifiable responsiveness and scalability model of the application under design,
· Instrument the application so that its responsiveness can be measured,
· Build and run performance tests to gather results throughout all phases of the development and testing process, which also serve to verify that the assumptions of the underlying scalability model are correct, and
· Continue to measure the performance of the application in production once it is deployed.
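To make the instrumentation pillar concrete, here is a minimal sketch of my own (illustrative only, not a tool mentioned in this post): wrapping a key scenario in a timer yields responsiveness measurements you can collect consistently from development testing through production.

```python
# Minimal scenario-timing sketch: record how long a named scenario takes
# so its responsiveness can be tracked over time. Illustrative only; the
# scenario name "checkout" is a hypothetical example.
import time
from contextlib import contextmanager


@contextmanager
def measure(scenario: str, results: dict):
    """Time the enclosed block and append the elapsed seconds to results."""
    start = time.perf_counter()
    try:
        yield
    finally:
        results.setdefault(scenario, []).append(time.perf_counter() - start)


if __name__ == "__main__":
    timings = {}
    with measure("checkout", timings):
        time.sleep(0.05)  # stand-in for the real scenario under test
    print(f"checkout took {timings['checkout'][0] * 1000:.1f} ms")
```

In a real application the results would go to a log or a performance counter rather than an in-memory dictionary, but the principle is the same: measurement points designed in from the start.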
These are all pillars of an intentional engineering approach known as software performance engineering (SPE), a term originally coined by Dr. Connie Smith (and, I might as well plug her excellent book “Performance Solutions: A Practical Guide to Creating Responsive, Scalable Software” while I am at it).
My perspective on the discipline of software performance engineering is informed by a long career as a software developer, working principally on performance measurement and analysis tools. Early in my career I focused on enterprise-scale applications running on IBM mainframes, but I began to switch over to the Microsoft Windows NT platform in the early 90s. About five years ago, I was recruited to join a relatively new performance engineering team being formed in the Developer Division at Microsoft. The team’s mission was to incorporate performance engineering best practices into the processes used internally to develop the products the Developer Division builds for customers (mainly Visual Studio and the .NET Framework). The ultimate goal was to integrate best practices in performance engineering into the Visual Studio products themselves so that our customers building applications to run on the Windows platform would also benefit.
Alas, the management commitment to do this work was noticeably absent, and I recently left Microsoft after 4+ years in the Developer Division, disappointed by how little progress I was able to make toward either goal. I continue to believe that Microsoft and its Developer Division have assembled most of the pieces necessary to incorporate performance engineering into their software development life cycle tools. Something along the lines of Murray Woodside’s fine article that lays out an ambitious agenda for “The Future of Performance Engineering” was eminently achievable, IMHO. Back in 2008, while I was at Microsoft, I started a team blog located at http://blogs.msdn.com/b/ddperf/. I described an ambitious performance engineering agenda in my very first Microsoft blog entry.
Well, I tried, and while I did not succeed there, I am not sorry that I made the effort.
I look on this new blog as both a continuation of the older one and an enlargement. I expect you will see many of the same sentiments expressed in my Microsoft blog being echoed here in this new blog. (I am nothing, if not consistent.) I may be able to range a little further afield sometimes than I felt was consistent with my official position in the Developer Division at Microsoft. (I should clarify that I found Microsoft’s policy regarding blogging in public to be remarkably open. Nothing that I wrote for public consumption was ever subject to review or censorship in any form.) For instance, I feel obliged to comment on a recent article in CACM entitled “Thinking Clearly about Performance, Part 2” by Cary Millsap that discusses whether or not there is a “knee” to the typical response time curve under the impact of queuing. (I will add my two cents to the discussion in an upcoming post.)
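To give a sense of the curve at issue (a textbook illustration of my own, not a summary of Millsap’s argument): for a single-server M/M/1 queue with average service time S and utilization ρ, the average response time is R = S / (1 − ρ), which rises gradually at low utilization and steeply as the server approaches saturation. The debate is whether any point on that smooth curve deserves to be called a “knee.”

```python
# Illustrative M/M/1 response-time curve: R = S / (1 - rho), where S is
# the average service time and rho is server utilization in [0, 1).
# Standard queuing theory, shown here only to sketch the shape of the curve.

def mm1_response_time(service_time: float, utilization: float) -> float:
    """Average response time for an M/M/1 queue."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return service_time / (1 - utilization)


if __name__ == "__main__":
    S = 0.010  # assume a 10 ms average service time
    for rho in (0.1, 0.5, 0.8, 0.9, 0.95, 0.99):
        print(f"rho={rho:.2f}  R={mm1_response_time(S, rho) * 1000:6.1f} ms")
```

Running this shows response time roughly doubling by 50% utilization and growing many-fold near 99%, yet the curve itself has no discontinuity; where (or whether) to draw a knee on it is exactly the question under discussion.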
Thank you for coming along on the next leg of my professional journey. I will try to make the ride as entertaining and informative as possible.
Mark Friedman