
Complications facing the YSlow scalability model. (Why is this web app running slowly? Part 5)

This is a continuation of a series of blog entries on this topic. The series starts here.

With the emergence of the Web 2.0 standards that enabled building dynamic HTML pages, web applications grew considerably more complicated than the static view of the page composition process that the YSlow scalability model embodies. Let’s review some of the more important factors that complicate the (deliberately) simple YSlow model of Page Load Time. These complications include the following:

  • YSlow does not attempt to assess JavaScript execution time, something that is becoming more important as more and more complicated JavaScript is developed and deployed. JavaScript execution time, of course, can vary based on which path through the code is actually executed for a given request, so the delay can vary, depending on the scenario requested. The processing capacity of the underlying hardware where the web client executing the script resides is also a factor. This is especially important when the web browser client is running on a cell phone, since apps on cell phones are often subject to significant capacity constraints, including battery life, RAM capacity (which impacts cache effectiveness), and lack of external disk storage.
In addition to the variability in script execution time that is due to differences in the underlying hardware platform, the execution time of your web page’s JavaScript code can vary based on the specific path a scenario takes through the code. Both of these factors suggest adapting conventional performance profiling tools for use with JavaScript executing inside the web browser; a minimal sketch of that kind of in-page timing follows this paragraph. Using the tools at webpagetest.org, it is also possible to measure when the web page is "visually complete," which is computed by taking successive video captures until the video image stabilizes.
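To make the profiling idea concrete, here is a minimal sketch of timing a suspect code path from inside the page, using the browser’s standard high-resolution timer, performance.now(). The function renderProductGrid is hypothetical, a stand-in for whatever scenario-dependent code path you want to measure:

```javascript
// A minimal sketch: time a suspect code path with the browser's
// high-resolution timer (performance.now() returns milliseconds).
function timed(label, fn) {
  var start = performance.now();
  var result = fn();
  var elapsed = performance.now() - start;
  console.log(label + " took " + elapsed.toFixed(1) + " ms");
  return result;
}

// renderProductGrid() is hypothetical; substitute the code path
// whose execution time varies from scenario to scenario.
timed("renderProductGrid", function () {
  renderProductGrid();
});
```

Run on different hardware, say a desktop versus a phone, the same code path will report very different elapsed times, which is precisely the variability the YSlow model ignores.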
  • Web browsers provide multiple sessions so that static content, where possible, can be downloaded in parallel. Current web browsers can create 4-8 TCP sessions per host for parallel processing, depending on the browser. Souders’ blog entry here compares the number of parallel sessions that are created in each of the major web browsers, but, unfortunately, that information is liable to be dated. These sessions persist while the browser is in communication with the web server, so there is also a savings in TCP connection processing whenever connections are re-used for multiple requests. (TCP connections require a designated sequence of SYN, SYN-ACK and ACK packets to be exchanged by the Sender and Receiver machines prior to any application-oriented HTTP requests being transmitted over the link.) Clearly, parallel TCP sessions play havoc with the simple serial model for page load time expressed in Equations 3 & 4.

Parallel browser sessions can improve download performance so much that web performance experts like Souders have reassessed the importance of the YSlow rule recommending fewer static HTTP objects to load. Another browser-dependent complication is that some browsers block parallel sessions in order to download JavaScript files. Since script code, when it executes, is likely to modify the DOM in a variety of ways, for the sake of integrity it is better to perform this operation serially. This browser behavior underlies the recommendation to reference external JavaScript files towards the end of the page’s HTML markup, when that is feasible, which maximizes the benefit of parallel downloads for most pages.

The fact that the amount of JavaScript code embedded in web pages is on the rise, plus the potential for JavaScript downloads to block parallel downloading of other types of HTTP objects, has Souders and others recommending asynchronous loading of JavaScript modules. For example, Souders writes, “Increasing parallel downloads makes pages load faster, which is why [serial] downloading external scripts (.js files) is so painful.”[1] As a result, Souders recommends using asynchronous techniques for downloading external JavaScript files, including using the ControlJS library functions to control how your scripts are loaded by the browser. (See also http://friendlybit.com/js/lazy-loading-asyncronous-javascript/ for native JavaScript code examples.)

The basic technique defers loading external JavaScript until after the DOM’s window.onload event has fired, by adding some code to the event handler to perform the script file downloads at that point in time; a minimal sketch appears below. The firing of the window.onload event also marks the end of Page Load Time measurements. Using this or similar techniques, measurements of Page Load Time can improve without actually improving the user experience, especially when the web page is not fully functional until after these script files are both downloaded and executed. It is not uncommon for one JavaScript code snippet to conditionally request the load of another script, something that also causes the browser to proceed serially. Moreover, the JavaScript execution time itself can add considerable delay. Suffice it to say that optimizing the performance of the JavaScript associated with the page is a topic for an entire book.
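Here is a minimal sketch of the deferral technique, assuming a hypothetical external file named analytics.js; a production page would more likely rely on a library such as the ControlJS functions Souders recommends:

```javascript
// A minimal sketch of deferred script loading: wait for the
// window.onload event, then inject the external script dynamically,
// so its download no longer blocks (or counts toward) Page Load Time.
// The file name /js/analytics.js is hypothetical.
window.addEventListener("load", function () {
  var script = document.createElement("script");
  script.src = "/js/analytics.js";
  script.async = true; // hint to the browser not to block on execution
  document.body.appendChild(script);
});
```

Note how neatly this illustrates the measurement caveat above: the deferred download begins only after the onload event, so it improves the reported Page Load Time number whether or not the user-perceived experience improves.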
  • Equation #3 of the YSlow scalability model provides a single term for the round trip time (RTT) for HTTP requests, when round trip time is more accurately represented as an average RTT over an underlying distribution of round trip times that is often non-uniform. Some of the factors that cause variability in round trip time include:
    • content is often fetched from multiple web servers residing at different physical locations. Locating each web server initially requires a separate DNS look-up to acquire that server’s IP address, followed by establishing a TCP connection with that server,
    • content may be cached at the local machine via the browser cache or may be resident at a node physically closer to the web client if the web site uses a caching engine or Content Delivery Network (CDN) like Akamai.
In general, caching leads to a highly non-uniform distribution of round trip times. Those HTTP objects that can be cached effectively exhibit one set of round trip times, based on where the cache hit is resolved (local disk or CDN), while objects that require network access yield a different distribution; a simple way to picture the effect is sketched below. A pattern that is encountered frequently is an HTML reference to a third-party ad server, which is often the last and slowest HTML reference to be resolved. The ad servers from Google, Amazon and others not only know who you are and where you are (in the case of a phone that is equipped with GPS), they also have access to your recent web browser activity, so they are likely to have some knowledge of what advertising content you might be interested in. Figuring out just which is the best ad to serve up to you at any point in time may necessitate a substantial amount of processing back at that 3rd party ad server site, all of which delays the generation of the Response message the web page is waiting on.
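As a sketch of the caching effect, and not a formula from the original model, the single RTT term behaves more like a weighted average over two very different populations:

RTT(average) ≈ p(hit) × RTT(cache) + (1 − p(hit)) × RTT(origin)

where p(hit) is the fraction of HTTP objects resolved from the browser cache or a nearby CDN node, RTT(cache) is the short round trip for those hits, and RTT(origin) is the full round trip to the origin server. Even this two-population picture understates matters when a slow third-party ad server contributes a long tail to the distribution.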

  • Many web sites rely on asynchronous operations, including AJAX techniques, to download content without blocking the UI. AJAX is an acronym that stands for Asynchronous JavaScript and XML, and it refers to a collection of techniques for making asynchronous calls to web services, instead of synchronous HTTP GET and POST Requests. Since YSlow does not attempt to execute any of the web page’s JavaScript, any use of AJAX techniques on the page is opaque to the tool.

AJAX refers to the capability of JavaScript code executing in the browser to issue asynchronous XMLHttpRequest method calls to web services, acquire content, and update the DOM, all while the web page remains in its loaded state. AJAX techniques are designed to try to hide network latency and are sometimes used to give the web page interactive features that mimic desktop applications. A popular example of AJAX techniques in action is the textbox with an auto-completion capability, familiar from Search engines and elsewhere. As you start to type in the textbox, the web application prompts you to complete the phrase you are typing automatically, which saves you some keystrokes.

A typical autocompletion textbox works as follows; a minimal sketch in code appears after this paragraph. As the customer starts to type into the textbox, a snippet of JavaScript code grabs the first few keyboard input characters and passes that partial string to a web service. The web service processes the string, performing a table or database look-up to find the best matches against the entry data, which it returns to the web client. In a JavaScript callback routine that is executed when the web service sends the reply, another snippet of script code executes to modify the DOM and display the results returned by the web service. These AJAX callback routines typically do update the DOM, but the asynchronous aspect of the XMLHttpRequest means the state of the page remains loaded and all the input controls on the page remain in the ready state. To be effective, both the client script and the web service processing the asynchronous call must handle the work very quickly to preserve the illusion of rapid interactive response. To keep the interaction smooth, both the XMLHttpRequest and the web service response messages should be small enough to fit into single network packets, for example.
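Here is a minimal sketch of that round trip. The /suggest endpoint, its query parameter, and the newline-delimited response format are all hypothetical stand-ins for whatever your web service actually exposes:

```javascript
// A minimal sketch of the autocompletion pattern described above.
// The /suggest endpoint and its newline-delimited response format
// are hypothetical.
var input = document.getElementById("search");        // the textbox
var results = document.getElementById("suggestions"); // a list element

input.addEventListener("keyup", function () {
  var prefix = input.value;
  if (prefix.length < 2) return; // wait for a few characters first

  var xhr = new XMLHttpRequest();
  // The third argument (true) makes the request asynchronous:
  // the page stays loaded and all input controls remain responsive.
  xhr.open("GET", "/suggest?q=" + encodeURIComponent(prefix), true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      // Callback: modify the DOM with the matches the service returned.
      results.innerHTML = "";
      xhr.responseText.split("\n").forEach(function (match) {
        var li = document.createElement("li");
        li.textContent = match;
        results.appendChild(li);
      });
    }
  };
  xhr.send();
});
```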

Nothing on this short list of the model’s major deficiencies diminishes the benefits of using the YSlow tool to learn about why your web page might be taking too long to load. These limitations of the YSlow approach to improving web page responsiveness do, however, provide motivation for tools that augment YSlow by actually measuring and reporting page load time. I will take a look at those kinds of web application performance tools next.



