
Real User Measurements (RUM): Why is this web app running slowly, Part 9

This is a continuation of a series of blog entries on this topic. The series starts here: http://computerperformancebydesign.com/why-is-my-web-app-running-slowly-part-1/

Real User Measurements (RUM)


The other way to obtain client-side Page Load time measurements – and one that is becoming increasingly popular – is a tool that measures page load time from inside the browser. The browser-based performance tools we have looked at, like ChromeSpeed or the corresponding Network tab in the Internet Explorer Developer Tools, measure page load time from inside the web client. But, as we saw, those performance tools function like YSlow, requiring you to have direct (or remote) access to the web client. Real User Measurements, in contrast, are measurements of how long it took to access your web site, acquired from inside the web browsers running on your customers’ machines and operated directly by them. Those are the real users whose experience with our web sites we want to capture and understand.

There are two important aspects of gathering client-side Page Load time measurements: (1) obtaining the measurements, of course, and, crucially, (2) figuring out a way to send the measurements from the machine where the browser is running back to the data gatherer. Conceptually, a piece of JavaScript code that subscribes to both the DOM’s window.unload event and the window.load event can gather the necessary timings. (In practice, this is a bit more complicated because you actually need two pieces of JavaScript code to execute: one to record the time the window.unload event fired in the previous browser window, one to get the load time for the current window, plus a mechanism to pass the unload time forward to the next window so an interval delta can be calculated.) Once it is acquired, the timing data can then be transported from the browser using a web beacon sent to a designated location. An example of a JavaScript-based tool that does precisely this is Google SiteSpeed, which is part of the Google Analytics suite of tools. Google SiteSpeed gathers web application timing data and then sends the Page Load Time measurements back to a Google Analytics data center for collation and analysis.
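To make that hand-off a little more concrete, here is a minimal sketch of one way the unload time could be passed forward to the next page, assuming both pages are served from the same origin and sessionStorage is available; the storage key name is purely illustrative, and the commercial tools implement this mechanism in their own ways.

<script type="text/javascript">
// Sketch only: record when the current page is torn down so the next
// page from the same origin can compute an unload-to-load interval.
window.addEventListener("unload", function () {
    sessionStorage.setItem("prevUnloadTime", String(Date.now()));
}, false);

window.addEventListener("load", function () {
    var prevUnload = sessionStorage.getItem("prevUnloadTime");
    if (prevUnload) {
        // Interval delta: previous page's unload to this page's load.
        var delta = Date.now() - Number(prevUnload);
        if (window.console) console.log("Unload-to-load interval (ms): " + delta);
    }
}, false);
</script>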

Real User Measurements of web application response time, gathered using JavaScript and forwarded to a host web site for analysis, contrast with an earlier technique in which a monitoring service is paid to generate synthetic Requests to your web site and measure their response times. That first generation of commercially available web performance tools measured end-to-end response times by generating requests to your web application from inside their own application, using a script to supply access parameters such as a user name and password. Periodically, these monitoring services poll your web site, generating and transmitting synthetic GET Requests. By simulating actual customers exercising your web apps, the service is able to monitor the availability of the web site and measure its responsiveness. These monitoring services remain in use among companies doing business on the web, available from suppliers like Keynote and others.

Currently, measurements based on the timing of synthetic web requests are beginning to be superseded by Real User Measurements, or RUM for short, which access timing information directly from inside the web browser. Now that there is a standard interface, known as the Navigation Timing API, adopted in 2012, it is markedly easier to gather web client response time data directly from your web site’s customers.

This timing data, the Real User Measurements, is accessible using a JavaScript code snippet that can be embedded in the page’s HTML. To illustrate this approach, the snippet of JavaScript code in Listing 1 adds a handler function named CalculateLoadTime to run when the DOM’s window.load event fires. The handler compares the current clock time to a timer value, exposed by the built-in window.performance object, that was set when the original GET Request was issued. The difference between the two timer values is the Page Load time measurement. The built-in performance object that the script accesses automatically provides the timing data that the script uses to calculate Page Load time. I will explore the properties of the performance object and its uses in more detail in a moment.



<html>
<head>
<script type="text/javascript">
// Add a Load event listener.
window.addEventListener("load", CalculateLoadTime, false);
function CalculateLoadTime() {
    // Get the current time (milliseconds since the Unix epoch).
    var now = Date.now();
    // Calculate Page Load time relative to when navigation began.
    var page_load_time = now - performance.timing.navigationStart;
    // Write the load time to the F12 console.
    if (window.console) console.log(page_load_time);
}
</script>
</head><body>
<!-- Main page body is here. --> </body>
</html>

Listing 1. Accessing the DOM’s performance object in a Load event handler.


Developers with some experience using JavaScript to manipulate the DOM will recognize that you would not want to incorporate this code snippet as is directly into your web page, because the window.onload event handler has probably already been assigned in order to execute a piece of initialization script code after the browser has resolved all of the page’s external references. In that event, to gather the timing data, you could simply copy the CalculateLoadTime() function body and paste it into the existing window.onload event handler. To get the most accurate measurement possible, you would want the timing code to execute at the end of the window.onload event handler routine, of course, as sketched below. As discussed in earlier posts, it is not unusual for the window.onload event handler to perform a significant amount of DOM manipulation.
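For example, a hypothetical existing handler might end up looking something like the following sketch, where initializePage() stands in for whatever initialization work the page already performs; the only point being made is that the timing calculation runs last.

<script type="text/javascript">
// Sketch only: initializePage() is a stand-in for the page's existing
// initialization work; the timing calculation is appended at the end.
window.onload = function () {
    initializePage();

    // Timing code runs after all of the initialization work above.
    var page_load_time = Date.now() - performance.timing.navigationStart;
    if (window.console) console.log(page_load_time);
};
</script>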

Once the internal web client timing data is gathered, it can then be transmitted to an external web site using a web beacon: a GET Request for a trivial HTTP object (often a one-pixel transparent .gif) issued from JavaScript code, where the subsequent Response message is designed to be thrown away. The payload rides in the Request message itself: the GET Request is fashioned so that it appends a set of query parameters containing the data of interest. Web beacons are widely used by web analytics programs like Google’s or New Relic’s to get data on web browser activity from the web client back to their data centers for processing and analysis. The data in the beacon payload is typically about usage of the current web page.
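A bare-bones beacon might look something like the sketch below; the collector host name, path and parameter names are invented for illustration, and real analytics scripts package considerably more data into the query string.

<script type="text/javascript">
// Sketch only: report a Page Load time measurement with a web beacon.
// The host, path and parameter names are invented for illustration.
function sendTimingBeacon(pageLoadTime) {
    var beacon = new Image();
    // The one-pixel .gif Response is thrown away; the payload travels
    // in the query string of the GET Request.
    beacon.src = "https://collector.example.com/beacon.gif" +
                 "?plt=" + encodeURIComponent(pageLoadTime) +
                 "&page=" + encodeURIComponent(window.location.pathname);
}
</script>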

Among the popular tools that use this measurement technique is Google Analytics, a tool that Google supplies free of charge to smaller sites and that captures and reports on many kinds of web page usage statistics, including the SiteSpeed timing data that measures Page Load time. Google Analytics introduced the SiteSpeed measurements in 2010. The SiteSpeed data is sampled by default, by the way: unless you change the Google Analytics defaults (see the documentation on _setSiteSpeedSampleRate() for details), SiteSpeed data is sampled at a 1% rate. There is even a free Google Analytics plug-in for WordPress-based web sites that makes it very easy for small and medium-sized businesses with limited technical resources to start using the company’s web analytics reporting suite.
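For reference, raising the sample rate with the classic asynchronous ga.js snippet looks roughly like this; the UA-XXXXX-X account ID is a placeholder, and the 10% figure is simply an example value.

<script type="text/javascript">
// (The usual ga.js async loader script is omitted here for brevity.)
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-XXXXX-X']);   // placeholder account ID
// Raise the SiteSpeed sampling rate from the default 1% to, say, 10%.
_gaq.push(['_setSiteSpeedSampleRate', 10]);
_gaq.push(['_trackPageview']);
</script>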

To help make gathering the SiteSpeed measurement data more reliable, Google put its weight behind a proposal to the W3C, the standards body responsible for core web technology standards, to add a standard Navigation Timing object to the DOM, and then implemented that API in Chrome. Shortly after the Navigation Timing API was submitted to the W3C by Google, Microsoft, Yahoo and others, it was adopted by all the major web clients, including Chrome, Internet Explorer, Firefox, Safari and Opera. (For details on support of the API in different browsers, see http://caniuse.com/nav-timing.)
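Because older browsers on that list may still lack the API, it is prudent to feature-detect it before reading any of its properties; a minimal check might look like this.

<script type="text/javascript">
// Sketch only: feature-detect the Navigation Timing API before use.
if (window.performance && window.performance.timing) {
    // Safe to read properties such as performance.timing.navigationStart.
    var navStart = window.performance.timing.navigationStart;
    if (window.console) console.log("Navigation started at: " + navStart);
} else {
    // Fall back to Date-based timing, or skip the measurement entirely.
}
</script>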

In the next post, I will take a closer look at the Navigation Timing API. 
