AppMetrics® Load Diagnostics
Showing you your Web Application's true Load Performance Curve
The performance of Web applications is usually tested in the controlled environment of the test lab. Their real test, however, comes when they are exposed to that ‘cloud’ of users who never seem to follow the test lab ‘rules’. No matter how much testing you do in the lab, conditions in the real-world production environment are totally different - server clusters, routers, firewalls and provider service-level issues come into play that simply cannot be duplicated in a lab setting. The view from the ‘cloud’ is definitely not the same as the view from inside your test lab!
AppMetrics Load Diagnostics℠ provides a different solution from the classic testing methodology. It combines an external, on-demand network of agent computers with sophisticated server-side monitoring to duplicate what happens in the "real world" rather than the simulated world of the test lab. It brings all aspects of the infrastructure into play, so that bottlenecks and other nasty surprises can be uncovered and fixed before your customers find them!
In this example, we will demonstrate how one person using AppMetrics Load Diagnostics can do in a matter of minutes what would otherwise require a team of people many hours, if not days. By simply recording a sample series of transactions against a production Web application (in this case, the Fitch and Mather Stocks application), an automated test involving dozens of distributed load agents is created, scheduled and run in two passes - a "light load" scenario and a "heavy load" scenario.
Scenario One - Light Load (10 users)
The following graph shows the "outside" view of a 2-minute test against the Fitch and Mather Stocks application, ramping up from 1 to 10 users during the first minute and maintaining at 10 for the second minute.
The metrics in this chart - hits per second, KB per second (or throughput), average site delay and errors - represent the raw measuring stick of what the application's performance "looks" like from the Internet Cloud. Each virtual user accesses the site from the Internet, not from inside the firewall like most load tests. The totals at the top of the graph show how many raw page hits, how much bandwidth and how many virtual users were involved over this two-minute period.
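As a rough sketch of how those four raw metrics relate to the underlying request stream, the following reduces a log of page hits to hits per second, KB per second, average site delay and error count. The record layout and sample numbers are invented for illustration; they are not the AppMetrics data format.

```python
from dataclasses import dataclass

@dataclass
class PageHit:
    """One page request as seen from an external load agent (hypothetical layout)."""
    timestamp: float       # seconds since the test started
    bytes_received: int    # size of the response body
    response_time: float   # seconds the page took to load
    status_code: int       # HTTP status returned

def summarize(hits, duration_seconds):
    """Reduce a request log to the chart's four raw metrics."""
    total_hits = len(hits)
    total_kb = sum(h.bytes_received for h in hits) / 1024
    avg_delay = (sum(h.response_time for h in hits) / total_hits) if total_hits else 0.0
    errors = sum(1 for h in hits if h.status_code >= 400)
    return {
        "hits_per_sec": total_hits / duration_seconds,
        "kb_per_sec": total_kb / duration_seconds,
        "avg_site_delay": avg_delay,
        "site_errors": errors,
    }

# Three sample hits over a 2-second window (made-up numbers for illustration):
log = [
    PageHit(0.1, 2048, 0.20, 200),
    PageHit(0.9, 4096, 0.30, 200),
    PageHit(1.5, 2048, 0.40, 500),
]
stats = summarize(log, duration_seconds=2)
```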
Notice the red bars on the right side of the graph - these indicate the average response time of each page request, as shown in the "Average Site Delay" column. At one virtual user, the average response is a mere 0.2 seconds. But at 10 virtual users, it is all the way up to a full second - a 5X increase already!
In this case, the green bars chart the values in the "Active" column, indicating the number of virtual users active at any point in time. It can also chart the KB/sec or throughput column, which will graphically show you when a bandwidth limit has been reached and has begun to manifest as a slowdown in overall performance. This can be an invaluable aid in verifying Service Level Agreements and in showing you which pages are the most demanding in terms of bandwidth when you are looking for ways to increase performance.
Finally, another key metric is the "Site Errors" column. Although this test was error-free, scalability issues often arise only when load is applied and the server begins responding with "Internal Server Error", "Server Too Busy" and other response codes that point to a problem needing attention. An application may appear to have the capacity to scale up yet lack the scalability to actually do so, and only this kind of testing will reveal such limitations.
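To make the idea concrete, here is a minimal sketch of flagging the server-side failure responses that typically appear only once load is applied. The status-code list is illustrative, not an AppMetrics configuration.

```python
# Status codes that commonly signal a load-induced server-side failure
# (an illustrative list, not exhaustive or product-defined):
LOAD_ERROR_CODES = {
    500: "Internal Server Error",
    503: "Server Too Busy / Service Unavailable",
    504: "Gateway Timeout",
}

def error_rate(status_codes):
    """Fraction of responses whose status signals a server-side failure."""
    if not status_codes:
        return 0.0
    failures = sum(1 for code in status_codes if code in LOAD_ERROR_CODES)
    return failures / len(status_codes)

# Under light load everything succeeds; under heavy load 503s and 500s creep in.
light_run = [200] * 100
heavy_run = [200] * 90 + [503] * 8 + [500] * 2
```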
Now let's go one level "deeper" and see which pages were the slowest. Before that, however, let's take a look at what each virtual user does during the test.
Now let's see which pages were the slowest:
As you can see, selling stocks, browsing the online store, viewing the portfolio, updating the shopping cart and viewing product details were the slowest pages. The others hardly registered at all!
Now let's go "under the hood" and look at what AppMetrics for Transactions shows us about the transactions, components and methods that were involved in the test. First let's see the pattern of transactions that were completed during this two minute test:
Now let's look at the transactions that were involved:
As you can see, the DBHelper transaction takes most of the time - 8 seconds overall - followed by the BusAccount, BusTicker and BusBroker transactions. Going another level deeper, we see the components involved in these same transactions:
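The roll-up behind a transaction-level view like this can be sketched as a simple aggregation of per-call durations by transaction name. The call records below are invented; they only mimic the ranking described above.

```python
from collections import defaultdict

def time_by_transaction(calls):
    """Sum wall-clock time per transaction from (name, duration_s) records,
    returning the totals slowest-first - the shape of a transaction roll-up."""
    totals = defaultdict(float)
    for name, seconds in calls:
        totals[name] += seconds
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Invented per-call records that mirror the ranking seen in the test:
calls = [
    ("DBHelper", 3.0), ("DBHelper", 5.0),
    ("BusAccount", 2.5), ("BusAccount", 1.5),
    ("BusTicker", 2.0),
    ("BusBroker", 1.0),
]
ranking = time_by_transaction(calls)
```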
To drill further down, let's pick one of the components - BusAccount - and look at the distribution of the SellStock method:
Here you can see that most of the instances of the SellStock method took somewhere between 43 and 59 milliseconds to complete, but a good number took more than 105 milliseconds - hardly anything in human time, but a virtual eternity in computer time!
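A distribution view like this is essentially a latency histogram. A minimal sketch, with bucket edges chosen to match the ranges mentioned above and sample values invented:

```python
def bucket_latencies(samples_ms, edges):
    """Count samples into half-open buckets [edges[i], edges[i+1]),
    with the last bucket collecting everything >= edges[-1].
    Samples below edges[0] are ignored."""
    counts = [0] * len(edges)
    for sample in samples_ms:
        if sample >= edges[-1]:
            counts[-1] += 1
            continue
        for i in range(len(edges) - 1):
            if edges[i] <= sample < edges[i + 1]:
                counts[i] += 1
                break
    return counts

# Buckets: [43, 59) ms, [59, 105) ms, and >= 105 ms
samples = [45, 50, 55, 48, 60, 70, 110, 120, 130]
histogram = bucket_latencies(samples, [43, 59, 105])
```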
And now let's drill down even further:
OK, now let's crank it up to 50 users and see what happens!
Scenario Two - Heavy Load (50 Users)
Now we can see that response times go from 0.2 seconds to almost 6 seconds - a 30X increase - and throughput goes from 1.8 KB/sec to almost 60 KB/sec! We also delivered 1,375 page hits and consumed almost 4MB of bandwidth over two minutes, compared to 404 hits and 1.1MB at 10 virtual users.
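The headline numbers in this comparison reduce to simple ratios. The figures below are approximate values read off the charts in the two passes described above:

```python
def degradation(baseline_s, loaded_s):
    """How many times slower the average response is under load."""
    return loaded_s / baseline_s

# Approximate average response times from the two test passes:
single_user_delay_s = 0.2   # 1 virtual user
ten_user_delay_s = 1.0      # 10 virtual users
fifty_user_delay_s = 6.0    # 50 virtual users

light_factor = degradation(single_user_delay_s, ten_user_delay_s)    # ~5X
heavy_factor = degradation(single_user_delay_s, fifty_user_delay_s)  # ~30X
```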
Now let's see how the individual pages performed:
Here we can see that the bottlenecks are in the checkout and shopping cart pages - 8 seconds on average, almost unbearable for most users!
Now let's have a look at the transaction-level view:
Now it is the BusAccount transaction that is taking the most time - almost 30 seconds out of the 2 minutes the test was run!
And the component view:
Here we can see that the most active component - DBHelper - took over 35 seconds out of the two minutes instead of the 10 seconds with just 10 users.
Now for the view of the SellStock method:
In this case, the average time was 90 milliseconds instead of 80, but the maximum was 250 - while that may be only a quarter of a second, multiply it by 50 and you can see exactly where the delays are happening!
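That closing arithmetic is worth spelling out: if every one of the 50 concurrent callers hits the method's worst case and they serialize on the same resource, a crude upper bound on the backlog is simply the product. This is a back-of-the-envelope bound, not a queueing model:

```python
def worst_case_backlog_s(max_method_ms, concurrent_users):
    """Rough upper bound on the wait the last caller sees if every one of
    N concurrent requests pays the worst-case method time back to back."""
    return (max_method_ms * concurrent_users) / 1000.0

# 250 ms worst case * 50 virtual users = 12.5 seconds of potential backlog
backlog = worst_case_backlog_s(250, 50)
```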
A Web application is only as fast as its slowest hardware or software component, just like a chain is only as strong as its weakest link. Finding out which component it is, whether it is in the hardware or the software, and which part of that component is causing the slowdown can be an extremely difficult and complicated procedure. AppMetrics Load Diagnostics gives you the tools you need to get a true "end to end" view of your application's performance - from the Internet Cloud itself all the way down to the transaction, component or method in your application's servers. And all of this can be done by a single person in a matter of minutes rather than days. That means it can be done whenever changes of any kind are made to the application or its infrastructure!
If you would like to discuss this service for your site, please visit http://www.xtremesoft.com/load_diagnostics.asp.