Monday, January 3, 2011

How to execute Performance Tests?

Performance testing involves executing the same test case many times, varying the data on each run, then collating the response times and computing response-time statistics to compare against the formally stated expectations. Performance often differs when the test data differs: a different number of rows is processed in the database, different processing and validation paths come into play, and so on. Executing a test case many times with different data therefore yields a statistical measure of response time that can be compared directly against a formally stated expectation.
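
As a minimal sketch of that approach (run_test_case, the data rows, and the 2-second expectation are placeholder assumptions, not part of any particular tool), the Python fragment below times each data-driven execution and reduces the timings to a mean and a 90th percentile for comparison against the stated expectation:

    import statistics
    import time

    def run_test_case(row):
        """Placeholder for the real transaction (e.g. submit one order)."""
        ...

    def measure(data_rows):
        # Execute the same test case once per data row, timing each run.
        timings = []
        for row in data_rows:
            start = time.perf_counter()
            run_test_case(row)
            timings.append(time.perf_counter() - start)
        return timings

    def meets_expectation(timings, expectation_s=2.0):
        # Reduce the individual response times to statistics and compare them
        # against the formally stated expectation (assumed here to be 2 s).
        mean = statistics.mean(timings)
        p90 = statistics.quantiles(timings, n=10)[-1]  # 90th percentile
        print(f"mean={mean:.3f}s  p90={p90:.3f}s  target={expectation_s:.1f}s")
        return p90 <= expectation_s

Whether the mean, a percentile, or both should be checked depends on how the expectation is formally stated.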

Network sensitivity tests are variations on load and performance tests that focus on Wide Area Network (WAN) limitations and network activity. They can be used to predict the impact of a given WAN segment or traffic profile on bandwidth-dependent applications. Network issues often arise even at low levels of concurrency over low-bandwidth WAN segments. Very chatty applications, which exchange many small messages per transaction, can be more prone to response-time degradation than applications that actually use more bandwidth. For example, one application may degrade to unacceptable response times when a certain pattern of network traffic consumes 50% of available bandwidth, while another is virtually unchanged even with 85% of the available bandwidth consumed elsewhere.
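
A rough back-of-envelope model makes the chattiness effect concrete. The Python sketch below treats congestion on a shared WAN segment as added round-trip delay: every application turn pays one round trip, and the payload is serialised over the link. The round-trip counts, payload sizes, and the 2 Mbit/s link are illustrative assumptions, not measurements:

    def wan_time(round_trips, payload_bytes, rtt_s, bandwidth_bps):
        # Each application turn pays one WAN round trip; the payload is then
        # serialised over whatever bandwidth the segment provides.
        return round_trips * rtt_s + payload_bytes * 8 / bandwidth_bps

    BANDWIDTH = 2_000_000  # assumed 2 Mbit/s WAN segment
    for rtt_ms in (5, 50, 150):  # LAN-like, typical WAN, congested WAN
        chatty = wan_time(200, 100_000, rtt_ms / 1000, BANDWIDTH)  # 200 small turns
        bulky = wan_time(5, 1_000_000, rtt_ms / 1000, BANDWIDTH)   # 5 turns, 10x the data
        print(f"rtt={rtt_ms:>3}ms  chatty={chatty:5.1f}s  bulky={bulky:4.1f}s")

Under these assumptions the chatty application slows from about 1.4 s to over 30 s as round-trip time grows, while the application moving ten times as much data only moves from roughly 4.0 s to 4.8 s, which is the pattern the bandwidth percentages above describe.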

This is a particularly important test before deploying a time-critical application over a WAN. In addition, some front-end systems, such as web servers, have to work much harder with dirty communications (latency, jitter, and packet loss) than with the clean communications encountered on a high-speed LAN in an isolated load and performance testing environment; emulating those WAN conditions in the test lab, as sketched below, helps expose this.
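
One common way to reproduce such dirty communications in an isolated test environment is to impose delay, jitter, and loss on the test client's network interface with Linux netem. The sketch below wraps the tc calls from Python so the profile can be switched on and off around a test run; the eth0 interface name and the specific figures are assumptions, and the commands require root:

    import subprocess

    # Assumed WAN profile: 80 ms delay with 20 ms jitter and 1% packet loss.
    WAN_PROFILE = ["delay", "80ms", "20ms", "loss", "1%"]

    def apply_wan_profile(interface="eth0"):
        # Attach a netem queueing discipline to the interface.
        subprocess.run(["tc", "qdisc", "add", "dev", interface, "root",
                        "netem", *WAN_PROFILE], check=True)

    def remove_wan_profile(interface="eth0"):
        # Restore the clean, LAN-like behaviour after the test run.
        subprocess.run(["tc", "qdisc", "del", "dev", interface, "root"],
                       check=True)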

