Performance testing is one of the most important parts of complete and effective software testing. Hardly any software system or application can do without it, and the testing of any system or application is incomplete without it.
"Performance testing is the determination of how a software system or application behaves in terms of responsiveness and stability under particular workloads."
Apart from responsiveness and stability, performance testing can also examine several other attributes, for example:
- Scalability
- Reliability
- Resource usage
Performance testing falls under the broader discipline of performance engineering, which aims to build performance into the architecture and design of a software system or application before the actual coding begins.
Several other types of testing together make up complete performance testing:
- Load testing
- Stress testing
- Soak testing (endurance testing)
- Spike testing
- Isolation testing
- Configuration testing
Metrics used during Performance Testing
Performance testing cannot be carried out on its own; it is supported by specific measurements called performance metrics. In this article we shall discuss the metrics used during performance testing, one by one:
1. Average response time: This is the mean of all response times, taking into account every round-trip request up to a particular point. Response times can be measured in either of the following ways:
a) Time to last byte, or
b) Time to first byte
2. Peak response time: Similar to the previous metric, this represents the longest round trip at a particular point in a particular test. A peak response time far above the average is an indication of a potential problem in the software system or application.
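The two metrics above can be sketched in a few lines of Python. The response-time values here are hypothetical, purely for illustration:

```python
# Hypothetical round-trip response times (in seconds) collected at one
# measurement point during a load test.
response_times = [0.21, 0.35, 0.18, 1.42, 0.27]

# Average response time: the mean of all round-trip times.
average_response_time = sum(response_times) / len(response_times)

# Peak response time: the single longest round trip. Here it is roughly
# three times the average, which would warrant investigation.
peak_response_time = max(response_times)

print(f"average: {average_response_time:.3f}s, peak: {peak_response_time:.2f}s")
```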
3. Error rate: Errors while processing requests under load are to be expected during testing. It has been noted that most errors are reported once the load on the application reaches a peak point, beyond which the software system or application can no longer process requests. The error rate expresses, as a percentage, the proportion of responses whose HTTP status code indicates a server error.
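A minimal sketch of the error-rate calculation, assuming a hypothetical list of HTTP status codes collected under load (4xx and 5xx codes counted as errors):

```python
# Hypothetical HTTP status codes returned for requests sent under load.
status_codes = [200, 200, 500, 200, 503, 200, 200, 200, 429, 200]

# Error rate: percentage of responses whose status code indicates an
# error (4xx client errors or 5xx server errors).
errors = sum(1 for code in status_codes if code >= 400)
error_rate = 100.0 * errors / len(status_codes)

print(f"error rate: {error_rate:.1f}%")  # 3 errors out of 10 -> 30.0%
```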
4. Throughput: This performance metric measures the flow of data back and forth from the servers and is expressed in units of KB per second.
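Throughput reduces to bytes transferred over elapsed time. The totals below are hypothetical, and 1 KB is taken as 1024 bytes:

```python
# Hypothetical totals from a test run.
total_bytes = 5_242_880   # bytes moved to/from the server during the run
duration_seconds = 60.0   # elapsed time of the run

# Throughput in KB per second (1 KB = 1024 bytes).
throughput_kbps = (total_bytes / 1024) / duration_seconds
print(f"throughput: {throughput_kbps:.1f} KB/s")
```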
5. Requests per second (RPS): This measures the number of requests sent to the target server each second, including requests for HTML pages, CSS style sheets, JavaScript libraries, Flash files, XML documents and so on.
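The key point is that every fetched asset counts as a request, not just the page itself. A sketch with a hypothetical request log:

```python
# Hypothetical request log: every asset fetched counts toward RPS,
# not only the HTML page.
requests = [
    "GET /index.html",
    "GET /styles/main.css",
    "GET /js/app.js",
    "GET /api/data.xml",
    "GET /index.html",
    "GET /js/app.js",
]
duration_seconds = 2.0

rps = len(requests) / duration_seconds
print(f"RPS: {rps:.1f}")  # 6 requests over 2 seconds -> 3.0
```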
6. Concurrent users: This performance metric is perhaps the best way of expressing the load being placed on the software system or application during performance testing. It should not be equated with RPS, since one user may generate a high number of requests while another user does not generate requests constantly.
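The distinction can be made concrete with two hypothetical users observed over the same window, one polling aggressively and one mostly idle:

```python
# Two hypothetical users over the same 10-second window: user_a polls an
# API aggressively, user_b loads one page and then reads it.
requests_per_user = {"user_a": 50, "user_b": 2}
window_seconds = 10.0

concurrent_users = len(requests_per_user)               # 2 users
rps = sum(requests_per_user.values()) / window_seconds  # 5.2 requests/s

# The same 2 concurrent users could just as easily have produced a very
# different RPS, so the two metrics are not interchangeable.
print(f"concurrent users: {concurrent_users}, RPS: {rps}")
```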
7. Cross result graphs: These show the difference between pre-tuning and post-tuning test results.