Wednesday, December 29, 2010
How are the metrics determined for your application?
The objective of metrics is not only to measure, but also to understand progress toward the organizational goal. The parameters for determining the metrics for an application are:
- Duration.
- Complexity.
- Technology Constraints.
- Previous experience in the same technology.
- Business domain.
- Clarity of the scope of the project.
One interesting and useful approach for arriving at suitable metrics is the Goal-Question-Metric (GQM) technique.
The GQM model consists of three layers: a goal, a set of questions, and a set of corresponding metrics. It is thus a hierarchical structure starting with a goal (specifying the purpose of measurement, the object to be measured, the issue to be measured, and the viewpoint from which the measure is taken).
The goal is refined into several questions that usually break down the issue into its major components. Each question is then refined into metrics, some of them objective, some of them subjective. The same metric can be used in order to answer different questions under the same goal. Several GQM models can also have questions and metrics in common, making sure that when the measure is actually taken, the different viewpoints are taken into account correctly.
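As a small illustration (the goal, questions and metric names below are made up, not prescriptions), a GQM breakdown can be written down as a simple nested structure before any measurement is taken:

# A minimal, hypothetical GQM breakdown for a testing-related goal.
gqm = {
    "goal": "Improve the effectiveness of reviews (viewpoint: test manager)",
    "questions": {
        "Are reviews finding defects early?": [
            "Number of defects found by reviews",
            "Number of defects found by testing",
        ],
        "Is review effort well spent?": [
            "Review effort in person hours",
            "Defects found per review meeting",
        ],
    },
}

# Print the questions and the metrics that answer each of them.
for question, metrics in gqm["questions"].items():
    print(question)
    for metric in metrics:
        print("  -", metric)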
Metrics are determined when the requirements are understood at a high level. At this stage, the team size and project size must be known to a reasonable extent, so that the project is at a "defined" stage.
Tuesday, December 28, 2010
What are different types of general metrics? Continued...
- Design To Requirements Traceability
This metric provides an analysis of the number of design elements matching requirements versus the number of design elements not matching requirements. It is calculated at stage completion, from the software requirements specification and the detail design.
Formula:
Number of design elements.
Number of design elements matching requirements.
Number of design elements not matching requirements.
- Requirements to Test case Traceability
This metric provides the analysis on the number of requirements tested vs the number of requirements not tested. It is calculated at stage completion. It is calculated from software requirements specification, detail design and test case specification.
Formula:
Number of requirements.
Number of requirements tested.
Number of requirements not tested.
- Test cases to Requirements Traceability
This metric provides the analysis on the number of test cases matching requirements vs the number of test cases not matching requirements. It is calculated at stage completion. It is calculated from software requirements specification and test case specification.
Formula:
Number of requirements.
Number of test cases with matching requirements.
Number of test cases not matching requirements.
- Number of defects in coding found during testing by severity
This metric provides the analysis on the number of defects by the severity. It is calculated at stage completion. It is calculated from bug report.
Formula:
Number of defects.
Number of defects of low severity.
Number of defects of medium severity.
Number of defects of high severity.
- Defects - stage of origin, detection, removal
This metric provides the analysis on the number of defects by the stage of origin, detection and removal. It is calculated at stage completion. It is calculated from bug report.
Formula:
Number of defects.
Stage of origin.
Stage of detection.
Stage of removal.
- Defect Density
This metric provides an analysis of the number of defects relative to the size of the work product. It is calculated at stage completion, from the defects list and bug report.
Formula:
Defect Density = [Total number of defects / Size (FP or KLOC)] * 100
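A minimal sketch of this calculation in Python, assuming the size is already expressed in FP or KLOC and using illustrative numbers:

def defect_density(total_defects, size):
    # size is in FP or KLOC; the result follows the formula above (per 100 units of size)
    return (total_defects / size) * 100

# Example: 42 defects against a 12 KLOC work product
print(defect_density(42, 12.0))  # 350.0 defects per 100 KLOC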
Monday, December 27, 2010
What are different types of general metrics? Continued...
- Review Effectiveness
This metric indicates the effectiveness of the review process. It is calculated at the completion of the review or the completion of the testing stage, from the peer review report, the peer review defect list, and the bugs reported by testing. A small computational sketch of this metric and of RSI appears after the last metric in this post.
Formula:
Review Effectiveness = [(Number of defects found by reviews) / (Total number of defects found by reviews + Number of defects found by testing)] * 100
- Total number of defects found by reviews
This metric will indicate the total number of defects identified by the review process. The defects are further categorized as high, medium or low. It is calculated at completion of reviews, from the peer review report and peer review defect list.
Formula: Total number of defects identified in the project.
- Defect vs Review Effort - Review Yield
This metric will indicate the effort expended in each stage for reviews to the defects found. It is calculated at completion of reviews. It is calculated from peer review report and peer review defect list.
Formula : Defects/Review Effort
- Requirements Stability Index (RSI)
This metric gives the stability factor of the requirements over a period of time, after the requirements have been mutually agreed and baselined between the company and the client. It is calculated at stage completion and project completion, from change requests and the software requirements specification.
Formula:
RSI = 100 * [(Number of baselined requirements) - (Number of changes in requirements after the requirements are baselined)] / (Number of baselined requirements)
- Change Requests by State
This metric provides an analysis of the state of change requests. It is calculated at stage completion, from change requests and the software requirements specification.
Formula:
Number of accepted requirements, Number of rejected requirements, Number of postponed requirements.
- Requirements to Design Traceability
This metric provides the analysis on the number of requirements designed to the number of requirements that were not designed. It is calculated at stage completion. It is calculated from software requirement specification and detail design.
Formula:
Total number of requirements, Number of requirements designed, Number of requirements not designed.
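A small computational sketch of two of the metrics in this post, Review Effectiveness and the Requirements Stability Index, using illustrative input values:

def review_effectiveness(defects_by_reviews, defects_by_testing):
    # Defects found by reviews / (defects found by reviews + defects found by testing) * 100
    return defects_by_reviews / (defects_by_reviews + defects_by_testing) * 100

def requirements_stability_index(baselined, changed_after_baseline):
    # RSI = 100 * (baselined requirements - changes after baseline) / baselined requirements
    return 100 * (baselined - changed_after_baseline) / baselined

print(review_effectiveness(30, 70))           # 30.0 percent
print(requirements_stability_index(200, 15))  # 92.5 percent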
Sunday, December 26, 2010
What are different types of general metrics? Continued...
- Overall Test Effectiveness (OTE)
This metric will indicate the effectiveness of the testing process in identifying the defects for a given project during the testing stage. It is calculated on a monthly basis and after build completion or project completion, from test reports and customer-identified defects. A computational sketch of this metric and of the variance metrics below appears after the last metric in this post.
Formula:
OTE = [(Number of defects found during testing) / (Total number of defects found during testing + Number of defects found during post delivery)] * 100
- Effort Variance (EV)
This metric gives the variation of actual effort vs. the estimated effort. It is calculated for each project stage, at stage completion as identified in the SPP. It is calculated from estimation sheets, which give the estimated values in person hours for each activity within a given stage, and from the actual hours worked, also in person hours.
EV = [(Actual person hours - Estimated person hours)/Estimated person hours] * 100
- Cost Variance (CV)
This metric gives the variation of actual cost vs the estimated cost. This is calculated for each project stage. It is calculated at stage completion. It is calculated from estimation sheets for estimated values in dollars or rupees for each activity within that stage and the actual cost incurred.
CV = [(Actual Cost-Estimated Cost)/Estimated Cost] * 100
- Size Variance
This metric gives the variation of actual size vs the estimated size. This is calculated for each project stage. It is calculated at stage and project completion. It is calculated from estimation sheets for estimated values in function points or KLOC and from actual size.
Size Variance = [(Actual Size-Estimated Size)/Estimated Size] * 100
- Productivity on Review Preparation - Technical
This metric will indicate the effort spent on preparation for review. It is calculated separately for each language used in the project, monthly or after build completion, from the peer review report.
For every language used, calculate
(KLOC or FP) per hour for that language, where the language may be C, C++, Java, XML, etc.
- Number of defects found per review meeting
This metric will indicate the number of defects found during the review meeting across the various stages of the project. It is calculated monthly or after the completion of the review, from the peer review report and peer review defect list.
Formula : Number of defects/Review Meeting
- Review Team Efficiency (Review Team Size vs Defects Trend)
This metric will indicate the review team size and the defects trend. This will help to determine the efficiency of the review team. It is calculated monthly and at completion of the review, from the peer review report and peer review defect list.
Formula : Review team size to the defects trend.
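A computational sketch of Overall Test Effectiveness and of the variance metrics above, with hypothetical figures; effort, cost and size variance all share the same formula:

def overall_test_effectiveness(found_in_testing, found_post_delivery):
    # OTE = defects found during testing / (defects found during testing + defects found post delivery) * 100
    return found_in_testing / (found_in_testing + found_post_delivery) * 100

def variance(actual, estimated):
    # Shared formula for effort, cost and size variance: (actual - estimated) / estimated * 100
    return (actual - estimated) / estimated * 100

print(overall_test_effectiveness(180, 20))     # 90.0 percent
print(variance(actual=1250, estimated=1000))   # 25.0 percent effort overrun
print(variance(actual=9.5, estimated=10.0))    # -5.0 percent size variance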
Saturday, December 25, 2010
What are different types of metrics?
Software Process Metrics are measures which provide information about the performance of the development process itself. The purposes of software process metrics are:
- to provide an indicator of the ultimate quality of the software being produced.
- to assist the organization in improving its development process by highlighting inefficient or error-prone areas of the process.
Software Product Metrics are measures of some attributes of the software product. The purpose of software product metrics is to assess the quality of the output.
What are most general metrics?
Requirements Management
Metrics Collected:
- requirements by state - accepted, rejected, postponed.
- number of base lined requirements.
- number of requirements modified after base lining.
Derived Metrics:
- Requirements Stability Index (RSI)
- Requirements to Design Traceability
Project Management
Testing and Review
Metrics Collected:
- Number of defects found by reviews.
- Number of defects found by testing.
- Number of defects found by client.
- Total number of defects found by reviews.
Derived Metrics:
- Overall review effectiveness (ORE)
- Overall test effectiveness.
Peer Reviews
Metrics Collected:
- KLOC/FP per person hour for preparation.
- KLOC/FP per person hour for review meeting.
- Number of pages/hour reviewed during preparation.
- Average number of defects found by Reviewer during preparation.
- Number of pages/hour reviewed during review meeting.
- Average number of defects found by Reviewer during review meeting.
- Review team size vs defects.
- Review speed vs defects.
- Major defects found during review meeting.
- Defects vs Review Effort.
Derived Metrics:
- Review effectiveness.
- Total number of defects found by reviews for a project.
Friday, December 24, 2010
What is a Metric? What are different metrics used for testing?
A metric is a measure used to quantify software, software development resources, and/or the software development process. A metric can quantify any of the following factors:
- Schedule
- Work Effort
- Product Size
- Project Status
- Quality Performance
Metrics enable estimation of future work. Considering the case of testing, for example, whether the product is fit for shipment or delivery depends on the rate at which defects are found and fixed; defects collected and fixed is one kind of metric. It is beneficial to classify metrics according to their usage.
Defects are analyzed to identify the major causes of defects and the phases that introduce the most defects. This can be achieved by performing Pareto analysis of defect causes and defect introduction phases. The main requirement for any of these analyses is software defect metrics.
These metrics are represented as percentages. They are calculated at stage completion or project completion, from bug reports and peer review reports.
Few of the Defect Metrics are:
- Defect Density: (Number of defects reported by SQA + Number of defects reported by peer review)/ Actual Size.
The size can be in KLOC, SLOC, or function points, whichever method the organization uses to measure the size of the software product. SQA is considered to be part of the software testing team.
- Test Effectiveness: t/(t+Uat) where t= total number of defects reported during testing and Uat= total number of defects reported during user acceptance testing. User acceptance testing is generally carried out using the acceptance test criteria according to the acceptance test plan.
- Defect Removal Efficiency (DRE):
(Total number of Defects Removed / Total Number of Defects Injected) * 100, at various stages of the SDLC. This metric will indicate the effectiveness of defect identification and removal, stage by stage, for a given project (a computational sketch follows the last metric in this post).
Requirements: DRE = [(Requirements defects corrected during requirements phase)/(Requirement Defects injected during requirements phase)] * 100
Design: DRE = [(Design defects corrected during design phase)/(Defects identified during requirements phase + Defects injected during design phase)] * 100
Code: DRE = [(Code defects corrected during coding phase)/(Defects identified during requirements phase + Defects identified during design phase + Defects injected during coding phase)] * 100
Overall: DRE = [(Total defects corrected at all phases before delivery) / (Total defects detected at all phases before and after delivery)] * 100
- Defect Distribution: Percentage of total defects distributed across requirements analysis, design reviews, code reviews, unit tests, integration tests, system tests, user acceptance tests, and reviews by project leads and project managers.
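A sketch of the defect metrics above - defect density, test effectiveness and defect removal efficiency - using illustrative counts; the phase-wise DRE inputs would come from the project's defect logs:

def defect_density(defects_sqa, defects_peer_review, actual_size):
    # (defects reported by SQA + defects reported by peer review) / actual size (KLOC, SLOC or FP)
    return (defects_sqa + defects_peer_review) / actual_size

def test_effectiveness(t, uat):
    # t = defects reported during testing, uat = defects reported during user acceptance testing
    return t / (t + uat)

def dre(defects_removed, defects_injected):
    # Defect Removal Efficiency = removed / injected * 100, applied per phase or overall
    return defects_removed / defects_injected * 100

print(defect_density(35, 15, 10.0))  # 5.0 defects per KLOC
print(test_effectiveness(90, 10))    # 0.9
print(dre(defects_removed=45, defects_injected=50))  # 90.0 percent for a single phase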
Thursday, December 23, 2010
What is a defect and what is defect management ?
Defects determine the effectiveness of the testing we do. If there are no defects, it directly implies that we do not have a job. There are two points worth considering here: either the developers are so strong that no defects arise, or the test engineers are weak. In many situations, the second proves correct, which implies that we lack the knack.
For a test engineer, a defect is:
- any deviation from specification.
- anything that causes user dissatisfaction.
- incorrect output.
- software does not do what it is intended to do.
Software is said to have a bug if its features deviate from the specifications.
Software is said to have a defect if it has unwanted side effects.
Software is said to have an error if it gives incorrect output.
Categories of Defects
All software defects can be broadly categorized into the below mentioned types:
- errors of commission : something wrong is done.
- errors of omission : something left out by accident.
- errors of clarity and ambiguity : different interpretations.
- errors of speed and capacity.
Types of defects that can be identified in different software applications are conceptual bugs, coding bugs, integration bugs, user interface bugs, functionality, communication, command structure, missing commands, performance, output, error handling errors, boundary-related errors, calculation errors, initial and later states, control flow errors, errors in handling data, race condition errors, load conditions errors, hardware errors, source and version control errors, documentation errors and testing errors.
Wednesday, December 22, 2010
Performance Tests Precede Load Tests
The best time to execute performance tests is at the earliest opportunity after the content of a detailed load test plan has been determined. Developing performance test scripts at such an early stage provides the opportunity to identify and remediate serious performance problems, and to manage expectations, before load testing commences. For example, management expectations of response time for a new web system that replaces a block mode terminal application are often articulated as 'sub second'. However, a web system may, in a single screen, perform the business logic of several legacy transactions and may take two seconds. Rather than waiting until the end of a load test cycle to inform the stakeholders that the test failed to meet their formally stated expectations, a little education up front may be in order. Performance tests provide a means for this education.
Another key benefit of performance testing early in the load testing process is the opportunity to fix serious performance problems before even commencing load testing. When performance testing of a 'customer search' screen yields response times of more than ten seconds, there may well be a missing index or a poorly constructed SQL statement. By raising such issues prior to commencing formal load testing, developers and DBAs can check that indexes have been set up properly.
Performance problems that relate to the size of data transmissions also surface in performance tests when low bandwidth connections are used. For example, some data, such as images and "terms and conditions" text, is not optimized for transmission over slow links.
Tuesday, December 21, 2010
Pre-requisites for Performance Testing
A performance test is not valid until the data in the system under test is realistic and the software and configuration are production-like.
- Production Like Environment
Performance tests need to be executed on equipment of the same specification as production if the results are to have integrity. Lightweight transactions that do not require significant processing can be tested, but only substantial deviations from expected transaction response times should be reported. Low bandwidth performance testing of high bandwidth transactions, where communications processing contributes most of the response time, can also be tested.
- Production Like Configuration
The configuration of each component needs to be production like. For example: database configuration and operating system configuration. While system configuration will have less impact on performance testing than load testing, only substantial deviations from expected transaction response times should be reported.
- Production Like Version
The version of software to be tested should closely resemble the version to be used in production. Only major performance problems such as missing indexes and excessive communications should be reported with a version substantially different from the proposed production version.
- Production Like Access
If clients will access the system over a WAN, dial up modems, DSL, ISDN, etc. then testing should be conducted using each communication access method. Only tests using production like access are valid.
- Production Like Data
All relevant tables in the database need to be populated to a production-like quantity, with a realistic mix of data.
Targeted Infrastructure Tests and Performance Testing
TARGETED INFRASTRUCTURE TESTS
Targeted infrastructure tests are isolated tests of each layer and/or component in an end to end application configuration.
- They include communications infrastructure, load balancers, web servers, application servers, crypto cards and Citrix servers, allowing for identification of any performance issues that would fundamentally limit the overall ability of a system to deliver at a given performance level.
- Each test can be quite simple.
- Targeted infrastructure testing separately generates load on each component and measures the response of each component under load.
- Different infrastructure tests require different protocols.
PERFORMANCE TESTS
These are the tests that determine end to end timing of various time critical business processes and transactions, while the system is under low load, but with a production sized database.
- This sets best possible performance expectation under a given configuration of infrastructure.
- It also highlights very early in the testing process if changes need to be made before load testing should be undertaken.
- Performance testing would highlight a slow transaction, such as a customer search, which could be remediated prior to a full end to end load test.
- The best practice for developing performance tests is to use an automated tool, such as WinRunner, so that the response times from a user perspective can be measured in a repeatable manner with a high degree of precision. The same test scripts can later be re-used in a load test and the results can be compared back to the original performance tests.
- A key indicator of the quality of a performance test is repeatability. Re-executing a performance test multiple times should give the same set of results each time. If the results are not the same each time, then the differences in results from one run to the next cannot be attributed to changes in the application, configuration or environment.
Monday, December 20, 2010
How does a stress test execute?
A stress test starts with a load test, and then additional activity is gradually increased until something breaks. An alternative type of stress test is a load test with sudden bursts of additional activity. The sudden bursts of activity generate substantial activity as sessions and connections are established, whereas a gradual ramp-up in activity pushes various values past fixed system limitations.
Ideally, stress tests should incorporate two runs, one with burst type activity and the other with gradual ramp-up to ensure that the system under test will not fail catastrophically under excessive load. System reliability under severe load should not be negotiable and stress testing will identify reliability issues that arise under severe levels of load.
An alternative, or supplemental, stress test is commonly referred to as a spike test, where a single short burst of concurrent activity is applied to a system. Such tests typically simulate extreme activity where a countdown situation exists. For example, a system that will not take orders for a new product until a particular date and time. If demand is very strong, then many users will be poised to use the system the moment the countdown ends, creating a spike of concurrent requests and load.
Saturday, December 18, 2010
Overview of Stress Testing and its Focus..
Stress Tests determine the load under which a system fails, and how it fails. This is in contrast to load testing, which attempts to simulate anticipated load. It is important to know in advance whether a stress situation will result in catastrophic system failure, or whether everything just goes really slow. There are various varieties of stress tests, including spike, stepped and gradual ramp-up tests. Catastrophic failures require restarting various infrastructure and contribute to downtime, a stressful environment for support staff and managers, as well as possible financial losses. If a major performance bottleneck is reached, then the system performance will usually degrade to a point that is unsatisfactory, but performance should return to normal when the excessive load is removed.
Before conducting a stress test, it is usually advisable to conduct targeted infrastructure tests on each of the key components in the system. A variation on targeted infrastructure tests would be to execute each one as a mini stress test.
What is the focus of stress tests?
In a stress event, it is most likely that many more connections will be requested per minute than under normal levels of expected peak activity. In many stress situations, the actions of each connected user will not be typical of actions observed under normal operating conditions. This is partly due to the slow response and partly due to the root cause of the stress event.
If we take the example of a large holiday resort web site, normal activity will be characterized by browsing, room searches and bookings. If a national online news service posted a sensational article about the resort and included a URL in the article, then the site may be subjected to a huge number of hits, but most of the visits would probably be a quick browse. It is unlikely that many of the additional visitors would search for rooms and it would be even less likely that they would make bookings. However, if instead of a news article, a national newspaper advertisement erroneously understated the price of accommodation, then there may well be an influx of visitors who clamour to book a room, only to find that the price did not match their expectations.
In both of the above situations, the normal traffic would be increased with traffic of a different usage profile. So, a stress test design would incorporate a load test as well as additional virtual users running a special series of stress navigations and transactions.
For the sake of simplicity, one can just increase the number of users using the business processes and functions coded in the load test. However, one must then keep in mind that a system failure with that type of activity may be different to the type of failure that may occur if a special series of stress navigations were utilized for stress testing.
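As a rough sketch of such a design (the user counts and the split between normal and stress-profile users are purely illustrative):

# Hypothetical mix for a stress run built on top of the normal load profile.
normal_load_users = 500        # users running the normal booking/browsing scripts
stress_browse_users = 4000     # extra users doing quick browse-only navigations
stress_booking_users = 300     # extra users clamouring to book (price-error scenario)

total_users = normal_load_users + stress_browse_users + stress_booking_users
print(f"Total virtual users at peak of stress test: {total_users}")
print(f"Share running special stress navigations: "
      f"{(stress_browse_users + stress_booking_users) / total_users:.0%}")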
Friday, December 17, 2010
What is Long Session Soak Testing ?
When an application is used for long periods of time each day, the standard soak test approach should be modified, because the soak test driver is not logins and transactions per day, but transactions per active user for each user each day. This type of situation occurs in internal systems, such as ERP and CRM systems, where users log in and stay logged in for many hours, executing a number of business transactions during that time. A soak test for such a system should emulate multiple days of activity in a compacted time frame rather than just pump multiple days' worth of transactions through the system.
Long session soak tests should run with realistic user concurrency, but the focus should be on the number of transactions processed. VUGen scripts used in long session soak testing may need to be more sophisticated than short session scripts, as they must be capable of running a long series of business transactions over a prolonged period of time.
The duration of most soak tests is often determined by the available time in the test lab. There are many applications that require extremely long soak tests. Any application that must run, uninterrupted, for extended periods of time may need a soak test to cover all of the activity for a period of time that is agreed with the stakeholders. Most systems have a regular maintenance window, and the time between such windows is usually a key driver for determining the scope of a soak test.
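As a rough worked example (all numbers are hypothetical), the pacing for a long session soak test can be derived from the number of days of activity to be compacted into the available lab window:

# Hypothetical figures for compacting several days of activity into one weekend run.
days_to_emulate = 5                    # business days of activity to compress
transactions_per_user_per_day = 40
concurrent_users = 200                 # realistic concurrency for the system
test_window_hours = 60                 # e.g. Friday evening to Monday morning

total_transactions = days_to_emulate * transactions_per_user_per_day * concurrent_users
rate_per_hour = total_transactions / test_window_hours
per_user_per_hour = rate_per_hour / concurrent_users

print(f"Total transactions to process: {total_transactions}")
print(f"Overall rate: {rate_per_hour:.0f} transactions/hour")
print(f"Pacing per virtual user: {per_user_per_hour:.2f} transactions/hour")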
Thursday, December 16, 2010
Overview of Soak testing.
Soak testing is running a system at high levels of load for prolonged periods of time. A soak test would normally execute several times more transactions in an entire day than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed. Also, it is possible that a system may stop working after a certain number of transactions have been processed, due to memory leaks or other defects. Soak tests provide an opportunity to identify such defects, whereas load tests and stress tests may not find such problems due to their relatively short duration. A soak test would run for as long as possible, given the limitations of the testing situation. For example, weekends are often an opportune time for a soak test.
Some typical problems identified during soak tests are:
- Serious memory leaks that would eventually result in memory crisis.
- Failure to close connections between tiers of a multi-tiered system under some circumstances which could stall some or all modules of the system.
- Failure to close database cursors under some conditions which would eventually result in the entire system stalling.
- Gradual degradation of response time of some functions as internal data structures become less efficient during a long test.
Apart from monitoring response time, it is also important to measure CPU usage and available memory. If a server process needs to be available for the application to operate, it is often worthwhile to record its memory usage at the start and end of the soak test. It is also important to monitor the internal memory usage of facilities such as Java virtual machines, if applicable.
Wednesday, December 15, 2010
Overview of Reporting on response time at various levels of load, Fail-over Tests, Fail-back Testing
REPORTING ON RESPONSE TIME AT VARIOUS LEVELS OF LOAD
Expected output from a load test often includes a series of response time measures at various levels of load. It is important when determining the response time at any particular level of load, that the system has run in a stable manner for a significant amount of time before taking measurements.
For example, a ramp-up to 500 users may take ten minutes, but another ten minutes may be required to let the system activity stabilize. Taking measurements over the next ten minutes would then give a meaningful result. The next measurement can be taken after ramping up to the next level and waiting a further ten minutes for stabilization and ten minutes for the measurement period and so on for each level of load requiring detailed response time measures.
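As an illustration of this stepped approach (the user levels and the ten-minute windows below are examples, not prescriptions), the measurement schedule can be laid out in advance:

# Hypothetical stepped load profile: ramp, stabilize, then measure at each level.
ramp_minutes, stabilize_minutes, measure_minutes = 10, 10, 10
levels = [100, 200, 300, 400, 500]  # virtual users at each step

elapsed = 0
for users in levels:
    elapsed += ramp_minutes + stabilize_minutes
    start, end = elapsed, elapsed + measure_minutes
    print(f"Measure response times for {users} users between minute {start} and minute {end}")
    elapsed = end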
FAIL-OVER TESTS
Failover tests verify redundancy mechanisms while the system is under load. This is in contrast to load tests, which are conducted under anticipated load with no component failure during the course of a test. For example, in a web environment, failover testing determines what will happen if multiple web servers are being used under peak anticipated load, and one of them dies.
Failover testing allows technicians to address problems in advance, in the comfort of a testing situation, rather than in the heat of a production outage. It also provides a baseline of failover capability, so that a sick server can be shut down with confidence, in the knowledge that the remaining infrastructure will cope with the surge of failover load.
FAIL-BACK TESTING
After verifying that a system can sustain a component outage, it is also important to verify that when the component is back up, it is available to take load again, and that it can sustain the influx of activity when it comes back online.
Tuesday, December 14, 2010
How to set up a Load Test using LoadRunner?
The important thing to understand in executing such a load test is that the load is generated at a protocol level by the load generators, which run scripts developed with the VUGen tool. Transaction times derived from the VUGen scripts do not include processing time on the client PC, such as rendering (drawing parts of the screen) or execution of client side scripts such as JavaScript. The WinRunner PC(s) are utilized to measure end user experience response times. Most load tests would not employ a WinRunner PC to measure actual response times from the client perspective, but it is highly recommended where complex and variable processing is performed on the desktop after data has been delivered to the client.
The LoadRunner controller is capable of displaying real-time graphs of response times as well as other measures such as CPU utilization on each of the components behind the firewall. Internal measures from products such as Oracle, Websphere are also available for monitoring during test execution.
After completion of a test, the analysis engine can generate a number of graphs and correlations to help locate any performance bottlenecks.
In a simplified load test, the controller communicates directly with a load generator, which can communicate directly with the load balancer. No WinRunner PC is utilized to measure actual user experience. The collection of statistics from various components is simplified, as there is no firewall between the controller and the web components being measured.
Monday, December 13, 2010
What is the purpose of load tests?
The purpose of any load test should be clearly understood and documented. A load test usually fits into one of the following categories:
- Quantification of risks :
Determine, through formal testing, the likelihood that system performance will meet the formally stated performance expectations of stakeholders, such as response time requirements under given levels of load. This is a traditional quality assurance (QA) type test. Load testing does not mitigate risk directly, but through identification and quantification of risk it presents tuning opportunities and an impetus for remediation that will mitigate risk.
- Determination of minimum configuration : Determine, through formal testing, the minimum configuration that will allow the system to meet the formally stated performance expectations, so that extraneous hardware, software and the associated cost of ownership can be minimized. This is a Business Technology Optimization (BTO) type test.
Basis for determining the business functions/processes to be included in a test
- High Frequency Transactions : The most frequently used transactions have the potential to impact the performance of all of the other transactions if they are not efficient.
- Critical Transactions : The more important transactions that facilitate the core objectives of the system should be included, as failure under load of these transactions has the greatest impact.
- Read Transactions : At least one READ ONLY transaction should be included, so that performance of such transactions can be differentiated from other more complex transactions.
- Update Transactions : At least one update transaction should be included so that performance of such transactions can be differentiated from other transactions.
What are Load Tests - End to End performance tests
Load tests are end to end performance tests under anticipated production load. The objective of such tests is to determine the response times for various time critical transactions and business processes, and to ensure that they are within documented expectations. Load tests also measure the capability of an application to function correctly under load, by measuring transaction pass/fail/error rates. An important variation of the load test is the network sensitivity test, which incorporates WAN segments into a load test, as most applications are deployed beyond a single LAN.
Load tests are major tests, requiring substantial input from the business, so that anticipated activity can be accurately simulated in a test environment. If the project has a pilot in production then logs from the pilot can be used to generate 'usage profiles' that can be used as part of the testing process, and can even be used to drive large portions of load test.
Load testing must be executed on today's production-size database, and optionally with a projected database. If some database tables will be much larger in some months' time, then load testing should also be performed against a projected database. It is important that such tests are repeatable, and give the same results for identical runs. They may need to be executed several times in the first year of wide scale deployment, to ensure that new releases and changes in database size do not push response times beyond prescribed service level agreements.
Friday, December 10, 2010
Define Unit Test Case, Integration Test Case, System Test case
UNIT TEST CASES(UTC)
The unit test cases are very specific to a particular unit. The basic functionality of the unit is to be understood based on the requirements and the design documents. Generally, the design document will provide a lot of information about the functionality of a unit. The design document has to be referred to before a unit test case is written, because it provides the actual functionality of how the system must behave for given inputs.
INTEGRATION TEST CASES
Before designing the integration test cases, the testers should go through the integration test plan. It will give a complete idea of how to write integration test cases. The main aim of integration test cases is to test multiple modules together. By executing these test cases, the user can find out the errors in the interfaces between the modules.
The tester has to execute unit and integration test cases after coding.
SYSTEM TEST CASES
The system test cases are meant to test the system as per the requirements, end to end. This is basically to make sure that the application works as per the software requirements specification. In system test cases, the testers are supposed to act as an end user. So, system test cases normally concentrate on the functionality of the system; inputs are fed through the system and each and every check is performed using the system itself. Normally, verifications done by checking the database tables directly or by running programs manually are not encouraged in the system test.
The system test must focus on functional groups, rather than identifying the program units. When it comes to system testing, it is assumed that the interfaces
between the modules are working fine.
Ideally, the test cases are nothing but a union of the functionalities tested in the unit testing and the integration testing. Instead of testing the system inputs and outputs through the database or external programs, everything is tested through the system itself. In system testing, the tester acts as an end user and checks the application through its output.
Sometimes, some of the integration and unit test cases are repeated in system testing as well, especially when the units were earlier tested with test stubs rather than with other real modules; during system testing those cases will be performed again with real modules.
What are Test Case Documents and what is the general format of test cases?
The test cases will have a generic format as below:
- Test Case ID : The test case id must be unique across the application.
- Test case description : The test case description should be very brief.
- Test Prerequisite : The test pre-requisite clearly describes what should be present in the system, before the test executes.
- Test Inputs : The test input is nothing but the test data that is prepared to be fed to the system.
- Test Steps : The test steps are the step-by-step instructions on how to carry out the test.
- Expected Results : The expected results are the ones that say what the system must give as output or how the system must react based on the test steps.
- Actual results : The actual results are the ones that say outputs of the action for the given inputs or how the system reacts for the given inputs.
- Pass/Fail : If the expected and actual results are the same, then the test is a Pass; otherwise it is a Fail.
The test cases are classified into positive and negative test cases. Positive test cases are designed to prove that the system accepts valid inputs and then processes them correctly. Suitable techniques to design the positive test cases are specification derived tests. The negative test cases are designed to prove that the system rejects invalid inputs and does not process them. Suitable techniques to design the negative test cases are error guessing, boundary value analysis, internal boundary value testing and state transition testing. The test case details must be very clearly specified, so that a new person can go through the test cases step by step and is able to execute them.
In an online shopping application, at the user interface level, the client requests the web server to display the product details by giving an email id and username. The web server processes the request and gives the response. For this application, we design the unit, integration and system test cases. A sample test case in the above format is sketched below.
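All field values in this sketch are illustrative, chosen only to show how the format is filled in for the hypothetical shopping scenario:

# An illustrative test case record for the online shopping example above.
test_case = {
    "Test Case ID": "TC_PROD_001",
    "Description": "Display product details for a registered user",
    "Prerequisite": "User is registered and at least one product exists in the catalogue",
    "Test Inputs": {"email": "user@example.com", "username": "testuser"},
    "Test Steps": [
        "Open the product details page",
        "Enter the email id and username",
        "Submit the request to the web server",
    ],
    "Expected Result": "Web server responds with the product details for the user",
    "Actual Result": "",   # filled in during execution
    "Pass/Fail": "",       # Pass if expected and actual results match
}

for field, value in test_case.items():
    print(f"{field}: {value}")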
Wednesday, December 8, 2010
What are Test Case Documents and how to design good test cases?
Designing good test cases is a complex art. The complexity comes from three sources:
- Test cases help us discover information. Different types of tests are more effective for different classes of information.
- Test cases can be good in a variety of ways. No test case will be good in all of them.
- People tend to create test cases according to certain testing styles, such as domain testing or risk based testing. Good domain tests are different from good risk based tests.
A test case specifies the pretest state of the IUT and its environment, the test inputs or conditions, and the expected result. The expected result specifies what the IUT should produce from the test inputs. The specification includes messages generated by the IUT, exceptions, returned values, and resultant state of the IUT and its environment. Test cases may also specify initial and resulting conditions for other objects that constitute the IUT and its environment.
A scenario is a hypothetical story, used to help a person think through a complex problem or system.
Characteristics of Good Scenarios
A scenario test has five key characteristics:
a story that is motivating, credible, complex and easy to evaluate. The primary objective of test case design is to derive a set of tests that have the highest likelihood of discovering defects in the software. Test cases are designed based on the analysis of requirements, use cases, and technical specifications, and they should be developed in parallel with the software development effort.
A test case describes a set of actions to be performed and the results that are expected. A test case should target specific functionality or aim to exercise a valid path through a use case. This should include invalid user actions and illegal inputs that are not necessarily listed in the use case. How a test case is described depends on several factors, e.g. the number of test cases, the frequency with which they change, the level of automation employed, the skill of the testers, the selected testing methodology, staff turnover, and risk.
Tuesday, December 7, 2010
What comprises Test Ware Development : Test Plan - Acceptance Test Plan (ATP)
The client performs the acceptance testing at their own site. It will be very similar to the system test performed by the software development unit. Since the client is the one who decides the format and testing methods as part of acceptance testing, there is no specific clue about the way they will carry out the testing, but it will not differ much from the system testing. Assume that all the rules which are applicable to the system test can be applied to acceptance testing also.
Since this is just one level of testing done by the client for the overall product, it may include test cases including the unit and integration test level details.
Test Plan Outline
- BACKGROUND: This item summarizes the functions of the application system and the tests to be performed.
- INTRODUCTION
- ASSUMPTIONS: Indicates any anticipated assumptions which will be made while testing the application.
- TEST ITEMS: List each of the items(programs) to be tested.
- FEATURES TO BE TESTED: List each of the features(functions or requirements) which will be tested or demonstrated by the test.
- FEATURES NOT TO BE TESTED: Explicitly lists each feature, function, or requirement which will not be tested and why not.
- APPROACH: Describe the data flows and test philosophy. This section also mentions all the approaches which will be followed at the various stages of the test execution.
- ITEM PASS/FAIL CRITERIA: Itemized list of expected output and tolerances.
- SUSPENSION/RESUMPTION CRITERIA: Must the test run from start to finish? Under what circumstances it may be resumed in the middle? Establish check-points in long tests.
- TEST DELIVERABLES: What, besides software, will be delivered? It includes test report and test software.
- TESTING TASKS: It includes functional and administrative tasks.
- ENVIRONMENTAL NEEDS: It includes security clearance, office space and equipment and hardware/software requirements.
- RESPONSIBILITIES: It covers who is responsible for the tasks in section 10, and what the user does.
- STAFFING AND TRAINING
- SCHEDULE
- RESOURCES
- RISKS AND CONTINGENCIES
- APPROVALS
The schedule details of the various test pass such as unit tests, integration tests, system tests should be clearly mentioned along with estimated efforts.
Monday, December 6, 2010
What comprises Test Ware Development : Test Plan - System test Plan
The system test plan is the overall plan carrying out the system test level activities. In the system test, apart from testing the functional aspects of the system, there are some special testing activities carried out, such as stress testing etc. The following are the sections present in system test plan:
- What is to be tested?
This section defines the scope of system testing, very specific to the project. Normally, the system testing is based on the requirements, and all the requirements are to be verified within the scope of the system testing. This covers the functionality of the product. Apart from this, any special testing to be performed is also stated here.
- Functional groups and the sequence
The requirements can be grouped in terms of the functionality. Based on this, there may be priorities also among the functional groups. For example, in a banking application, anything related to customer accounts can be grouped into one area, anything related to inter-branch transactions may be grouped into one area etc.
- Special Testing Methods
This covers the different special tests like load/volume testing, stress testing, interoperability testing etc. These tests are to be done based on the nature of the product, and it is not mandatory that every one of these special tests be performed for every product.
Apart from above sections, the following sections are also addressed:
- System Testing Tools
- Priority of functional groups
- Naming Convention for test cases
- Status reporting mechanism
- Regression test approach
- ETVX criteria
- Build/Refresh Criteria
Saturday, December 4, 2010
What comprises Test Ware Development : Test Plan - Integration Test Plan
The integration test plan is the overall plan for carrying out the activities in the integration test level, which contains the following sections:
- What is to be tested?
This section clearly specifies the kinds of interfaces that fall under the scope of testing: internal and external interfaces, along with their requests and responses, are to be explained. This need not go deep into technical details, but should describe the general approach of how the interfaces are triggered.
- Sequence of Integration
When there are multiple modules present in an application, the sequence in which they are to be integrated will be specified in this section. In this, the dependencies between the modules play a vital role. If a unit B has to be executed, it may need the data that is fed by unit A and unit X. In this case, the units A and X have to be integrated first, and then, using that data, the unit B has to be tested. This has to be stated for the whole set of units in the program. Given this correctly, the testing activities will slowly build the product, unit by unit, and then integrate them.
- List of modules and interface functions
There may be N number of units in the application, but only the units that are going to communicate with each other are tested in this phase. If the units are designed in such a way that they are mutually independent, then the interfaces do not come into the picture. This is almost impossible in any system, as the units have to communicate with other units in order to get different types of functionalities executed. In this section, we need to list the units, and the purpose for which each unit talks to the others needs to be mentioned. This will not go into technical aspects; at a higher level, this has to be explained in plain English.
Apart from above sections, it also includes:
- Integration Testing Tools
- Priority of Program Interfaces
- Naming Convention for test cases
- Status reporting mechanism
- Regression test approach
- ETVX criteria
- Build/Refresh criteria.
What comprises Test Ware Development : Test Plan - Unit Test Plan
The test strategy identifies multiple test levels, which are going to be performed for the project. Activities at each level must be planned well in advance and it has to be formally documented. Based on the individual plans only, the individual test levels are carried out.
The plans are to be prepared by experienced people only. In all test plans, the (ETVX) Entry-Task-Validation-Exit criteria are to be mentioned. Entry means the entry point to that phase. Task is the activity that is performed. Validation is the way in which the progress and correctness and compliance are verified for that phase. Exit tells the completion criteria of that phase, after the validation is done.
ETVX is a modeling technique for developing worldly and atomic level models. It is a task based model where the details of each task are explicitly defined in a specification table against each phase i.e. Entry, Exit, Task, Feedback In, Feedback Out, and measures.
There are two types of cells: unit cells and implementation cells. The implementation cells are basically unit cells containing further tasks. A purpose is also stated, and the viewer of the model may also be defined, e.g. management or the customer.
Types of Test Plan
Unit Test Plan (UTP)
The unit test plan is the overall plan to carry out the unit test activities. The lead tester prepares it and it will be distributed to the individual tester, which contains the following sections:
- What is to be tested?
The unit test plan must clearly specify the scope of unit testing. In this, normally the basic input/output of the units along with their basic functionality will be tested. In this case, mostly the input units will be tested for the format, alignment, accuracy and the totals.
- Sequence of testing
The sequence of test activities that are to be carried out in this phase is to be listed in this section. This includes whether to execute positive test cases first or negative test cases first, whether to execute test cases based on priority, whether to execute test cases based on test groups, etc.
- Basic functionality of units
The independent functionalities of the units are tested which excludes any communication between the unit and other units. The interface part is out of scope of this test level.
Apart from these, the following sections are also addressed:
- Unit testing tools
- Priority of program units
- Naming convention for test cases
- Status reporting mechanism
- Regression test approach
- ETVX criteria
Friday, December 3, 2010
What comprises Test Ware Development : Test Strategy Continued...
Test ware development is the key role of the testing team. Test ware comprises:
Test Strategy
Before starting any testing activities, the team lead has to think through the approach and arrive at a strategy. The following areas are addressed in the test strategy document:
- Test Groups: From the list of requirements, we can identify related areas whose functionality is similar. These areas are the test groups. We need to identify the test groups based on the functionality aspect.
- Test Priorities: Among test cases, we need to establish priorities. While testing software projects, certain test cases will be treated as the most important ones, and if they fail, the product cannot be released. Other test cases may be treated as cosmetic, and if they fail, we can release the product without much compromise on functionality. These priority levels must be clearly stated. (A small sketch of how groups, priorities and requirement traceability can be tagged directly in test code appears after this list.)
- Test Status Collection and Reporting: When test cases are executed, the test leader and the project manager must know exactly where we stand in terms of testing activities. To know where we stand, the inputs from the individual testers must reach the test leader: which test cases were executed, how long they took, how many passed and how many failed, and so on. How often the status is collected must also be clearly stated.
- Test Records Maintenance: When the test cases are executed, we need to keep track of the execution details: when each was executed, who did it, how long it took, what the result was, and so on. This data must be available to the test leader and the project manager, along with all the team members, in a central location. It may be stored in a specific directory on a central server, and the document must clearly state the locations and directories.
- Requirements Traceability Matrix: Ideally, every piece of software developed must satisfy the set of requirements completely. So, right from design, each requirement must be addressed in every single document in the software process, including the HLD, LLD, source code, unit test cases, integration test cases and system test cases. In this matrix, the rows hold the requirements and each document gets a separate column, so in every cell we state which section of that document addresses the requirement; all the cells must have valid section IDs or names filled in.
- Test Summary: The senior management may like to have a test summary on a weekly or monthly basis; if the project is very critical, they may need it on a daily basis. This section states what kind of test summary reports will be produced for senior management, along with their frequency.
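As a hedged sketch of how test groups, priorities and requirement traceability could be encoded directly in the test ware, the example below uses pytest markers. The requirement IDs (REQ-101, REQ-105), the group name and the two stub functions are invented purely for illustration.

# Sketch: tagging test group, priority and requirement traceability with
# pytest markers, so status reports can be filtered by group or priority.
# Requirement IDs, group names and the stub functions are invented.
import pytest


def authenticate(user, password):
    # Stub standing in for the real unit under test.
    return user == "alice" and password == "correct-password"


def login_button_label():
    # Stub standing in for the real UI query.
    return "Sign in"


@pytest.mark.group("login")
@pytest.mark.priority("critical")        # release-blocking if it fails
@pytest.mark.requirement("REQ-101")
def test_valid_user_can_log_in():
    assert authenticate("alice", "correct-password") is True


@pytest.mark.group("login")
@pytest.mark.priority("cosmetic")        # product can still ship if this fails
@pytest.mark.requirement("REQ-105")
def test_login_button_label():
    assert login_button_label() == "Sign in"

Registering the group, priority and requirement markers under the markers option in pytest.ini keeps pytest from warning about unknown marks, and the requirement tags give a simple hook for building a traceability view from the collected tests.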
Posted by Sunflower at 12/03/2010 01:42:00 PM 0 comments
Labels: Maintenance, matrix, Priority, Records, Requirements, SDLC, Software testing, Strategy, Summary, Test cases, Test Groups, Test Strategy, Test Summary, Test ware development, Traceability table
Thursday, December 2, 2010
What comprises Test Ware Development : Test Strategy
Test ware development is the key role of the testing team. Test ware comprises:
Test Strategy
Before starting any testing activities, the team lead has to think through the approach and arrive at a strategy. The strategy describes the approach to be adopted for carrying out test activities, including the planning activities. It is a formal document, the very first document regarding the testing area, and is prepared at a very early stage in the software development life cycle. It must provide the generic test approach as well as specific details for the project.
The following areas are addressed in the test strategy document:
- Test Levels: The test strategy must state which test levels will be carried out for that particular project. Unit, integration and system testing will be carried out in all projects, but many times integration and system testing may be combined.
- Roles and Responsibilities: The roles and responsibilities of the test leader, individual testers and project manager are to be clearly defined at the project level. The review and approval mechanism for test plans and other test documents must be stated here. We also have to state who reviews the test cases and test records, and who approves them. The documents may go through a series of reviews or multiple approvals, and all of this is mentioned in this section.
- Testing Tools: Any testing tools to be used at the different test levels must be clearly identified, including the justification for using each tool at that particular level.
- Risks and Mitigation: Any risks that will affect the testing process must be listed along with their mitigation. By documenting the risks in this document, we can anticipate them well ahead of time and proactively prevent them from occurring. Sample risks are dependency on completion of coding done by sub-contractors, capability of testing tools, etc.
- Regression Test Approach: When a particular problem is identified, the program will be debugged and a fix applied. To make sure the fix works, the program will be tested again for that criterion. Regression testing makes sure that one fix does not create other problems in that program or in any other interface, so a set of related test cases may have to be repeated to verify that nothing else is affected by a particular fix. How this is carried out is elaborated in this section; a simplified sketch of one way to map fixes to the test cases that must be repeated is shown below.
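The following is a deliberately simplified sketch of such a mapping: each module touched by a fix is mapped to the related test cases that must be re-run. All module and test-case names are hypothetical.

# Simplified regression-selection sketch: given the modules touched by a
# fix, pick the related test cases that must be re-run. Names are hypothetical.
REGRESSION_MAP = {
    "billing": ["TC_BILL_001", "TC_BILL_007", "TC_ORDER_003"],
    "login":   ["TC_LOGIN_001", "TC_LOGIN_004"],
    "reports": ["TC_REP_002"],
}


def select_regression_tests(changed_modules):
    """Return the de-duplicated, sorted set of test cases to repeat after a fix."""
    selected = set()
    for module in changed_modules:
        selected.update(REGRESSION_MAP.get(module, []))
    return sorted(selected)


if __name__ == "__main__":
    # A fix touched billing and login, so both related groups are repeated.
    print(select_regression_tests(["billing", "login"]))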
Posted by Sunflower at 12/02/2010 12:13:00 PM 0 comments
Labels: Components, Document, Process, Software, Strategy, System Testing, Test Levels, Test Strategy, Test ware development, Tests
Wednesday, December 1, 2010
Understanding Rapid Testing and Rapid Testing Practice
Rapid testing is testing software faster than usual without compromising on the standards of quality. It is a technique for testing as thoroughly as is reasonable within the constraints. It looks at testing as a process of heuristic inquiry, and logically it should be based on exploratory testing techniques.
Although most projects undergo continuous testing, it does not usually produce the information required in situations where an instantaneous assessment of the product's quality is needed at a particular moment. In most cases testing is scheduled just prior to launch, and conventional testing techniques often cannot be applied to software that is incomplete or subject to constant change.
It can be said that rapid testing has a structure that is built on a foundation of four components namely:
- People
- Integrated test process
- Static testing
- Dynamic testing
There is a need for people who can handle the pressure of tight schedules. They need to be productive contributors even during the early phases of the development life cycle. It should also be noted that dynamic testing lies at the heart of the software testing process, and the planning, design, development and execution of dynamic tests must be performed well for any testing process to be efficient.
Rapid Testing Practice
It would help if we scrutinized each phase of the development process to see how the efficiency, speed and quality of testing can be improved, keeping in mind:
- Actions that the test team can take to prevent defects from escaping.
- Actions that the test team can take to manage risk to the development schedule.
- The information that can be obtained from each phase so that the test team can speed up activities.
If a test process is designed around the answers to these factors, both the speed of testing and the quality of the final product should be enhanced.
Some of the aspects that can be checked during rapid testing are listed below; a small sketch for the first item, link integrity, follows the list:
- Test for link integrity.
- Test for accessibility for disabled users.
- Test for default settings.
- Check the navigation.
- Check for input constraints by injecting special characters at the sources of data.
- Run multiple instances.
- Check for interdependencies and stress them.
- Test for consistency of design.
- Test for compatibility.
- Test for usability.
- Check for the possible variabilities and attack them.
- Go for possible stress and load tests.
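For the first item, link integrity, a minimal sketch could look like the following. It assumes the third-party requests and beautifulsoup4 packages; the start URL is a placeholder, and this is an illustration of the idea, not a full crawler.

# Minimal link-integrity sketch: fetch one page and report links that do
# not answer with a success status. Assumes the third-party packages
# requests and beautifulsoup4; the start URL is a placeholder.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin


def check_links(start_url):
    page = requests.get(start_url, timeout=10)
    soup = BeautifulSoup(page.text, "html.parser")
    broken = []
    for anchor in soup.find_all("a", href=True):
        url = urljoin(start_url, anchor["href"])
        try:
            response = requests.head(url, allow_redirects=True, timeout=10)
            if response.status_code >= 400:
                broken.append((url, response.status_code))
        except requests.RequestException as error:
            broken.append((url, str(error)))
    return broken


if __name__ == "__main__":
    for url, problem in check_links("https://example.com"):
        print(f"Broken link: {url} -> {problem}")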
Posted by Sunflower at 12/01/2010 01:44:00 PM 0 comments
Labels: Application, Assessment, Continuous, Development, Factors, Fast, Practices, Process, Quality, Rapid Testing, Schedule, Software, Software testing, Structure