Wednesday, December 29, 2010

How are the metrics determined for your application?

The objective of metrics is not only to measure but also to understand progress toward the organizational goal. The parameters for determining the metrics for an application are:

- Duration.
- Complexity.
- Technology Constraints.
- Previous experience in the same technology.
- Business domain.
- Clarity of the scope of the project.

One interesting and useful approach to arriving at suitable metrics is the Goal-Question-Metric (GQM) technique.
The GQM model consists of three layers: a goal, a set of questions, and a set of corresponding metrics. It is thus a hierarchical structure starting with a goal (specifying the purpose of measurement, the object to be measured, the issue to be measured, and the viewpoint from which the measure is taken).
The goal is refined into several questions that usually break down the issue into its major components. Each question is then refined into metrics, some of them objective, some of them subjective. The same metric can be used in order to answer different questions under the same goal. Several GQM models can also have questions and metrics in common, making sure that when the measure is actually taken, the different viewpoints are taken into account correctly.
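As a hypothetical illustration of a GQM breakdown (the goal and metrics below are invented for the example):
Goal: improve the reliability of the delivered product, from the project manager's viewpoint.
Questions: How many defects escape each test stage? Is the defect trend improving from release to release?
Metrics: defects found per stage, defect removal efficiency per stage, number of defects reported post-delivery.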

Metrics are determined once the requirements are understood at a high level. At this stage, the team size and project size must be known to the extent that the project is at a "defined" stage.


Tuesday, December 28, 2010

What are different types of general metrics? Continued...

- Design to Requirements Traceability
This metric provides an analysis of the number of design elements matching requirements vs. the number of design elements not matching requirements. It is calculated at stage completion, from the software requirements specification and the detailed design.

Formula:
Number of design elements.
Number of design elements matching requirements.
Number of design elements not matching requirements.

- Requirements to Test case Traceability
This metric provides an analysis of the number of requirements tested vs. the number of requirements not tested. It is calculated at stage completion, from the software requirements specification, the detailed design, and the test case specification.

Formula:
Number of requirements.
Number of requirements tested.
Number of requirements not tested.

- Test cases to Requirements Traceability
This metric provides an analysis of the number of test cases matching requirements vs. the number of test cases not matching requirements. It is calculated at stage completion, from the software requirements specification and the test case specification.

Formula:
Number of requirements.
Number of test cases with matching requirements.
Number of test cases not matching requirements.
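For example, if 120 test cases were written and 110 of them match requirements while 10 do not, then about 92% of the test cases trace back to requirements; the unmatched cases point to either redundant tests or undocumented requirements.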

- Number of defects in coding found during testing by severity
This metric provides an analysis of the number of defects by severity. It is calculated at stage completion, from the bug report.

Formula:
Number of defects.
Number of defects of low severity.
Number of defects of medium severity.
Number of defects of high severity.

- Defects - stage of origin, detection, removal
This metric provides an analysis of the number of defects by the stage of origin, detection, and removal. It is calculated at stage completion, from the bug report.

Formula:
Number of defects.
Stage of origin.
Stage of detection.
Stage of removal.

- Defect Density
This metric provides an analysis of the number of defects relative to the size of the work product. It is calculated at stage completion, from the defects list and the bug report.
Formula:
Defect Density = [Total number of defects / Size (FP or KLOC)] * 100
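As the formula is written, the result reads as defects per 100 units of size. A minimal sketch in C, with invented figures:

#include <stdio.h>

/* Defect density per the formula above: defects per 100 units of size
   (size in function points or KLOC). */
double defect_density(int total_defects, double size) {
    return ((double)total_defects / size) * 100.0;
}

int main(void) {
    /* Hypothetical figures: 45 defects in a 30 KLOC work product. */
    printf("Defect density: %.1f defects per 100 KLOC\n",
           defect_density(45, 30.0));
    return 0;
}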


Monday, December 27, 2010

What are different types of general metrics? Continued...

- Review Effectiveness
This metric indicates the effectiveness of the review process. It is calculated at the completion of reviews or at the completion of the testing stage, from the peer review report, the peer review defect list, and the bugs reported by testing.
Formula:
Review Effectiveness = [(Number of defects found by reviews)/(Number of defects found by reviews + Number of defects found by testing)] * 100
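For example, if reviews find 40 defects and testing finds another 10, review effectiveness is 40/(40+10) * 100 = 80%.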

- Total number of defects found by reviews
This metric indicates the total number of defects identified by the review process. The defects are further categorized as high, medium, or low. It is calculated at the completion of reviews, from the peer review report and the peer review defect list.

Formula: Total number of defects identified by reviews in the project.

- Defect vs Review Effort - Review Yield
This metric relates the effort expended on reviews in each stage to the defects found. It is calculated at the completion of reviews, from the peer review report and the peer review defect list.

Formula : Defects/Review Effort

- Requirements Stability Index (RSI)
This metric gives the stability factor of the requirements over a period of time, after the requirements have been mutually agreed and baselined between the company and the client. It is calculated at stage completion and project completion, from change requests and the software requirements specification.

Formula:
RSI = 100 * [(Number of baselined requirements - Number of changes in requirements after the requirements are baselined) / (Number of baselined requirements)]
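For example, if 200 requirements are baselined and 10 of them change afterwards, RSI = 100 * (200 - 10)/200 = 95%.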

- Change Requests by State
This metric provides an analysis of the state of change requests. It is calculated at stage completion, from change requests and the software requirements specification.
Formula:
Number of accepted requirements, Number of rejected requirements, Number of postponed requirements.

- Requirements to Design Traceability
This metric provides an analysis of the number of requirements designed vs. the number of requirements not designed. It is calculated at stage completion, from the software requirements specification and the detailed design.
Formula:
Total number of requirements, Number of requirements designed, Number of requirements not designed.


Sunday, December 26, 2010

What are different types of general metrics? Continued...

- Overall Test Effectiveness (OTE)
This metric indicates the effectiveness of the testing process in identifying defects for a given project during the testing stage. It is calculated on a monthly basis and after build completion or project completion, from test reports and customer-identified defects.

Overall Test Effectiveness (OTE) = [(Number of defects found during testing)/(Number of defects found during testing + Number of defects found post-delivery)] * 100
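For example, 90 defects found during testing and 10 reported after delivery give OTE = 90/(90+10) * 100 = 90%.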

- Effort Variance (EV)
This metric gives the variation of actual effort vs. estimated effort. It is calculated for each project stage, at stage completion as identified in the SPP, from the estimation sheets (estimated values in person hours for each activity within a given stage) and the actual person hours worked.

EV = [(Actual person hours - Estimated person hours)/Estimated person hours] * 100

- Cost Variance (CV)
This metric gives the variation of actual cost vs. estimated cost. It is calculated for each project stage, at stage completion, from the estimation sheets (estimated values in dollars or rupees for each activity within that stage) and the actual cost incurred.

CV = [(Actual Cost-Estimated Cost)/Estimated Cost] * 100

- Size Variance
This metric gives the variation of actual size vs. estimated size. It is calculated at stage and project completion, from the estimation sheets (estimated values in function points or KLOC) and the actual size.

Size Variance = [(Actual Size-Estimated Size)/Estimated Size] * 100
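All three variance metrics share the same shape, so a single helper covers them. A minimal sketch in C, with invented stage figures:

#include <stdio.h>

/* Generic variance used by EV, CV and Size Variance:
   ((actual - estimated) / estimated) * 100 */
double variance_pct(double actual, double estimated) {
    return (actual - estimated) / estimated * 100.0;
}

int main(void) {
    /* Hypothetical stage figures. */
    printf("Effort variance: %.1f%%\n", variance_pct(460.0, 400.0));    /* person hours */
    printf("Cost variance:   %.1f%%\n", variance_pct(10500.0, 10000.0)); /* currency units */
    printf("Size variance:   %.1f%%\n", variance_pct(27.0, 25.0));      /* KLOC */
    return 0;
}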

- Productivity on Review Preparation - Technical
This metric indicates the effort spent on preparation for reviews. It is calculated separately for each language used in the project, monthly or after build completion, from the peer review report.

For every language used, calculate:
(KLOC or FP) per hour, per language, where language = C, C++, Java, XML, etc.

- Number of defects found per review meeting
This metric indicates the number of defects found during review meetings across the various stages of the project. It is calculated monthly or after the completion of reviews, from the peer review report and the peer review defect list.

Formula : Number of defects/Review Meeting

- Review Team Efficiency (Review Team Size vs. Defects Trend)
This metric indicates the review team size and the defects trend, which helps to determine the efficiency of the review team. It is calculated monthly and at the completion of reviews, from the peer review report and the peer review defect list.

Formula : Review team size to the defects trend.


Saturday, December 25, 2010

What are different types of metrics?

Software Process Metrics are measures which provide information about the performance of the development process itself. The purposes of software process metrics are:
- to provide an indicator of the ultimate quality of the software being produced.
- to assist the organization in improving its development process by highlighting areas of inefficiency or error-prone areas of the process.

Software Product Metrics are measures of some attributes of the software product. The purpose of software product metrics is to assess the quality of the output.

What are the most general metrics?


Requirements Management
Metrics Collected:
- requirements by state - accepted, rejected, postponed.
- number of baselined requirements.
- number of requirements modified after baselining.
Derived Metrics:
- Requirements Stability Index (RSI)
- Requirements to Design Traceability

Project Management
Testing and Review
Metrics Collected:
- Number of defects found by reviews.
- Number of defects found by testing.
- Number of defects found by client.
- Total number of defects found by reviews.
Derived Metrics:
- Overall review effectiveness (ORE)
- Overall test effectiveness.

Peer Reviews
Metrics Collected:
- KLOC/FP per person hour for preparation.
- KLOC/FP per person hour for review meeting.
- Number of pages/hour reviewed during preparation.
- Average number of defects found by Reviewer during preparation.
- Number of pages/hour reviewed during review meeting.
- Average number of defects found by Reviewer during review meeting.
- Review team size vs defects.
- Review speed vs defects.
- Major defects found during review meeting.
- Defects vs Review Effort.
Derived Metrics:
- Review effectiveness.
- Total number of defects found by reviews for a project.


Friday, December 24, 2010

What is a Metric? What are different metrics used for testing?

A metric is a measure to quantify software, software development resources, and/or the software development process. A metric can quantify any of the following factors:
- Schedule
- Work Effort
- Product Size
- Project Status
- Quality Performance
Metrics enable estimation of future work. Consider the case of testing: whether the product is fit for shipment or delivery depends on the rate at which defects are found and fixed. The count of defects found and fixed is one kind of metric. It is beneficial to classify metrics according to their usage.
Defects are analyzed to identify the major causes of defects and the phase that introduces the most defects. This can be achieved by performing a Pareto analysis of defect causes and defect-introduction phases. The main requirement for any of these analyses is software defect metrics.
A defect metric is represented as a percentage. It is calculated at stage completion or project completion, from bug reports and peer review reports.

Few of the Defect Metrics are:



- Defect Density: (Number of defects reported by SQA + Number of defects reported by peer review)/ Actual Size.
The size can be in KLOC, SLOC, or function points, whichever method the organization uses to measure the size of the software product. The SQA team is considered to be part of the software testing team.

- Test Effectiveness: t/(t+Uat), where t = total number of defects reported during testing and Uat = total number of defects reported during user acceptance testing. User acceptance testing is generally carried out using the acceptance test criteria according to the acceptance test plan.
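For example, if testing reports 80 defects and user acceptance testing reports a further 20, test effectiveness is 80/(80+20) = 0.8.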

- Defect removal Efficiency:
(Total number of Defects Removed / Total Number of Defects Injected) * 100 at various stages of SDLC. This metric will indicate the effectiveness of the defect identification and removal in stages for a given project.

Requirements: DRE = [(Requirements defects corrected during requirements phase)/(Requirement Defects injected during requirements phase)] * 100
Design: DRE = [(Design defects corrected during design phase)/(Defects identified during requirements phase + Defects injected during design phase)] * 100
Code: DRE = [(Code defects corrected during coding phase)/(Defects identified during requirements phase + Defects identified during design phase + Defects injected during coding phase)] * 100
Overall: DRE = [(Total defects corrected at all phases before delivery) / (Total defects detected at all phases before and after delivery)] * 100
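A minimal sketch in C of these calculations (the defect counts are invented for the example):

#include <stdio.h>

/* Defect Removal Efficiency: defects removed in a phase divided by
   defects present in that phase (escaped from earlier phases plus
   injected in this phase), as a percentage. */
double dre(int removed, int present) {
    return (double)removed / (double)present * 100.0;
}

int main(void) {
    /* Hypothetical: 40 requirements defects injected, 30 corrected
       during the requirements phase. */
    printf("Requirements DRE: %.1f%%\n", dre(30, 40)); /* 75.0%% */
    /* Hypothetical overall: 95 defects corrected before delivery out
       of 100 detected before and after delivery. */
    printf("Overall DRE:      %.1f%%\n", dre(95, 100)); /* 95.0%% */
    return 0;
}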

- Defect Distribution: Percentage of Total defects distributed across requirements analysis, design reviews, code reviews, unit tests, integration tests, system tests, user acceptance tests, review by project leads and project managers.


Thursday, December 23, 2010

What is a defect and what is defect management ?

Defects determine the effectiveness of the testing we do. If there were no defects, it would directly imply that we do not have a job. There are two points worth considering here: either the developer is so strong that no defects arise, or the test engineer is weak. In many situations, the second proves correct. This implies that we lack the knack.

For a test engineer, a defect is:
- any deviation from specification.
- anything that causes user dissatisfaction.
- incorrect output.
- software does not do what it is intended to do.

Software is said to have a bug if its features deviate from specifications.
Software is said to have a defect if it has unwanted side effects.
Software is said to have an error if it gives incorrect output.

Categories of Defects
All software defects can be broadly categorized into the below mentioned types:
- errors of commission : something wrong is done.
- errors of omission : something left out by accident.
- errors of clarity and ambiguity : different interpretations.
- errors of speed and capacity.

Types of defects that can be identified in different software applications are conceptual bugs, coding bugs, integration bugs, user interface bugs, functionality, communication, command structure, missing commands, performance, output, error handling errors, boundary-related errors, calculation errors, initial and later states, control flow errors, errors in handling data, race condition errors, load conditions errors, hardware errors, source and version control errors, documentation errors and testing errors.


Wednesday, December 22, 2010

Performance Tests Precede Load Tests

The best time to execute performance tests is at the earliest opportunity after the contents of a detailed load test plan have been determined. Developing performance test scripts at such an early stage provides an opportunity to identify and remediate serious performance problems, and to manage expectations, before load testing commences. For example, management expectations of response time for a new web system that replaces a block mode terminal application are often articulated as 'sub second'. However, a web system, in a single screen, may perform the business logic of several legacy transactions and may take two seconds. Rather than waiting until the end of a load test cycle to inform the stakeholders that the test failed to meet their formally stated expectations, a little education up front may be in order. Performance tests provide a means for this education.

Another key benefit of performance testing early in the load testing process is the opportunity to fix serious performance problems before even commencing load testing. When performance testing of a 'customer search' screen yields response times of more than ten seconds, there may well be a missing index, or poorly constructed SQL statement. By raising such issues prior to commencing formal load testing, developers and DBAs can check that indexes have been set up properly.

Performance problems that relate to the size of data transmissions also surface in performance tests when low-bandwidth connections are used. For example, some data, such as images and "terms and conditions" text, are not optimized for transmission over slow links.


Tuesday, December 21, 2010

Pre-requisites for Performance Testing

A performance test is not valid until the data in the system under test is realistic and the software and configuration are production-like.

- Production Like Environment
Performance tests need to be executed on equipment of the same specification as production if the results are to have integrity. Lightweight transactions that do not require significant processing can be tested, but only substantial deviations from expected transaction response times should be reported. Low-bandwidth performance testing of high-bandwidth transactions, where communications processing contributes most of the response time, can also be conducted.

- Production Like Configuration
The configuration of each component needs to be production like. For example: database configuration and operating system configuration. While system configuration will have less impact on performance testing than load testing, only substantial deviations from expected transaction response times should be reported.

- Production Like Version
The version of software to be tested should closely resemble the version to be used in production. Only major performance problems such as missing indexes and excessive communications should be reported with a version substantially different from the proposed production version.

- Production Like Access
If clients will access the system over a WAN, dial up modems, DSL, ISDN, etc. then testing should be conducted using each communication access method. Only tests using production like access are valid.

- Production Like Data
All relevant tables in the database need to be populated with a production-like quantity and a realistic mix of data.


Targeted Infrastructure Tests and Performance Testing

TARGETED INFRASTRUCTURE TESTS


Targeted infrastructure tests are isolated tests of each layer and/or component in an end-to-end application configuration.
- They include the communications infrastructure, load balancers, web servers, application servers, crypto cards, Citrix servers, etc., allowing identification of any performance issues that would fundamentally limit the overall ability of the system to deliver at a given performance level.
- Each test can be quite simple.
- Targeted infrastructure testing separately generates load on each component and measures the response of each component under load.
- Different infrastructure tests require different protocols.

PERFORMANCE TESTS


These are tests that determine the end-to-end timing of various time-critical business processes and transactions, while the system is under low load, but with a production-sized database.
- This sets best possible performance expectation under a given configuration of infrastructure.
- It also highlights very early in the testing process if changes need to be made before load testing should be undertaken.
- For example, performance testing would highlight a slow customer search transaction, which could be remediated prior to a full end-to-end load test.
- The best practice is to develop performance tests with an automated tool such as WinRunner, so that response times from a user perspective can be measured in a repeatable manner with a high degree of precision. The same test scripts can later be re-used in a load test and the results compared back to the original performance tests.
- A key indicator of the quality of a performance test is repeatability. Re-executing a performance test multiple times should give the same set of results each time. If the results are not the same each time, then differences in results from one run to the next cannot be attributed to changes in the application, configuration, or environment.


Monday, December 20, 2010

How does stress test execute?

A stress test starts with a load test, and then additional activity is gradually increased until something breaks. An alternative type of stress test is a load test with sudden bursts of additional activity. The sudden bursts generate substantial activity as sessions and connections are established, whereas a gradual ramp-up in activity pushes various values past fixed system limitations.
Ideally, stress tests should incorporate two runs, one with burst type activity and the other with gradual ramp-up to ensure that the system under test will not fail catastrophically under excessive load. System reliability under severe load should not be negotiable and stress testing will identify reliability issues that arise under severe levels of load.
An alternative, or supplemental, stress test is commonly referred to as a spike test, where a single short burst of concurrent activity is applied to a system. Such tests typically simulate extreme activity where a countdown situation exists; for example, a system that will not take orders for a new product until a particular date and time. If demand is very strong, then many users will be poised to use the system the moment the countdown ends, creating a spike of concurrent requests and load.


Saturday, December 18, 2010

Overview of Stress Testing and its Focus..

Stress tests determine the load under which a system fails, and how it fails. This is in contrast to load testing, which attempts to simulate anticipated load. It is important to know in advance whether a stress situation will result in catastrophic system failure, or whether everything just goes really slow. There are various varieties of stress tests, including spike, stepped, and gradual ramp-up tests. Catastrophic failures require restarting various infrastructure and contribute to downtime, a stressful environment for support staff and managers, as well as possible financial losses. If a major performance bottleneck is reached, then system performance will usually degrade to a point that is unsatisfactory, but performance should return to normal when the excessive load is removed.
Before conducting a stress test, it is usually advisable to conduct targeted infrastructure tests on each of the key components in the system. A variation on targeted infrastructure tests would be to execute each one as a mini stress test.

What is the focus of stress tests?


In a stress event, it is most likely that many more connections will be requested per minute than under normal levels of expected peak activity. In many stress situations, the actions of each connected user will not be typical of actions observed under normal operating conditions. This is partly due to the slow response and partly due to the root cause of the stress event.

If we take the example of a large holiday resort web site, normal activity will be characterized by browsing, room searches, and bookings. If a national online news service posted a sensational article about the resort and included a URL in the article, then the site may be subjected to a huge number of hits, but most of the visits would probably be a quick browse. It is unlikely that many of the additional visitors would search for rooms, and it would be even less likely that they would make bookings. However, if instead of a news article, a national newspaper advertisement erroneously understated the price of accommodation, then there may well be an influx of visitors who clamour to book a room, only to find that the price did not match their expectations.

In both of the above situations, the normal traffic would be increased with traffic of a different usage profile. So, a stress test design would incorporate a load test as well as additional virtual users running a special series of stress navigations and transactions.
For the sake of simplicity, one can just increase the number of users using the business processes and functions coded in the load test. However, one must then keep in mind that a system failure with that type of activity may differ from the type of failure that may occur if a special series of stress navigations were utilized for stress testing.


Friday, December 17, 2010

What is Long Session Soak Testing ?

When an application is used for long periods of time each day, the standard soak-test approach should be modified, because the test driver is not logins and transactions per day, but transactions per active user for each user each day. This type of situation occurs in internal systems, such as ERP and CRM systems, where users log in and stay logged in for many hours, executing a number of business transactions during that time. A soak test for such a system should emulate multiple days of activity in a compacted time frame rather than just pump multiple days' worth of transactions through the system.

Long session soak tests should run with realistic user concurrency, but the focus should be on the number of transactions processed. VUGen scripts used in long session soak testing may need to be more sophisticated than short session scripts, as they must be capable of running a long series of business transactions over a prolonged period of time.

The duration of most soak tests is often determined by the available time in the test lab. There are many applications that require extremely long soak tests. Any application that must run, uninterrupted for extended periods of time, may need a soak test to cover all of the activity for a period of time that is agreed to by the stakeholders. Most systems have a regular maintenance window, and the time between such windows is usually a key driver for determining the scope of soak test.


Thursday, December 16, 2010

Overview of Soak testing.

Soak testing is running a system at high levels of load for prolonged periods of time. A soak test would normally execute several times more transactions in an entire day than would be expected in a busy day, to identify performance problems that appear only after a large number of transactions have been executed. Also, a system may stop working after a certain number of transactions have been processed, due to memory leaks or other defects. Soak tests provide an opportunity to identify such defects, whereas load tests and stress tests may not find such problems due to their relatively short duration. A soak test should run for as long as possible, given the limitations of the testing situation; for example, weekends are often an opportune time for a soak test.

Some typical problems identified during soak tests are:
- Serious memory leaks that would eventually result in memory crisis.
- Failure to close connections between tiers of a multi-tiered system under some circumstances which could stall some or all modules of the system.
- Failure to close database cursors under some conditions which would eventually result in the entire system stalling.
- Gradual degradation of response time of some functions as internal data structures become less efficient during a long test.

Apart from monitoring response time, it is also important to measure CPU usage and available memory. If a server process needs to be available for the application to operate, it is often worthwhile to record its memory usage at the start and end of the soak test. It is also important to monitor the internal memory usage of facilities such as Java virtual machines, where applicable.


Wednesday, December 15, 2010

Overview of Reporting on response time at various levels of load, Fail-over Tests, Fail-back Testing

REPORTING ON RESPONSE TIME AT VARIOUS LEVELS OF LOAD
Expected output from a load test often includes a series of response time measures at various levels of load. It is important when determining the response time at any particular level of load, that the system has run in a stable manner for a significant amount of time before taking measurements.
For example, a ramp-up to 500 users may take ten minutes, but another ten minutes may be required to let the system activity stabilize. Taking measurements over the next ten minutes would then give a meaningful result. The next measurement can be taken after ramping up to the next level and waiting a further ten minutes for stabilization and ten minutes for the measurement period and so on for each level of load requiring detailed response time measures.

FAIL-OVER TESTS
Failover tests verify redundancy mechanisms while the system is under load. This is in contrast to load tests, which are conducted under anticipated load with no component failure during the course of a test. For example, in a web environment, failover testing determines what will happen if multiple web servers are being used under peak anticipated load, and one of them dies.
Failover testing allows technicians to address problems in advance, in the comfort of a testing situation, rather than in the heat of a production outage. It also provides a baseline of failover capability, so that a sick server can be shut down with confidence, in the knowledge that the remaining infrastructure will cope with the surge of failover load.

FAIL-BACK TESTING
After verifying that a system can sustain a component outage, it is also important to verify that when the component comes back online, it is available to take load again and can sustain the resulting influx of activity.


Tuesday, December 14, 2010

How to set up a Load Test using LoadRunner?

The important thing to understand in executing such a load test is that the load is generated at a protocol level by the load generators, which run scripts developed with the VUGen tool. Transaction times derived from the VUGen scripts do not include processing time on the client PC, such as rendering (drawing parts of the screen) or execution of client-side scripts such as JavaScript. A WinRunner PC is utilized to measure end-user experience response times. Most load tests would not employ a WinRunner PC to measure actual response times from the client perspective, but it is highly recommended where complex and variable processing is performed on the desktop after data has been delivered to the client.

The LoadRunner controller is capable of displaying real-time graphs of response times as well as other measures such as CPU utilization on each of the components behind the firewall. Internal measures from products such as Oracle, Websphere are also available for monitoring during test execution.
After completion of a test, the analysis engine can generate a number of graphs and correlations to help locate any performance bottlenecks.

In a simplified load test, the controller communicates directly with a load generator that can communicate directly with the load balancer. No WinRunner PC is utilized to measure actual user experience. The collection of statistics from the various components is simplified, as there is no firewall between the controller and the web components being measured.


Monday, December 13, 2010

What is the purpose of load tests?

The purpose of any load test should be clearly understood and documented. A load test usually fits into one of the following categories:
- Quantification of risks :
Determine, through formal testing, the likelihood that system performance will meet the formally stated performance expectations of stakeholders, such as response time requirements under given levels of load. This is a traditional quality assurance (QA) type test. The load testing does not mitigate risk directly, but through identification and quantification of risk, it presents tuning opportunities and an impetus for remediation that will mitigate risk.

- Determination of minimum configuration : Determine, through formal testing, the minimum configuration that will allow the system to meet the formally stated performance expectations, so that extraneous hardware, software, and the associated cost of ownership can be minimized. This is a Business Technology Optimization (BTO) type test.

Basis for determining the business functions/processes to be included in a test


- High Frequency Transactions : The most frequently used transactions have the potential to impact the performance of all of the other transactions if they are not efficient.
- Critical Transactions : The more important transactions that facilitate the core objectives of the system should be included, as failure under load of these transactions has the greatest impact.
- Read Transactions : At least one READ ONLY transaction should be included, so that performance of such transactions can be differentiated from other more complex transactions.
- Update Transactions : At least one update transaction should be included so that performance of such transactions can be differentiated from other transactions.


What are Load Tests - End to End performance tests

Load tests are end-to-end performance tests under anticipated production load. The objective of such tests is to determine the response times for various time-critical transactions and business processes and to ensure that they are within documented expectations. Load tests also measure the capability of an application to function correctly under load, by measuring transaction pass/fail/error rates. An important variation of the load test is the network sensitivity test, which incorporates WAN segments into a load test, as most applications are deployed beyond a single LAN.

Load tests are major tests, requiring substantial input from the business, so that anticipated activity can be accurately simulated in a test environment. If the project has a pilot in production then logs from the pilot can be used to generate 'usage profiles' that can be used as part of the testing process, and can even be used to drive large portions of load test.

Load testing must be executed on today's production-size database, and optionally with a projected database. If some database tables will be much larger in some months' time, then load testing should also be performed against a projected database. It is important that such tests are repeatable and give the same results for identical runs. They may need to be executed several times in the first year of wide-scale deployment, to ensure that new releases and changes in database size do not push response times beyond prescribed service level agreements.


Friday, December 10, 2010

Define Unit Test Case, Integration Test Case, System Test case

UNIT TEST CASES (UTC)
The unit test cases are very specific to a particular unit. The basic functionality of the unit is to be understood based on the requirements and the design documents. Generally, the design document will provide a lot of information about the functionality of a unit. The design document has to be referred to before a unit test case is written, because it provides the actual functionality of how the system must behave for given inputs.

INTEGRATION TEST CASES
Before designing the integration test cases, the testers should go through the integration test plan. It will give a complete idea of how to write integration test cases. The main aim of integration test cases is to test multiple modules together. By executing these test cases, the user can find the errors in the interfaces between the modules.
The tester has to execute unit and integration test cases after coding.

SYSTEM TEST CASES
The system test cases are meant to test the system as per the requirements, end to end. This is basically to make sure that the application works as per the software requirements specification. In system test cases, the testers are supposed to act as end users. So, system test cases normally concentrate on the functionality of the system: inputs are fed through the system, and each and every check is performed using the system itself. Verifications done by checking database tables directly or by running programs manually are not encouraged in the system test.
The system test must focus on functional groups, rather than identifying the program units. When it comes to system testing, it is assumed that the interfaces
between the modules are working fine.
Ideally the test cases are nothing but a union of the functionalities tested in the unit testing and the integration testing. Instead of testing the system inputs and outputs through database or external programs, everything is tested through the system itself. In system testing, the tester will mimic as an end user and hence checks the application through its output.
Sometimes, some of the integration and unit test cases are repeated in system testing also, especially when the units were tested with test stubs rather than with the other real modules; during system testing those cases will be performed again with the real modules.


What are Test Case Documents and what is the general format of test cases?

The test cases will have a generic format as below:
- Test Case ID : The test case id must be unique across the application.
- Test case description : The test case description should be very brief.
- Test Prerequisite : The test pre-requisite clearly describes what should be present in the system, before the test executes.
- Test Inputs : The test input is nothing but the test data that is prepared to be fed to the system.
- Test Steps : The test steps are the step-by-step instructions on how to carry out the test.
- Expected Results : The expected results are the ones that say what the system must give as output or how the system must react based on the test steps.
- Actual results : The actual results are the ones that say outputs of the action for the given inputs or how the system reacts for the given inputs.
- Pass/Fail : If the expected and actual results are the same, then the test is a Pass; otherwise it is a Fail.
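A hypothetical filled-in example (the application, user, and data below are invented for illustration):
- Test Case ID : TC_LOGIN_001
- Test case description : Verify login with a valid username and password.
- Test Prerequisite : A user account "jsmith" exists in the system.
- Test Inputs : Username "jsmith", password "Secret@123".
- Test Steps : Open the login page, enter the username and password, click Login.
- Expected Results : The home page is displayed with the message "Welcome, jsmith".
- Actual results : Recorded during execution.
- Pass/Fail : Pass if the actual result matches the expected result.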

The test cases are classified into positive and negative test cases. Positive test cases are designed to prove that the system accepts valid inputs and processes them correctly. Suitable techniques to design positive test cases are specification-derived tests. Negative test cases are designed to prove that the system rejects invalid inputs and does not process them. Suitable techniques to design negative test cases are error guessing, boundary value analysis, internal boundary value testing, and state transition testing. The test case details must be specified very clearly, so that a new person can go through the test cases step by step and execute them.
In an online shopping application, at the user interface level, the client requests the web server to display the product details by giving an email id and username. The web server processes the request and gives the response. For this application, we design the unit, integration, and system test cases.


Wednesday, December 8, 2010

What are Test Case Documents and how to design good test cases?

Designing good test cases is a complex art. The complexity comes from three sources:
- Test cases help us discover information. Different types of tests are more effective for different classes of information.
- Test cases can be good in a variety of ways. No test case will be good in all of them.
- People tend to create test cases according to certain testing styles, such as domain testing or risk based testing. Good domain tests are different from good risk based tests.
A test case specifies the pretest state of the IUT and its environment, the test inputs or conditions, and the expected result. The expected result specifies what the IUT should produce from the test inputs. The specification includes messages generated by the IUT, exceptions, returned values, and resultant state of the IUT and its environment. Test cases may also specify initial and resulting conditions for other objects that constitute the IUT and its environment.

A scenario is a hypothetical story, used to help a person think through a complex problem or system.

Characteristics of Good Scenarios


A scenario test has five key characteristics:
a story that is motivating, credible, complex, and easy to evaluate. The primary objective of test case design is to derive a set of tests that have the highest likelihood of discovering defects in the software. Test cases are designed based on the analysis of requirements, use cases, and technical specifications, and they should be developed in parallel with the software development effort.

A test case describes a set of actions to be performed and the results that are expected. A test case should target specific functionality or aim to exercise a valid path through a use case. This should include invalid user actions and illegal inputs that are not necessarily listed in the use case. How a test case is described depends on several factors, e.g. the number of test cases, the frequency with which they change, the level of automation employed, the skill of the testers, the selected testing methodology, staff turnover, and risk.


Tuesday, December 7, 2010

What comprises Test Ware Development : Test Plan - Acceptance Test Plan (ATP)

The client performs the acceptance testing at their site. It will be very similar to the system test performed by the software development unit. Since the client is the one who decides the format and testing methods as part of acceptance testing, there is no specific indication of the way they will carry out the testing, but it will not differ much from the system testing. Assume that all the rules which are applicable to the system test can be applied to acceptance testing also.

Since this is just one level of testing done by the client for the overall product, it may include test cases covering unit and integration test level details.

Test Plan Outline


- BACKGROUND: This item summarizes the functions of the application system and the tests to be performed.
- INTRODUCTION
- ASSUMPTIONS: Indicates any anticipated assumptions which will be made while testing the application.
- TEST ITEMS: List each of the items (programs) to be tested.
- FEATURES TO BE TESTED: List each of the features (functions or requirements) which will be tested or demonstrated by the test.
- FEATURES NOT TO BE TESTED: explicitly lists each feature, function, or requirement which would not be tested and why not.
- APPROACH: Describe the data flows and test philosophy. This section also mentions all the approaches which will be followed at the various stages of the test execution.
- ITEM PASS/FAIL CRITERIA: Itemized list of expected output and tolerances.
- SUSPENSION/RESUMPTION CRITERIA: Must the test run from start to finish? Under what circumstances may it be resumed in the middle? Establish check-points in long tests.
- TEST DELIVERABLES: What, besides software, will be delivered? It includes test report and test software.
- TESTING TASKS: It includes functional and administrative tasks.
- ENVIRONMENTAL NEEDS: It includes security clearance, office space and equipment and hardware/software requirements.
- RESPONSIBILITIES: Who performs each of the testing tasks listed above? What does the user do?
- STAFFING AND TRAINING
- SCHEDULE
- RESOURCES
- RISKS AND CONTINGENCIES
- APPROVALS

The schedule details of the various test passes, such as unit tests, integration tests, and system tests, should be clearly mentioned along with the estimated efforts.


Monday, December 6, 2010

What comprises Test Ware Development : Test Plan - System test Plan

The system test plan is the overall plan for carrying out the system test level activities. In the system test, apart from testing the functional aspects of the system, some special testing activities are carried out, such as stress testing. The following are the sections present in a system test plan:

- What is to be tested?
This section defines the scope of system testing, very specific to the project. Normally, the system testing is based on the requirements, and all the requirements are to be verified within the scope of the system testing. This covers the functionality of the product. Apart from this, any special testing to be performed is also stated here.

- Functional groups and the sequence
The requirements can be grouped in terms of the functionality. Based on this, there may be priorities also among the functional groups. For example, in a banking application, anything related to customer accounts can be grouped into one area, anything related to inter-branch transactions may be grouped into one area etc.

- Special Testing Methods
This covers the different special tests like load/volume testing, stress testing, interoperability testing, etc. These tests are to be done based on the nature of the product, and it is not mandatory that every one of these special tests be performed for every product.

Apart from above sections, the following sections are also addressed:
- System Testing Tools
- Priority of functional groups
- Naming Convention for test cases
- Status reporting mechanism
- Regression test approach
- ETVX criteria
- Build/Refresh Criteria


Saturday, December 4, 2010

What comprises Test Ware Development : Test Plan - Integration Test Plan

The integration test plan is the overall plan for carrying out the activities in the integration test level, which contains the following sections:

- What is to be tested?
This section clearly specifies the kinds of interfaces that fall under the scope of testing: internal and external interfaces, with the request and response to be explained. This need not go deep into technical details, but should give the general approach for how the interfaces are triggered.

- Sequence of Integration
When there are multiple modules present in an application, the sequence in which they are to be integrated is specified in this section. In this, the dependencies between the modules play a vital role. If a unit B has to be executed, it may need the data that is fed by unit A and unit X. In this case, units A and X have to be integrated first, and then, using that data, unit B has to be tested. This has to be stated for the whole set of units in the program. Given this correctly, the testing activities slowly build up the product, unit by unit, and then integrate them.

- List of modules and interface functions
There may be any number of units in the application, but only the units that are going to communicate with each other are tested in this phase. If the units were designed in such a way that they are mutually independent, then the interfaces would not come into the picture; this is almost impossible in any system, as the units have to communicate with other units in order to get different types of functionality executed. In this section, we need to list each unit and mention for what purpose it talks to the others. This will not go into technical aspects; at a higher level, it has to be explained in plain English.

Apart from above sections, it also includes:
- Integration Testing Tools
- Priority of Program Interfaces
- Naming Convention for test cases
- Status reporting mechanism
- Regression test approach
- ETVX criteria
- Build/Refresh criteria.


What comprises Test Ware Development : Test Plan - Unit Test Plan

The test strategy identifies multiple test levels, which are going to be performed for the project. Activities at each level must be planned well in advance and it has to be formally documented. Based on the individual plans only, the individual test levels are carried out.
The plans are to be prepared by experienced people only. In all test plans, the Entry-Task-Validation-Exit (ETVX) criteria are to be mentioned. Entry means the entry point to that phase. Task is the activity that is performed. Validation is the way in which the progress, correctness, and compliance are verified for that phase. Exit tells the completion criteria of that phase, after the validation is done.

ETVX is a modeling technique for developing worldly and atomic level models. It is a task-based model where the details of each task are explicitly defined in a specification table against each phase, i.e. Entry, Exit, Task, Feedback In, Feedback Out, and measures.
There are two types of cells: unit cells and implementation cells. The implementation cells are basically unit cells containing further tasks. A purpose is also stated, and the viewer of the model may also be defined, e.g. management or the customer.
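As a hypothetical illustration of ETVX for a unit test phase: Entry - coding of the unit is complete and the code is baselined; Task - execute the planned unit test cases; Validation - peer review of the test records and results; Exit - all planned unit test cases executed and all defects logged.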

Types of Test Plan


Unit Test Plan (UTP)
The unit test plan is the overall plan to carry out the unit test activities. The lead tester prepares it, and it is distributed to the individual testers. It contains the following sections:

- What is to be tested?
The unit test plan must clearly specify the scope of unit testing. In this, normally the basic input/output of the units along with their basic functionality will be tested. In this case, mostly the input units will be tested for the format, alignment, accuracy and the totals.

- Sequence of testing
The sequence of test activities that are to be carried out in this phase are to be listed in this section. This includes, whether to execute positive test cases first or negative test cases first, to execute test cases based on the priority, to execute test cases based on test groups etc.

- Basic functionality of units
The independent functionalities of the units are tested, excluding any communication between the unit and other units. The interface part is out of the scope of this test level.

Apart from these, the following sections are also addressed:
- Unit testing tools
- Priority of program units
- Naming convention for test cases
- Status reporting mechanism
- Regression test approach
- ETVX criteria


Friday, December 3, 2010

What comprises Test Ware Development : Test Strategy Continued...

Test ware development is the key role of the testing team. Test ware comprises:

Test Strategy


Before starting any testing activities, the team lead will have to think a lot and arrive at a strategy. The following areas are addressed in the test strategy document:
- Test Groups: From the list of requirements, we can identify related areas, whose functionality is similar. These areas are the test groups. We need to identify the test groups based on the functionality aspect.

- Test Priorities: Among test cases, we need to establish priorities. While testing software projects, certain test cases will be treated as the most important ones, and if they fail, the product cannot be released. Some other test cases may be treated as cosmetic, and if they fail, we can release the product without much compromise on the functionality. These priority levels must be clearly stated.

- Test Status Collections and Reporting:
When test cases are executed, the test leader and the project manager must know where exactly the project stands in terms of testing activities. To know this, the inputs from the individual testers must come to the test leader. This will include which test cases were executed, how long it took, how many test cases passed, how many failed, etc. Also, how often the status is collected is to be clearly mentioned.

- Test Records Maintenance: When the test cases are executed, we need to keep track of the execution details, like when it was executed, who did it, how long it took, what the result was, etc. This data must be available to the test leader and the project manager, along with all the team members, in a central location. This may be stored in a specific directory on a central server, and the document must clearly state the locations and the directories.

- Requirements Traceability Matrix: Ideally, each software product developed must satisfy the set of requirements completely. So, right from design, each requirement must be addressed in every single document in the software process. The documents include the HLD, LLD, source code, unit test cases, integration test cases, and the system test cases. In this matrix, the rows will have the requirements, and for every document there will be a separate column. In every cell, we state which section of that document addresses the requirement; all the individual cells must have valid section ids or names filled in (a sample fragment is shown after this list).

- Test Summary: The senior management may like to have a test summary on a weekly or monthly basis. If the project is very critical, they may need it on a daily basis also. It addresses what kind of test summary reports will be produced for the senior management along with the frequency.
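A hypothetical fragment of such a traceability matrix (requirement and section ids are invented for illustration):

Requirement | HLD | LLD | Unit Test Cases | System Test Cases
REQ-001 | 3.1 | 4.2 | UTC-010 | STC-005
REQ-002 | 3.2 | 4.5 | UTC-014 | STC-009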


Thursday, December 2, 2010

What comprises Test Ware Development : Test Strategy

Test ware development is the key role of the testing team. Test ware comprises:

Test Strategy


Before starting any testing activities, the team lead will have to think a lot and arrive at a strategy. This will describe the approach to be adopted for carrying out test activities, including the planning activities. This is a formal document, the very first document regarding the testing area, and it is prepared at a very early stage in the software development life cycle. This document must provide a generic test approach as well as specific details regarding the project.

The following areas are addressed in the test strategy document:
- Test Levels: The test strategy must state the test levels that will be carried out for that particular project. Unit, integration, and system testing will be carried out in all projects, but many times integration and system testing may be combined.

- Roles and Responsibilities: The roles and responsibilities of the test leader, individual testers, and project manager are to be clearly defined at a project level. The review and approval mechanism must be stated here for test plans and other test documents. Also, we have to state who reviews the test cases and test records, and who approves them. The documents may go through a series of reviews or multiple approvals, and all these are mentioned in this section.

- Testing tools: Any testing tools which are to be used in different test levels must be clearly identified. This includes justifications for the tools being used in that particular level also.

- Risks and Mitigation: Any risks that will affect the testing process must be listed along with the mitigation. By documenting the risks in this document, we can anticipate the occurrence of it well ahead of time and then we can proactively prevent it from occurring. Sample risks are dependency of completion of coding, which is done by sub-contractors, capability of testing tools etc.

- Regression Test Approach: When a particular problem is identified, the program will be debugged and a fix will be applied. To make sure that the fix works, the program will be tested again for those criteria. Regression testing makes sure that one fix does not create other problems in that program or in any other interface. So, a set of related test cases may have to be repeated again, to make sure that nothing else is affected by a particular fix. How this is going to be carried out is elaborated in this section.


Wednesday, December 1, 2010

Understanding Rapid Testing and Rapid Testing Practice

Rapid testing is testing software faster than usual without compromising the standards of quality. It is a technique to test as thoroughly as is reasonable within the constraints. This technique looks at testing as a process of heuristic inquiry, and logically speaking, it should be based on exploratory testing techniques.

Although most projects undergo continuous testing, it does not usually produce the information required to deal with the situations where it is necessary to make an instantaneous assessment of the product's quality at a particular moment. In most cases the testing is scheduled for just prior to launch and conventional testing techniques often cannot be applied to software that is incomplete or subject to constant change.

It can be said that rapid testing has a structure that is built on a foundation of four components namely:
- People.
- Integrated test process.
- Static testing.
- Dynamic testing.
There is a need for people who can handle the pressure of tight schedules. They need to be productive contributors even during the early phases of the development life cycle. It should also be noted that dynamic testing lies at the heart of the software testing process; the planning, design, development, and execution of dynamic tests must be performed well for any testing process to be efficient.

Rapid Testing Practice
It would help if we scrutinized each phase of the development process to see how the efficiency, speed, and quality of testing can be improved, keeping in mind:
- Actions that the test team can take to prevent defects from escaping.
- Actions that the test team can take to manage risk to the development schedule.
- The information that can be obtained from each phase so that the test team can speed up activities.
If a test process is designed around the answers to these questions, both the speed of testing and the quality of the final product should be enhanced.

Some of the aspects that can be used while rapid testing are:
- Test for link integrity.
- Test for disabled accessibility.
- Test for default settings.
- Check the navigation.
- Check for input constraints by injecting special characters at the sources of data.
- Run multiple instances.
- Check for interdependencies and stress them.
- Test for consistency of design.
- Test for compatibility.
- Test for usability.
- Check for the possible variabilities and attack them.
- Go for possible stress and load tests.


Tuesday, November 30, 2010

What are different kinds of application programming interface testing tools?

There are many testing tools available. Depending on the level of testing required, different tools could be used. Some of the API testing tools available are:

- JVerify: This is from Man Machine Systems. JVerify is a Java class/API testing tool that supports a unique invasive testing model. The invasive model allows access to the internals of any Java object from within a test script. The ability to invade class internals facilitates more effective testing at the class level, since controllability and observability are enhanced. This can be very valuable when a class has not been designed for testability.

- JavaSpec: JavaSpec is SunTest's API testing tool. It can be used to test Java applications and libraries through their API. JavaSpec guides the users through the entire test creation process and lets them focus on the most critical aspects of testing. Once the user has entered the test data and assertions, JavaSpec automatically generates self-checking tests, HTML test documentation, and detailed test reports.

To automate API testing, the assumptions are as follows:
- The test engineer is supposed to test some API.
- The APIs are available in the form of a library (.lib).
- The test engineer has the API document.

There are mainly two things to test in API testing:
Black box testing of the APIs:
In this, we have to test the API for its outputs. When we give a known input, we also know the ideal output, so we have to check the actual output against the ideal output.
A simple program in C can be written to do the following (a sketch follows this list):
- Take the parameters from a text file.
- Call the API with these parameters.
- Match the actual and ideal output, and also check the parameters for good values that are passed by reference.
- Log the result.
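Below is a minimal sketch of such a driver. The API under test is stood in for by a hypothetical add-style function, and the file format (one test per line: two parameters followed by the expected output) is an assumption for illustration; the real API from the .lib and its documented parameters would be substituted in.

#include <stdio.h>

/* Stand-in for the API exported by the library under test (assumption). */
static int api_under_test(int a, int b)
{
    return a + b;
}

int main(void)
{
    FILE *in  = fopen("testdata.txt", "r");  /* parameters and expected output */
    FILE *log = fopen("result.log", "w");    /* result of each test case       */
    int a, b, expected;

    if (in == NULL || log == NULL) {
        fprintf(stderr, "cannot open test data or log file\n");
        return 1;
    }

    /* One test case per line: take the parameters from the text file,
       call the API, match the actual output against the ideal output,
       and log the result. */
    while (fscanf(in, "%d %d %d", &a, &b, &expected) == 3) {
        int actual = api_under_test(a, b);
        fprintf(log, "input=(%d,%d) expected=%d actual=%d : %s\n",
                a, b, expected, actual,
                actual == expected ? "PASS" : "FAIL");
    }

    fclose(in);
    fclose(log);
    return 0;
}

The same loop can be extended to check any output parameters passed by reference and to log returned error codes alongside the pass/fail status.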

Interaction/integration testing of the APIs:
Suppose there are two APIs, say
Handle createcontext(void);
and, when the handle to the device is to be closed, the corresponding function
bool deletecontext(Handle &h);

Here, we have to call the two APIs and check that the handle created by createcontext() is correctly deleted by deletecontext(). This will ensure that these two APIs are working fine. For this we can write a simple C program (sketched below) that will do the following:
- Call the two APIs in the same order.
- Pass the output parameter of the first as the input of the second.
- Check for the output parameter of the second API.
- Log the result.
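A minimal sketch of this sequencing test, with stand-in implementations of the two hypothetical APIs so the program is self-contained (in plain C, the reference parameter becomes a pointer):

#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

typedef void *Handle;

/* Stand-ins for the two hypothetical APIs under test (assumptions). */
static Handle createcontext(void)      { return malloc(1); }
static bool   deletecontext(Handle *h) { free(*h); *h = NULL; return true; }

int main(void)
{
    /* Call the two APIs in the same order, passing the output of the
       first as the input of the second. */
    Handle h = createcontext();
    if (h == NULL) {
        printf("FAIL: createcontext() returned no handle\n");
        return 1;
    }

    bool deleted = deletecontext(&h);

    /* Check the output parameter of the second API and log the result. */
    if (deleted && h == NULL)
        printf("PASS: handle created and deleted correctly\n");
    else
        printf("FAIL: deletecontext() did not release the handle\n");

    return 0;
}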


Monday, November 29, 2010

Step 4 To Test API: Call Sequencing, Step 5 To Test API: Observe the Output

Step 4: Call Sequencing
When the combinations of possible arguments to each individual call are unmanageable, the number of possible call sequences is infinite. Parameter selection and combination issues further complicate the call-sequencing problem. Faults caused by improper call sequences tend to give rise to some of the most dangerous problems in software; most security vulnerabilities are caused by the execution of such seemingly improbable sequences.

Step 5: Observe the output
The outcome of an execution of an API depends upon the behavior of that API, the test condition, and the environment. The outcome of an API can take different forms: some APIs generally return certain data or a status, while others might not return at all, might wait for a period of time, might trigger another event, might modify a certain resource, and so on.

The tester should be aware of the output that is to be expected for the API under test. The outputs returned for various input values (valid, invalid, boundary values, etc.) need to be observed and analyzed to validate that they are as per the functionality. All the error codes and exceptions returned for all the input combinations should be evaluated.


Friday, November 26, 2010

Step 3 To test API : Identify the combination of parameters

Identify the combination of parameters
Parameter combinations are extremely important for exercising stored data and computation. In API calls, two independently valid values might cause a fault when used together that would not have occurred with other combinations of values. Therefore, a routine called with two parameters requires that values for one be selected based on the value chosen for the other. Often the response of a routine to certain data combinations is incorrectly programmed due to the underlying complex logic.

The API needs to be tested taking into consideration the combinations of its different parameters. The number of possible combinations of parameters for each call is typically large. For a given set of parameters, even if only the boundary values have been selected, the number of combinations, while relatively diminished, may still be prohibitively large. For example, consider an API which takes three parameters as input: the various combinations of different values for the inputs need to be identified, as in the sketch below.
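As a sketch of how such combinations can be driven, assume a hypothetical three-parameter API api3() whose ranges are known; picking only three boundary values per parameter still yields 3 x 3 x 3 = 27 combinations to exercise:

#include <stdio.h>

/* Hypothetical three-parameter API under test (assumption). */
static int api3(int a, int b, int c)
{
    return a + b * c;  /* placeholder behavior */
}

int main(void)
{
    /* Boundary values chosen per parameter (min, nominal, max). */
    const int va[] = { 0, 50, 100 };
    const int vb[] = { -1, 0, 1 };
    const int vc[] = { 1, 512, 1024 };
    size_t i, j, k;

    /* Exercise every combination of the selected values. */
    for (i = 0; i < 3; i++)
        for (j = 0; j < 3; j++)
            for (k = 0; k < 3; k++)
                printf("api3(%d,%d,%d) = %d\n",
                       va[i], vb[j], vc[k],
                       api3(va[i], vb[j], vc[k]));
    return 0;
}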

Parameter combination is further complicated by the function overloading capabilities of many modern programming languages. It is important to isolate the differences between such functions and to take into account that their use is context driven. The APIs can also be tested to check that there are no memory leaks after they are called; this can be verified by continuously calling the API and observing the memory utilization, as sketched below.
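A sketch of the leak check just described, reusing the hypothetical create/delete pair: the API is called continuously in a loop while memory utilization is observed from outside the process (with top, Task Manager, valgrind, or similar):

#include <stdlib.h>

typedef void *Handle;

/* Hypothetical create/delete pair under test (assumptions). */
static Handle createcontext(void)     { return malloc(4096); }
static void   deletecontext(Handle h) { free(h); }

int main(void)
{
    /* If deletecontext() failed to release what createcontext()
       allocated, memory utilization would climb with each iteration. */
    for (long i = 0; i < 1000000L; i++) {
        Handle h = createcontext();
        deletecontext(h);
    }
    return 0;
}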


Thursday, November 25, 2010

Step 1 and Step 2 To Test API: Identify the Initial Condition and Input/Parameter Selection

STEP 1: Identify the initial condition
The testing of an application programming interface (API) depends largely on the environment in which it is to be tested. Hence, the initial condition plays a very vital role in understanding and verifying the behavior of the API under test. The initial conditions for testing APIs can be classified as:

- Mandatory Pre-setters
The execution of an API requires some minimal state and environment. These initial conditions are classified under the mandatory initialization for the API. For example, a non-static member function API requires an object to be created before it can be called. This is an essential activity required for invoking the API.

- Behavioral Pre-setters
To test the specific behavior of the API, some additional environmental state is required. These initial conditions form the behavioral pre-setters category. They are optional conditions required by the API and need to be set before invoking the API under test, thus influencing its behavior. Since they influence the behavior of the API under test, they are considered additional inputs over and above the parameters.

Thus, to test any application programming interface, the environment it requires should also be clearly understood and set up. Without this, the API under test might not function as required, leaving the tester's job undone.

STEP 2: Input/Parameter Selection
The list of valid input parameters needs to be identified to verify that the interface actually performs the tasks it was designed for. While there is no method that ensures this behavior will be tested completely, using inputs that return quantifiable and verifiable results is the next best thing. Techniques like boundary value analysis and equivalence partitioning need to be used while choosing the input parameter values. The boundary values or limits that would lead to errors or exceptions need to be identified (see the sketch below).
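As a small sketch, assume a hypothetical API parameter documented as valid in the range 1..100. Equivalence partitioning gives one valid and two invalid classes, and boundary value analysis adds the edges of the valid range; api_accepts() below is an assumed stand-in for the API's validation:

#include <stdio.h>
#include <stdbool.h>

/* Hypothetical API: accepts values in 1..100, rejects others (assumption). */
static bool api_accepts(int v) { return v >= 1 && v <= 100; }

int main(void)
{
    struct { int value; bool expect_valid; } cases[] = {
        { -5, false },   /* invalid equivalence class: below range */
        { 50, true },    /* valid equivalence class                */
        { 200, false },  /* invalid equivalence class: above range */
        { 0, false }, { 1, true }, { 2, true },      /* lower boundary */
        { 99, true }, { 100, true }, { 101, false }  /* upper boundary */
    };
    size_t i, n = sizeof cases / sizeof cases[0];

    for (i = 0; i < n; i++) {
        bool ok = api_accepts(cases[i].value) == cases[i].expect_valid;
        printf("value %4d : %s\n", cases[i].value, ok ? "PASS" : "FAIL");
    }
    return 0;
}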

It is also helpful to analyze the data structures, and the components other than the API that use them. The data structures can be loaded by using those other components, and the API can be tested while another component is accessing these data structures.

The availability of the source code to the testers helps in analyzing the various input values that could be used for testing the API. It also helps in understanding the various paths which could be tested. Therefore, testers are required to understand not only the calls, but also all the constants and data types used by the interface.


Wednesday, November 24, 2010

What is the strategy that is needed to test an API?

By analyzing the problems faced by the testers, a strategy needs to be formulated for testing the application programming interface.
- The API to be tested requires some environment for it to work. Hence, it is required that all the conditions and prerequisites are understood by the tester.
- The next step would be to identify and study its points of entry. The graphical user interfaces would have items like menus, buttons, check boxes, and combo lists that would trigger the event or action to be taken.
- Similarly, for APIs, the input parameters and the events that trigger the API act as the points of entry. Subsequently, a chief task is to analyze the points of entry as well as the significant output items. The input parameters should be tested with valid and invalid values using strategies like boundary value analysis and equivalence partitioning.
- The fourth step is to understand the purpose of the routines and the contexts in which they are to be used. Once all these parameter selections and combinations are designed, different call sequences need to be explored.

The steps can be summarized as follows:
- Identify the initial conditions required for testing.
- Identify the parameters i.e. choosing the values of individual parameters.
- Identify the combination of parameters i.e. pick out the possible and applicable parameter combination with multiple parameters.
- Identify the order to make the calls i.e. deciding the order in which to make the calls to force the API to exhibit its functionality.
- Observe the output.


How can the testing of API calls be done?

Testing of API calls can be done in isolation or in sequence, to vary the order in which the functionality is exercised and to make the API produce some useful results from these tests. Designing tests is essentially designing sequences of API calls that have the potential of satisfying the test objectives. This in turn boils down to designing each call with specific parameters and building a mechanism for handling and evaluating return values.
Designing of test cases depends on the following criteria:
- What value should a parameter take?
- What values together make sense?
- What combination of parameters will make APIs work in a desired manner?
- What combination will cause a failure, a bad return value, or an anomaly in the operating environment?
- Which sequences are the best candidates for selection?

Some of the interesting problems for testers are:
- Ensuring that the test harness varies parameters of the API calls in ways that verify functionality and expose failures. This includes assigning common parameter values as well as exploring boundary conditions.
- Generating interesting parameter value combinations for calls with two or more parameters.
- Determining the context under which an API call is made. This might include setting external environment conditions like files and peripheral devices, and also the internal stored data that affect the API.
- Sequencing API calls to vary the order in which the functionality is exercised and to make the API produce useful results from successive calls.


Tuesday, November 23, 2010

Understanding Application Programming Interfaces(APIs)

Application Programming Interfaces (APIs) are collections of software functions or procedures that can be used by other applications to fulfill their functionality. APIs provide an interface to the software component. They form critical elements for developing applications and are used in varied applications, from graph drawing packages, to speech engines, to web-based airline reservation systems, to computer security components.

Each API is supposed to behave the way it is coded, i.e. it is functionality specific. These APIs may offer different results for different types of input provided. The errors or exceptions returned may also vary. However, once integrated within a product, common functionality testing/integration testing may cover only those paths. By considering each API as a black box, a generalized approach to testing can be applied, but there may be some paths which are not tested and which lead to bugs in the application. Applications can be viewed and treated as APIs from a testing perspective.

The following distinctive attributes make testing of APIs slightly different from testing other common software interfaces such as GUIs:
- Testing APIs requires a thorough knowledge of their inner workings: Some APIs may interact with the operating system kernel, with other APIs, or with other software to offer their functionality. Thus, an understanding of the inner workings of the interface helps in analyzing the call sequences and detecting the failures caused.
- Adequate programming skills: API tests are in the form of sequences of calls, namely, programs. Each tester must possess expertise in the programming languages that are targeted by the API.
- Lack of domain knowledge: Involve the testers from the initial stage of development. This helps the testers gain some understanding of the interface and avoid having to explore while testing.
- No documentation: Without documentation, it is difficult for the test designer to understand the purpose of calls, the parameter types and possible valid/invalid values, the return values, the calls made to other functions, and usage scenarios. Proper documentation helps the test designer design tests faster.
- Access to source code: The availability of the source code helps the tester understand and analyze the implementation mechanism used, and identify the loops or vulnerabilities that may cause errors.
- Time constraints: Thorough testing of APIs is time consuming, requires a learning overhead and resources to develop tools and design tests.


Monday, November 22, 2010

How to define a practice for agile testing?

Practice for agile testing should encompass the following features:
Conversational Test Creation
- Test case writing should be a collaborative activity including the majority of the team. As the customers will be busy, we should have someone representing the customer.
- Defining tests is a key activity that should include programmers and customer representatives.
- It should not be done alone.

Coaching Tests
- It is a way to think about acceptance tests.
- It turns user stories into tests.
- Tests should provide goals and guidance, instant feedback and progress measurement.
- Tests should be specified in a format that is clear enough that users or customers can understand, and specific enough that it can be executed.
- Specification should be done by example.

Providing Test Interfaces
- Developers are responsible for providing the fixtures that automate coaching tests.
- In most cases, extreme programming teams are adding test interfaces to their products, rather than using external test tools.

Exploratory Learning
- Plan to explore, learn and understand the product with each iteration.
- Look for bugs, missing features and opportunities for improvement.
- We do not understand software until we have used it.


What are the basic components of Extreme Programming (XP) ?

The basic components of Extreme Programming (XP) are:

Test-First Programming
- Developers write unit tests before coding. It has been noted that this kind of approach motivates the coding, speeds coding, and results in better designs.
- It supports a practice called re-factoring.
- Agile practitioners prefer tests to text for describing system behavior. Tests are more precise than human language, and they are also a lot more likely to be updated when the design changes. How many times have you seen design documents that no longer accurately described the current workings of the software? Out-of-date design documents look pretty much like up-to-date documents. Out-of-date tests fail.
- Many open source tools like xUnit have been developed to support this methodology.
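As a minimal sketch of the idea in plain C (the xUnit tools mentioned above provide the same pattern with more machinery; leap_year() is a hypothetical example): the assertions are written first and define the behavior, and the function is then implemented until they pass.

#include <assert.h>
#include <stdio.h>
#include <stdbool.h>

/* Implemented after the test below was written, until the test passed. */
static bool leap_year(int year)
{
    return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
}

/* The test comes first: these assertions define the expected behavior. */
static void test_leap_year(void)
{
    assert(leap_year(2000));   /* divisible by 400          */
    assert(!leap_year(1900));  /* divisible by 100, not 400 */
    assert(leap_year(1996));   /* divisible by 4            */
    assert(!leap_year(1997));  /* ordinary year             */
}

int main(void)
{
    test_leap_year();
    printf("all unit tests passed\n");
    return 0;
}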

Refactoring
- It is the practice of changing a software system in such a way that it does not alter the external behavior of the code, yet improves its internal structure.
- Traditional development tries to understand how all the code will work together in advance. This is the design. With agile methods, this difficult process of imagining what code might look like before it is written is avoided. Instead, the code is restructured as needed to maintain a coherent design. Frequent refactoring allows less up-front planning of design.
- Agile methods replace high level design with frequent re-design. This also requires a way of ensuring that the behavior has not changed inadvertently; that is where the tests come in.
- Make the simplest design that will work, add complexity only when needed, and re-factor as necessary.
- Re-factoring requires unit tests to ensure that design changes do not break existing code.
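A small sketch of refactoring under a unit test, with hypothetical price functions: price_before() and price_after() compute the same discounted total, but the duplicated arithmetic is extracted into apply_discount(). The unchanged assertions confirm that the external behavior is preserved.

#include <assert.h>
#include <stdio.h>

/* Before: the discount formula is duplicated inline. */
static double price_before(double base, int items)
{
    if (items >= 10)
        return base * items - (base * items) * 0.10;
    return base * items;
}

/* After: the formula is extracted, improving internal structure. */
static double apply_discount(double total, double rate)
{
    return total - total * rate;
}

static double price_after(double base, int items)
{
    double total = base * items;
    return items >= 10 ? apply_discount(total, 0.10) : total;
}

int main(void)
{
    /* The same tests pass before and after the refactoring. */
    assert(price_before(2.0, 5)  == price_after(2.0, 5));
    assert(price_before(2.0, 10) == price_after(2.0, 10));
    printf("behavior unchanged by refactoring\n");
    return 0;
}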

Acceptance Testing
- Make up the user experiences or user stories, which are short descriptions of the features to be coded.
- Acceptance tests verify the completion of user stories.
- Ideally, they are written before coding.


Saturday, November 20, 2010

How much testing is relevant in an agile scenario?

Testing is as relevant in an agile scenario as in a traditional software development scenario, if not more so. Testing is the headlight of the agile project, showing where the project stands now and the direction it is headed. Testing provides the required and relevant information to the teams to make informed and precise decisions. The testers in agile frameworks get involved in much more than finding software bugs: anything that can bug the potential user is an issue for them. But testers do not make the final call; the entire team discusses a potential issue and takes a decision on it.
A firm belief of agile practitioners is that no testing approach by itself assures quality; it is the team that does or does not achieve it. So there is a heavy emphasis on the skill and attitude of the people involved.
Agile testing is not a game of gotcha; it is about finding ways to set goals rather than focusing on mistakes.
Among the agile methodologies, the components of XP, i.e. Extreme Programming, are:
- Test First Programming
- Pair Programming
- Short iterations and release.
- Re-factoring
- User Stories
- Acceptance Testing



Friday, November 19, 2010

Understanding Agile Testing and Agile Development Methodologies

First Definition: Agile testers treat the developers as their customer and follow the agile manifesto. The Context driven testing principles act as a set of principles for the agile tester.
Second Definition: It can also be treated as the testing methodology followed by testing team when an entire project follows agile methodologies.

Traditional QA seems to be totally at loggerheads with the agile manifesto: process and tools are a key part of QA and testing, QA people seem to love documentation, QA people want to see the written specification, and where is testing without a plan?
The question arises: is there a role for QA in agile projects or not? The answer is maybe, but the roles and tasks are different.

The context-driven principles that guide the agile tester are:
- The value of any practice depends on its context.
- There are good practices in context, but there are no best practices.
- People, working together, are the most important part of any project's context.
- Projects unfold over time in ways that are often not predictable.
- The product is a solution. If the problem is not solved, the product does not work.
- Good software testing is a challenging intellectual process.
- Only through judgment and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products.

In the second definition, we described agile testing as a testing methodology adopted when an entire project follows agile development methodology.
Some agile development methodologies that are being practiced currently are extreme programming (XP), crystal, adaptive software development (ASD), scrum, feature driven development (FDD), dynamic systems development method (DSDM) and Xbreed.


Wednesday, November 17, 2010

Understanding Scenario Based Testing

Scenario based tests (SBT) are best suited when the tests need to concentrate on the functionality of the application more than anything else.
Suppose you are testing an old banking application, built based on the requirements of the organization for various banking purposes. This application will keep undergoing continuous upgrades.
Let us assume that the application is undergoing only functional changes and not user interface changes. The test cases have to be updated for every release, and over a period of time maintaining the test ware becomes a major setback. Scenario based tests would help you there.
As per the requirements, the base functionality is stable and there are no user interface changes; there are only changes with respect to the business functionality. Given the requirements and the situation, it is clearly understood that only regression tests need to be run continuously as a part of the testing phase. Over a period of time, the individual test cases would become difficult to manage. This is the situation where we use scenarios for testing.
To derive scenarios, the following can be used as a basis:
- From the requirements, list out all the functionality of the application.
- Using a graph notation, draw depictions of various transactions which pass through various functionality of the application.
- Convert these depictions into scenarios.
- Run the scenarios when performing the testing.

Scenario based tests are not only for legacy application testing, but for any application which requires you to concentrate more on the functional requirements. If you can plan out a perfect test strategy, then scenario based tests can be used for testing any application against any requirements. Scenario based tests are a good choice, in combination with various test types and techniques, when you are testing projects which adopt UML (Unified Modeling Language) based development strategies.


Sunday, November 14, 2010

Who is a good exploratory tester, and how to maintain a balance between scripted and exploratory testing?

The exploratory testing approach relies a lot on the tester. The tester actively controls the design of tests as they are performed and uses the information gained to design new and better tests.
A good exploratory tester should:
- have the ability to explain his work.
- be able to design good tests, execute them, and find important problems.
- document his ideas and use them in later cycles.
- be a careful observer.
- be a critical thinker.
- have diverse ideas so as to make new test cases and improve existing ones.
- remain alert for new opportunities.

Exploratory Testing is advantageous when:
- rapid testing is necessary.
- test case development time is not available.
- there is a need to cover high-risk areas with more inputs.
- the software must be tested with little knowledge about the specifications.
- new test cases need to be developed or existing ones improved.

The drawbacks of exploratory testing include that it is difficult to quantify and that a skilled tester is required.
Exploratory testing relies on the tester and the approach he proceeds with. Pure scripted testing does not undergo much change with time and hence the power fades away. In test scenarios, where repeatability of tests is required, automated scripts have an edge over exploratory approach. Hence, it is important to achieve a balance between the two approaches and combine the two to get the best of both.


Saturday, November 13, 2010

Mission of testing and what are the situations where exploratory testing can be practiced?

The goal of testing needs to be understood before the work begins. This could be the overall mission of the test project, or a particular functionality or scenario. The mission is achieved by asking the right questions about the product, designing tests to answer these questions, and executing tests to get the answers. Quite often, the tests do not completely answer them; in such cases, we need to explore. The test procedure is recorded, along with the result status.

The tester also needs to have a general plan in mind, though it need not be very constrained. The tester needs the ability to design a good test strategy, execute good tests, find important problems, and report them. Out-of-the-box thinking is necessary.

The time available for testing is a critical factor. Time falls short for the following reasons:
- In a project life cycle, the time and resources required for creating the test strategy, test plan and design, execution, and reporting are overlooked. Exploratory testing becomes useful since test planning, design, and execution happen together.
- Time falls short when testing is needed on short notice.
- Time falls short when a new feature is implemented.
- Time falls short when change requests come in at a much later stage of the cycle, when much of the testing is done.

In such situations, exploratory testing becomes very useful. A basic strategy of exploratory testing is to have a general plan of attack, but to allow yourself to deviate from it at short notice. In a session of exploratory testing, a set of test ideas, written notes, and bug reports are the results. These can be reviewed by the test lead or a test manager.
- It is very important to identify the test strategy and the scope of the test carried. It is dependent on the project approach to testing.
- The tester crafts the tests by systematically exploring the product. He defines his approach, analyzes the product, and evaluates the risk.
- The written notes and scripts of the tester are reviewed by the test lead or manager.


Friday, November 12, 2010

Where does Exploratory Testing fit, and what are its advantages and disadvantages?

Exploratory testing is called for in any situation where it is not obvious what the next test should be, or when you want to go beyond the obvious tests. More specifically, freestyle exploratory testing fits in any of the following situations:
- when you need to provide rapid feedback on a new product or feature.
- when you need to learn the product quickly.
- when you want to investigate and isolate a particular defect.
- when you want to check the work of another tester by doing a brief independent investigation.
- when you have to find out the single most important bug in the shortest time.
- when you have already tested using scripts, and seek to diversify the testing.
- when you want to investigate the status of a particular risk in order to evaluate the need for scripted tests in that area.

Advantages of Exploratory Testing


- It does not require extensive documentation.
- It is responsive to changing scenarios.
- Under tight schedules, testing can be more focused depending on the bug rate or risks.
- Improved coverage.

Disadvantages of Exploratory Testing


- Dependent on tester's skills.
- Test tracking not concrete.
- More prone to human error.
- No contingency plan if the tester is unavailable.

What specifics affect Exploratory Testing?


- The mission of the particular test session.
- The tester's skills, talents, and preferences.
- Available time and other resources.
- Status of other testing cycles for the product.
- How much the tester knows about the product.


Thursday, November 11, 2010

Defect Driven Exploratory Testing - Formalized Approach for Exploratory Testing

Defect driven exploratory testing is another formalized approach used for exploratory testing. It is a goal oriented approach focused on the critical areas identified in a defect analysis study based on procedural testing results.
In procedural testing, the tester executes readily available test cases, which are written based on the requirement specifications. Although the test cases were executed completely, defects were still found in the software while doing exploratory testing by just wandering through the product. A reliable basis was needed for exploring the software. Thus, defect driven exploratory testing is the idea of exploring that part of the product based on the results obtained during procedural testing. After analyzing the defects found during the defect driven exploratory testing process, it was found that these were the most critical bugs, which were camouflaged in the software and which, if present, could have made the software unfit for use.

There are some pre-requisites for defect driven exploratory testing:
- In-depth knowledge of the product.
- Procedural testing has to be carried out.
- Defect analysis based on scripted tests.

Advantages of defect driven exploratory testing:
- Tester has clear clues on the areas to be explored.
- Goal oriented approach, hence better results.
- No wastage of time.


What are some of the formal approaches used for exploratory testing? Continued...

A charter states the goal and the tactics to be used. A charter can range from a simple one to a more descriptive one, giving the strategies and outlines for the testing process.
A charter summary contains:
- Architecting the charters, i.e. test planning.
- Brief information or guidelines on:
1.) Mission: Why do we test this?
2.) What should be tested?
3.) How to test?
4.) What problems to look for?
5.) Tools to use.
6.) Specific test techniques or tactics to use.
7.) What risks are involved?
8.) Documents to examine.
9.) Desired output from testing.

Session Based Test Management (SBTM)


Session based test management is a formalized approach that uses the concepts of charters and sessions for performing exploratory testing. A session is not a test case or a bug report; it is the reviewable product produced by a chartered and uninterrupted test effort. A session usually lasts from 60 to 90 minutes, but there is no hard and fast rule on the time spent testing: if a session lasts closer to 45 minutes, we call it a short session, and if it lasts closer to two hours, we call it a long session. The design of each session depends on the tester and the charter. After the session is completed, it is de-briefed. The primary objectives of the de-briefing are to understand and accept the session report and to provide feedback and coaching to the tester. The de-briefings should help the manager plan future sessions and estimate the time required for testing similar functionality.
The de-briefing session is based on an agenda called PROOF:
Past: What happened during the session?
Results: What was achieved during the session?
Outlook: What still needs to be done?
Obstacles: What got in the way of good testing?
Feeling: How does the tester feel about all this?

A session can be broadly classified into three tasks:
- Session set-up: time required to set up the application under test.
- Test design and execution: time required to scan the product and test.
- Bug investigation and reporting: time required to find bugs and report them to the concerned people.

The entire session report consists of the session charter, tester name, date and time started, task breakdown, data files, test notes, issues, and bugs.


Wednesday, November 10, 2010

What are some of the formal approaches used for exploratory testing? Continued...

Approaches that are used for exploratory testing are:
- Record Failures
In exploratory testing, testing is done without documented test cases. If a bug is found, it is very difficult to re-test it after a fix, because there are no documented steps to navigate to that particular scenario. Hence, we need to keep track of the flow required to reach the place where a bug has been found. So while testing, it is important that at least the bugs that have been discovered are documented. By recording failures, we are able to keep track of the work that has been done. This helps even if the tester who was actually doing the exploratory testing is not available, since the document can be referred to and all the bugs that have been reported, as well as the flows for reproducing them, can be identified.

- Document issues and questions
The tester trying to test an application using the exploratory testing methodology should feel comfortable testing it. Hence, it is advisable that the tester navigates through the application once and notes any ambiguities or queries he might have. He can even get clarification on the work-flows he is not comfortable with. Documenting all the issues and questions found while scanning or navigating the application helps the tester carry out the testing without any loss of time.

- Decompose the main task into smaller tasks, and the smaller ones into still smaller activities
It is always easier to work with smaller tasks when compared to large ones. This is very useful in performing exploratory testing, because the lack of test cases might lead us down different routes. With a smaller task, the scope as well as the boundary are confined, which helps the tester focus on his testing and plan accordingly.
If a big task is taken up for testing, as we explore the system we might deviate from our main goal or task. It might be hard to define boundaries if the application is a new one. With smaller tasks, the goal is known, and hence the focus and the effort required can be properly planned.


Tuesday, November 9, 2010

What are some of the formal approaches used for exploratory testing? Continued...

Some of the formal approaches used for exploratory testing are:

- Identify the break points
Break points are the situations where the system starts behaving abnormally; it does not give the output it is supposed to give. Testing can be done by identifying such situations. Use boundary values or invariants for finding the break points of the application. In most cases, the system works for normal inputs and outputs, so try to give input that represents the extreme or the worst situation. Trying to identify the extreme conditions or the break points helps the tester uncover hidden bugs. Such cases might not be covered in normal scripted testing; hence, this helps in finding bugs which might not be found in normal testing.

- Check the UI against Windows interface and other standards
Exploratory testing can be performed by checking against user interface standards. There are set standards laid down for the user interfaces that are to be developed. These standards concern the look and feel aspects of the interfaces the user interacts with. The user should be comfortable with any of the screens he or she is working on; these aspects help the end user accept the system faster. By identifying the user interface standards, define an approach to test, because the application developed should be user friendly.

- Identify expected results
The tester should know what he is testing for and the expected output for a given input. Unless the aim of the testing is known, the testing done is of little use, because the tester may not succeed in distinguishing a real error from the normal work-flow. The tester needs to analyze what the expected output is for the scenario he is testing.

- Identify the interfaces with other interfaces/external applications
In the age of component development and maximum re-usability, developers try to pick up already developed components and integrate them. In such cases, it helps if the tester explores the areas where the components are coupled. The output of one component should be correctly passed to the other component. Hence, such scenarios or work-flows need to be identified and explored more. There may also be external interfaces, for example where the application is integrated with another application for data. In such cases, focus should be more on the interface between the two applications.


Monday, November 8, 2010

What are some of the formal approaches used for exploratory testing?

Some of the formal approaches used for exploratory testing are:

- Identify the domain
Exploratory testing can be performed by identifying the application domain. If the tester has good knowledge of the domain, then it is easier to test the system without having any test cases. Being well aware of the domain helps in analyzing the system faster and better. This knowledge helps in identifying the various workflows that usually exist in the domain, and in deciding what the different scenarios are and which are most critical for the system. The tester can then focus his testing on the scenarios required. If a QA lead is trying to assign a tester to a task, it is advisable to pick the person who has the domain knowledge needed for that exploratory testing.

- Identify the purpose
Another approach to exploratory testing is to identify the purpose of the system, i.e. what the system is used for. By identifying the primary and secondary functions of the system, testing can be done where more focus and effort are given to the primary functions as compared to the secondary functions.

- Identify the workflows
Identifying the workflows for testing a system without any scripted test cases can be considered one of the best approaches. The workflows are nothing but a visual representation of the scenarios, showing how the system would behave for any given input. The workflows can be simple flow charts, data flow diagrams, or something like state diagrams, use cases, models, etc. The workflows also help to identify the scope of a scenario and help the tester keep track of the scenarios for testing. It is suggested that the tester navigates through the application before he starts exploring; this helps the tester identify the various possible workflows, and any issues found can be discussed with the concerned team.

