

Thursday, August 23, 2007

Some definitions of regression testing and what it means ..

What is regression testing?

Regression testing is a type of software testing used to discover new errors and bugs in a software system; in regression-testing terminology these bugs are called regressions. Regression testing is carried out on already existing areas of the system, whether functional or non-functional, after any change has been made, such as a configuration change or a patch. Its basic purpose is to ensure that a change or modification made to the software does not affect its existing functionality or introduce new errors. It also checks whether a change in one part of the system has had any effect on other parts of the system. Some common methods of carrying out regression testing are:

- Re-executing tests that have already been executed.
- Checking whether any change has occurred in the working of the software.
- Checking whether errors corrected earlier have reappeared.

Regression testing as such is very time consuming, since you have to test everything again and again, but it can be performed effectively by selecting a suitable subset of tests. The number of tests selected should be sufficient to cover the unit in which the change was made. Research has found that making a change in one part of the software often introduces errors in other parts. In some cases the same errors reoccur because the fix that was made earlier to prevent them gets lost. This usually happens because of mistakes made while revising the code, i.e., because of poor revision practices. Often the fix applied for a problem is so fragile that it is easily broken and can get lost. Sometimes the fix for one part of the software causes an error in another part.
In other cases, the same mistakes made during the design of the original software might be repeated during redesign. Therefore regression testing is generally considered good practice: it helps in locating and fixing a bug, recording the tests that led to its discovery, and repeatedly executing those tests at regular intervals after the program has been modified. This could be done manually, by following a manual testing procedure, but it is better carried out with automated testing, which takes less effort and time than manual testing.
An automated testing suite consists of software tools that execute test cases automatically and generate reports. Programmers can set up such automated testing systems to perform regression testing at regular intervals; this way the regression tests can be run after every compilation, once a week, or even every night. Tools commonly used to drive automated regression testing include the following (a minimal sketch of such an automated run appears after the list):
- Tinderbox
- BuildBot
- Hudson
- Jenkins
- Bamboo
- TeamCity
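
As a rough sketch of how such automation might be wired up (the "tests" directory name is an assumption, and any of the tools above could play the scheduler role), a small driver script can discover and run the whole regression suite and return an exit code the build server understands:

```python
import sys
import unittest

def run_regression_suite() -> int:
    """Discover and run all regression tests; return a shell-style exit code.

    A CI server such as Jenkins or Bamboo could call this script after every
    build, or nightly, and flag the build as broken when any test fails.
    """
    suite = unittest.defaultTestLoader.discover(start_dir="tests",
                                                pattern="test_*.py")
    result = unittest.TextTestRunner(verbosity=1).run(suite)
    return 0 if result.wasSuccessful() else 1

if __name__ == "__main__":
    sys.exit(run_regression_suite())
```

The value of the driver is that it needs no human intervention: the scheduler runs it, and a non-zero exit code is enough to mark the build as regressed.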

Regression testing is closely associated with extreme programming, of which it is an inseparable part. There, at each phase of the software development cycle, the entire software package is put through repeatable, extensive automated testing. Traditionally the software quality assurance team performs regression testing after development is over, but fixing bugs at that stage is very expensive. This potential problem is addressed at an earlier stage by means of unit testing: the test cases written by the programmers verify the intended outcomes, i.e., they are either unit tests or functional tests. A minimal unit test in this style is sketched below.
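
As a rough illustration (not taken from the original post; the leap-year function is a hypothetical unit under test), such a unit test simply pins down the intended outcome of one small piece of code, so a later change that breaks it is caught immediately:

```python
import unittest

def is_leap_year(year: int) -> bool:
    """Hypothetical unit under test: the Gregorian leap-year rule."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTests(unittest.TestCase):
    def test_typical_leap_year(self):
        self.assertTrue(is_leap_year(2004))

    def test_century_is_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_four_hundredth_year_is_leap(self):
        self.assertTrue(is_leap_year(2000))

if __name__ == "__main__":
    unittest.main()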


Tuesday, August 21, 2007

Advantages and Disadvantages of White Box Testing

Advantages of White box testing are:
i) As knowledge of the internal code structure is a prerequisite, it becomes very easy to find out which type of input/data can help in testing the application effectively.
ii) The other advantage of white box testing is that it helps in optimizing the code.
iii) It helps in removing the extra lines of code, which can bring in hidden defects.
iv) It forces the test developer to reason carefully about the implementation.


White-box testing is an important method for the early detection of errors during software development. In this process, test case generation plays a crucial role, defining appropriate and error-sensitive test data. White-box testing strategies include designing tests such that every line of source code is executed at least once, or requiring every function to be individually tested. Code coverage is a significant benefit provided by white box testing: it is much easier to determine whether you have looked at all functions, libraries, etc., when you know what they all are.


Disadvantages of white box testing are:
i) As knowledge of the code and internal structure is a prerequisite, a skilled tester is needed to carry out this type of testing, which increases the cost.
ii) It is nearly impossible to look into every bit of code to find hidden errors, which may create problems and result in failure of the application.
iii) It does not look at the code in a runtime environment. That is important for a number of reasons: exploitation of a vulnerability depends on all aspects of the platform being targeted, and source code is just one of those components. The underlying operating system, the backend database being used, third-party security tools, dependent libraries, etc. must all be taken into account when determining exploitability. A source code review is not able to take these factors into account.
iv) Very few white-box tests can be done without modifying the program, changing values to force different execution paths, or generating a full range of inputs to test a particular function.
v) It misses cases omitted from the code.


What is White Box Testing?

White box testing strategy deals with the internal logic and structure of the code. White box testing is also called glass box, structural, open box, or clear box testing. Tests written based on the white box testing strategy cover the code that has been written: its branches, paths, statements, and internal logic. White box testing is testing from the inside--tests that go in and test the actual program structure.
In order to implement white box testing, the tester has to deal with the code and hence needs knowledge of coding and logic, i.e., the internal working of the code. White box testing also needs the tester to look into the code and find out which unit/statement/chunk of code is malfunctioning. The tester chooses test case inputs to exercise paths through the code and determines the appropriate outputs.
While white box testing is applicable at the unit, integration and system levels of the software testing process, it's typically applied to the unit. So while it normally tests paths within a unit, it can also test paths between units during integration, and between subsystems during a system level test. Though this method of test design can uncover an overwhelming number of test cases, it might not detect unimplemented parts of the specification or missing requirements. But you can be sure that all paths through the test object are executed.
For a further definition, White box testing is a test case design method that uses the control structure of the procedural design to derive test cases. Test cases can be derived that
1. guarantee that all independent paths within a module have been exercised at least once,
2. exercise all logical decisions on their true and false sides,
3. execute all loops at their boundaries and within their operational bounds, and
4. exercise internal data structures to ensure their validity.
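
As a rough sketch of criteria 1-3 (independent paths, both sides of each decision, and loop boundaries), consider a small hypothetical function and tests chosen from its control structure rather than from its specification:

```python
def classify_grades(scores):
    """Hypothetical unit under test: count passes and fails in a list of scores."""
    passed = failed = 0
    for score in scores:          # loop: exercise zero, one, and many iterations
        if score >= 40:           # decision: exercise both true and false sides
            passed += 1
        else:
            failed += 1
    return passed, failed

# White-box test cases derived from the control structure above:
assert classify_grades([]) == (0, 0)             # loop executed zero times
assert classify_grades([75]) == (1, 0)           # one iteration, decision true
assert classify_grades([10]) == (0, 1)           # one iteration, decision false
assert classify_grades([10, 75, 40]) == (2, 1)   # many iterations, both branches
```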

Typical white box test design techniques include:
* Control flow testing
* Data flow testing


Orthogonal Array Testing Strategy (OATS)

The Orthogonal Array Testing Strategy (OATS) is a systematic, statistical way of testing pair-wise interactions. It provides representative (uniformly distributed) coverage of all variable pair combinations. This makes the technique particularly useful for integration testing of software components (especially in OO systems where multiple subclasses can be substituted as the server for a client). It is also quite useful for testing combinations of configurable options (such as a web page that lets the user choose the font style, background color, and page layout).
Orthogonal Array Testing is a statistical testing technique implemented by Taguchi. This method is extremely valuable for testing complex applications and e-commerce products. The e-commerce world presents interesting challenges for test case design and testing coverage. The black box testing technique alone will not provide sufficient coverage, because the underlying infrastructure connections between servers and legacy systems will not be understood by the black box testing team. A gray box testing team will have the necessary knowledge, and combined with the power of statistical testing, an elaborate testing net can be set up and implemented.

The theory:
Orthogonal Array Testing (OAT) can be used to reduce the number of combinations and provide maximum coverage with a minimum number of test cases. An orthogonal array is an array of values in which each column represents a variable, i.e., a factor that can take a certain set of values called levels, and each row represents a test case. In OAT, the factors are combined pair-wise rather than representing all possible combinations of factors and levels.
Orthogonal arrays are two-dimensional arrays of numbers which possess the interesting property that choosing any two columns in the array yields an even distribution of all the pair-wise combinations of values in the array.
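
As a small, hand-checkable illustration (not from the original post), the standard L4(2^3) orthogonal array covers three two-level factors in only four rows, and any two of its columns contain every pair of levels exactly once:

```python
from itertools import combinations

# L4(2^3) orthogonal array: rows are test cases, columns are factors,
# and 0/1 are the two levels each factor can take.
L4 = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

# Check the defining property: every pair of columns contains each of the
# four possible level combinations (0,0), (0,1), (1,0), (1,1) exactly once.
for c1, c2 in combinations(range(3), 2):
    pairs = [(row[c1], row[c2]) for row in L4]
    assert sorted(pairs) == [(0, 0), (0, 1), (1, 0), (1, 1)]

print("All pair-wise combinations covered with only", len(L4), "test cases")
```

Testing all combinations of three two-level factors would need 8 cases; the orthogonal array achieves pair-wise coverage with 4.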

Why use this technique?
Test case selection poses an interesting dilemma for the software professional. Almost everyone has heard that you can't test quality into a product, that testing can only show the existence of defects and never their absence, and that exhaustive testing quickly becomes impossible -- even in small systems.
However, testing is necessary. Being intelligent about which test cases you choose can make all the difference between (a) endlessly executing tests that just aren't likely to find bugs and don't increase your confidence in the system and (b) executing a concise, well-defined set of tests that are likely to uncover most (not all) of the bugs and that give you a great deal more comfort in the quality of your software.

The basic fault model that lies beneath this technique is:
1. Interactions and integrations are a major source of defects.
2. Most of these defects are not a result of complex interactions such as "When the background is blue and the font is Arial and the layout has menus on the right and the images are large and it's a Thursday then the tables don't line up properly."
3. Most of these defects arise from simple pair-wise interactions such as "When the font is Arial and the menus are on the right the tables don't line up properly."
4. With so many possible combinations of components or settings, it is easy to miss one.
5. Randomly selecting values to create all of the pair-wise combinations is bound to create inefficient test sets and test sets with random, senseless distribution of values.

OATS provides a means to select a test set that:
1. Guarantees testing the pair-wise combinations of all the selected variables.
2. Creates an efficient and concise test set with many fewer test cases than testing all combinations of all variables.
3. Creates a test set that has an even distribution of all pair-wise combinations.
4. Exercises some of the complex combinations of all the variables.
5. Is simpler to generate and less error prone than test sets created by hand.


Sunday, August 19, 2007

Black Box testing techniques

Black box testing is a testing technique in which the tester has no knowledge of the internal functionality or structure of the system. This technique treats the system as a black or closed box. The tester knows only the formal inputs and projected results; the tester does not know how the program actually arrives at those results, and hence tests the system based on the functional specifications given to him. That is the reason black box testing is also considered functional testing. This technique is also called behavioral testing, opaque box testing, or simply closed box testing. Although black box testing is a form of behavioral testing, behavioral test design is slightly different from black-box test design because the use of internal knowledge is not forbidden in behavioral testing.

Black Box Testing has many types of techniques like:

1. Decision Table

Decision tables are a precise yet compact way to model complicated logic. Decision tables, like if-then-else and switch-case statements, associate conditions with actions to perform. But, unlike the control structures found in traditional programming languages, decision tables can associate many independent conditions with several actions in an elegant way.
Decision tables make it easy to observe that all possible conditions are accounted for, since every combination of the conditions is given its own entry. When conditions are omitted from a decision table, it is obvious even at a glance that logic is missing. Compare this to traditional control structures, where it is not easy to notice gaps in program logic with a mere glance; sometimes it is difficult to follow which conditions correspond to which actions!
Just as decision tables make it easy to audit control logic, decision tables demand that a programmer think of all possible conditions. With traditional control structures, it is easy to forget about corner cases, especially when the else statement is optional. Since logic is so important to programming, decision tables are an excellent tool for designing control logic.
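
As a rough sketch (the loan-screening rule set is hypothetical), a decision table can be represented directly as a mapping from condition combinations to actions, which makes a missing combination easy to spot:

```python
# Decision table for a hypothetical loan-screening rule.
# Conditions: (has_steady_income, has_good_credit) -> action
decision_table = {
    (True,  True):  "approve",
    (True,  False): "manual review",
    (False, True):  "manual review",
    (False, False): "reject",
}

def screen_application(has_steady_income: bool, has_good_credit: bool) -> str:
    # Every combination of the two conditions appears in the table,
    # so a missing rule would surface immediately as a KeyError.
    return decision_table[(has_steady_income, has_good_credit)]

assert screen_application(True, True) == "approve"
assert screen_application(False, False) == "reject"
```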

2. Equivalence Partitioning

The testing theory related to equivalence partitioning says that only one test case from each partition is needed to evaluate the behaviour of the program for the related partition. In other words, it is sufficient to select one test case out of each partition to check the behaviour of the program; using more or even all test cases of a partition will not find new faults in the program. The values within one partition are considered to be "equivalent". Thus the number of test cases can be reduced considerably.
Types of Equivalence Classes
* Continuous classes run from one point to another, with no clear separations of values. An example is a temperature range.
* Discrete classes have clear separation of values. Discrete classes are sets, or enumerations.
* Boolean classes are either true or false. Boolean classes only have two values, either true or false, on or off, yes or no. An example is whether a checkbox is checked or unchecked.
Equivalence partitioning is not a stand-alone method for determining test cases. It has to be supplemented by boundary value analysis: having determined the partitions of possible inputs, the method of boundary value analysis is applied to select the most effective test cases out of these partitions.
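
As a minimal sketch (the age-validation rule is hypothetical), the input domain is split into partitions and one representative value is tested from each:

```python
def is_valid_voting_age(age: int) -> bool:
    """Hypothetical unit under test: valid ages are 18 through 120."""
    return 18 <= age <= 120

# One representative test value per equivalence partition:
assert is_valid_voting_age(10) is False    # partition: below the valid range
assert is_valid_voting_age(40) is True     # partition: inside the valid range
assert is_valid_voting_age(150) is False   # partition: above the valid range
```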

3. Boundary Value Analysis

Boundary value analysis is a software testing design technique to determine test cases covering off-by-one errors. The boundaries of software component input ranges are areas of frequent problems. Testing experience has shown that especially the boundaries of input ranges to a software component are liable to defects.
To set up boundary value analysis test cases you first have to determine which boundaries you have at the interface of a software component. This has to be done by applying the equivalence partitioning technique. Boundary value analysis and equivalence partitioning are inevitably linked together.
The tendency is to relate boundary value analysis more to so-called black box testing, which strictly checks a software component at its interfaces without consideration of its internal structures. But looking closer at the subject, there are cases where it also applies to white box testing.
After determining the necessary test cases with equivalence partitioning and subsequent boundary value analysis, it is necessary to define the combinations of the test cases when there are multiple inputs to a software component.
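
Continuing the hypothetical age-validation example from the previous section, boundary value analysis adds test cases at, and immediately around, each partition boundary:

```python
def is_valid_voting_age(age: int) -> bool:
    """Hypothetical unit under test: valid ages are 18 through 120."""
    return 18 <= age <= 120

# Boundary value analysis: test each boundary and one step to either side.
assert is_valid_voting_age(17) is False   # just below the lower boundary
assert is_valid_voting_age(18) is True    # on the lower boundary
assert is_valid_voting_age(19) is True    # just above the lower boundary
assert is_valid_voting_age(119) is True   # just below the upper boundary
assert is_valid_voting_age(120) is True   # on the upper boundary
assert is_valid_voting_age(121) is False  # just above the upper boundary
```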

4. Use Case Method

A use case is a technique used in software and systems engineering to capture the functional requirements of a system. Use cases describe the interaction between a primary actor (the initiator of the interaction) and the system itself, represented as a sequence of simple steps. Actors are something or someone that exists outside the system under study and takes part in a sequence of activities in a dialogue with the system to achieve some goal: they may be end users, other systems, or hardware devices. Each use case is a complete series of events, from the point of view of the actor.
Each use case provides one or more scenarios that convey how the actor will interact with the system to achieve a specific business goal or function. Use cases typically avoid technical jargon, preferring instead the language of the end user or domain expert. Use cases are often co-authored by systems analysts and end users. They are separate and distinct from UML use case diagrams, which allow one to abstractly work with groups of use cases.
A use case should:
* describe how the system shall be used by an actor to achieve a particular goal.
* have no implementation-specific language.
* be at the appropriate level of detail.
* not include detail regarding user interfaces and screens; this is done in user-interface design.

5. State Transition Tables

In automata theory and sequential logic, a state transition table is a table showing what state (or states in the case of a nondeterministic finite automaton) a finite semiautomaton or finite state machine will move to, based on the current state and other inputs. A state table is essentially a truth table in which some of the inputs are the current state, and the outputs include the next state, along with other outputs. A state table is one of many ways to specify a state machine, other ways being a state diagram, and a characteristic equation.
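
As a rough sketch (the turnstile machine is a common textbook example, hypothetical here), a state transition table can be written as a dictionary keyed by the current state and the input, and test cases then exercise every transition in the table:

```python
# State transition table for a hypothetical turnstile:
# (current_state, input) -> next_state
transitions = {
    ("locked",   "coin"): "unlocked",
    ("locked",   "push"): "locked",
    ("unlocked", "coin"): "unlocked",
    ("unlocked", "push"): "locked",
}

def next_state(state: str, event: str) -> str:
    return transitions[(state, event)]

# State transition testing: exercise every row of the table once.
assert next_state("locked", "coin") == "unlocked"
assert next_state("locked", "push") == "locked"
assert next_state("unlocked", "push") == "locked"
assert next_state("unlocked", "coin") == "unlocked"
```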

6. Cross-functional testing
7. Pairwise testing


Wednesday, August 15, 2007

Definition of testing terms: M - Z

Monkey Testing (smart monkey testing). Inputs are generated from probability distributions that reflect actual expected usage statistics, e.g., from user profiles. There are different levels of IQ in smart monkey testing. In the simplest, each input is considered independent of the other inputs: if a given test requires an input vector with five components, in low IQ testing these would be generated independently. In high IQ monkey testing, the correlation (e.g., the covariance) between these input distributions is taken into account. In all branches of smart monkey testing, the input is considered as a single event.

Maximum Simultaneous Connection testing. This is a test performed to determine the number of connections which the firewall or Web server is capable of handling.

Mutation testing. A testing strategy where small variations to a program are inserted (a mutant), followed by execution of an existing test suite. If the test suite detects the mutant, the mutant is retired. If undetected, the test suite must be revised.
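
For illustration (everything below is hypothetical and not part of the original glossary), a mutant is produced by making one small change to the program; if the existing test suite still passes against the mutant, the suite needs strengthening:

```python
def is_adult(age):
    """Original program under test."""
    return age >= 18

def is_adult_mutant(age):
    """Mutant: the mutation tool changed '>=' to '>'."""
    return age > 18

def run_tests(func):
    """Existing test suite, run against either the original or a mutant."""
    assert func(30) is True
    assert func(10) is False
    assert func(18) is True   # the boundary test is what kills this mutant

run_tests(is_adult)           # passes on the original program
try:
    run_tests(is_adult_mutant)
    print("Mutant survived - the test suite should be strengthened")
except AssertionError:
    print("Mutant killed - the mutant is retired")
```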

Multiple Condition Coverage. A test coverage criteria which requires enough test cases such that all possible combinations of condition outcomes in each decision, and all points of entry, are invoked at least once.[G.Myers] Contrast with branch coverage, condition coverage, decision coverage, path coverage, statement coverage.

Negative test. A test whose primary purpose is falsification; that is, a test designed to break the software.

Orthogonal array testing: A technique that can be used to reduce the number of combinations and provide maximum coverage with a minimum number of test cases. Note that it is an old and proven technique: it was first introduced by Plackett and Burman in 1946 and was implemented by G. Taguchi in 1987.

Orthogonal array testing: Mathematical technique to determine which variations of parameters need to be tested.

Oracle. Test Oracle: a mechanism to produce the predicted outcomes to compare with the actual outcomes of the software under test [from BS7925-1]

Parallel Testing Testing a new or an alternate data processing system with the same source data that is used in another system. The other system is considered as the standard of comparison. Syn: parallel run.[ISO]

Performance Testing. Testing conducted to evaluate the compliance of a system or component with specific performance requirements [BS7925-1]

Prior Defect History Testing. Test cases are created or rerun for every defect found in prior tests of the system.

Qualification Testing. (IEEE) Formal testing, usually conducted by the developer for the consumer, to demonstrate that the software meets its specified requirements. See: acceptance testing.

Quality. The degree to which a program possesses a desired combination of attributes that enable it to perform its specified end use.

Quality Assurance (QA) Consists of planning, coordinating and other strategic activities associated with measuring product quality against external requirements and specifications (process-related activities).

Quality Control (QC) Consists of monitoring, controlling and other tactical activities associated with the measurement of product quality goals.

Our definition of Quality: Achieving the target (not conformance to requirements as used by many authors) & minimizing the variability of the system under test

Race condition defect. Many concurrent defects result from data-race conditions. A data-race condition may be defined as two accesses to a shared variable, at least one of which is a write, with no mechanism used by either to prevent simultaneous access. However, not all race conditions are defects.

Recovery testing Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Regression Testing. Testing conducted for the purpose of evaluating whether or not a change to the system (all CM items) has introduced a new failure. Regression testing is often accomplished through the construction, execution and analysis of product and system tests.

Regression Testing. - testing that is performed after making a functional improvement or repair to the program. Its purpose is to determine if the change has regressed other aspects of the program

Reengineering .The process of examining and altering an existing system to reconstitute it in a new form. May include reverse engineering (analyzing a system and producing a representation at a higher level of abstraction, such as design from code), restructuring (transforming a system from one representation to another at the same level of abstraction), recommendation (analyzing a system and producing user and support documentation), forward engineering (using software products derived from an existing system, together with new requirements, to produce a new system), and translation (transforming source code from one language to another or from one version of a language to another).

Reference testing. A way of deriving expected outcomes by manually validating a set of actual outcomes. A less rigorous alternative to predicting expected outcomes in advance of test execution.

Reliability testing. Verify the probability of failure free operation of a computer program in a specified environment for a specified time.

Range Testing. For each input identifies the range over which the system behavior should be the same.

Risk management. An organized process to identify what can go wrong, to quantify and access associated risks, and to implement/control the appropriate approach for preventing or handling each risk identified.

Robust test. A test that compares a small amount of information, so that unexpected side effects are less likely to affect whether the test passes or fails.

Sanity Testing - typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is often crashing systems, bogging down systems to a crawl, or destroying databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.

Sensitive test. A test that compares a large amount of information, so that it is more likely to detect unexpected differences between the actual and expected outcomes of the test.

Specification-based test. A test whose inputs are derived from a specification.

State-based testing Testing with test cases developed by modeling the system under test as a state machine

State Transition Testing. A technique in which the states of a system are first identified and then test cases are written to exercise the triggers that cause a transition from one state to another.

Static testing. Source code analysis. Analysis of source code to expose potential defects.

Statistical testing. A test case design technique in which a model is used of the statistical distribution of the input to construct representative test cases.

Stealth bug. A bug that removes information useful for its diagnosis and correction.

Storage test. Study how memory and space is used by the program, either in resident memory or on disk. If there are limits of these amounts, storage tests attempt to prove that the program will exceed them.

Stress / Load / Volume test. Tests that provide a high degree of activity, either using boundary conditions as inputs or multiple copies of a program executing in parallel as examples.

Structural Testing. (1) (IEEE) Testing that takes into account the internal mechanism [structure] of a system or component. Types include branch testing, path testing, statement testing. (2) Testing to ensure each program statement is made to execute during testing and that each program statement performs its intended function. Contrast with functional testing. Syn: white-box testing, glass-box testing, logic driven testing.

System testing Black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.

Table testing. Test access, security, and data integrity of table entries.

Test Bed. An environment containing the hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test.

Test Case. A set of test inputs, executions, and expected results developed for a particular objective.

Test Coverage The degree to which a given test or set of tests addresses all specified test cases for a given system or component.

Test Criteria. Decision rules used to determine whether software item or software feature passes or fails a test.

Test Documentation. (IEEE) Documentation describing plans for, or results of, the testing of a system or component, Types include test case specification, test incident report, test log, test plan, test procedure, test report.

Test Driver A software module or application used to invoke a test item and, often, provide test inputs (data), control and monitor execution. A test driver automates the execution of test procedures.

Test Harness A system of test drivers and other tools to support test execution (e.g., stubs, executable test cases, and test drivers). See: test driver.

Test Item. A software item which is the object of testing.

Test Log A chronological record of all relevant details about the execution of a test.

Test Plan. A high-level document that defines a testing project so that it can be properly measured and controlled. It defines the test strategy and organized elements of the test life cycle, including resource requirements, project schedule, and test requirements

Test Procedure. A document, providing detailed instructions for the [manual] execution of one or more test cases. [BS7925-1] Often called - a manual test script.

Test Status. The assessment of the result of running tests on software.

Test Stub A dummy software component or object used (during development and testing) to simulate the behaviour of a real component. The stub typically provides test output.

Test Suites A test suite consists of multiple test cases (procedures and data) that are combined and often managed by a test harness.

Test Tree. A physical implementation of a Test Suite.

Testability. Attributes of software that bear on the effort needed for validating the modified software

Testing. The execution of tests with the intent of proving that the system and application under test does or does not perform according to the requirements specification.

Unit Testing. Testing performed to isolate and expose faults and failures as soon as the source code is available, regardless of the external interfaces that may be required. Oftentimes, the detailed design and requirements documents are used as a basis to compare how and what the unit is able to perform. White and black-box testing methods are combined during unit testing.

Usability testing. Testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer.

Validation. The comparison between the actual characteristics of something (e.g., a product of a software project) and the expected characteristics. Validation is checking that you have built the right system.

Verification. The comparison between the actual characteristics of something (e.g., a product of a software project) and the specified characteristics. Verification is checking that we have built the system right.

Volume testing. Testing where the system is subjected to large volumes of data.

Walkthrough. In the most usual form of the term, a walkthrough is a step-by-step simulation of the execution of a procedure, as when walking through code line by line with an imagined set of inputs. The term has been extended to the review of material that is not procedural, such as data descriptions, reference manuals, specifications, etc.

White Box Testing (glass-box). Testing done under a structural testing strategy that requires complete access to the object's structure, that is, the source code.


Definition of testing terms: D - L

Data flow testing Testing in which test cases are designed based on variable usage within the code.

Database testing. Check the integrity of database field values.

Defect. The difference between the functional specification (including user documentation) and the actual program text (source code and data). Often reported as a problem and stored in a defect-tracking and problem-management system.

Defect Also called a fault or a bug, a defect is an incorrect part of code that is caused by an error. An error of commission causes a defect of wrong or extra code. An error of omission results in a defect of missing code. A defect may cause one or more failures.

Depth test. A test case that exercises some part of a system to a significant level of detail.

Decision Coverage. A test coverage criteria requiring enough test cases such that each decision has a true and false result at least once, and that each statement is executed at least once. Syn: branch coverage. Contrast with condition coverage, multiple condition coverage, path coverage, statement coverage.

Dirty testing Negative testing.

Dynamic testing. Testing, based on specific test cases, by execution of the test object or running programs

End-to-End testing. Similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Equivalence Partitioning: An approach where classes of inputs are categorized for product or function validation. This usually does not include combinations of input, but rather a single representative value from each class. For example, for a given function there may be several classes of input that may be used for positive testing. If a function expects an integer and receives an integer as input, this would be considered a positive test assertion. On the other hand, if a character or any input class other than an integer is provided, this would be considered a negative test assertion or condition.

Error: An error is a mistake of commission or omission that a person makes. An error causes a defect. In software development one error may cause one or more defects in requirements, designs, programs, or tests.

Errors: The amount by which a result is incorrect. Mistakes are usually a result of a human action. Human mistakes (errors) often result in faults contained in the source code, specification, documentation, or other product deliverable. Once a fault is encountered, the end result will be a program failure. The failure usually has some margin of error, either high, medium, or low.

Error Guessing: Another common approach to black-box validation. Black-box testing is when everything other than the source code may be used for testing. This is the most common approach to testing. Error guessing is when random inputs or conditions are used for testing. Random in this case includes a value either produced by a computerized random number generator, or an ad hoc value or test condition provided by an engineer.

Error guessing. A test case design technique where the experience of the tester is used to postulate what faults exist, and to design tests specially to expose them [from BS7925-1]

Error seeding. The purposeful introduction of faults into a program to test effectiveness of a test suite or other quality assurance program.

Exception Testing. Identify error messages and exception handling processes and the conditions that trigger them.

Exhaustive Testing. (NBS) Executing the program with all possible combinations of values for program variables. Feasible only for small, simple programs.

Exploratory Testing: An interactive process of concurrent product exploration, test design, and test execution. The heart of exploratory testing can be stated simply: The outcome of this test influences the design of the next test.

Failure: A failure is a deviation from expectations exhibited by software and observed as a set of symptoms by a tester or user. A failure is caused by one or more defects. The Causal Trail. A person makes an error that causes a defect that causes a failure.

Formal Testing. (IEEE) Testing conducted in accordance with test plans and procedures that have been reviewed and approved by a customer, user, or designated level of management. Antonym: informal testing.

Free Form Testing. Ad hoc or brainstorming using intuition to define test cases.

Functional testing Application of test data derived from the specified functional requirements without regard to the final program structure. Also known as black-box testing.

Gray box testing Tests involving inputs and outputs, but test design is educated by information about the code or the program operation of a kind that would normally be out of scope of view of the tester.

Gray box testing Tests designed based on knowledge of algorithms, internal states, architectures, or other high-level descriptions of the program behavior.

Gray box testing Examines the activity of back-end components during test case execution. Two types of problems that can be encountered during gray-box testing are:

  • A component encounters a failure of some kind, causing the operation to be aborted. The user interface will typically indicate that an error has occurred.
  • The test executes in full, but the content of the results is incorrect. Somewhere in the system, a component processed data incorrectly, causing the error in the results.

High-level tests. These tests involve testing whole, complete products.

Inspection A formal evaluation technique in which software requirements, design, or code are examined in detail by person or group other than the author to detect faults, violations of development standards, and other problems [IEEE94]. A quality improvement process for written material that consists of two dominant components: product (document) improvement and process improvement (document production and inspection).

Integration. The process of combining software components, hardware components, or both into an overall system.

Integration testing - testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

Integration Testing. Testing conducted after unit and feature testing. The intent is to expose faults in the interactions between software modules and functions. Either top-down or bottom-up approaches can be used. A bottom-up method is preferred, since it leads to earlier unit testing (step-level integration). This method is contrary to the big-bang approach, where all source modules are combined and tested in one step. The big-bang approach to integration should be discouraged.

Interface Tests. Programs that provide test facilities for external interfaces and function calls. Simulation is often used to test external interfaces that currently may not be available for testing or are difficult to control. For example, hardware resources such as hard disks and memory may be difficult to control; therefore, simulation can provide the characteristics or behaviors for a specific function.

Latent bug A bug that has been dormant (unobserved) in two or more releases.

Lateral testing. A test design technique based on lateral thinking principles, to identify faults.

Load testing Testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.

Load/stress test. A test designed to determine how heavy a load the application can handle.

Load-stability test. A test designed to determine whether a Web application will remain serviceable over an extended time span.

Load-isolation test. The workload for this type of test is designed to contain only the subset of test cases that caused the problem in previous testing.


Definition of testing terms

Acceptance Test Formal tests (often performed by a customer) to determine whether or not a system has satisfied predetermined acceptance criteria. These tests are often used to enable the customer (either internal or external) to determine whether or not to accept a system.

Ad Hoc Testing Testing carried out using no recognised test case design technique.

Alpha Testing Testing of a software product or system conducted at the developer's site by the customer.

Assertion Testing. (NBS) A dynamic analysis technique which inserts assertions about the relationship between program variables into the program code. The truth of the assertions is determined as the program executes.

Automated Testing Software testing which is assisted with software technology that does not require operator (tester) input, analysis, or evaluation.

Bug: glitch, error, goof, slip, fault, blunder, boner, howler, oversight, botch, delusion, elision. [B. Beizer, 1990], defect, issue, problem

Beta Testing. Testing conducted at one or more customer sites by the end-user of a delivered software product or system.

Benchmarks Programs that provide performance comparison for software, hardware, and systems.

Big-bang testing Integration testing where no incremental testing takes place prior to all the system's components being combined to form the system.

Black box testing. A testing method where the application under test is viewed as a black box and the internal behavior of the program is completely ignored. Testing occurs based upon the external specifications. Also known as behavioral testing, since only the external behaviors of the program are evaluated and analyzed.

Boundary Value Analysis (BVA). BVA is different from equivalence partitioning in that it focuses on "corner cases" or values that are usually at or just outside the range defined by the specification. This means that if a function expects all values in the range of negative 100 to positive 1000, test inputs would include negative 101 and positive 1001. BVA attempts to derive such boundary values and is often used as a technique for stress, load, or volume testing. This type of validation is usually performed after positive functional validation has completed (successfully) using requirements specifications and user documentation.

Breadth test. A test suite that exercises the full scope of a system from a top-down perspective but does not test any aspect in detail.

Cause Effect Graphing. (1) [NBS] Test data selection technique. The input and output domains are partitioned into classes and analysis is performed to determine which input classes cause which effect. A minimal set of inputs is chosen which will cover the entire effect set. (2) A systematic method of generating test cases representing combinations of conditions. See: testing, functional.

Clean test. A test whose primary purpose is validation; that is, a test designed to demonstrate the software's correct working (syn. positive test).

Code Inspection. A manual [formal] testing [error detection] technique where the programmer reads source code, statement by statement, to a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards. Contrast with code audit, code review, code walkthrough. This technique can also be applied to other software and configuration items.

Code Walkthrough. A manual testing [error detection] technique where program [source code] logic [structure] is traced manually [mentally] by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.

Compatibility bug A revision to the framework breaks a previously working feature: a new feature is inconsistent with an old feature, or a new feature breaks an unchanged application rebuilt with the new framework code.

Compatibility Testing. The process of determining the ability of two or more systems to exchange information. In a situation where the developed software replaces an already working program, an investigation should be conducted to assess possible compatibility problems between the new software and other programs or systems.

Condition Coverage. A test coverage criteria requiring enough test cases such that each condition in a decision takes on all possible outcomes at least once, and each point of entry to a program or subroutine is invoked at least once. Contrast with branch coverage, decision coverage, multiple condition coverage, path coverage, statement coverage.

Conformance directed testing. Testing that seeks to establish conformance to requirements or specification.


Software testing best practices

The objective of testing is to ensure a higher quality for the software product. The better organized the testing is, the greater the likelihood that the final product will have fewer defects and better quality. But it is not an easy task to determine how effective the testing is. One way to do that is to benchmark against some best practices, such as the ones listed below:
These best practices can be broken down into levels, with benchmarking starting at the lowest levels, and after meeting these levels, moving to higher levels. At the end, if there is an impression that you are meeting most of these benchmarks, then there is a higher chance that you will have a very effective testing setup.

Start at the basic level:

  • Start with functional Specifications
  • Reviews and Inspection
  • Formal entry and exit criteria
  • Functional test - variations
  • Multi-platform testing
  • Internal Betas
  • Automated test execution
  • Beta programs
  • Daily Builds
Functional specification: Enables the writing of test plans and test cases to start while the development process is underway. Further, involvement with the functional specifications deepens the understanding of customer workflows. Without functional specifications, the degree of thoroughness required is missing.

Reviews: Software inspection is a very well-known, efficient technique for debugging code.

Formal entry and exit criteria: Defining criteria for every step may seem excessive, but it creates a mindset that has already thought through the beginning and end of many stages and milestones, and it makes declaring milestones a lot easier, with agreement between the Dev and QE teams.

Functional test - variations: This involves varying the sets of input conditions to achieve different output conditions. If this is done in a comprehensive way, then the blackbox testing is said to be close to complete.

Multiple platform testing: Nowadays software is available on multiple platforms, and porting from one platform to another results in some changes. It is necessary to define a strategy to figure out the level of testing required across multiple platforms.

Internal Betas: External betas relate to sending software with a certain quality level to testers outside the company. However, if the software development company has a certain size, it can even send the software to its internal folks, providing a good beta testing platform.

Automated test execution: Automation is an integral part of testing processes. Automation can replace a fair amount of the manual testing required. In addition, with automation, repetitive testing of software becomes much easier. For example, if you have daily builds, automation can be used to set up a series of test cases that will determine whether the build is usable or not, without human intervention.
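
As a rough illustration of that last point (the test names are hypothetical, and the json module merely stands in for the real application code), a small build-verification suite can be run automatically against each daily build to decide whether it is usable for further testing:

```python
import sys
import unittest

class BuildVerificationTests(unittest.TestCase):
    """Quick smoke tests run automatically against every daily build."""

    def test_application_imports(self):
        # Stand-in for importing the real application package; failure here
        # means the build is not even loadable.
        import json
        self.assertIsNotNone(json)

    def test_core_operation_works(self):
        # A single representative check of core functionality.
        import json
        self.assertEqual(json.loads('{"ok": true}'), {"ok": True})

if __name__ == "__main__":
    outcome = unittest.main(exit=False).result
    # A non-zero exit code marks the daily build as unusable for further testing.
    sys.exit(0 if outcome.wasSuccessful() else 1)
```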

Beta Programs: These are a very efficient method of ensuring that there is a wide section of users available to test. They are as close to end users as possible, and they provide feedback on functionality as well as wide coverage. A good example is software used for CD / DVD writing, where beta testing provides coverage of a large number of burning devices, which is not possible with the testing team alone.

Daily Builds: Getting a process of daily builds is very useful during the development process. Bugs are turned around faster, new features are available much sooner, and even if a build is not so good, the next one will be available soon. It may seem like overhead, but is very useful.


Saturday, August 11, 2007

What is the Capability Maturity Model (CMM)


CMM is a framework set by SEI, which provides a general roadmap for process improvement. Software Process Capability describes the range of expected results that can be achieved by following the process. The process capability of an organization determines what can be expected from the organization in terms of quality and productivity. The goal of process improvement is to improve the process capability.

A maturity level is a well-defined evolutionary plateau toward achieving a mature software process. There are five well-defined maturity levels for a software process. These are:

Initial
Repeatable
Defined
Managed
Optimizing

The Initial Process (Level 1) is essentially an ad hoc process that has no formalized method for any activity. Basic project controls for ensuring that activities are being done properly and that the project plan is being adhered to are missing. In a crisis, the project plans and development processes are abandoned in favor of a code-and-test type of approach. Success in such organizations depends solely on the quality and capability of individuals. The process capability is unpredictable as the process constantly changes. Organizations at this level can benefit most by improving project management, quality assurance, and change control.

In the Repeatable process (Level 2), policies for managing a software project and procedures to implement those policies exist. Project management is well developed in a process at this level. Some of the characteristics of a process at this level are: project commitments are realistic and based on past experience with similar projects, cost and schedule are tracked and problems resolved when they arise, formal configuration control mechanisms are in place, and software project standards are defined and followed. Essentially, results obtained by this process can be repeated, as the project planning and tracking are formal.

In the Defined process (Level 3), the organization has standardized on a software process, which is properly documented. A software group exists in the organization that owns and manages the process. In the process, each step is carefully defined with verifiable entry and exit criteria, methodologies for performing the step, and verification mechanisms for the output of the step. In this process both the development and management processes are formal.

In the Managed process (Level 4), quantitative goals exist for process and products. Data is collected from software processes and is used to build models to characterize the process. Hence, measurement plays an important role in a process at this level. Owing to the models built, the organization has good insight into the process and its deficiencies. The results of using such a process can be predicted in quantitative terms.

In the Optimizing process (Level 5), the focus of the organization is on continuous process improvement. Data is collected and routinely analyzed to identify areas that can be strengthened to improve quality or productivity. New technologies and tools are introduced and their effects measured in an effort to improve the performance of the process. Best software engineering and management practices are used throughout the organization.


CMM Model


Software Development Model: Spiral

What is the Spiral Model? A brief explanation:

The software development activities can be organized like a spiral that has many cycles. The radial dimension represents the cumulative cost incurred in accomplishing the steps done so far, and the angular dimension represents the progress made in completing each cycle of the spiral.

Each cycle begins with identification of the objectives for that cycle, the different alternatives that are possible for achieving the objectives, and the constraints that exist. This is the first quadrant of the cycle. The next step in the cycle is to evaluate these different alternatives based on the objectives and constraints. The focus of evaluation in this step is the risk perception for the project; risks reflect the chances that some of the objectives of the project may not be met. The next step is to develop strategies that resolve the uncertainties and risks. This step may involve activities such as benchmarking, simulation, and prototyping. Next, the software is developed, keeping in mind the risks. Finally, the next stage is planned.

For example: In round one, a concept of operation might be developed. The objectives are stated more precisely and quantitatively and the cost and other constraints are defined precisely. The risks here are typically whether or not the goals can be met within the constraints. The plan for the next phase will be developed, which will involve defining separate activities for the project. In round two the top-level requirements are developed. In succeeding rounds the actual development may be done.


You might have heard the term 'Software Quality Factors' before, or maybe you have not. So, what is it?

Quality factors are the factors which affect the quality of the software. Quality of a software product has three dimensions; each dimension deals with a set of quality factors:


* Product Operation
  - Correctness
  - Reliability
  - Efficiency
  - Integrity
  - Usability
* Product Transition
  - Portability
  - Reusability
  - Interoperability
* Product Revision
  - Maintainability
  - Flexibility
  - Testability


More details about verification vs. validation

Verification & Validation

Essentially, the very nature of Quality Testing (QT) is broken into two aspects: verification testing and validation testing. It is thus possible to say:

QT = (Verification + Validation)

IEEE/ANSI defines verification testing as "The process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase." This is somewhat vague, although verification testing is generally thought of as a proactive type of testing. Verification is, to a large extent, the Quality Control (QC) activities that are done throughout the lifecycle to help assure that interim product deliverables meet their initial specifications. Verification is also the less understood of the two.

So, for example, if you are "Verifying Functional Design" then you have to consider the result of what made the functional design: the process of translating user requirements into a set of external human interfaces. This gives a functional design specification - it describes what the user can see, but nothing of what they cannot see. So, in verifying that type of document, the goal here is to determine how successfully the user requirements were incorporated into the functional design. The requirements document serves as a source document to verify the functional design. The best practice is to look for unwarranted additions or omissions. Or consider the process of "Verifying Internal Design." The process by which this document is created is that the functional specification is broken up into a detailed set of internal attributes: data structures, data diagrams, flow diagrams, etc. This gives an internal design specification - it describes how the product has to be built. So, during verification, we can begin looking at limits and constraints within the product - basic boundary conditions. We also learn about performance nodes and possible failure conditions or scenarios.

These tend to be considered proactive activities because you are attempting to catch problems as early as possible and you are verifying whether the "right" thing is being done. This is different from validation, which does not ask: are we doing the right thing? It asks: are we doing what we said was the right thing? In other words, during verification you are, to a large extent, defining what the "right thing to do" actually is. In validation you are making sure you adhered to your previous definitions. And that means that validation is inherently more reactive in nature than verification.

The IEEE/ANSI definition of validation is "The process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements."

These requirements can refer to a lot of things. The idea of verification is asking whether or not the requirements were formulated correctly. The idea of validation is asking whether the product was constructed in accordance with those correctly-formulated requirements. In this sense, validation is usually used to refer to the "test phase" of the lifecycle, which assures that the end product (e.g., system, application, etc.) meets stated (and sometimes implied) specifications. There are, in general, eight validation axioms, as they are usually referred to, and those are given here:

1. Testing can be used to show the presence of errors, but never their absence.
2. One of the most difficult problems in testing is knowing when to stop.
3. Avoid unplanned, non-reusable, throwaway test cases.
4. A necessary part of a test case is a definition of the expected output or result. A comparison should be made of the actual versus the expected result.
5. Test cases must be written for invalid and unexpected, as well as valid and expected, input conditions. "Invalid" is defined as a condition that is outside the set of valid conditions and should be diagnosed as such by the program being tested.
6. Test cases must be written to generate desired output conditions. Do not think just in terms of inputs. Determine the input required to generate a pre-designed set of outputs.
7. A program should not be tested (except in unit and integration phases) by the person or organization that developed it.
8. The number of undiscovered errors is directly proportional to the number of errors already discovered.


Thursday, August 9, 2007

What is blackbox testing?

Black boxes are the devices in planes that record what happens during a flight and are useful if the plane crashes. The software black box, however, is a different concept: it refers to treating the software application as a box whose internals you do not know, and testing only on the basis of inputs and outputs.
There are many definitions of blackbox testing:
Also known as functional testing. A software testing type whereby the internal workings of the item being tested are not known by the person doing the testing. The tester only knows the inputs and what the expected outcomes should be and not how the program arrives at those outputs. The tester does not have details about the programming code and does not need any further knowledge of the program other than its specifications.
When black box testing is applied to software engineering, the tester would only know the "legal" inputs and what the expected outputs should be, but not how the program actually arrives at those outputs. It is because of this that black box testing can be considered testing with respect to the specifications; no other knowledge of the program is necessary. For this reason, the tester and the programmer can be independent of one another, avoiding programmer bias toward his own work. Black box tests are also the only form of test the customer is likely to understand; therefore, black box testing is absolutely mandatory for acceptance testing. The customer must be able to understand these tests, so that he/she will know for sure whether or not you've met the contract requirements.
The basis of the black box testing strategy lies in the selection of appropriate data as per the functionality and testing it against the functional specifications in order to check for normal and abnormal behavior of the system. Nowadays it is becoming common to route the testing work to a third party, since the developer of the system knows too much about its internal logic and coding, which makes it inappropriate for the developer to test the application.
In order to implement the black box testing strategy, the tester needs to be thorough with the requirement specifications of the system and, as a user, should know how the system should behave in response to a particular action. A minimal sketch of such a specification-based test follows.
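
As a rough illustration (the shipping-cost specification and function name are hypothetical), a black box test exercises the system purely through its specified inputs and expected outputs, with no reference to the implementation:

```python
def shipping_cost(order_total: float) -> float:
    """System under test. Specified behaviour: orders of 50.00 or more ship
    free; otherwise a flat 5.00 shipping fee applies. The black box tester
    never reads this body, only the specification above."""
    return 0.0 if order_total >= 50.0 else 5.0

# Black box test cases derived solely from the specification:
assert shipping_cost(20.00) == 5.00   # below the free-shipping threshold
assert shipping_cost(50.00) == 0.00   # exactly at the threshold
assert shipping_cost(80.00) == 0.00   # above the threshold
```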

