Monkey Testing (smart monkey testing). Inputs are generated from probability distributions that reflect actual expected usage statistics -- e.g., from user profiles. There are different levels of IQ in smart monkey testing. In the simplest, each input is considered independent of the other inputs: if a given test requires an input vector with five components, in low-IQ testing these would be generated independently. In high-IQ monkey testing, the correlation (e.g., the covariance) between these input distributions is taken into account. In all branches of smart monkey testing, the input is treated as a single event.
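As a rough illustration of the low-IQ versus high-IQ distinction, the following Python sketch generates a five-component input vector either independently or from a joint distribution that preserves the correlation between components; the profile statistics and the run_system function are made up for the example.

    # Minimal sketch of low- vs high-IQ smart monkey input generation (illustrative only).
    import numpy as np

    rng = np.random.default_rng(seed=42)

    # Usage-profile statistics for a 5-component input vector (assumed values).
    means = np.array([10.0, 0.5, 200.0, 3.0, 1.0])
    stddevs = np.array([2.0, 0.1, 50.0, 1.0, 0.3])
    cov = np.diag(stddevs ** 2)
    cov[0, 2] = cov[2, 0] = 0.8 * stddevs[0] * stddevs[2]  # observed correlation between components 0 and 2

    def low_iq_input():
        # Each component is drawn independently from its own distribution.
        return rng.normal(means, stddevs)

    def high_iq_input():
        # Components are drawn jointly, so the covariance between them is respected.
        return rng.multivariate_normal(means, cov)

    def run_system(vector):
        # Placeholder for the system under test.
        pass

    for _ in range(1000):
        run_system(high_iq_input())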
  Maximum  Simultaneous Connection testing. This  is a test performed to determine the number of connections which the firewall or  Web server is capable of handling.
Mutation testing. A testing strategy in which small variations (mutants) are inserted into a program, followed by execution of an existing test suite. If the test suite detects the mutant, the mutant is retired; if it goes undetected, the test suite must be revised.
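A minimal sketch of the idea, with a hand-written mutant rather than an automatically generated one (all names are illustrative):

    # The original unit and a mutant in which '>=' has been replaced by '>'.
    def is_adult(age):
        return age >= 18

    def is_adult_mutant(age):
        return age > 18

    def test_suite(fn):
        # A suite that checks the boundary value kills this mutant;
        # a suite without the age == 18 case would let it survive.
        return fn(17) is False and fn(18) is True and fn(30) is True

    assert test_suite(is_adult)             # passes on the original program
    assert not test_suite(is_adult_mutant)  # mutant detected, so it is retired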
Multiple Condition Coverage. A test coverage criterion that requires enough test cases such that all possible combinations of condition outcomes in each decision, and all points of entry, are invoked at least once. [G. Myers] Contrast with branch coverage, condition coverage, decision coverage, path coverage, statement coverage.
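For example, a decision with two conditions, such as "logged_in and balance_ok", has four combinations of condition outcomes; a sketch of a test set that exercises all of them for a hypothetical function might look like this:

    def can_withdraw(logged_in, balance_ok):
        if logged_in and balance_ok:   # decision with two conditions
            return True
        return False

    # All 2**2 combinations of condition outcomes are exercised,
    # not just enough cases to take each branch of the decision once.
    cases = [
        (True,  True,  True),
        (True,  False, False),
        (False, True,  False),
        (False, False, False),
    ]
    for logged_in, balance_ok, expected in cases:
        assert can_withdraw(logged_in, balance_ok) == expected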
Negative test. A test whose primary purpose is falsification; that is, a test designed to break the software.
Orthogonal array testing: A technique that can be used to reduce the number of combinations and provide maximum coverage with a minimum number of test cases. Note that this is an old and proven technique: OAT was first introduced by Plackett and Burman in 1946 and was implemented by G. Taguchi in 1987.
  Orthogonal  array testing:  Mathematical technique to determine which variations of parameters need to be  tested. 
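To make this concrete, the sketch below uses the classic L4 orthogonal array for three two-level factors: every pair of factor levels appears together at least once in only four runs instead of all 2**3 = 8 combinations (the factor values are assumed purely for illustration).

    browsers  = ["Chrome", "Firefox"]
    platforms = ["Windows", "Linux"]
    locales   = ["en", "de"]

    L4 = [  # each row is (browser index, platform index, locale index)
        (0, 0, 0),
        (0, 1, 1),
        (1, 0, 1),
        (1, 1, 0),
    ]

    for b, p, loc in L4:
        # Run the test case with this combination of factor levels.
        print(browsers[b], platforms[p], locales[loc])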
Oracle. Test Oracle: a mechanism to produce the predicted outcomes to compare with the actual outcomes of the software under test. [from BS7925-1]
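A minimal sketch, assuming a hypothetical square-root routine under test and a trusted library function serving as the oracle:

    import math

    def sqrt_under_test(x):
        # The software under test (placeholder implementation).
        return x ** 0.5

    def oracle(x):
        # The mechanism that produces the predicted outcome.
        return math.sqrt(x)

    for x in [0.0, 1.0, 2.0, 1e6]:
        assert abs(sqrt_under_test(x) - oracle(x)) < 1e-9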
  Parallel  Testing  Testing a new or an alternate data processing system with the same source data  that is used in another system. The other system is considered as the standard  of comparison. Syn: parallel run.[ISO]
  Performance  Testing.  Testing conducted to evaluate the compliance of a system or component with  specific performance requirements [BS7925-1]
  Prior  Defect History Testing.  Test cases are created or rerun for every defect found in prior tests of the  system. 
  Qualification  Testing. (IEEE)  Formal testing, usually conducted by the developer for the consumer, to  demonstrate that the software meets its specified requirements. See: acceptance  testing.
  Quality.  The degree to which a program possesses a desired combination of attributes that  enable it to perform its specified end use.
  Quality  Assurance (QA)  Consists of planning, coordinating and other strategic activities associated  with measuring product quality against external requirements and specifications  (process-related activities).
  Quality  Control (QC)  Consists of monitoring, controlling and other tactical activities associated  with the measurement of product quality goals.
Our definition of Quality: achieving the target (not conformance to requirements, as used by many authors) and minimizing the variability of the system under test.
Race condition defect. Many concurrency defects result from data-race conditions. A data-race condition may be defined as two accesses to a shared variable, at least one of which is a write, with no mechanism used by either to prevent simultaneous access. However, not all race conditions are defects.
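A minimal sketch of such a data race and of the lock that prevents simultaneous access (the shared counter and thread count are illustrative):

    import threading

    counter = 0
    lock = threading.Lock()

    def unsafe_increment(n):
        global counter
        for _ in range(n):
            counter += 1          # read-modify-write with no mutual exclusion: a data race

    def safe_increment(n):
        global counter
        for _ in range(n):
            with lock:            # simultaneous access is prevented
                counter += 1

    threads = [threading.Thread(target=safe_increment, args=(100_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    assert counter == 400_000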
  Recovery  testing Testing  how well a system recovers from crashes, hardware failures, or other  catastrophic problems.
  Regression  Testing. Testing  conducted for the purpose of evaluating whether or not a change to the system  (all CM items) has introduced a new failure. Regression testing is often  accomplished through the construction, execution and analysis of product and  system tests.
Regression Testing. Testing that is performed after making a functional improvement or repair to the program. Its purpose is to determine whether the change has regressed other aspects of the program.
Reengineering. The process of examining and altering an existing system to reconstitute it in a new form. May include reverse engineering (analyzing a system and producing a representation at a higher level of abstraction, such as design from code), restructuring (transforming a system from one representation to another at the same level of abstraction), redocumentation (analyzing a system and producing user and support documentation), forward engineering (using software products derived from an existing system, together with new requirements, to produce a new system), and translation (transforming source code from one language to another or from one version of a language to another).
  Reference  testing.  A way of deriving expected outcomes by manually validating a set of actual  outcomes. A less rigorous alternative to predicting expected outcomes in advance  of test execution. 
Reliability testing. Verify the probability of failure-free operation of a computer program in a specified environment for a specified time.
Range Testing. For each input, identifies the range over which the system behavior should be the same.
Risk management. An organized process to identify what can go wrong, to quantify and assess associated risks, and to implement/control the appropriate approach for preventing or handling each risk identified.
Robust test. A test that compares a small amount of information, so that unexpected side effects are less likely to affect whether the test passes or fails.
  Sanity  Testing  - typically an initial testing effort to determine if a new software version is  performing well enough to accept it for a major testing effort. For example, if  the new software is often crashing systems, bogging down systems to a crawl, or  destroying databases, the software may not be in a 'sane' enough condition to  warrant further testing in its current state.
Sensitive test. A test that compares a large amount of information, so that it is more likely to detect unexpected differences between the actual and expected outcomes of the test.
Specification-based test. A test whose inputs are derived from a specification.
  State-based  testing Testing  with test cases developed by modeling the system under test as a state machine  
State Transition Testing. A technique in which the states of a system are first identified, and test cases are then written to test the triggers that cause a transition from one state to another.
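A small sketch, modeling a hypothetical turnstile as a state machine and checking that each trigger causes the expected transition:

    TRANSITIONS = {
        ("locked",   "coin"): "unlocked",
        ("locked",   "push"): "locked",
        ("unlocked", "push"): "locked",
        ("unlocked", "coin"): "unlocked",
    }

    def next_state(state, event):
        # Placeholder for the system under test, modeled as a state machine.
        return TRANSITIONS[(state, event)]

    # Each test case triggers one transition and checks the resulting state.
    assert next_state("locked", "coin") == "unlocked"
    assert next_state("unlocked", "push") == "locked"
    assert next_state("locked", "push") == "locked"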
  Static  testing.  Source code analysis. Analysis of source code to expose potential  defects.
Statistical testing. A test case design technique in which a model of the statistical distribution of the input is used to construct representative test cases.
  Stealth  bug.  A bug that removes information useful for its diagnosis and correction. 
Storage test. Study how memory and space are used by the program, either in resident memory or on disk. If there are limits on these amounts, storage tests attempt to prove that the program will exceed them.
Stress / Load / Volume test. Tests that provide a high degree of activity, for example by using boundary conditions as inputs or by executing multiple copies of a program in parallel.
Structural Testing. (1) (IEEE) Testing that takes into account the internal mechanism [structure] of a system or component. Types include branch testing, path testing, statement testing. (2) Testing to ensure each program statement is made to execute during testing and that each program statement performs its intended function. Contrast with functional testing. Syn: white-box testing, glass-box testing, logic-driven testing.
  System  testing  Black-box type testing that is based on overall requirements specifications;  covers all combined parts of a system.
  Table  testing.  Test access, security, and data integrity of table entries. 
  Test  Bed.  An environment containing the hardware, instrumentation, simulators, software  tools, and other support elements needed to conduct a test.
  Test  Case.  A set of test inputs, executions, and expected results developed for a  particular objective.
  Test  Coverage The  degree to which a given test or set of tests addresses all specified test cases  for a given system or component.
Test Criteria. Decision rules used to determine whether a software item or software feature passes or fails a test.
Test Documentation. (IEEE) Documentation describing plans for, or results of, the testing of a system or component. Types include test case specification, test incident report, test log, test plan, test procedure, test report.
  Test  Driver  A software module or application used to invoke a test item and, often, provide  test inputs (data), control and monitor execution. A test driver automates the  execution of test procedures.
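A toy sketch of a test driver: it invokes a placeholder test item with each input, records the outcome, and reports whether the actual result matched the expected one (all names are illustrative):

    def test_item(x):
        # The item under test (placeholder).
        return x * 2

    def driver(cases):
        # Invokes the test item, controls execution and logs each result.
        log = []
        for given, expected in cases:
            actual = test_item(given)
            log.append((given, expected, actual, actual == expected))
        return log

    for entry in driver([(1, 2), (5, 10), (0, 0)]):
        print(entry)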
  Test  Harness A  system of test drivers and other tools to support test execution (e.g., stubs,  executable test cases, and test drivers). See: test  driver.
  Test  Item.  A software item which is the object of testing.  
  Test  Log A  chronological record of all relevant details about the execution of a  test.
Test Plan. A high-level document that defines a testing project so that it can be properly measured and controlled. It defines the test strategy and organizes the elements of the test life cycle, including resource requirements, project schedule, and test requirements.
Test Procedure. A document providing detailed instructions for the [manual] execution of one or more test cases. [BS7925-1] Often called a manual test script.
  Test  Status.  The assessment of the result of running tests on  software.
  Test  Stub A  dummy software component or object used (during development and testing) to  simulate the behaviour of a real component. The stub typically provides test  output. 
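For instance, a stub might stand in for a real payment gateway while a checkout routine is tested (the names below are purely illustrative):

    class PaymentGatewayStub:
        # Dummy component that simulates the real gateway during testing.
        def charge(self, amount):
            return {"status": "approved", "amount": amount}   # canned test output

    def checkout(cart_total, gateway):
        result = gateway.charge(cart_total)
        return result["status"] == "approved"

    assert checkout(42.0, PaymentGatewayStub())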
  Test  Suites A  test suite consists of multiple test cases (procedures and data) that are  combined and often managed by a test harness.
Test Tree. A physical implementation of a test suite.
Testability. Attributes of software that bear on the effort needed to validate the modified software.
Testing. The execution of tests with the intent of proving that the system and application under test does or does not perform according to the requirements specification.
  Unit  Testing.  Testing performed to isolate and expose faults and failures as soon as the  source code is available, regardless of the external interfaces that may be  required. Oftentimes, the detailed design and requirements documents are used as  a basis to compare how and what the unit is able to perform. White and black-box  testing methods are combined during unit testing.
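A minimal unit-test sketch using Python's unittest module, for a hypothetical string-parsing unit:

    import unittest

    def parse_price(text):
        # Unit under test: converts a "$12.50"-style string to a float.
        return float(text.strip().lstrip("$"))

    class ParsePriceTest(unittest.TestCase):
        def test_plain_value(self):
            self.assertEqual(parse_price("$12.50"), 12.5)

        def test_whitespace(self):
            self.assertEqual(parse_price("  $3.00 "), 3.0)

    if __name__ == "__main__":
        unittest.main()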
  Usability  testing.  Testing for 'user-friendliness'. Clearly this is subjective, and will depend on  the targeted end-user or customer.
Validation. The comparison between the actual characteristics of something (e.g., a product of a software project) and the expected characteristics. Validation is checking that you have built the right system.
Verification. The comparison between the actual characteristics of something (e.g., a product of a software project) and the specified characteristics. Verification is checking that we have built the system right.
  Volume  testing.  Testing where the system is subjected to large volumes of  data.
Walkthrough. In the most usual form of the term, a walkthrough is a step-by-step simulation of the execution of a procedure, as when walking through code line by line with an imagined set of inputs. The term has been extended to the review of material that is not procedural, such as data descriptions, reference manuals, specifications, etc.
White Box Testing (glass-box). Testing done under a structural testing strategy that requires complete access to the object's structure, that is, the source code.