Data flow testing Testing in which test cases are designed based on variable usage within the code.
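As a rough illustration, the hypothetical function and pytest-style tests below cover the definition-use pairs of a single variable; all names here are invented for the sketch.

```python
def price_with_discount(amount, is_member):
    discount = 0.0                      # definition d1 of 'discount'
    if is_member:
        discount = 0.25                 # definition d2 of 'discount'
    return amount * (1 - discount)      # use u1 of 'discount'

# Def-use pairs for 'discount': (d1, u1) when is_member is False,
# and (d2, u1) when is_member is True. One test per pair covers both.
def test_nonmember_uses_initial_definition():
    assert price_with_discount(100.0, is_member=False) == 100.0

def test_member_uses_redefinition():
    assert price_with_discount(100.0, is_member=True) == 75.0
```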
Database testing. Check the integrity of database field values.
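A minimal sketch of a field-value integrity check, assuming a hypothetical SQLite database file shop.db with an orders table and a numeric total column:

```python
import sqlite3

def test_order_totals_are_never_negative():
    # Hypothetical schema: an 'orders' table with 'id' and 'total' columns.
    conn = sqlite3.connect("shop.db")
    try:
        bad_rows = conn.execute(
            "SELECT id, total FROM orders WHERE total < 0"
        ).fetchall()
        # Integrity rule for the field: totals must be zero or positive.
        assert bad_rows == [], f"orders with negative totals: {bad_rows}"
    finally:
        conn.close()
```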
Defect The difference between the functional specification (including user documentation) and the actual program text (source code and data). Often reported as a problem and stored in a defect-tracking and problem-management system.
Defect Also called a fault or a bug, a defect is an incorrect part of code that is caused by an error. An error of commission causes a defect of wrong or extra code. An error of omission results in a defect of missing code. A defect may cause one or more failures.
Depth test. A test case that exercises some part of a system to a significant level of detail.
Decision Coverage. A test coverage criterion requiring enough test cases such that each decision has a true and a false result at least once, and that each statement is executed at least once. Syn: branch coverage. Contrast with condition coverage, multiple condition coverage, path coverage, statement coverage.
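A minimal sketch in Python (hypothetical function and pytest-style tests): two test cases are enough to give the single decision both a true and a false outcome.

```python
def classify(n):
    if n < 0:                  # the only decision in this function
        return "negative"
    return "non-negative"

# Decision (branch) coverage: the 'if' must evaluate both True and False.
def test_true_branch():
    assert classify(-1) == "negative"

def test_false_branch():
    assert classify(5) == "non-negative"
```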
Dirty testing Negative testing.
Dynamic testing. Testing, based on specific test cases, by execution of the test object or by running programs.
End-to-End testing. Similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
Equivalence Partitioning: An approach in which classes of inputs are categorized for product or function validation. This usually does not include combinations of inputs, but rather a single representative value from each class. For example, a given function may have several classes of input that can be used for positive testing. If the function expects an integer and receives an integer as input, this is considered a positive test assertion. On the other hand, if a character or any input class other than an integer is provided, this is considered a negative test assertion or condition.
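A minimal pytest sketch of the idea, with a hypothetical set_age function standing in for the function under test: one representative value is drawn from each input class rather than testing many similar inputs.

```python
import pytest

def set_age(age):
    # Hypothetical function under test: accepts only integer input.
    if not isinstance(age, int):
        raise TypeError("age must be an integer")
    return age

# Positive class: integers. A few representatives stand in for the whole class.
@pytest.mark.parametrize("valid", [0, 42, 120])
def test_integer_inputs_are_accepted(valid):
    assert set_age(valid) == valid

# Negative class: any non-integer input.
@pytest.mark.parametrize("invalid", ["x", 3.5, None])
def test_non_integer_inputs_are_rejected(invalid):
    with pytest.raises(TypeError):
        set_age(invalid)
```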
Error: An error is a mistake of commission or omission that a person makes. An error causes a defect. In software development one error may cause one or more defects in requirements, designs, programs, or tests.
Errors: The amount by which a result is incorrect. Mistakes are usually the result of a human action. Human mistakes (errors) often result in faults contained in the source code, specification, documentation, or other product deliverables. When such a fault is encountered during execution, the end result can be a program failure, usually classified by severity as high, medium, or low.
Error Guessing: Another common approach to black-box validation. In black-box testing, anything other than the source code may be used for testing; this is the most common approach to testing. Error guessing uses random inputs or conditions for testing, where "random" includes values produced by a computerized random number generator as well as ad hoc values or test conditions supplied by the engineer.
Error guessing. A test case design technique where the experience of the tester is used to postulate what faults exist, and to design tests specifically to expose them [from BS7925-1].
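A short pytest sketch of error guessing, using a hypothetical parse_quantity function: the inputs are not derived from the specification, they are values a tester suspects will break the implementation.

```python
import pytest

def parse_quantity(text):
    # Hypothetical function under test: parses a positive integer quantity.
    value = int(text)
    if value <= 0:
        raise ValueError("quantity must be positive")
    return value

# Guessed trouble spots: empty/blank strings, zero, negatives, floats, words, None.
@pytest.mark.parametrize("suspect", ["", "  ", "0", "-1", "1e3", "nine", None])
def test_suspect_inputs_are_rejected_cleanly(suspect):
    with pytest.raises((ValueError, TypeError)):
        parse_quantity(suspect)
```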
Error seeding. The purposeful introduction of faults into a program to test the effectiveness of a test suite or other quality assurance program.
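A rough sketch of the mechanics, assuming a hypothetical module calc.py whose tests are run with pytest: a known fault is planted, and the suite is judged effective only if it fails against the seeded fault.

```python
import pathlib
import subprocess

def seed_fault_and_check_suite():
    src = pathlib.Path("calc.py")          # hypothetical module under test
    original = src.read_text()
    try:
        # Seeded fault: silently swap '+' for '-' in the implementation.
        src.write_text(original.replace("a + b", "a - b"))
        result = subprocess.run(["pytest", "-q"], capture_output=True)
        # An effective suite should fail (non-zero exit) against the seeded fault.
        return result.returncode != 0
    finally:
        src.write_text(original)           # always restore the original code
```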
Exception Testing. Identify error messages and exception-handling processes and the conditions that trigger them.
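A minimal pytest sketch with a hypothetical withdraw function: the test triggers the error condition deliberately and checks both the exception and its message.

```python
import pytest

def withdraw(balance, amount):
    # Hypothetical function under test.
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

def test_overdraft_raises_with_clear_message():
    # Trigger the error condition and verify the exception-handling path.
    with pytest.raises(ValueError, match="insufficient funds"):
        withdraw(balance=50, amount=100)
```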
Exhaustive Testing. (NBS) Executing the program with all possible combinations of values for program variables. Feasible only for small, simple programs.
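A sketch of a case where exhaustive testing is feasible, using a hypothetical three-input boolean function: the whole input space is only 2^3 = 8 combinations.

```python
from itertools import product

def majority(a, b, c):
    # Hypothetical function under test: true when at least two inputs are true.
    return (a and b) or (a and c) or (b and c)

def test_all_input_combinations():
    # Exhaustive: every combination of values for the three boolean variables.
    for a, b, c in product([False, True], repeat=3):
        assert majority(a, b, c) == (sum([a, b, c]) >= 2)
```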
Exploratory Testing: An interactive process of concurrent product exploration, test design, and test execution. The heart of exploratory testing can be stated simply: The outcome of this test influences the design of the next test.
Failure: A failure is a deviation from expectations exhibited by software and observed as a set of symptoms by a tester or user. A failure is caused by one or more defects. The Causal Trail. A person makes an error that causes a defect that causes a failure.
Formal Testing. (IEEE) Testing conducted in accordance with test plans and procedures that have been reviewed and approved by a customer, user, or designated level of management. Antonym: informal testing.
Free Form Testing. Ad hoc or brainstorming using intuition to define test cases.
Functional testing Application of test data derived from the specified functional requirements without regard to the final program structure. Also known as black-box testing.
Gray box testing Tests involving inputs and outputs, but where test design is informed by information about the code or the program's operation of a kind that would normally be outside the tester's view.
Gray box testing Tests designed based on knowledge of algorithms, internal states, architectures, or other high-level descriptions of program behavior.
Gray box testing Examines the activity of back-end components during test case execution. Two types of problems that can be encountered during gray-box testing are:
- A component encounters a failure of some kind, causing the operation to be aborted. The user interface will typically indicate that an error has occurred.
- The test executes in full, but the content of the results is incorrect. Somewhere in the system, a component processed data incorrectly, causing the error in the results.
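A small sketch of the second kind of problem, using an in-memory SQLite store and a hypothetical place_order entry point: the call reports success, but the test also inspects the back-end data to confirm the component processed it correctly.

```python
import sqlite3

def place_order(conn, customer_id, amount):
    # Hypothetical front-end entry point under test.
    conn.execute(
        "INSERT INTO orders (customer_id, amount) VALUES (?, ?)",
        (customer_id, amount),
    )
    conn.commit()
    return "OK"

def test_order_is_persisted_correctly():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (customer_id INTEGER, amount REAL)")
    # The user-facing result says the operation succeeded...
    assert place_order(conn, customer_id=7, amount=19.99) == "OK"
    # ...and the gray-box check confirms the back-end content is also correct.
    rows = conn.execute("SELECT customer_id, amount FROM orders").fetchall()
    assert rows == [(7, 19.99)]
```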
High-level tests. These tests involve testing whole, complete products.
Inspection A formal evaluation technique in which software requirements, design, or code are examined in detail by a person or group other than the author to detect faults, violations of development standards, and other problems [IEEE94]. A quality improvement process for written material that consists of two dominant components: product (document) improvement and process improvement (document production and inspection).
Integration The process of combining software components, hardware components, or both into an overall system.
Integration testing - testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
Integration Testing. Testing conducted after unit and feature testing. The intent is to expose faults in the interactions between software modules and functions. Either top-down or bottom-up approaches can be used. A bottom-up method is preferred, since it leads to earlier unit testing (step-level integration). This method is contrary to the big-bang approach, where all source modules are combined and tested in one step. The big-bang approach to integration should be discouraged.
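A minimal sketch of bottom-up integration with two hypothetical modules, a parser and an evaluator: each would first be unit tested alone, and the integration test then exercises the interface between them.

```python
def parse_expression(text):
    # Hypothetical lower-level module: splits "2 + 3" into operands and operator.
    left, op, right = text.split()
    return float(left), op, float(right)

def evaluate(left, op, right):
    # Hypothetical higher-level module: consumes the parser's output.
    return left + right if op == "+" else left - right

def test_parser_and_evaluator_work_together():
    # Integration test: faults at the interface (ordering, types) show up here.
    assert evaluate(*parse_expression("2 + 3")) == 5.0
    assert evaluate(*parse_expression("10 - 4")) == 6.0
```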
Interface Tests Programs that provide test facilities for external interfaces and function calls. Simulation is often used to test external interfaces that may not currently be available for testing or are difficult to control. For example, hardware resources such as hard disks and memory may be difficult to control. Therefore, simulation can provide the characteristics or behaviors of a specific function.
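A small sketch using Python's unittest.mock to simulate an external interface that is unavailable or hard to control, here a hypothetical disk object with a free_bytes call:

```python
from unittest.mock import Mock

def free_space_report(disk):
    # Hypothetical code under test: depends on an external disk interface.
    free = disk.free_bytes()
    return "LOW" if free < 1_000_000 else "OK"

def test_low_space_reported_with_simulated_disk():
    fake_disk = Mock()
    fake_disk.free_bytes.return_value = 512_000   # simulate a nearly full disk
    assert free_space_report(fake_disk) == "LOW"
```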
Latent bug A bug that has been dormant (unobserved) in two or more releases.
Lateral testing. A test design technique based on lateral thinking principles, to identify faults.
Load testing Testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.
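A minimal load-test sketch using only the standard library, with a hypothetical handle_request standing in for the operation under load: concurrency is ramped up and throughput/response time observed.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(_):
    # Stand-in for the real operation under load (hypothetical).
    time.sleep(0.01)
    return "OK"

def run_load_test(concurrent_users=50, requests_per_user=20):
    started = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        results = list(pool.map(handle_request,
                                range(concurrent_users * requests_per_user)))
    elapsed = time.perf_counter() - started
    assert all(r == "OK" for r in results)
    print(f"{len(results)} requests in {elapsed:.2f}s "
          f"({len(results) / elapsed:.0f} req/s)")

if __name__ == "__main__":
    run_load_test()
```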
Load/stress test. A test designed to determine how heavy a load the application can handle.
Load-stability test. A test designed to determine whether a Web application will remain serviceable over an extended time span.
Load-isolation test. The workload for this type of test is designed to contain only the subset of test cases that caused the problem in previous testing.