Tuesday, December 23, 2008
Difference between various software testing terms
Quality assurance, quality control, and testing are similar-sounding terms that confuse many people, so it is worth getting more clarity on the differences between them. This post is an attempt to do that.
A large number of people are confused about the difference between quality assurance (QA), quality control (QC), and testing. These terms are closely related, but they are essentially different concepts, and we need to understand the difference between them. Being different does not make any of them less important; all three are critical to managing the risks of developing and maintaining software, so it is important for software managers to understand the differences. The terms are defined below:
• Quality Assurance: A set of activities designed to ensure that the development and/or maintenance process is adequate for a system to meet its objectives.
• Quality Control: Defined as a set of activities designed to evaluate a developed work product.
• Testing: The process of executing a system with the intent of finding defects. Testing includes all the necessary planning, and does not just mean the actual execution of test cases.
QA activities ensure that the process is defined and appropriate. Methodology and standards development are examples of QA activities. A QA review would focus on the process elements of a project - such as whether the requirements are being defined at the proper level of detail. In contrast, QC activities focus on finding defects in specific deliverables - e.g., whether the defined requirements are the right requirements. Testing is one example of a QC activity, but there are others, such as inspections. Both QA and QC activities are generally required for successful software development.
There can be disagreements over who should be responsible for QA and QC activities -- i.e., whether a group external to the project management structure should have responsibility for either QA or QC. The correct answer will vary depending on the situation, but here are some suggestions:
• While line management can and should have the primary responsibility for implementing the appropriate QA, QC and testing activities on a project, an external QA function can provide valuable expertise and perspective, and its help is beneficial in most cases.
• There is no single right amount of external QA/QC; rather, it should be a function of the project risk and the process maturity of the organization. As organizations mature, management and staff implement the proper QA and QC approaches as a matter of habit, and the need for external guidance reduces, with periodic review becoming more relevant.
Posted by Ashish Agarwal at 12/23/2008 10:57:00 PM
Labels: Explanation, Processes, Terms, Testing
Monday, December 22, 2008
Stages of a complete test cycle
People who are involved in the business of software testing know many parts of the testing process, but few have covered all the stages involved, from the time of getting the project requirements to the last stages of testing. Here is a timeline of the steps involved in this process:
• Requirements Phase: Get the requirements, along with the functional design and the internal design specifications
• Resourcing estimation: Obtain budget and schedule requirements
• Get into the details of the project-related personnel, their responsibilities, and the reporting requirements
• Work out the required processes (such as release processes, change processes, etc.). Defining such processes can typically take a lot of time.
• Identify application's higher-risk aspects, set priorities, and determine scope and limitations of tests
• Test methods: This is the time to plan and determine test approaches and methods - unit, integration, functional, system, load, usability tests, etc. - the whole breakdown of the types of tests to be done
• Determine test environment requirements (hardware, software, communications, etc.). These are critical to determine, because testing success depends on the test environment being a good approximation of the environment the software will actually run in
• Determine testware requirements (record/playback tools, coverage analyzers, test tracking, problem/bug tracking, etc.). In many cases, complete coverage of these tools is not achieved.
• Determine test input data requirements. This can be a fairly intensive task, and needs to be thought through carefully.
• People assignment: This is the stage where, for the project, the tasks are identified, the people responsible for each task are assigned, and the labor requirements are calculated.
• Find out schedule estimates, timelines, milestones. Absolutely critical, since these determine the overall testing schedule along with resource needs.
• Determine input equivalence classes, boundary value analyses, and error classes (a small sketch of turning these into test data appears after this list)
• Prepare test plan document and have needed reviews/approvals. A test plan document encapsulates the entire testing proposal and needs to be properly reviewed for completeness.
• Once the test plan is done and accepted, the next step is to write test cases
• Have needed reviews/inspections/approvals of test cases. This may include reviews by the development team as well.
• Prepare test environment and testware, obtain needed user manuals/reference documents/configuration guides/installation guides, set up test tracking processes, set up logging and archiving processes, set up or obtain test input data
• Obtain and install software releases. If a daily build is available, a smoke testing regime for build acceptance needs to be put in place.
• Perform tests. The actual phase where you start to see the results of all the previous efforts.
• Evaluate and report results
• Track problems/bugs and fixes. This phase can take up a substantial portion of the overall project time.
• Retest as needed, including regression testing
• Maintain and update test plans, test cases, test environment, and testware through life cycle
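To make the equivalence class and boundary value step above more concrete, here is a minimal sketch of how those analyses might be turned into test data. The age field, its 18-65 valid range, and the is_valid_age helper are all assumptions made up for this example; the point is simply how classes and boundaries map to concrete test cases.

```python
# Hypothetical example: a field that accepts ages 18 to 65 inclusive.
# Equivalence classes: below range (invalid), within range (valid), above range (invalid).
# Boundary values: 17, 18, 65, 66.
import pytest


def is_valid_age(age: int) -> bool:
    """Assumed validation rule for the example: accept ages 18-65 inclusive."""
    return 18 <= age <= 65


@pytest.mark.parametrize("age, expected", [
    (17, False),  # boundary: just below the valid range
    (18, True),   # boundary: lowest valid value
    (40, True),   # representative of the valid equivalence class
    (65, True),   # boundary: highest valid value
    (66, False),  # boundary: just above the valid range
    (-1, False),  # error class: clearly invalid input
])
def test_age_validation(age, expected):
    assert is_valid_age(age) == expected
```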
Posted by Ashish Agarwal at 12/22/2008 11:12:00 PM
Labels: Cycle, Phases, Testing
Tuesday, December 16, 2008
Types of testing
What are the different types of testing that one normally comes across? If there are others besides these, please add them in the comments.
• Black box testing - This is a testing method that is not based on any knowledge of internal design or code. Tests are based on requirements and functionality.
• White box testing - based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, and conditions. This is testing based on the code itself, and is typically handled by a person who has knowledge of coding.
Black box and white box testing are the two most well-known types of testing.
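As a small illustration of the contrast, here is a hedged sketch; the shipping_cost function and its 50.0 threshold are invented purely for the example. The first two tests are black-box in spirit (derived only from the stated rule), while the last is white-box (written with knowledge of the branch in the code).

```python
# Hypothetical function used only to illustrate the black-box / white-box contrast.
def shipping_cost(order_total: float) -> float:
    """Assumed rule: orders of 50.0 or more ship free, otherwise a flat 5.0 fee."""
    if order_total >= 50.0:
        return 0.0
    return 5.0


# Black-box view: derived purely from the stated requirement,
# without looking at the implementation.
def test_orders_over_threshold_ship_free():
    assert shipping_cost(75.0) == 0.0


def test_small_orders_pay_flat_fee():
    assert shipping_cost(10.0) == 5.0


# White-box view: written with knowledge of the code, deliberately
# exercising the branch condition at its exact boundary (>= 50.0).
def test_boundary_branch_is_taken_at_exactly_fifty():
    assert shipping_cost(50.0) == 0.0
```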
In addition, there is testing carried out at different stages, such as unit, integration, and system testing.
• Unit testing - the most 'micro' scale of testing; to test particular functions or code modules. This is testing that happens at the earliest stage, and can be done by either the programmer or by testers (further stages of testing are typically not done by programmers). Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses. It could also be used to denote something as basic as testing each field to see whether the field level validations are okay.
• Incremental integration testing - this stage of testing means the continuous testing of an application as and when new functionality is added to the application; the testing requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; this testing is done by programmers or by testers.
• Integration testing - This form of testing implies the testing of the combined parts of an application to determine if they function together correctly. When we say combined parts, this can mean code modules, individual applications, client and server applications on a network, etc. Integration testing can reveal whether parts that seem to be well built by themselves work properly when they are all fitted together. Integration testing should be done by testers.
• Functional testing - Functional testing is black-box type testing geared to the functional requirements of an application; functional testing should be done by testers. It is geared to validate the workflows that happen in the project.
• System testing - System testing is a black-box type of testing based on the overall requirements specifications; the testing covers all combined parts of a system and is meant to validate the marketing requirements for the project.
• End-to-end testing - End-to-end testing sounds very similar to system testing, and it is. The testing operates at the 'macro' end of the test scale, at the big-picture level; end-to-end testing involves testing the complete application environment in a situation that simulates actual real-world use (for example, interacting with a database, using network communications, or interacting with other dependencies in the system such as hardware, applications, or systems if appropriate).
• Sanity testing - Sanity testing, as it sounds, is typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. This sort of testing can also happen on a regular basis to ensure that regular builds are worth testing. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or destroying databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state. Sanity testing is not meant to be comprehensive testing.
• Regression testing - Regression testing plays an important part in the bug life cycle. Regression testing involves re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle, but there should never be an attempt to minimise the need for regression testing. Automated testing tools can be especially useful for this type of testing.
• Acceptance testing - Acceptance testing, as the name suggests, is the final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time. This type of testing can also mean the make or break situation for a project to be accepted.
• Load testing - Again, as the name suggests, load testing means testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails. It helps ensure that a system under heavy load will not suddenly collapse, and it can help in infrastructure planning.
• Stress testing - Stress testing is a term often used interchangeably with 'load' and 'performance' testing. Stress testing is typically used to describe conducting tests such as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.
• Performance testing - Performance testing is a term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans. Performance testing is also used to determine the time periods involved for certain operations to take place, such as launching of the application, opening of files, etc.
• Usability testing - Usability testing is becoming more critical with a higher focus on usability. Usability testing means testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. This is done ideally through the involvement of specialist usability people.
• Install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes. Given that the installer is the first thing users see, installer testing measures how the installer works, whether users are able to get through it with clarity, and so on. In addition, install, uninstall, repair, etc. should all work smoothly.
• Recovery testing - One does not like to anticipate such problems, but given that crashes or other failures can occur, recovery testing measures how well a system recovers from crashes, hardware failures, or other catastrophic problems.
• Security testing - Security testing is getting more important now, given the increase in hacking and the focus on security measures to prevent data loss. Security testing determines how well the system protects against unauthorized internal or external access, willful damage, etc., and may require sophisticated testing techniques.
• Compatibility testing - Compatibility testing determines how well the software performs in a particular hardware/software/operating system/network/etc environment.
• Exploratory testing - This type of testing is often employed in cases where we need to have a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it, common in situations where the software being developed is of a new type.
• Ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.
• User acceptance testing - determining if software is satisfactory to an end-user or customer. Similar to the acceptance test described above.
• Comparison testing - Comparison testing means comparing software weaknesses and strengths to competing products, very important to evaluate your market, and to determine which are the features you need to develop.
• Alpha testing - testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.
• Beta testing - Also called pre-release testing, it is the testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers. The advantage is that you can test with users, as well as get verification about software compatibility on a wide range of devices.
• Mutation testing - a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.
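To illustrate the mutation testing idea, here is a tiny hand-rolled sketch (real mutation tools such as mutmut automate the mutation and re-running). The max_of_two function, the injected 'mutant', and the toy test suite are all assumptions made up for the example.

```python
# A tiny, hand-rolled illustration of the mutation testing idea.
def max_of_two(a, b):
    return a if a > b else b


def mutant_max_of_two(a, b):
    # Deliberately injected 'bug': the comparison operator has been flipped.
    return a if a < b else b


# Toy test suite: pairs of (inputs, expected result).
test_cases = [((3, 5), 5), ((9, 2), 9), ((4, 4), 4)]


def run_suite(fn):
    """Return True if every test case passes against the given implementation."""
    return all(fn(*args) == expected for args, expected in test_cases)


assert run_suite(max_of_two)             # the original implementation passes
assert not run_suite(mutant_max_of_two)  # a useful test suite 'kills' the mutant
```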
Posted by Ashish Agarwal at 12/16/2008 06:43:00 AM
Labels: Techniques, Terms, Testing, Types
Tuesday, December 9, 2008
Some testing definitions
Some definitions of key testing terms:
What is software 'quality'?
Trying to attain software quality implies being able to meet the following goals: reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable. It is not easy to objectively define quality; it will depend on who the 'customer' is and their overall influence in the scheme of things. If you were to take a holistic view of the customers, you would involve the following people: end-users, customer acceptance testers, customer contract officers, customer management, the development organization's management/accountants/testers/salespeople, future software maintenance engineers, stockholders, magazine columnists, etc. Each type of 'customer' will have their own slant on 'quality' - the accounting department might define quality in terms of profits, while an end-user might define quality as user-friendly and bug-free.
What is the 'software life cycle'?
A software life cycle is one of the most popular terms that a person working in software is expected to know. The life cycle begins when an application is first conceived and ends when it is no longer in use. The various in-between parts of the life cycle are enough to fill a separate book, but at a first level, the term includes aspects such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, retesting, phase-out, and other aspects.
What is 'Software Quality Assurance'?
Software QA involves the entire software development process from beginning to end - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention': ensuring that processes are defined that make it more difficult to run into problems.
What is 'Software Testing'?
When software is written by developers, it is a given that there will be sections that do not work properly. Testing involves operation of a system or application under controlled conditions and evaluating the results (e.g., 'if the user is in interface A of the application while using hardware B, and does C, then D should happen'). The controlled conditions should include both normal and abnormal conditions, and can cover a wide gamut of activities. Testing should intentionally attempt to make things go wrong, to determine whether things happen when they shouldn't or don't happen when they should. It is oriented to 'detection'.
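As a minimal sketch of 'controlled conditions' covering both a normal and an abnormal case, consider the following; the parse_quantity helper and its behaviour are assumptions invented for the example.

```python
# One normal case and one abnormal case for an assumed parse_quantity helper.
import pytest


def parse_quantity(text: str) -> int:
    """Assumed behaviour: convert user input to a positive integer quantity."""
    value = int(text)  # abnormal input such as 'abc' raises ValueError here
    if value <= 0:
        raise ValueError("quantity must be positive")
    return value


def test_normal_condition():
    # 'if the user enters 3, then the quantity should be 3'
    assert parse_quantity("3") == 3


def test_abnormal_condition():
    # intentionally make things go wrong: non-numeric input must be rejected
    with pytest.raises(ValueError):
        parse_quantity("abc")
```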
Posted by Ashish Agarwal at 12/09/2008 11:39:00 PM
Labels: Explanation, Learn, Terms, Testing
Thursday, December 4, 2008
Testing Strategies/Techniques
Here are some of the testing strategies that need to be kept in mind:
• Black box testing should make use of randomly generated inputs (only a test range should be specified by the tester), to eliminate any guesswork by the tester as to the methods of the function (see the sketch after this list)
• Data outside of the specified input range should be tested to check the robustness of the program
• Boundary cases should be tested (top and bottom of specified range) to make sure the highest and lowest allowable inputs produce proper output
• The number zero should be tested when numerical data is to be input
• Stress testing should be performed (try to overload the program with inputs to see where it reaches its maximum capacity), especially with real time systems
• Crash testing should be performed to see what it takes to bring the system down
• Test monitoring tools should be used whenever possible to track which tests have already been performed and the outputs of these tests to avoid repetition and to aid in the software maintenance
• Other functional testing techniques include: transaction testing, syntax testing, domain testing, logic testing, and state testing.
• Finite state machine models can be used as a guide to design functional tests
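Here is a small sketch that combines several of the strategies above - randomly generated inputs within a tester-specified range, boundary cases, out-of-range data, and zero. The validate_discount function and its 0-100 range are assumptions made up for the example.

```python
# Random inputs within a specified range, plus boundary, zero, and out-of-range cases.
import random


def validate_discount(percent: int) -> bool:
    """Assumed rule: a discount must be between 0 and 100 percent inclusive."""
    return 0 <= percent <= 100


def test_random_inputs_within_range():
    # the tester only specifies the range; the concrete values are random
    for _ in range(100):
        value = random.randint(0, 100)
        assert validate_discount(value)


def test_boundaries_zero_and_out_of_range():
    assert validate_discount(0)        # zero, which is also the lower boundary
    assert validate_discount(100)      # upper boundary
    assert not validate_discount(-1)   # just outside the specified range
    assert not validate_discount(101)  # robustness: data outside the range
```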
Posted by Ashish Agarwal at 12/04/2008 12:44:00 AM
Labels: Techniques, Testing