Writing and executing test cases is expensive, so each test case written should target a different mode of failure. Common strategies for black box testing are:
- Customer Requirements tests
Requirements are central to black box testing: each and every customer requirement should be tested. To achieve this, every requirement is traced to a test case and every test case back to its customer requirement. Start by writing a test case for the most frequently used success path (a path free of error conditions), then plan further success paths. Some failure paths (paths that exercise error conditions) are also planned. Tests are executed so that the riskiest requirements are tested first, which leaves more time to fix errors before the product is delivered.
- Equivalence Partitioning
This strategy reduces the number of test cases that need to be developed. The input domain is divided into classes, and test cases are designed so that inputs fall within these equivalence classes. Every value within a given equivalence class should be treated the same way by the module under test and should produce the same result, so one representative value per class is enough.
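As a sketch, suppose (purely for illustration) a module that prices tickets by age, with classes child (0-12), adult (13-64), and senior (65 and up); the function and the class boundaries below are assumptions, not from any real product. One representative value per class exercises each equivalence class:

```python
# Equivalence partitioning sketch for a hypothetical ticket-pricing rule.
# Assumed spec (illustrative only): age 0-12 -> "child", 13-64 -> "adult",
# 65 and above -> "senior"; negative ages are invalid.

def ticket_category(age):
    if age < 0:
        raise ValueError("age must be non-negative")
    if age <= 12:
        return "child"
    if age <= 64:
        return "adult"
    return "senior"

# One representative input per equivalence class is enough, because every
# value in a class should be treated the same way by the module under test.
representatives = {
    "child": 7,    # any value in [0, 12]
    "adult": 30,   # any value in [13, 64]
    "senior": 80,  # any value from 65 up
}

for expected, age in representatives.items():
    assert ticket_category(age) == expected
```

Three test cases cover the whole valid domain here, instead of one per possible age.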
- Boundary Value Analysis
Mistakes generally occur at the boundaries of the equivalence classes, so boundary value analysis guides you to create test cases at those boundaries. A boundary value is a value at or adjacent to the minimum or maximum of an input range.
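Using a hypothetical age rule (0-12 child, 13-64 adult, 65+ senior, negatives invalid; repeated here so the sketch stands alone), boundary value analysis places test cases exactly at and next to each class edge:

```python
# Boundary value analysis sketch for a hypothetical age rule. Defects
# cluster at class edges, so test exactly at and adjacent to each boundary.

def ticket_category(age):
    if age < 0:
        raise ValueError("age must be non-negative")
    if age <= 12:
        return "child"
    if age <= 64:
        return "adult"
    return "senior"

boundary_cases = [
    (0, "child"),    # minimum valid input
    (12, "child"),   # upper edge of the child class
    (13, "adult"),   # lower edge of the adult class
    (64, "adult"),   # upper edge of the adult class
    (65, "senior"),  # lower edge of the senior class
]

for age, expected in boundary_cases:
    assert ticket_category(age) == expected

# Just outside the valid domain: -1 should be rejected, not misclassified.
try:
    ticket_category(-1)
    raise AssertionError("expected ValueError for age -1")
except ValueError:
    pass
```

A common off-by-one bug (writing `age < 12` instead of `age <= 12`) would pass the equivalence-class representatives but fail the case at 12.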
- Decision Table Testing
Decision tables are used to record business rules. In a decision table, conditions represent input conditions and actions represent the events that should be triggered. Each column in the table is a unique combination of input conditions that results in the action associated with that rule, and each rule (column) becomes a test case.
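A minimal sketch, using a made-up loan-approval rule (the conditions, actions, and function here are assumptions for illustration). Each row below corresponds to one column of a decision table, and each becomes one test case:

```python
# Decision table sketch for a hypothetical loan-approval rule. Each entry
# is one unique combination of input conditions plus the action it should
# trigger, and each entry becomes one test case.

def loan_action(good_credit, has_collateral):
    # Hypothetical business rule, for illustration only.
    if good_credit and has_collateral:
        return "approve"
    if good_credit and not has_collateral:
        return "manual-review"
    return "reject"

decision_table = [
    # conditions                                   action
    {"good_credit": True,  "has_collateral": True,  "action": "approve"},
    {"good_credit": True,  "has_collateral": False, "action": "manual-review"},
    {"good_credit": False, "has_collateral": True,  "action": "reject"},
    {"good_credit": False, "has_collateral": False, "action": "reject"},
]

for rule in decision_table:
    result = loan_action(rule["good_credit"], rule["has_collateral"])
    assert result == rule["action"]
```

With n boolean conditions the full table has 2^n rules, which is why decision tables work best for rules with a handful of conditions.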
- Failure Test Cases
The program or application should be robust, meaning it should respond properly in the case of erroneous user input.
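Failure test cases check exactly this: invalid input should produce a clear, controlled error rather than a crash or a silent wrong answer. The parser below and its error messages are assumptions for illustration:

```python
# Failure-path sketch: every bad input should raise a deliberate ValueError,
# never an unhandled TypeError or a silently wrong result.

def parse_quantity(text):
    try:
        value = int(text)
    except (TypeError, ValueError):
        raise ValueError(f"not a whole number: {text!r}")
    if value < 1:
        raise ValueError(f"quantity must be at least 1, got {value}")
    return value

# Failure test cases: malformed, empty, missing, zero, negative, fractional.
for bad in ["abc", "", None, "0", "-5", "1.5"]:
    try:
        parse_quantity(bad)
        raise AssertionError(f"accepted invalid input {bad!r}")
    except ValueError:
        pass

# And the success path still works.
assert parse_quantity("3") == 3
```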
Tuesday, May 31, 2011
Monday, May 30, 2011
In the previous post (Improve testing skills), I provided some steps on how a person can become a better tester. In this post, I will continue along this line and provide some more points on how a person can improve their testing skills.
- Being able to explore the boundaries of what has been provided. For example, an average tester will test only as per the test cases provided, while skilled testers develop a feel for whether the test cases are adequate; when required, they provide updates that add to the test case coverage so that overall product quality increases. Such efforts get noticed, and many organizations measure the amount of such effort made by individuals.
- Try to think like the end customer. A lot of testers start to feel much closer to the dev team, to the individual developer who wrote the code they are testing, and so on. Instead, even while maintaining a relationship with the developer, they need to think like the end customer: looking at the workflows the customer uses and seeing things from the customer's viewpoint. Such testing is much more effective at catching defects the end user would face, and leads to a much better impression of the product in the perception of customers.
- Relationship with the developer and integration with the development phase. The tester should be fully involved during the design and development phase of the product. The tester can add their own value to the product development strategy, and learn a lot more about why the design was done the way it was. During the development phase, the tester can also learn which areas the developer feels need more emphasis, which areas were written in a hurry or are more complicated, and so on. Such knowledge gives the tester a much better understanding of the product and leads to much better skill at testing it.
If you are a good software tester, it shows in your career graph. In most cases, the manager of the software testing team will recognize when a tester prepares extensive test cases and covers as much of the testing field as anybody can. Further, when you take feedback from the development team, they will tell you that such a person is indeed one they would like to work with, since a skilled tester ensures that their feature is as rock solid as possible, having identified as many bugs as possible in the feature, and early.
Anybody can be a good tester, as long as the person is determined to make improvements and consistently follows a number of steps. Do these, improve yourself, and you will find that your popularity and prestige go up, and so will the respect that other people have for you.
- When reporting defects, provide as much relevant information as possible. The worst kind of tester reports just the bare defect; if you want improvement, report how consistently the defect occurs, the steps that lead to it, and any input and output parameters. When a developer receives all this information, it is easier to reproduce the defect, which in turn leads to a faster fix. A tester who is uncooperative or does not provide this information will find that developers hesitate to work with them.
- Note whether the defect could be caused by the system configuration. I have seen this many times: when a tester finds a defect, the better ones are able to estimate whether it is a defect in the functionality, or whether it could have happened because an earlier build was on the system or because of some other similar problem. In such cases, the skilled tester will try the problem on another system and see whether it can be reproduced there as well.
- Overall improving the system. When somebody designs a set of processes, they do it based on some experience as well as on how they would like the system to work. When somebody works within that system, they can always find improvements in how they would like it to work. A good tester will be appreciated by their manager if they can find improvements in the system and processes. This earns them a reputation as somebody who thinks, which is a very useful reputation to develop.
Thursday, May 26, 2011
The use of automated testing tools depends on the size of the project. For smaller projects, it is not advisable to spend time and personnel on learning a new automated test tool, unless the tester already knows the tool. For larger projects, using an automated testing tool is advisable.
The approach for automating functional testing can be data driven or keyword driven. In these approaches, the test drivers are separated from the data or actions: test drivers can take the form of automated test tools, while data and actions can be maintained in spreadsheets. This separation makes automated test cases efficient to develop, control, and maintain.
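A minimal data-driven sketch: the driver is generic, while the test data lives in a separate table (an in-memory CSV here, standing in for a spreadsheet export). The function under test and the data rows are assumptions for illustration:

```python
# Data-driven testing sketch: a generic driver reads (input, expected)
# rows from tabular data kept outside the test code.

import csv
import io

def apply_discount(price, percent):
    # Hypothetical function under test.
    return round(price * (1 - percent / 100), 2)

# Data maintained separately from the driver, e.g. in a spreadsheet.
test_data_csv = """price,percent,expected
100,10,90.0
100,0,100.0
59.99,25,44.99
"""

def run_data_driven_tests(csv_text):
    failures = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        got = apply_discount(float(row["price"]), float(row["percent"]))
        if got != float(row["expected"]):
            failures.append((row, got))
    return failures

assert run_data_driven_tests(test_data_csv) == []
```

Adding a new test case is then a one-line edit to the data table, with no change to the driver code.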
A common automated tool is the record/playback type, in which a tester clicks through all combinations of menu choices while the tool records them, and the application can later be retested by playing the recording back. The disadvantage of this approach is that if there are many changes, the recordings change so much that managing them becomes very time consuming.
Some other automation tools are code analyzers, coverage analyzers, memory analyzers, load/performance testing tools, web test tools.
Reasons for choosing an automated test tool include testing more thoroughly, testing in ways that were not previously feasible, improving efficiency, and reducing tedious manual testing. A few things to keep in mind while choosing an automated testing tool are:
- Points at which current testing is time consuming.
- Points at which current testing is tedious.
- Problems that are repeatedly missed by the current testing.
- Testing procedures that are carried out again and again.
- Testing procedures that should be carried out but are not.
- Identify points where testing is not sufficient.
- Identify test tracking and management processes that can be implemented.
The choice of automated testing tool can be narrowed based on the characteristics of the software application. Once the shortlisting of automated tools is done, a trial is run before the final selection is made. Ensure that the selected testing tool is appropriate and that its capabilities and limitations are well understood.
Wednesday, May 25, 2011
There is an expectation that requirements should be determined early and remain stable, but this is rarely realistic. Some approaches that can be followed when requirements keep changing are:
- Work with project stakeholders early on to understand how the requirements might change.
- The application should be designed with some adaptability to changes that may occur in the requirements later.
- The code should be well written, documented, and commented. This makes accommodating changes much easier.
- Rapid prototyping can be used to minimize changes.
- Some extra time should be built into the schedule to allow for changes.
- New requirements should be moved to a separate phase or release, while the original requirements are kept in the current phase.
- The requirements that can be implemented more easily should be incorporated into the project first, keeping the difficult requirements for future versions.
- Management and the customer should keep in mind the effects that changes in the requirements can introduce.
- Automated test scripts should be made flexible.
- Test cases should be designed to be flexible.
- Ad hoc testing should be relied on more, instead of detailed test plans and test cases.
- Regression testing effort should be minimized.
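One way to keep automated scripts and test cases flexible, as the points above suggest, is to put requirement-dependent values in a single configuration object so a requirement change touches one place. The names and values below are purely illustrative:

```python
# Flexibility sketch: requirement-dependent values live in one table
# instead of being hard-coded into every test.

REQUIREMENTS = {
    "max_username_length": 20,  # a limit likely to change between versions
}

def validate_username(name, limits=REQUIREMENTS):
    # Hypothetical function under test.
    return 1 <= len(name) <= limits["max_username_length"]

def test_username_rules(limits=REQUIREMENTS):
    n = limits["max_username_length"]
    assert validate_username("a" * n, limits)            # at the limit
    assert not validate_username("a" * (n + 1), limits)  # just over
    assert not validate_username("", limits)             # empty name

test_username_rules()
# If the requirement later changes to 30 characters, only the table changes:
test_username_rules({"max_username_length": 30})
```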
If the application has functionality that was not in the requirements and that functionality is not necessary, it should be removed, because it can have unknown impacts and dependencies in the application that were not taken into account by the designer or customer. Management should be aware of the added risks resulting from unexpected functionality.
Tuesday, May 24, 2011
The best approach to software test estimation depends highly on the particular organization, the project, and the experience of the personnel involved. Consider two projects of the same size and complexity: one is life-critical medical equipment software and the other a low-cost computer game. The appropriate test effort for the medical equipment software is very large compared to the game.
Some approaches that can be considered are:
- METRICS BASED APPROACH
This approach focuses on collecting data from the organization's various projects; this information can then be used when planning future test projects. The expected required test time can be adjusted based on these metrics or other available information.
- IMPLICIT RISK CONTEXT APPROACH
This approach relies on a QA manager or project manager implicitly using the risk context, in combination with past experience, to choose the level of resources to allocate to testing. It is essentially an intuitive guess based on experience.
- ITERATIVE APPROACH
This approach starts with an initial rough estimate. Once testing begins and a small percentage of the first estimate's work is done, a refined estimate is made; the test plans can be refactored and a new estimate produced. The cycle is repeated as often as necessary.
- TEST WORK BREAKDOWN APPROACH
This approach focuses on breaking the expected testing work into smaller tasks for which estimates can be made with reasonable accuracy. Keep in mind that this approach assumes an accurate and predictable breakdown of testing tasks is possible.
- PERCENTAGE OF DEVELOPMENT APPROACH
This approach bases the testing estimate on the estimated programming effort, for example as a fixed percentage of it. Its accuracy depends on project-to-project variations in risk, personnel, application type, and complexity level.
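Two of the approaches above reduce to simple arithmetic; all the numbers below (the 40% ratio, effort figures, and sizes) are made up for illustration:

```python
# Worked estimation sketch with illustrative numbers only.

# Percentage-of-development approach: test effort as a fixed fraction of
# the programming estimate. The 40% ratio is an assumption; the right
# figure varies with risk, personnel, and application type.
dev_effort_days = 120
test_ratio = 0.40
test_estimate_days = dev_effort_days * test_ratio  # about 48 days

# Metrics-based approach: scale a past project's actual test effort by
# the relative size of the new project.
past_test_days = 60
past_size_kloc = 30
new_size_kloc = 45
metrics_estimate_days = past_test_days * (new_size_kloc / past_size_kloc)  # 90 days

assert abs(test_estimate_days - 48.0) < 1e-6
assert abs(metrics_estimate_days - 90.0) < 1e-6
```

Comparing the two results against each other (and against an iterative re-estimate once testing starts) is a useful sanity check on either number alone.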
Wednesday, May 18, 2011
There is an outline that should be followed while writing a test plan. It consists of the following:
- The Background
- The Introduction
- The Assumptions
- The Test Items to be tested.
- The Features to be tested.
- The Features not to be tested.
- The Approach that is to be followed.
- Item Pass/Fail Criteria, which is an itemized list of expected outputs and tolerances.
- The Suspension or Resumption Criteria.
- Test Deliverables, which covers what, besides the software itself, will be delivered.
- Testing Tasks which consists of functional and administrative tasks.
- Environmental needs like security clearance, office space and equipment, hardware and software requirements.
- Staffing and Training
- Risks and Contingencies
Test specifications are developed from the test plan and are part of the second phase of the test development life cycle. Test specifications explain how the test cases are to be implemented. A test specification consists of the following:
- Case Number
- Title of Test
- ProgName, the name of the program containing the test.
- Background which consists of Objectives, Assumptions, References, Success Criteria.
- Expected Errors
- Data that flows between the implementation under test and test engine.
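For teams that keep specifications machine-readable, the field list above can be represented directly in code; the class shape and the example values below are purely illustrative:

```python
# Sketch of a test specification record following the field list here.
# Field names are adapted for code; all contents are illustrative.

from dataclasses import dataclass, field

@dataclass
class TestSpecification:
    case_number: str
    title: str
    prog_name: str               # program containing the test
    objectives: str
    assumptions: str
    references: str
    success_criteria: str
    expected_errors: list = field(default_factory=list)
    test_data: dict = field(default_factory=dict)  # data exchanged with the test engine

spec = TestSpecification(
    case_number="TC-042",
    title="Login rejects blank password",
    prog_name="test_login.py",
    objectives="Verify blank passwords are refused",
    assumptions="Test account exists",
    references="REQ-7.3",
    success_criteria="Error message shown, no session created",
    expected_errors=["EmptyPasswordError"],
    test_data={"username": "demo", "password": ""},
)
assert spec.case_number == "TC-042"
```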
Tuesday, May 17, 2011
A bug is defined as a defect or abnormal behavior of software. Testing plays an important part in the removal of bugs. A bug travels through the whole bug life cycle until it is closed. The cycle includes the following stages:
- New: the bug is posted for the first time and not yet approved.
- Open: the tester approves that the bug is genuine.
- Assigned: the bug is assigned to a developer.
- Test: after fixing the bug, the developer assigns it to the testing team to re-test.
- Deferred: the bug is expected to be fixed in a later release.
- Rejected: the developer feels the bug is not genuine and rejects it.
- Duplicate: the bug is reported twice, or two bugs describe the same issue, so one is labeled a duplicate.
- Verified: once the bug is fixed, the tester verifies that it is no longer present and changes the status to verified.
- Reopened: the bug still exists, so it traverses the bug cycle once again.
- Closed: the bug is fixed and no longer exists, so the tester changes the status to closed.
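The life cycle can be sketched as a small state machine; the state names and allowed transitions below are one common convention, and real bug trackers vary:

```python
# Minimal bug life cycle as a state machine (illustrative transition set).

ALLOWED = {
    "new":       {"open", "rejected", "duplicate"},
    "open":      {"assigned"},
    "assigned":  {"test", "deferred"},
    "test":      {"verified", "reopened"},
    "reopened":  {"assigned"},
    "deferred":  {"assigned"},
    "verified":  {"closed"},
    "rejected":  set(),   # terminal states
    "duplicate": set(),
    "closed":    set(),
}

def move(state, new_state):
    if new_state not in ALLOWED[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

# Happy path: the bug is fixed on the first attempt.
s = "new"
for nxt in ["open", "assigned", "test", "verified", "closed"]:
    s = move(s, nxt)
assert s == "closed"
```

Encoding the transitions this way also documents which moves are illegal, e.g. a closed bug cannot be reassigned without being reopened first.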
SEVERITY AND PRIORITY OF THE BUG SHOULD FOLLOW THESE GUIDELINES:
- A critical bug prevents further testing of the product under test; no workaround is possible for such bugs.
- A major bug is one where the defect does not function as expected or causes other functionality to fail.
- A medium or average bug is one where the defect does not conform to standards and conventions.
- Minor or low bugs do not affect the functionality of the system.
To write a bug description, follow these guidelines:
- Be specific.
- Use present tense.
- No unnecessary words.
- No exclamation points.
- Do not use all CAPS.
- Mention the steps to reproduce.
Monday, May 16, 2011
Test cases are written to detect whether a feature of an application is working correctly. A test case is a document that consists of an input, an action, and an expected output. After a bug is found, the developers are informed about it and asked to fix it. After the fix, the module is re-tested to check that the fix has not created any new problems. A bug report typically records the following:
- Information of the bug and its severity.
- Bug identifier.
- Bug status.
- Application name.
- The name of the module, function, object etc. where the bug occurred.
- Environment factors.
- Test case name and identifier.
- One line bug description.
- Full bug description.
- If the bug is not covered by a test case, the steps to reproduce it are described.
- Names of the files used in the test.
- Severity level
- Whether the bug can be reproduced.
- Tester name and test date.
- Name of developer.
- Description of cause of the problem.
- Description of the fix.
- Date of fix.
- Application version that contains the fix.
- Notes from the tester.
- Date of retest.
- Results of retest.
- Requirements of regression testing.
- Tester who has done regression tests.
- Results of regression testing.
Sometimes the software is so buggy that it becomes impossible to test. To handle this situation, testers should report whatever bugs they come across, focusing on the critical ones. Such a situation usually points to deeper problems in the software development process.
Sunday, May 15, 2011
Documentation is very necessary in quality assurance. Everything should be documented: user manuals, test plans, bug reports, business reports, code changes, specifications, designs, and all other reports. Any changes in the process should also be documented.
A properly documented requirement specification is very necessary. Requirements are the details of what is to be done, and they should be clear, complete, detailed, and testable. Details should be determined and organized efficiently, although this can be difficult to manage. Some form of documentation with detailed requirements is very important for properly planning and executing tests.
There are some steps that are needed to develop and run software tests:
- The requirements and design specifications are necessary.
- The budget and cost should be known.
- The people who will be responsible, their responsibilities, and the standards and processes to follow should be listed.
- Risk aspects should be determined.
- Test approaches should be defined.
- Test environment should be defined.
- Tasks should be identified.
- Inputs should be determined.
- Test plan document should be prepared.
- Test cases should be written.
- Test environment and test ware should be prepared.
- Tests are performed and results are evaluated.
- Problems are tracked, re-testing is done and test plans are maintained and updated.
Friday, May 13, 2011
Typically, when we talk about software testing, we talk about test plans and test cases. As we get into more detail, we can explore various types of testing strategies and the many testing practices employed during the course of testing, such as Black Box testing, White Box testing, Automation testing, and so on.
However, there is another sort of testing called 'Ad hoc testing' that contrasts totally with all these strategies and practices. So, what is Ad hoc testing? Ad hoc testing is testing that does not use test cases or any formal document listing the software testing process. Ad hoc testing is a form of Black Box testing, since the tester has no idea of the internals of the application, and even less of its API and code structure.
Typically this sort of testing is done by a tester who is already experienced with the software product, and it can be done in different cases: when a round of systematic testing has already been completed, or when there is not enough time for a complete round of formal testing.
Why would we do Ad hoc testing? Many teams go in for ad hoc testing when they have already completed their rounds of formal testing; it has been found that when regular testers then do ad hoc testing, they can use their instinct to focus on the areas of the application that feel less solid, and find more bugs there. However, when a team resorts to ad hoc testing because there was no time for regular testing, it is entering a high-risk zone where bugs will remain in the application.
Ad hoc testing is also useful when an interim release of the software has to be handed over for demos or other situations where perfect quality is not required, but embarrassing defects should not be present in the application.
Friday, May 6, 2011
A good software test engineer has the following qualities:
- a software test engineer should focus on having a high quality product.
- a software test engineer should understand the customer's needs and requirements.
- a software test engineer should maintain a tactful and diplomatic relationship with developers.
- a software test engineer should maintain a good relationship with non technical people.
- a software test engineer should be able to exercise good judgment when needed.
A good Software Quality Assurance engineer has the following qualities:
- a software quality assurance engineer should focus on having a high quality product.
- a software quality assurance engineer should understand customer's needs and requirements.
- a software quality assurance engineer should maintain a tactful and diplomatic relationship with developers.
- a software quality assurance engineer should maintain a good relationship with non technical people.
- a software quality assurance engineer should be able to exercise good judgment when needed.
- a software quality assurance engineer should have a proper understanding of software development life cycle.
- a software quality assurance engineer should understand business approach and goals.
- a software quality assurance engineer should have good communications skills.
- a software quality assurance engineer should be able to find problem areas.
A good QA/Test Manager has the following qualities:
- they should have a good understanding of the software development life cycle.
- they should have a good and healthy relationship with technical and non technical people.
- they should be able to increase their team's productivity and efficiency.
- they should be able to create enthusiasm about the work among team members.
- they should be able to make correct and quick decisions.
- they should be good at handling pressure.
- they should have diplomatic skills.
Thursday, May 5, 2011
Black Box testing is one of the significant methods of testing, especially in cases where the tester is not supposed to know the details of the internals of the application (code, APIs, etc.) and is instead mainly concerned with inputs and outputs. The tester works from a set of valid and invalid inputs and the corresponding outputs expected for those inputs. This removes the need for testers who know the code (which typically means the developers who actually wrote the code need not be involved in the testing process). In addition, Black Box testing tends to replicate the process followed by end users, and helps reproduce the problems they face.
However, there are certain problems that are found during the Black Box testing, and people involved in Black Box testing should be fully aware of some of these challenges:
- Black Box testing is almost never able to cover all areas of the application, since the number of combinations of input variables can be huge.
- There is a cost involved in the development and testing process, whereby the earlier you find a problem, the cheaper it is to fix. Code review is cheapest, unit testing more expensive, and actual black box testing the most expensive; teams need to ensure they have factored this into their planning.
- Testers depend on the wording of the test cases to ensure the test cases are comprehensive. If the exact requirement stated in a test case is not clear, there is a chance that future testers will miss some of the input-output cases; many examples show that converting specifications into test cases in natural language can lead to errors and missed situations.
- In some cases, Black Box testing alone can never be enough; you need to employ a combination of Black Box testing and White Box testing.
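A minimal black-box sketch: the tester exercises only inputs and outputs, never the implementation. The triangle classifier below is a stock teaching example, not taken from any particular product:

```python
# Black-box sketch: valid and invalid inputs are paired with expected
# outputs, exactly as a black-box test case table would list them.

def classify_triangle(a, b, c):
    # Implementation details are invisible to the black-box tester.
    if a <= 0 or b <= 0 or c <= 0 or a + b <= c or a + c <= b or b + c <= a:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

cases = [
    ((3, 3, 3), "equilateral"),
    ((3, 3, 5), "isosceles"),
    ((3, 4, 5), "scalene"),
    ((1, 2, 3), "invalid"),   # degenerate: a + b == c
    ((0, 4, 5), "invalid"),   # non-positive side
]

for inputs, expected in cases:
    assert classify_triangle(*inputs) == expected
```

Even for three small integer inputs, the full input space is enormous, which illustrates the coverage limitation noted above: black-box testing samples the space, it cannot exhaust it.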