Computer systems are an important part of our society. In general, reliability refers to the consistency of a measure: a test is considered reliable if we get the same result repeatedly. Software reliability is the probability of failure-free software operation for a specified period of time in a specified environment, and it is an important factor affecting overall system reliability.
A completely different approach is “reliability testing”, where the software is subjected to the same statistical distribution of inputs that is expected in operation.
Reliability testing will tend to uncover earlier those failures that are most likely in actual operation, thus directing efforts at fixing the most important faults.
For the fault-finding effectiveness of reliability testing to deliver on its promise of better use of resources, the testing profile must be truly representative of operational use.
Reliability testing is attractive because it offers a basis for reliability assessment.
Reliability testing may be performed at several levels. Complex systems may be tested at component, circuit board, unit, assembly, subsystem and system levels.
A key aspect of reliability testing is to define "failure".
Software Reliability Techniques
- Trending reliability tracks the failure data produced by the software system to develop a reliability operational profile of the system over a specified time.
- Predictive reliability assigns probabilities to the operational profile of a software system.
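The two techniques above can be sketched in a few lines. The input classes and their probabilities below are invented for illustration; a real operational profile would be derived from field data about how the system is actually used.

```python
import random

# Hypothetical operational profile: each input class with its assumed
# probability of occurring in real operation.
OPERATIONAL_PROFILE = {
    "login": 0.50,
    "search": 0.30,
    "checkout": 0.15,
    "admin": 0.05,
}

def sample_inputs(profile, n, seed=0):
    """Draw n test inputs following the operational profile, so the
    operations most frequent in real use are also tested most often."""
    rng = random.Random(seed)
    classes = list(profile)
    weights = [profile[c] for c in classes]
    return rng.choices(classes, weights=weights, k=n)

def estimate_reliability(run_outcomes):
    """Trend-style estimate: the fraction of failure-free runs observed."""
    return sum(run_outcomes) / len(run_outcomes)
```

Feeding the sampled inputs to the system under test and recording pass/fail outcomes yields the failure data from which a reliability profile can be trended over time.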
Tuesday, August 31, 2010
Monday, August 30, 2010
Performance testing finds out the speed and efficiency of a system, computer, product, or device. It is testing performed to determine how fast some aspect of a system performs under a particular workload, and software-related performance problems can easily be identified through it.
Performance testing demonstrates that the system meets its performance criteria. It can compare two systems to find out which performs better. It can also measure which parts of the system or workload cause the system to perform badly.
Types of Performance Testing
1.Load Testing : A load test is usually conducted to understand the behavior of the application under a specific expected load. Examples of load testing include:
- Downloading a series of large files from the Internet.
- Running multiple applications on a computer or server simultaneously.
- Assigning many jobs to a printer in a queue.
2.Stress Testing: It determines the load under which a system fails, and how it fails. There are various varieties of Stress Tests, including spike, stepped and gradual ramp-up tests.
3.Volume Testing: It tests what happens when huge amounts of data are handled.
4.Soak Testing: The system is run at high levels of load for prolonged periods of time.
5.Configuration Testing: The process of testing a system with each of the supported configurations of software and hardware.
6.Timing Testing: It evaluates response times and the time taken to perform an action.
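As a sketch of a simple load test, concurrent users can be simulated with a thread pool and per-request response times collected. The 1 ms `fake_request` below is an invented stand-in for a call to the real system under test.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def timed_call(fn):
    """Run one request and return its elapsed time in seconds."""
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

def load_test(fn, users, requests_per_user):
    """Simulate `users` concurrent clients, each issuing several
    requests, and collect every per-request response time."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        futures = [pool.submit(timed_call, fn)
                   for _ in range(users * requests_per_user)]
        return [f.result() for f in futures]

def fake_request():
    """Stand-in for a call to the real system under test."""
    time.sleep(0.001)

times = load_test(fake_request, users=5, requests_per_user=4)
average = sum(times) / len(times)
slowest = max(times)
```

Comparing `average` and `slowest` against agreed performance criteria turns the measurement into a pass/fail performance test.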
Sunday, August 29, 2010
Smoke testing is a quick-and-dirty test that checks that the major functions of a piece of software work, without bothering with finer details.
Smoke testing is used when the product is new. Smoke testing is run when the build is received to check if the build is stable. A smoke test is a series of test cases that are run prior to commencing with full-scale testing of an application. It is a non-exhaustive kind of testing.
A smoke test is used as an acceptance test prior to introducing a new build to the main testing process, i.e. before integration or regression.
Characteristics of Smoke Testing
- A smoke test is scripted, either using a written set of tests or an automated test.
- A Smoke test is designed to touch every part of the application in a cursory way.
- It is shallow and wide.
- Smoke testing is conducted to verify that the most crucial functions of a program are working, without bothering with finer details.
- Smoke testing is applicable when new components get added and are integrated with the existing code.
- It ensures that the build is not broken.
- They are also useful for verifying a build is ready to send to test.
- Smoke tests are not a substitute for actual functional testing.
Advantages of Smoke Testing
- Integration problems are reduced when smoke testing is done at various stages.
- A properly designed smoke test is capable of detecting problems at an early stage.
- If the major problem is detected at an early stage, it saves time and cost.
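As a sketch of a scripted smoke test, the suite below runs one shallow check per major function. The application functions (`create_user`, `login`, `place_order`) are stand-ins invented for the example.

```python
# Minimal stand-ins for the application's major functions.
def create_user(name):
    return {"name": name, "id": 1}

def login(user):
    return user["id"] == 1

def place_order(user, item):
    return {"user": user["id"], "item": item, "status": "ok"}

# Shallow and wide: one cursory check per crucial function.
SMOKE_TESTS = [
    ("create user", lambda: create_user("alice")["id"] == 1),
    ("login",       lambda: login(create_user("alice"))),
    ("place order", lambda: place_order(create_user("a"), "book")["status"] == "ok"),
]

def run_smoke_suite(tests):
    """Run every check once; any failure means the build is not
    stable enough to proceed to full-scale testing."""
    return [name for name, check in tests if not check()]
```

An empty result list means the build is accepted for further testing; a non-empty list names the broken areas.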
Saturday, August 28, 2010
In black box testing, the internals of the system are not taken into consideration. The testers do not have access to the source code. A tester who is doing black box testing generally interacts through a user interface with the system by giving the inputs and examining the outputs.
The advantages of black box testing include that it is very efficient for large segments of code, it clearly separates the user's perspective from the developer's perspective, access to the code is not required, and it is very easy to execute.
The disadvantages of black box testing include limited code-path coverage, since only a limited number of inputs can be checked, and the inability to target code segments or paths that may be more error-prone than others.
There are different kinds of testing that are associated with black box testing :
1. Smoke Testing
2. User Input Testing
3. User Acceptance testing
It includes :
- Alpha testing
- Beta testing
4. System Testing
- Functional testing
- User interface testing
- Usability testing
- Compatibility testing
- Model based testing
- Error exit testing
- User help testing
- Security testing
- Capacity testing
- Performance testing
- Sanity testing
- Regression testing
- Reliability testing
- Recovery testing
- Installation testing
- Maintenance testing
- Accessibility testing
Wednesday, August 25, 2010
Installation testing occurs outside the development environment. It can be compared to introducing a guest into your home: the new guest should be properly introduced to all the family members in order to make him feel comfortable. Installing new software is quite like this example.
The customer will be happy if your installation is successful on the new system but if the installation fails then the program will not work on that system.
Testing the application to make sure that it is working is a crucial step. First, check whether the application does what it is supposed to do. Secondly, run three or four invocations of the application and check memory, CPU load, etc. Third, check whether other normal applications, hardware, etc. are still working fine.
Installation testing should take care of the following points: -
- Check whether, while installing, the product checks for dependent software / patches, say Service Pack 3.
- The product should check for the version of the same product on the target machine.
- The installer should give a default installation path, say “C:\programs\.”
- The installer should allow the user to install at a location other than the default installation path.
- Check if the product can be installed “Over the Network”.
- Installation should start automatically when the CD is inserted.
- Installer should give the remove / Repair options.
- When uninstalling, check that all registry keys, files, DLLs, shortcuts, and ActiveX components are removed from the system.
- Try to install the software without administrative privileges (login as guest).
- Try installing on different operating systems.
- Try installing on a system with a non-compliant configuration, such as less memory / RAM / HDD.
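A few of the checks above can be automated. The sketch below assumes a Python-based product with an invented 50 MB disk-space requirement; the threshold and version numbers are illustrative, not real product requirements.

```python
import shutil
import sys
import tempfile

MIN_FREE_BYTES = 50 * 1024 * 1024  # assumed product requirement

def check_runtime_version(minimum=(3, 8)):
    """Dependency check: is the required runtime version present?"""
    return sys.version_info[:2] >= minimum

def check_disk_space(path, required=MIN_FREE_BYTES):
    """Non-compliant-configuration check: enough free disk space?"""
    return shutil.disk_usage(path).free >= required

def check_install_path_writable(path):
    """Can the installer actually create files at the target path?
    (Also exercises the no-administrator-privileges scenario.)"""
    try:
        with tempfile.TemporaryFile(dir=path):
            return True
    except OSError:
        return False
```

Running such checks before copying any files lets the installer fail early with a clear message instead of leaving a half-installed product behind.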
Monday, August 23, 2010
Compatibility testing is performed to ensure compatibility of an application or Web site with different browsers, operating systems, and hardware platforms. Compatibility testing is one of the several types of software testing performed on a system that is built based on certain criteria and which has to perform specific functionality in an already existing setup/environment.
- Compatibility testing can be performed manually or can be driven by an automated functional or regression test suite.
- The customer has different kinds of platforms, operating systems, browsers, databases, servers, clients, and hardware.
- Different versions, configurations, display resolutions, and Internet connect speeds all can impact the behavior of your product and introduce costly and embarrassing bugs.
Therefore, compatibility testing becomes necessary.
- Software is rigorously tested to ensure compatibility with all the operating systems, software applications and hardware you support.
- Compatibility testing will ensure that the code is stable in both environments and that any information/error messages or user interaction is handled and presented in the same way regardless of the OS.
- Compatibility testing can be easily accommodated in manual tests by having different members of the team work on varying screen resolutions to see what issues arise.
- Compatibility testing should take account of the varying connection speeds available to the user base.
- Compatibility testing may be undertaken as part of performance testing, so it is always worth checking what the non-functional test team on a given project is planning to run, in order to see whether this is covered.
- Compatibility testing can be used to evaluate the performance of system/application/website on network with varying parameters such as bandwidth, variance in capacity and operating speed of underlying hardware etc.
- Compatibility testing can be used to evaluate the performance of system/application in connection with various systems/peripheral devices.
- Database compatibility testing is used to evaluate an application/system’s performance in connection to the database it will interact with.
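One common way to organize this work is a configuration matrix covering every supported combination. The browsers, operating systems, and resolutions below are illustrative examples, not a recommended support list.

```python
from itertools import product

# Assumed example support lists for the product under test.
BROWSERS = ["Firefox", "Chrome", "IE8"]
OPERATING_SYSTEMS = ["Windows 7", "Ubuntu 10.04", "OS X"]
RESOLUTIONS = ["1024x768", "1280x800"]

def compatibility_matrix(*dimensions):
    """Cartesian product of the configuration dimensions: each tuple
    is one environment in which the test suite should be run."""
    return list(product(*dimensions))

matrix = compatibility_matrix(BROWSERS, OPERATING_SYSTEMS, RESOLUTIONS)
```

The matrix can feed a manual test assignment sheet or parameterize an automated regression suite so every environment is exercised.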
Thursday, August 19, 2010
Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test. There can be two types of testing: manual and automated. The automation of software testing is becoming more and more popular, but at the same time manual testing cannot be ignored. In this article we shall mostly be focusing on the merits and demerits of automated software testing. As chopping and adding of requirements is nothing new in today's software business, and the testing window is getting smaller, there is a realization of a greater need for test automation.
The sad part about automation testing is the expectations that managers have of these testing models. It is a general belief that automation can help find more bugs, which is not true: the efficiency of the test scripts is solely dependent on the efficiency of the test cases that make up the scripts. People expect that, since they have introduced automated testing, they can do away with or at least reduce manual testing, which is a big mistake. As already mentioned, it is the test cases that define the efficiency of the automated test scripts, and these test cases are written by manual testers; if you think of doing away with them, the customers might also think of doing away with your product.
Continuing with the unrealistic expectations that people have of automation testing, part of the blame goes to the fact that when vendors give demos of these products they only tell you what you want to hear, not the reality of the difficulties faced when you try to use the tool on your application. Proper planning is essential when you go about selecting an automated tool. It is very important that the people who are actually going to use the tool get hands-on experience with it. This might sometimes be impractical because of tight work schedules and time-bound deadlines, but where applied it is a very useful technique for selection.
Many test automation tools provide record-and-playback features that allow users to interactively record user actions and replay them any number of times, comparing actual results to those expected. This approach can be applied to any application that has a graphical user interface. However, if the developers continuously keep changing the GUI even when it is not called for, it indicates a lack of process in place, and the automated tests will have to be configured again and again.
People have to be educated about the advantages and limitations of automation testing techniques. It is important that the pitfalls of automation testing are properly evaluated to avoid inconvenience at a later stage. Care should be taken that the selected scripts are compatible with your applications. There is no doubt that automation testing is an asset in the armory of the testing team, but without proper knowledge and understanding it can even turn out to be a negative catalyst.
Wednesday, August 18, 2010
User Acceptance Testing is often the final step before rolling out the application. Usually the end users who will be using the applications test the application before ‘accepting’ the application.
A formal product evaluation performed by a customer as a condition of purchase, the testing can be based upon the User Requirements Specification to which the system should conform. User Acceptance testing is black box testing.
USER : System developers cannot do it because, although they are expert in writing software, they are unlikely to know anything about the realities of running the organisation other than what they have acquired from requirements specifications and similar documents.
Acceptance : The acceptance of a system means you are confident it will give benefit to the organisation. It does not mean that it only meets the original specification as requested.
Testing : Whenever people are asked what testing is, many of them say it is to prove the system works.
The point of UAT is for business users to try and make a system fail, taking into account the real organisation it will be working in.
User Acceptance testing can be in the form of :
Alpha Testing - Tests are conducted at the development site by the end users. Environment can be controlled a little bit in this case.
Beta Testing - Tests are conducted at the customer site and the development team does not have any control over the test environment.
Tuesday, August 17, 2010
Often, when a bug is detected in a software product, all the energy of a developer is concentrated on that particular bug, so that he sometimes forgets the big picture, which results in the introduction of many more errors into the program. This is where regression testing comes into the picture. Regression testing is any type of software testing that seeks to uncover software errors by partially retesting a modified program. In regression testing, a systematic selection is made of the minimum suite of tests needed to adequately cover the affected change. It is often extremely difficult for a programmer to figure out how a change in one part of the software will echo in other parts; hence regression testing includes rerunning previously run tests and checking whether previously fixed faults have re-emerged.
For identification of regressions, the most accepted practice is that once a bug is located and fixed, a test that exposes the bug is recorded and regularly re-run after subsequent changes to the program. Although this can be done through manual testing procedures, automated testing tools are often used. Such test suites contain software tools that allow the testing environment to execute all the regression test cases automatically. In some cases these suites are re-run at specific intervals and any failures are reported. Regression testing is an integral part of any present-day software development method: extensive, repeatable, and automated testing of the entire software is done at every stage in the software development cycle. Traditionally in the corporate world, regression testing is performed by the software quality assurance team after the development team has completed its work. As a consequence of the introduction of new bugs, program maintenance requires far more system testing per statement written than any other programming. Theoretically, after each fix one must run the entire batch of test cases previously run against the system, to ensure that it has not been damaged in an obscure way. In practice, such regression testing is very costly.
The biggest challenge faced by this kind of regression testing is that the tests tend to be very fragile: even a trivial change in the application often causes them to report “failures” that actually indicate the script needs to be updated to deal with the change. This causes inconvenience because it often takes more time and effort to maintain these automated regression tests than it would have taken to just execute them manually.
Concluding the discussion, it should be mentioned that the programmer should be aware of when it is useful to use regression testing and when it is a waste of resources. If your testing mission is to unleash as many defects as possible, then regression testing may not be a good choice; but when the purpose is to demonstrate the ruggedness of specific features of the product on a relatively stable and mature application, automated regression testing is the right choice.
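The recording practice described above can be sketched as follows; `parse_price`, its earlier bug, and the bug numbers are invented examples.

```python
def parse_price(text):
    """Function under test; hypothetical earlier versions crashed on
    '$' prefixes (bug now fixed)."""
    return float(text.lstrip("$"))

REGRESSION_SUITE = []

def record_regression_test(name, check):
    """Once a bug is located and fixed, record a test that exposed
    it, so the suite can detect the fault re-emerging later."""
    REGRESSION_SUITE.append((name, check))

def run_suite():
    """Re-run every recorded test; report any re-emerged faults."""
    return [name for name, check in REGRESSION_SUITE if not check()]

record_regression_test("bug-101: dollar sign", lambda: parse_price("$9.99") == 9.99)
record_regression_test("bug-207: integer price", lambda: parse_price("5") == 5.0)
```

After every subsequent change to `parse_price`, rerunning `run_suite()` answers the key regression question: has a previously fixed fault come back?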
Monday, August 16, 2010
Error handling refers to the anticipation, detection, and resolution of programming, application, and communications errors. Development errors, which occur in the form of syntax and logic errors, can be prevented. A run-time error takes place during the execution of a program, and usually happens because of adverse system parameters or invalid input data.
The objectives of error handling testing are :
- The application system recognizes all expected error conditions.
- Accountability for processing errors is assigned, and procedures provide a high probability that errors will be properly corrected.
- Reasonable control is maintained over errors during the correction process.
How to use
- A group of knowledgeable people is required to anticipate what can go wrong in the application system.
- All the application-knowledgeable people assemble to integrate their knowledge of the user area, auditing, and error tracking.
- Logical test error conditions should be created based on this assimilated information.
Error handling testing can be used throughout the SDLC. The impact that the errors produce should be judged and steps should be taken to reduce them to an acceptable level. It is used to assist in error handling management process.
Saturday, August 14, 2010
Volume testing belongs to the group of non-functional tests. A huge amount of data is processed through the application under test in order to check the extreme limits of the system.
- Volume testing refers to testing a software application for a certain data volume.
- Volume testing can (and should) be used in component testing.
- Volume testing will also be undertaken (normally) as part of the User Acceptance test.
- Volume testing is used to find faults and give credible information about the state of the component, on which business decisions can be taken.
- Volume testing might take place to confirm that the central core architecture is the one to proceed with.
- Developers, customers, and end users can all perform volume tests. The tests can be outsourced to a testing laboratory that specializes in performance testing.
- Online system: Input fast, but not necessarily fastest possible, from different input channels.
- Database system: The database should be very large. Every object occurs with the maximum number of instances.
- File exchange: Especially long files. Maximal lengths. Lengths longer than typical maximum values in communication protocols.
- Disk space: Try to fill disk space everywhere there are disks.
A volume test is interesting if:
• the volume test has not been executed before;
• it is unclear whether the system always has the necessary memory resources;
• this cannot be guaranteed when several systems share the hardware;
• it is not guaranteed that no larger data volumes than specified will occur;
• the risk is not low if data volumes nevertheless turn out greater than specified and the system does not work well enough then.
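A minimal volume-test sketch, with an assumed specified maximum of 100,000 records and an invented `deduplicate` component under test:

```python
MAX_RECORDS = 100_000  # assumed specified maximum data volume

def deduplicate(records):
    """Component under test: must still meet its contract (remove
    duplicates, preserve first-seen order) at the maximum volume."""
    seen = set()
    out = []
    for r in records:
        if r not in seen:
            seen.add(r)
            out.append(r)
    return out

def volume_test():
    """Push the specified maximum volume through the component and
    check that its behavior is still correct at that scale."""
    data = [i % 1000 for i in range(MAX_RECORDS)]  # many duplicates
    result = deduplicate(data)
    assert len(result) == 1000        # correct at full volume
    assert result[:3] == [0, 1, 2]    # order preserved
    return True
```

In a real volume test the data would come from generated files or database fixtures, and memory and disk usage would be monitored alongside correctness.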
Friday, August 13, 2010
Localization is a process of customizing a software application that was originally designed for a domestic market so that it can be released in foreign markets. Localization testing checks the quality of a product's localization for a particular target culture/locale.
Areas of focus in Localization testing
- Localization testing should focus on the areas affected by localization, such as UI and content, culture/locale-specific, language-specific, and region-specific areas.
- The localization checklist includes spelling rules, sorting rules, upper- and lower-case conversions, printers, paper sizes, operating system, keyboards, text filters, hot keys, mouse, date formats, measurements and rulers, and available memory.
- Localization testing should include basic functionality tests, setup and upgrade tests run in the localized environment and plan application and hardware compatibility tests according to the product's target region.
- Availability of drivers for local hardware.
- Encryption algorithms.
- Focus on customization that could not be automated through the globalization services infrastructure.
- The quality is not usually checked during localization testing.
- Validation and verification of applications, linguistic accuracy, and consistency checks can be purchased separately.
- Localization testing can weed out potential errors early, before products are localized and correcting the errors becomes costly.
- The advantage of localization testing is that it saves time.
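One concrete localization check from the checklist above is date formatting. The formats below are common conventions for these locales, used here as assumed expected values for the example.

```python
import datetime

# Assumed per-locale date formats for the product under test.
DATE_FORMATS = {
    "en_US": "%m/%d/%Y",
    "de_DE": "%d.%m.%Y",
    "ja_JP": "%Y/%m/%d",
}

def format_date(d, locale):
    """Render a date the way the localized build should display it."""
    return d.strftime(DATE_FORMATS[locale])

def check_date_localization(d, expected):
    """Compare rendered dates against per-locale expected strings;
    return the locales that mismatch."""
    return [loc for loc, want in expected.items()
            if format_date(d, loc) != want]

sample = datetime.date(2010, 8, 13)
mismatches = check_date_localization(sample, {
    "en_US": "08/13/2010",
    "de_DE": "13.08.2010",
    "ja_JP": "2010/08/13",
})
```

The same pattern extends to number separators, currency symbols, and sorting rules: state the locale-specific expectation explicitly, then verify the product against it.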
Thursday, August 12, 2010
Localization is the process of customizing a software application that was originally designed for a domestic market so that it can be released in foreign markets. Translating and adapting both the content (text and style) and the presentation (graphical and technical components) of an EXISTING product according to the language and cultural characteristics of the target audience or region for which it is intended.
- Localization testing is not strictly a part of the development of world-ready software.
- Localization becomes possible once you have developed world-ready software.
- The end result of localization is a product that is appropriate for the target locale's business and cultural conventions, appears custom-built for the end user's cultural and linguistic background, and does not change the original intended meaning.
- Users can interact with a successfully localized product in their own language and in a setting that feels natural to them.
Software localization is the process of adapting a software product to the linguistic, cultural and technical requirements of a target market. Software localization is the translation and adaptation of a software or web product, including the software itself and all related product documentation.
- This process often requires a lot of work hours and tremendous effort from the development teams.
- There are a number of tools that were specifically created in order to simplify the localization process.
Software Localization Process
- Identification of what must be translated in the software.
- Adoption of a localization strategy.
- Establishing a schedule for the localization process and setting deadlines.
- Recruiting adequate, professional translators.
- Ensuring the accuracy and coherence of the translators' work.
- Consulting the development team.
- Defining a properly internationalized product that won't need to undergo changes for each of the envisaged foreign languages.
- Testing the product in each and every one of the languages in question.
Wednesday, August 11, 2010
Software development is the development of a software product in a planned and structured process. Such software can serve numerous purposes: it can be used to meet the specific needs of a client or business, to meet a perceived need of some set of potential users, or for personal use. The term software development is often used to refer to the activity of writing the computer code, whereas in a broad sense it includes everything involved between the conception of the desired software and its final manifestation.
The first and foremost part of a software system is the reason for its existence. It has to be kept in mind that the reason for the existence of a software product is that it provides value to the user. Before even conceptualizing the product it has to be ensured that it should add value to the system.
Software design is not a haphazard process but a simple and systematic approach towards the final product. All design should be as simple as possible, but no simpler. Simple designs do not necessarily mean quick and dirty. The designs are developed in a step-by-step process which takes a lot of thought and numerous iterations of simplification. An important aspect of a successful software project is the presence of a clear vision. Without conceptual integrity, the software would end up with two or three heads, none of them having the virtue to capture the client's satisfaction. Empowering the system architect lets him uphold the vision and enforce compliance. It has to be kept in mind that whatever you are producing will be used, maintained, or documented by others. People will depend on being able to understand your system, so always specify, design, and implement keeping in mind that someone else will have to understand what you have done. Someone else might have to debug your code; clear work makes their job easier, in turn adding value to your system.
The typical shelf life of a software product has been on a declining trend in the recent past. In today's conditions, where specifications change on less than a month's notice and hardware platforms are outdated in just a few months, software lifetimes are measured in months instead of years. Software products have to be designed to meet these challenges from the start. Always be prepared for the worst-case scenario. Reusability saves a lot of time and effort. Attaining a high level of reusability is probably the hardest goal to accomplish in developing a software system. Reusability of code and designs is considered to be a major benefit of using object-oriented technologies. However, it is necessary to convey the opportunities for reuse to others in the organization. Planning ahead for reuse reduces the cost and increases the value of both the reusable components and the systems into which they are incorporated.
The last but definitely not the least principle of software development is applying the thought process before taking any action. It might not always get you the right result, but it will help you do it the right way the next time around. Applying all the above-stated principles requires intense thought, for which the rewards are enormous.
Tuesday, August 10, 2010
Static Testing is also called verification. It is also called dry run testing. In this the software is not actually used. It mainly checks the sanity of the code, algorithm or the document. This type of testing can be used by the developer who wrote the code, in isolation. Code reviews, inspections and walkthroughs are also used.
Static Code Analysis is a method of detecting errors in program code based on the programmer's reviewing the code marked by the analyzer in those places where potential errors may occur.
The main advantage of using static code analyzers lies in the potential for considerable cost savings in eliminating defects from a program: the earlier an error is detected, the lower the cost of correcting it. Static analysis tools can detect a large number of errors at the construction stage.
Formal methods is the term applied to the analysis of software (and hardware) whose results are obtained purely through the use of rigorous mathematical methods.
Implementation techniques of formal static analysis include:
- Model checking.
- Data-flow analysis.
- Abstract interpretation models.
- Use of assertions in program code.
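As a tiny illustration of automated static analysis, the sketch below inspects Python source without executing it, flagging bare `except:` clauses — one example of the kind of defect such tools detect at the construction stage.

```python
import ast

def find_bare_excepts(source):
    """Walk the parsed syntax tree (no execution) and return the
    line numbers of bare `except:` handlers, which silently swallow
    every error including ones the programmer never anticipated."""
    tree = ast.parse(source)
    return [node.lineno for node in ast.walk(tree)
            if isinstance(node, ast.ExceptHandler) and node.type is None]

SAMPLE = """\
try:
    risky()
except:
    pass
"""
warnings = find_bare_excepts(SAMPLE)
```

Production analyzers apply hundreds of such checks, plus data-flow analysis and abstract interpretation, but the principle is the same: reason about the code's structure rather than run it.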
Monday, August 9, 2010
The objectives of control testing are to ensure that data is accurate and complete, that the transactions carried out are authorized, and that an adequate audit trail of information is maintained. Control testing is an efficient, effective, and economical process. It ensures that the process meets the needs of the user.
Control is a management tool to ensure that processing is performed in accordance with the desires or intents of management.
To use control testing, risks must be identified. Testers should follow a negative approach, which means that they should be able to judge what could go wrong with the system. A risk matrix should be developed which identifies the risks, the controls, and the segment within the application system in which each control resides.
Application systems are frequently interconnected with other application systems. The interconnection may be data coming from another application system, leaving for another application system, or both. Frequently multiple systems (applications), sometimes called cycles or functions, are involved.
The objective is to determine that proper parameters and data are correctly passed between the applications, that documentation is correct and accurate, and that proper timing and coordination of functions exist between the application systems.
Use inter-system testing when there is a change of parameters in an application. Inter-system parameters should be checked and verified after the change, or after a new application is placed into production.
Most of the software requirement specifications given today have functional requirements. Functional requirements are the descriptions of the behaviors expected from the software in the given conditions. Functional requirements may include calculations, technical details, data manipulation and processing and other specific functionality that define what a system is supposed to achieve. It is pretty easy to deduce why emphasis is given to these specifications but nonfunctional software requirements are also equally important. The functional requirements are supported by the nonfunctional requirements which impose constraints on the design or implementation. Nonfunctional requirements specify overall characteristics such as cost and reliability.
Nonfunctional requirements are sometimes interchangeably called quality requirements. These quality requirements add quality attributes, quality factors, and quality of service to the software requirements. Some of the nonfunctional requirements include usability, portability, reliability, availability, efficiency, integrity, security, safety, robustness, and performance. These include both the internal and external quality aspects, which are important to developers and users respectively.
The most difficult part of the nonfunctional requirements is specifying the quality attributes but once specified they provide an edge from the other available products. The nonfunctional requirements turn an ordinary product that serves the purpose to a delightful product. One cannot expect customers to like the product unless the quality attributes are explicitly specified. It is important to outline these specifications because sometimes these specifications propel significant architectural and design decisions and there would be a lot of wastage of resources in case re-designing is required.
A drawback of these attributes is that they trade off against one another. The most prominent conflict is between efficiency and most other attributes. For example, adding more layers of security causes a decline in efficiency, and likewise an improvement in efficiency might hurt maintainability. It is therefore critical that users and developers decide which attributes matter most for the product.
The most common mistake in stating quality requirements is giving abstract specifications that are relative and not verifiable. If a specification merely says the product should be user friendly, there is no way to enforce quality on its user friendliness unless the expectation is made explicit. A better specification would state, for example, that availability should be at least 90%, with a desired standard of 95%. Demanding round-the-clock availability is more tempting and less time consuming, but a measurable target is the best recipe for moving all the project stakeholders toward a common objective.
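A measurable availability target can be turned into a number everyone can verify. The sketch below, with an illustrative helper name, converts an availability percentage into a monthly downtime budget:

```python
# Hypothetical helper: convert an availability target into a monthly
# downtime budget, so "95% available" becomes a verifiable number.

def downtime_budget_hours(availability_pct, period_hours=30 * 24):
    """Maximum allowed downtime (in hours) for a given availability target."""
    return period_hours * (100 - availability_pct) / 100

print(downtime_budget_hours(90))  # 72.0 hours per 30-day month
print(downtime_budget_hours(95))  # 36.0 hours per 30-day month
```

Stated this way, "the system was down 40 hours this month" either meets the 90% standard or it does not; there is no arguing about what "user friendly" or "always up" means.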
Saturday, August 7, 2010
Black Box Testing is testing without knowledge of the internal workings of the item being tested. There are four black box testing methods:
Graph-based Testing Methods
Basic idea: A "cause" is an input condition, and an "effect" is a specific sequence of computations to be performed. A cause-effect graph is basically a directed graph that describes the logical combinations of causes and their relationship to the effects to be produced.
- These black-box methods are based on the nature of the relationships (links) among the program objects (nodes); test cases are designed to traverse the entire graph.
- A graph is created between the objects and the relationships.
- From the graph, each object relationship is identified and test cases written accordingly to discover the errors.
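The steps above can be sketched in code. In this minimal illustration (the cause and effect names are invented for the example, not taken from any real specification), causes are boolean input conditions and each effect is a logical combination of causes; enumerating the combinations yields one test case per path through the graph:

```python
from itertools import product

# Illustrative causes (input conditions) for a login scenario.
causes = ["valid_user", "valid_password", "account_locked"]

# Each effect is the logical combination of causes that produces it.
effects = {
    "login_succeeds": lambda c: c["valid_user"] and c["valid_password"] and not c["account_locked"],
    "login_rejected": lambda c: not (c["valid_user"] and c["valid_password"]),
    "lockout_message": lambda c: c["account_locked"],
}

# Traverse the graph: enumerate cause combinations and record which
# effects each one triggers, giving one test case per combination.
test_cases = []
for values in product([True, False], repeat=len(causes)):
    combo = dict(zip(causes, values))
    triggered = sorted(name for name, rule in effects.items() if rule(combo))
    test_cases.append((combo, triggered))

print(len(test_cases))  # 8 combinations of the three causes
```

Each (combination, triggered effects) pair becomes a test case: the inputs to supply and the behavior the tester should observe.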
Equivalence Partitioning
This method divides the input domain of a program into classes of data from which test cases can be derived. It reduces the number of test cases. The guidelines of Equivalence Partitioning are :
- If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
- If an input condition requires a specific value, then one valid and two invalid equivalence classes are defined.
- If an input condition specifies a member of a set, then one valid and one invalid equivalence class are defined.
- If an input condition is boolean, then one valid and one invalid equivalence class are defined.
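Applying the first guideline to a range condition, say "age must be between 18 and 60" (the range here is an assumption for illustration), gives one valid class and two invalid classes, each tested by a single representative value:

```python
# Equivalence partitioning for an assumed range condition: age in [18, 60].
LOW, HIGH = 18, 60

def is_valid_age(age):
    return LOW <= age <= HIGH

# One valid class (inside the range) and two invalid classes
# (below and above it); one representative value covers each class.
representatives = {
    "valid (18..60)": 35,
    "invalid (< 18)": 10,
    "invalid (> 60)": 75,
}

for partition, value in representatives.items():
    print(partition, "->", is_valid_age(value))
```

Three test values stand in for the entire input domain, which is exactly how the technique reduces the number of test cases.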
Friday, August 6, 2010
Black Box Testing is testing without knowledge of the internal workings of the item being tested. There are four black box testing methods. Graph based testing method and Equivalence Partitioning have already been discussed.
Boundary Value Analysis
Boundary Value Analysis (BVA) is a functional testing technique in which test cases are chosen at the extreme boundaries of the input domain. Boundary values include maximum, minimum, just inside/outside boundaries, typical values, and error values. Boundary value analysis complements equivalence partitioning and can be used in conjunction with it.
For boundary value analysis, the following guidelines should be used:
- For input ranges bounded by a and b, test cases should include values a and b and just above and just below a and b respectively.
- If an input condition specifies a number of values, test cases should be developed to exercise the minimum and maximum numbers and values just above and below these limits.
Advantages of Boundary Value Analysis
- Robustness Testing – Boundary Value Analysis plus values that go beyond the limits.
- Min – 1, Min, Min +1, Nom, Max -1, Max, Max +1.
- Forces attention to exception handling.
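The Min-1 through Max+1 set listed above can be generated mechanically. A small sketch, using the midpoint as the nominal value:

```python
# Generate the robustness-testing value set for a range [lo, hi]:
# Min-1, Min, Min+1, a nominal value, Max-1, Max, Max+1.

def robustness_values(lo, hi):
    nominal = (lo + hi) // 2
    return [lo - 1, lo, lo + 1, nominal, hi - 1, hi, hi + 1]

print(robustness_values(1, 100))
# [0, 1, 2, 50, 99, 100, 101]
```

The two values outside the range (0 and 101 here) are what distinguish robustness testing from plain boundary value analysis: they force the program's exception handling to be exercised.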
Limitations of Boundary Value Analysis
Boundary value testing is effective only for variables that have well-defined boundaries; it offers little help for inputs with no natural range or ordering.
The basis of the black box testing strategy lies in selecting appropriate data according to the functionality and testing it against the functional specifications, in order to check the normal and abnormal behavior of the system. These testing types are divided into two groups:
Testing in which the user plays the role of tester:
- Functional Testing : The testing of the software is done against the functional requirements.
- Load testing : It is the process of subjecting a computer, peripheral, server, network or application to a work level approaching the limits of its specifications.
- Stress Testing : The process of determining the ability of a computer, network, program or device to maintain a certain level of effectiveness under unfavorable conditions.
- Ad-hoc testing : Testing carried out informally; no formal test preparation takes place, no recognized test design technique is used, there are no expectations for results and randomness guides the test execution activity.
- Smoke Testing : Done to check whether the application is ready for further major testing and works properly, without failing below the least expected level.
- Recovery Testing : Testing aimed at verifying the system's ability to recover from varying degrees of failure.
- Volume Testing : A huge amount of data is processed through the application in order to check the extreme limits of the system.
- Usability Testing : Done when the user interface of the application is an important consideration and needs to suit a specific type of user.
Testing in which the user is not required:
- Alpha Testing : Testing of a software product or system conducted at the developer's site by the end user.
- Beta Testing : The pre-release testing of hardware or software products with selected typical customers, to discover inadequate features or possible product enhancements before release to the general public; also, testing of a re-release of a software product conducted by customers.
- User Acceptance Testing : The end users who will be using the applications test the application before ‘accepting’ the application. This type of testing gives the end users the confidence that the application being delivered to them meets their requirements.
Wednesday, August 4, 2010
Black-box testing refers to tests that are conducted at the software interface. Black-box tests are used to demonstrate that input is properly accepted, that output is correctly produced, and that the integrity of external information is maintained. The tester has no knowledge of the test object's internal structure. This method of test design is applicable to all levels of software testing: unit, integration, functional, system and acceptance. Black box testing tends to be applied during the later stages of testing.
The main focus in black box testing is on the functionality of the system as a whole; the term 'behavioral testing' is also used for it.
To implement a black box testing strategy, the tester should go through the requirements specifications and should know how the system is expected to behave in response to a particular action.
Black-box testing attempts to find errors in the categories like incorrect or missing functions, interface errors, errors in data structures or external database access, behavior or performance errors, and initialization and termination errors.
Advantages of Black Box Testing
- Tester can be non-technical.
- Used to detect contradictions between the actual system and the specifications.
- Test cases can be designed as soon as the functional specifications are complete.
Disadvantages of Black Box Testing
- The test inputs need to be drawn from a large sample space.
- It is difficult to identify all possible inputs in limited testing time, so writing test cases is slow and difficult.
- There is a chance of leaving some paths unexercised during this testing.
Tuesday, August 3, 2010
Static analysis involves going through the code in order to find out any possible defect in the code. Dynamic analysis involves executing the code and analyzing the output.
- Static testing is about prevention.
- In static testing, the software is not actually used.
- It is generally not detailed testing; it checks mainly for the sanity of the code, algorithm, or document. It is primarily syntax checking of the code and manual reading of the code.
- This testing is typically done by the developer who wrote the code, working in isolation.
- Out of Verification and Validation, it is the verification portion.
- Static testing methodologies include code reviews, inspection and walkthroughs.
- Dynamic testing is about cure.
- The source code is actually compiled and run.
- It examines the physical response from the system.
- Out of Verification and Validation, it is the validation portion.
- Dynamic testing methodologies include unit testing, integration testing, system testing and acceptance testing.
- Static testing is many times more cost-effective than dynamic testing.
- Static testing can achieve 100% statement coverage in a relatively short time, while dynamic testing often achieves less than 50% statement coverage.
- Static testing can be done before compilation while dynamic testing can take place only after compilation and linking.
- Static checking is more profitable than dynamic checking because the checks are made at an earlier stage.
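Dynamic testing in miniature looks like the sketch below: the code under test is actually executed and its observed output is checked against expectations. The `discount` function is made up for illustration, not taken from any real system:

```python
# Dynamic testing: the code under test is actually run, and the
# physical response from the system is examined.

def discount(price, pct):
    """Apply a percentage discount to a price (illustrative code under test)."""
    return round(price * (100 - pct) / 100, 2)

# Each check executes the code and inspects the observed result.
assert discount(200.0, 10) == 180.0   # 10% off 200 is 180
assert discount(59.99, 0) == 59.99    # zero discount leaves the price alone
print("all dynamic checks passed")
```

None of these checks could run before the code compiles, which is exactly the contrast with static testing drawn above.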
Why is static testing more effective?
Static testing gives you comprehensive diagnostics for your code. It warns you about:
- syntax errors.
- code that will be hard to maintain.
- code that will be hard to test.
- code that does not conform to your coding standards.
- non-portable usage.
- ANSI violations.
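The first item on that list, syntax errors, can be demonstrated with a tiny sketch: Python's built-in `compile()` parses source without running it, which is the essence of a static check. Note the limit too: a runtime fault such as division by zero parses cleanly and stays invisible to the static pass.

```python
# A static check in miniature: the source is inspected, never executed.

def syntax_check(source):
    try:
        compile(source, "<snippet>", "exec")  # parse only; nothing runs
        return "no syntax errors"
    except SyntaxError as err:
        return f"syntax error: {err.msg}"

print(syntax_check("x = 1 + 2"))   # clean code passes
print(syntax_check("x = 1 +"))     # dangling operator is caught statically
print(syntax_check("x = 1 / 0"))   # parses fine; the runtime error is invisible statically
```

This is why static and dynamic testing complement rather than replace each other: the third snippet needs dynamic testing to fail.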
Monday, August 2, 2010
Code coverage analysis is the process of finding areas of a program not exercised by a set of test cases, creating additional test cases to increase coverage, and determining a quantitative measure of code coverage, which is an indirect measure of quality.
A decision is a program point at which the control flow has two or more alternative routes. Decision coverage is the percentage of decision outcomes that have been exercised by a test suite.
Branch coverage testing helps validate all the branches in the code, making sure that no branching leads to abnormal behavior of the application. It is a better practice than statement coverage and goes deeper into the code.
It states whether the boolean expressions in control structures are tested. It requires an adequate number of test cases for every program to ensure that every decision or branch is executed at least once.
What are the advantages of branch coverage?
- To validate that all the branches in the code are reached.
- To ensure that no branches lead to any abnormality of the program’s operation.
- It eliminates problems that occur with statement coverage testing.
What are disadvantages of branch coverage?
- A decision may contain several individual conditions, and branch coverage does not guarantee that each one is separately exercised.
- This metric ignores branches within boolean expressions which occur due to short-circuit operators.
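The short-circuit limitation can be shown in a few lines. In this sketch, two tests take both outcomes of the `if`, so branch coverage reports 100%, yet the second condition is never evaluated in the False case:

```python
# Record which conditions actually get evaluated.
evaluated = []

def cond_a(value):
    evaluated.append("a")
    return value

def cond_b(value):
    evaluated.append("b")
    return value

def decide(a, b):
    if cond_a(a) and cond_b(b):  # cond_b is skipped when cond_a is False
        return "both"
    return "not both"

print(decide(True, True))    # True branch taken; both conditions evaluated
print(decide(False, True))   # False branch taken; cond_b never runs
print(evaluated)             # ['a', 'b', 'a'] -- no 'b' for the second call
```

Branch coverage sees both outcomes of the decision and is satisfied, even though `cond_b` was never exercised on the path where `cond_a` is false. Condition coverage (or MC/DC) is needed to catch that gap.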
Sunday, August 1, 2010
The purpose of white box testing is to verify that the functionality is correct and to obtain information on code coverage. It tests the internal structure of the software. It is also known as structural testing, glass box testing and clear box testing.
Statement coverage is the most basic form of code coverage. A statement is covered if it is executed. Note that a statement does not necessarily correspond to a line of code. Multiple statements on a single line can confuse issues - the reporting if nothing else.
- In this type of testing the code is executed in such a manner that every statement of the application is executed at least once.
- It helps in assuring that all the statements execute without any side effect.
- Statement coverage criteria call for having adequate number of test cases for the program to ensure execution of every statement at least once.
- In spite of achieving 100% statement coverage, there is every likelihood of having many undetected bugs.
- A coverage report indicating 100% statement coverage can mislead a manager into feeling safe and tempt them to terminate further testing, which can lead to releasing defective code into production.
- 100% statement coverage cannot be viewed as sufficient to build reasonable confidence in the correct behavior of the application.
- Since building confidence through statement coverage alone tends to become expensive, developers choose a stronger testing technique called branch coverage.
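The weakness described above fits in a few lines. In this sketch, a single test executes every statement of `safe_divide`, yet the path where the `if` is false is never exercised, because that path contains no statement at all:

```python
# 100% statement coverage can still miss bugs: the missing else-branch
# contains no statement, so statement coverage never demands a test for it.

def safe_divide(a, b):
    if b != 0:
        return a / b
    # falls through and returns None when b == 0 -- invisible to
    # statement coverage, since there is no statement here to cover

print(safe_divide(10, 2))  # 5.0 -- this one call executes every statement
# safe_divide(10, 0) silently returns None, breaking any caller that
# expects a number, but no statement-coverage report flags the gap.
```

Branch coverage, by contrast, counts both outcomes of the decision and would insist on a test with `b == 0`, exposing the `None` return.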