User Acceptance Testing occurs just before the software is released to the customer. The end-users, along with the developers, perform User Acceptance Testing with a defined set of test cases and typical scenarios.
Installation testing is often the most under-tested area. This type of testing is performed to ensure that all installed features and options function properly, and to verify that all necessary components of the application are, indeed, installed. Installation testing should take care of the following points (a minimal automation sketch of a few of these checks follows the list):
- Check whether the installer verifies that the dependent software/patches are present while installing the product.
- The installer should check for an existing version of the same product on the target machine; for example, an older version should not be installed over a newer one.
- The installer should give a default installation path.
- Installation should allow the user to install at a location other than the default installation path.
- Check whether the product can be installed "over the network".
- Installation should start automatically when the CD is inserted.
- The installer should give Remove/Repair options.
- When uninstalling, check that all registry keys, files, DLLs, shortcuts, and ActiveX components are removed from the system.
- Try to install the software without administrative privileges.
- Try installing on different operating systems.
- Try installing on a system with a non-compliant configuration, such as insufficient memory/RAM/HDD.
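As referenced above, here is a minimal sketch of how a couple of these checks could be automated. It assumes a hypothetical product with a Windows-style default path; the file names and paths are illustrative only, not taken from any real installer.

import os

DEFAULT_PATH = r"C:\Program Files\MyProduct"      # hypothetical default install path
REQUIRED_FILES = ["MyProduct.exe", "core.dll"]    # hypothetical installed components

def check_install(install_path=DEFAULT_PATH):
    # After installation, verify the install path exists and the expected components are present.
    missing = [name for name in REQUIRED_FILES
               if not os.path.isfile(os.path.join(install_path, name))]
    return missing                                # an empty list means everything was installed

def check_uninstall(install_path=DEFAULT_PATH):
    # After uninstallation, verify that the installation directory has been cleaned up.
    return not os.path.isdir(install_path)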
Saturday, October 30, 2010
Validation phase - User Acceptance Testing and Installation Testing
Posted by Sunflower at 10/30/2010 03:19:00 PM 0 comments
Labels: Customer, Developers, Features, Installation, Installation testing, Phases, Product, Quality, Software testing, Test cases, User Acceptance testing, Users, Validation, Validation Phase
Friday, October 29, 2010
Software Testing - Validation Phase - Alpha Testing
Alpha testing happens at the software prototype stage, when the software is first available to run. The software has the core functionalities in it, but complete functionality is not aimed at. It is able to accept inputs and give outputs. Usually, the most heavily used functionalities are developed the furthest. This testing is conducted at the developer's site only. In a software development life cycle, depending on the functionalities, the number of alpha phases required is laid down in the project plan itself.
During this stage, the testing is not a thorough one, since only a prototype of the software is available. The basic installation and un-installation tests and the completed core functionalities are tested.
The aims of alpha testing are:
- to identify any serious errors.
- to judge if the intended functionalities are implemented.
- to provide to the customer the feel of the software.
A thorough understanding of the product is gained at this stage. During this phase, the test plan and test cases for the beta phase, which is the next stage, are created. The errors reported are documented internally for the testers' and developers' reference. Issues are usually not reported or recorded in any defect management or bug tracking tool.
The role of the test lead is to understand the system requirements completely and to initiate the preparation of the test plan for the beta phase. The role of the tester is to provide input while there is still time to make significant changes as the design evolves, and to report errors to developers.
Posted by Sunflower at 10/29/2010 10:42:00 AM 0 comments
Labels: Aim, Alpha, Alpha testing, Applications, Defects, Errors, Functionality, Goals, Inputs, Outputs, Phases, Quality, Software, Software testing, Validation, Validation Phase
Thursday, October 28, 2010
Validation Phase - System Testing - Regression Testing
The process of regression testing is simple: the test cases that have already been prepared can be reused, and the expected results are already known. If the process is not automated, however, it can be a very time-consuming and tedious operation. Some of the tools that are available for regression testing are:
- Record and Playback tools - With these, previously executed scripts can be re-run to verify whether the same set of results is obtained. e.g. Rational Robot.
The end goals of regression testing are :
- to ensure that the unchanged system segments function properly.
- to ensure that the previously prepared manual procedures remain correct after the changes have been made to the application system.
- to verify that the data dictionary of data elements that have been changed is correct.
Most of the time, the testing team is asked to check last-minute changes in the code just before a release is made to the client. In this situation, the testing team needs to check only the affected areas.
In short, regression testing should get the input from the development team about the nature and amount of change in the fix so that the testing team can first check the fix and then the affected areas.
Regression testing is the testing in which maximum automation can be done, because the same set of test cases will be run on different builds multiple times. However, the extent of automation depends on whether the test cases will remain applicable over time. If the automated test cases do not remain applicable for long, test engineers end up wasting time on automation without getting enough return from it.
Posted by Sunflower at 10/28/2010 12:40:00 PM 0 comments
Labels: Application, Defects, Errors, Goals, Phases, Regression, Regression Testing, Results, Software testing, System, System Testing, Test cases, Tests, Validate, Validation, Validation Phase
Wednesday, October 27, 2010
Validation Phase - System Testing - Regression Testing
Regression testing is used to test or check the effect of changes made in the code. Most of the time, the testing team is asked to check last-minute changes in the code just before a release is made to the client. In this situation, the testing team needs to check only the affected areas. In short, for regression testing, the testing team should get input from the development team about the nature and amount of change in the fix, so that the testing team can first check the fix and then the side effects of the fix.
Regression testing is the testing in which maximum automation can be done, because the same set of test cases will be run on different builds multiple times. However, the extent of automation depends on whether the test cases will remain applicable over time; if they do not, test engineers will end up wasting time on automation with minimal results.
Regression testing is re-testing unchanged segments of the application. It involves re-running tests that have been previously executed to ensure that the same results can be achieved currently as were achieved when the segment was last tested. It is the selective re-testing of a modified software system to ensure that any bugs have been fixed, that previously working functions have not failed, and that newly added features have not created problems with previous versions of the software.
Regression testing is initiated after a programmer has attempted to fix a recognized problem or has added source code to a program that may have inadvertently introduced errors. It is a kind of quality control measure to ensure that the newly modified code still complies with its specified requirements and that unmodified code has not been affected by the maintenance activity.
During regression testing, the following activities are performed (a small comparison sketch follows this list):
- Re-running of previously conducted tests.
- Reviewing previously prepared manual procedures.
- Comparing the current test results with the previously executed test results.
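As a rough illustration, and not tied to any particular tool, re-running previously executed test cases and comparing the current results against a recorded baseline can be sketched like this (run_test and the baseline are hypothetical placeholders):

def run_regression(test_cases, baseline, run_test):
    # test_cases: list of test case ids
    # baseline: dict mapping test case id -> previously recorded expected result
    # run_test: callable that executes one test case and returns its actual result
    failures = []
    for case_id in test_cases:
        actual = run_test(case_id)
        if actual != baseline.get(case_id):
            failures.append((case_id, baseline.get(case_id), actual))
    return failures        # an empty list means no regressions were detected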
Posted by Sunflower at 10/27/2010 11:37:00 AM 0 comments
Labels: Automate, Changes, Code, Defects, Effective, Errors, Modifications, Phases, Quality, Regression, Regression Testing, System Testing, Validation, Validation Phase
Tuesday, October 26, 2010
Validation Phase - System Testing - Content Management Testing
Content management has gained predominant importance as web applications have become a major part of our lives. As the name denotes, content management is managing the content. Content Management Testing involves:
- Testing the distribution of the content.
- Request, Response Time.
- Content display on various browsers and operating systems.
- Load distribution on the servers.
In fact, all performance-related testing should be performed for each version of the web application that uses the content management servers.
Example: Suppose you want to open Yahoo! in its Chinese version. When you choose the Chinese version on the main Yahoo! page, you get to see the entire content in Chinese. Yahoo! would strategically plan and maintain various servers for various languages. When you choose a particular version of the page, the request is redirected to the server that manages the Chinese content pages. The Content Management System helps in placing content for various purposes and also helps in displaying it when a request comes in.
Posted by Sunflower at 10/26/2010 02:45:00 PM 0 comments
Labels: Applications, Content Management System, Content Management Testing, Contents, Phases, Quality, Software testing, Validation, Validation Phase, Web Applications
Monday, October 25, 2010
Validation phase - System Testing - Performance Testing - Capacity Planning - Stress Testing
Stress testing is another term used in connection with performance testing. Although load and stress testing are often used synonymously for performance-related efforts, their goals are different.
Unlike load testing, where testing is conducted for a specified number of users, stress testing is conducted for a number of concurrent users beyond the specified limit. The objective is to identify the maximum number of users the system can handle before breaking down or degrading drastically. Since the aim is to put more stress on the system, the think time of the user is ignored and the system is exposed to excess load.
The goals of stress testing are:
- It is the testing beyond the anticipated user base.
- It identifies the maximum load a system can handle.
- It checks whether the system degrades gracefully or crashes all at once.
Stress testing also determines the behavior of the system as the user base increases. Let us take the example of an online shopping application to illustrate the objective of stress testing: it determines the maximum number of concurrent users the online system can service, which may well be beyond 1000 users. However, there is also a possibility that the maximum load the system can handle turns out to be the same as the anticipated limit.
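A minimal sketch of such a ramp-up, assuming a hypothetical target URL, step size, and acceptance thresholds (none of these values come from a real system), might look like this:

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://test-server.example.com/"          # hypothetical system under test

def one_request(_):
    start = time.time()
    try:
        urllib.request.urlopen(URL, timeout=10).read()
        return time.time() - start, True
    except Exception:
        return time.time() - start, False

def stress(max_users=500, step=50, max_avg_seconds=5.0):
    # Keep increasing the number of concurrent virtual users until errors appear
    # or the average response time degrades beyond the acceptable threshold.
    for users in range(step, max_users + 1, step):
        with ThreadPoolExecutor(max_workers=users) as pool:
            results = list(pool.map(one_request, range(users)))
        avg = sum(elapsed for elapsed, _ in results) / len(results)
        errors = sum(1 for _, ok in results if not ok)
        print(users, "users: average", round(avg, 2), "s,", errors, "errors")
        if errors or avg > max_avg_seconds:
            return users                         # approximate breaking point
    return None                                  # no break found within max_users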
The inferences drawn from stress testing are:
- Whether the system is available or not?
- If yes, is the available system stable?
- If yes, is it moving towards unstable state?
- When is the system going to break down or degrade drastically?
Posted by Sunflower at 10/25/2010 11:52:00 AM 0 comments
Labels: Goals, Inference, Objectives, Performance, Performance testing, Phases, Stress testing, System Testing, Users, Validation, Validation Phase
Sunday, October 24, 2010
Validation phase - System Testing - Performance Testing - Capacity Planning - Load testing
Load testing is a much-used industry term for the effort of performance testing. Here, load means the number of users or the amount of traffic for the system. Load testing is defined as testing to determine whether the system is capable of handling the anticipated number of users.
In load testing, virtual users are simulated to exhibit real user behavior as closely as possible. Even the user think time, that is, the time users take to think before entering data, is emulated. Load testing is carried out to verify whether the system performs well for the specified limit of load.
Goals of Load Testing:
- Testing for anticipated user base.
- Validates whether the system is capable of handling the load within the specified limit.
The objective of load testing is to check whether the system can perform well for the specified load. The system may be capable of accommodating more than, say, 1000 concurrent users, but validating that is not within the scope of load testing. No attempt is made to determine how many more concurrent users the system is capable of servicing.
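A rough sketch of how a fixed, anticipated number of virtual users with think time might be simulated (the URL, think-time range, and user count are hypothetical):

import random
import time
import urllib.request
from threading import Thread

URL = "http://test-server.example.com/"          # hypothetical system under test

def virtual_user(requests_per_user=10):
    for _ in range(requests_per_user):
        time.sleep(random.uniform(1, 5))         # emulate user think time
        urllib.request.urlopen(URL, timeout=10).read()

def load_test(anticipated_users=100):
    # Simulate exactly the anticipated user base; no attempt is made to exceed it.
    threads = [Thread(target=virtual_user) for _ in range(anticipated_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()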
The inference drawn from load testing is :
- Whether the system is available?
- If yes, is the available system stable?
Posted by Sunflower at 10/24/2010 04:40:00 PM 0 comments
Labels: Applications, Effort, Goals, Inference, Load Testing, Objectives, Performance, Performance testing, Phases, System Testing, Users, Validate, Validation, Validation Phase
Saturday, October 23, 2010
Validation phase - System Testing - Performance Testing - Capacity Planning and Bug Fixing
While most traditional applications are designed to respond to a single user at a time, most web applications are expected to support a wide range of concurrent users. As a result, performance testing has become a critical component in the process of deploying a web application. Performance testing has proven to be most useful in the capacity planning area.
- Capacity Planning
Capacity planning is about being prepared. Suppose you want to know whether your server configuration is sufficient to support two million visitors per day with an average response time of less than five seconds. In capacity planning, you need to set the hardware and software requirements of your application so that you will have sufficient capacity to meet anticipated and unanticipated user load. One approach to capacity planning is to load-test your application in a testing server farm. By simulating different load levels on the farm using a web application performance testing tool such as WAS, you can collect and analyze the test results to better understand the performance characteristics of the application. Also, you may want to test the scalability of the application with different hardware configurations. You should load test your application with different numbers of clustered servers to confirm that the application scales well in a clustered environment.
- Bug Fixing
There are some errors that may not occur until the application is under high user load. Performance testing helps to detect and fix such problems before launching the application. It is therefore recommended that developers take an active role in performance testing their applications, especially at the major milestones of the development cycle.
Posted by Sunflower at 10/23/2010 04:08:00 PM 0 comments
Labels: Applications, Bug Fixing, Bugs, Capacity Planning, Errors, Performance, Performance testing, Phases, Quality, Software, System Testing, Validation, Validation Phase
Friday, October 22, 2010
Validation phase - System Testing - Performance Testing - Utilization
Utilization refers to the usage level of different system resources, such as the server's CPU, memory, network bandwidth and so forth. It is usually measured as a percentage of the maximum available level of the specific resource.
Utilization usually increases proportionally with increasing user load. However, it will top off and remain roughly constant as the load continues to build up. If a specific system resource tops off at 100 percent utilization, it is very likely that this resource has become the performance bottleneck of the site. Upgrading the resource to a higher capacity would allow greater throughput and lower latency, and thus better performance. If the measured resource does not top off close to 100 percent utilization, it is probably because one or more of the other system resources have already reached their maximum usage levels; they have become the performance bottleneck of the site.
To locate the bottleneck, there is a need to go through a long and painstaking process of running performance tests against each of the suspected resources, and then verifying if performance is improved by increasing the capacity of the resources. In many cases, performance of the site will start deteriorating to an unacceptable level well before the major system resources, such as CPU and memory, are maximized.
Posted by Sunflower at 10/22/2010 07:35:00 PM 0 comments
Labels: Application, Bottlenecks, Load, Performance, Performance testing, Phases, Resources, Software, System Testing, Utilization, Validation, Validation Phase, Website
Thursday, October 21, 2010
Validation phase - System Testing - Performance Testing - Throughput
Throughput refers to the number of client requests processed within a certain unit of time. Typically, the unit of measurement is requests per second or pages per second. From a marketing perspective, throughput may also be measured in terms of visitors per day or page views per day, although smaller time units are more useful for performance testing because applications typically see peak loads of several times the average load in a day.
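As a simple made-up example of why the unit of measurement matters: a site that serves 1,700,000 page views per day averages about 1,700,000 / 86,400, or roughly 20 pages per second, but if the peak hour carries ten times the average load, the site actually needs to be tested at roughly 200 pages per second rather than at the daily average.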
The throughput of the web site is often measured and analyzed at different stages of the design, develop, and deploy cycle. For example, in the process of capacity planning, throughput is one of the key parameters for determining the hardware and system requirements of a web site. Throughput also plays an important role in identifying performance bottlenecks and improving application and system performance. Whether a web farm uses a single server or multiple servers, throughput statistics show similar characteristics in reactions to various user load levels.
The throughput of a typical web site increases proportionally at the initial stages of increasing load. However, due to the limited system resources, throughput cannot be increased indefinitely. It will eventually reach a peak, and the overall performance of the site will start degrading with increased load. Maximum throughput is the maximum number of user requests that can be supported concurrently by the site in the given unit of time. The value of maximum throughput varies from site to site. It mainly depends on the complexity of the application. As with any statistic, throughput metrics can be manipulated by selectively ignoring some of the data.
In many ways, throughput and response time are related, as different approaches to thinking about the same problem. In general, sites with high latency will have low throughput. If you want to improve your throughput, you should analyze the same criteria as you would to reduce latency. Also, measuring throughput without considering latency is misleading, because latency often rises under load before throughput peaks. This means that peak throughput may occur at a latency that is unacceptable from an application usability standpoint.
Posted by Sunflower at 10/21/2010 11:15:00 AM 0 comments
Labels: Application, Approaches, Measure, Performance, Performance testing, Phases, Response time, Software, System Testing, Throughput, Validation, Validation Phase
Wednesday, October 20, 2010
Validation phase - System Testing - Performance Testing - Response time
Response time is the delay experienced when the request is made to the server and the server's response to the client is received. It is usually measured in units of time, such as seconds or milliseconds. Generally speaking, response time increases as the inverse of un-utilized capacity. It increases slowly at low levels of user load but increases rapidly as capacity is utilized. The sudden increase in response time is often caused by the maximum utilization of one or more system resources. Any time spent in a queue naturally adds extra wait time to the overall response time.
In a typical web farm, response time can be divided into many segments, and these segments can be categorized into two major types: network response time and application response time. Network response time refers to the time it takes for data to travel from one server to another. Application response time is the time required for data to be processed within a server.
Total Response Time = (N1+N2+N3+N4) + (A1+A2+A3)
where Nx represents the network response time and Ax represents the application response time.
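As a purely illustrative example with made-up segment timings: if the network segments take N1 = 0.4 s, N2 = 0.1 s, N3 = 0.1 s and N4 = 0.4 s, and the application segments take A1 = 0.3 s, A2 = 0.5 s and A3 = 0.2 s, the total response time is (0.4 + 0.1 + 0.1 + 0.4) + (0.3 + 0.5 + 0.2) = 1.0 + 1.0 = 2.0 seconds, with the network and the application each contributing half in this hypothetical case.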
In general, the response time is mainly constrained by N1 and N4, which represent the way your clients access the Internet. To reduce these network response times, one common solution is to move the servers and/or web content closer to the clients. This can be achieved by hosting your farm of servers with, or replicating your web content to, major Internet hosting providers who have high-speed connections to major public and private Internet exchange points.
Reducing application response time is an art form unto itself, because the complexity of server applications can make analyzing performance data and performance tuning quite challenging. Typically, multiple software components interact on the server to service a given request, and latency can be introduced by any of these components. This problem can be approached as follows:
- The application design should minimize round trips wherever possible.
- Optimize server components to improve performance for your configuration.
- Look for contention among threads or components competing for common resources.
- Finally, to increase capacity, you may want to upgrade the server hardware.
Posted by Sunflower at 10/20/2010 01:24:00 PM 0 comments
Labels: Application, Design, Network, Performance, Performance testing, Phases, Quality, Response, Response Time, Software, System Testing, Test cases, Types, Validation, Validation Phase
Tuesday, October 19, 2010
Validation phase - System Testing - Security Testing, Stress Testing, Performance Testing
Security Testing
Security testing attempts to verify that protection mechanisms built into a system will, in fact, protect it from improper penetration. During security testing, password cracking, unauthorized entry into the software, network security are all taken into consideration. The six basic security concepts that need to be covered by security testing are: confidentiality, integrity, authentication, availability, authorization and non-repudiation.
Stress Testing
Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume. The following types of tests may be conducted during stress testing:
- Special tests may be designed that generate ten interrupts per second, when one or two is the average rate.
- Input data rates may increase by an order of magnitude to determine how input functions will respond.
- Test cases that require maximum memory or other resources.
- Test cases that may cause excessive hunting for disk resident data.
- Test cases that may cause thrashing in a virtual operating system.
Performance Testing
Performance testing of a web site is basically the process of understanding how the web application and its operating environment responds at various user load levels. In general, we want to measure the response time, throughput and utilization of the web site while simulating attempts by virtual users to simultaneously access the site. One of the main objectives of performance testing is to maintain a web site with low response time, high throughput, and low utilization.
Posted by Sunflower at 10/19/2010 12:54:00 PM 0 comments
Labels: Performance, Performance testing, Phases, Process, Quality, Resources, Security, Security Testing, Software, Stress testing, Test cases, Validation, Validation Phase
Monday, October 18, 2010
Software Localization - some details in terms of how the process work - Part 8
In the previous post of this series (Software Localization - some details in terms of how the process work - Part 7), I talked about how different countries can have certain requirements that are specific to those countries, and may not be easily understood by the product team that is typically working on the English version of the product. In this post, we talk about the matrix that is typically used for determining the number of locales in which the product is to be tested.
For any software application that has gone through multiple versions, there will be many features that were built in previous versions and are not changed in the current version. In addition, typically, when an application is tested in languages other than English, the assumption is that features which work functionally in the English version will also work in the other language versions, so the need to do functional testing in those versions is reduced.
In a previous post, I had already mentioned that both linguistic and functional testing in other languages can be more expensive than testing in English, and hence there needs to be some optimization of the testing effort in other languages (one would like to test the whole application thoroughly in every language, but cost adds a big variable to this equation and needs to be considered). As a result, what typically ends up happening is that an optimization matrix is drawn up in which the amount of testing for each language is decided; this also depends on various factors, including the importance of the respective language version to the overall sales of the product.
In the next post on this subject, I will write a continuation of this topic.
Posted by Ashish Agarwal at 10/18/2010 11:12:00 PM 0 comments
Labels: Localization Engineering, Localization testing, Software, Software Localization
Validation phase - System Testing - Usability Testing
Usability is the degree to which a user can easily learn and use a product to achieve a goal. It is the system testing that attempts to find any human factor problems. It is testing the software from a user's point of view. Essentially, it means testing software to prove or ensure that it is user friendly. It includes ergonomic considerations, screen design, standardization etc.
The idea behind usability testing is to have actual users perform the tasks for which the product was designed. If they cannot do the tasks, or if they have difficulty performing them, the user interface is not adequate and needs redesigning. Usability testing is just one of the many techniques that serve as a basis for evaluating the user interface in a user-centered approach. Other techniques for evaluating a UI include inspection methods such as heuristic evaluations, expert reviews, card sorting, matching tests or icon intuitiveness evaluations, and cognitive walk-throughs. Confusion regarding usage of the term can be avoided if we use usability evaluation as the generic term and reserve usability testing for the specific evaluation method based on user performance.
Usability testing often involves building prototypes of parts of the user interface, having representative users perform representative tasks, and seeing whether the appropriate users can perform the tasks. In other techniques, such as inspection methods, it is not performance but someone's opinion of how users might perform that is offered as evidence that the UI is acceptable or not. This distinction between performance and opinion about performance is crucial. Opinions are subjective; whether a sample of users can accomplish what they want or not is objective. Under many circumstances it is more useful to find out whether users can do what they want to do rather than asking for someone's opinion.
The end goals of usability testing are to get:
- better quality software.
- software that is easier to use.
- software that is more readily accepted by users.
- a shorter learning curve for new users.
Posted by Sunflower at 10/18/2010 03:03:00 PM 0 comments
Labels: Design, Goals, Product, Quality, Software, System Testing, Usability, Usability testing, User Interface, Users, Validation, Validation Phase
Sunday, October 17, 2010
Software Localization - some details in terms of how the process work - Part 7
In the previous post on the topic of Software Localization - Part 6, we worked through some of the differences between the team that tests the English version of the product, and the team that tests the various other languages of the product.
In this post, I will talk in more detail about some of the differences in the styles of the teams and their handling of bugs. To some extent, there is a difference in the way defects are visualized between the product team that works on the English versions of the product and the team that works on the various language versions of the product. Consider an example where there is a functional issue in how the formatting of the name of the user is depicted. In English, a person may be referred to as Mr. Smith, while in another country it is considered impolite for the application to address a person with just the surname, and the name should be rendered as Mr. John Smith.
In the normal case, when such a defect is reported, the product team may not really understand the importance of this issue and the defect may be prioritized as being of lower importance while for the team that wants to sell this in another country, addressing this issue is of high importance. There needs to be a mechanism to ensure that such defects are considered with the importance that they deserve, and are not deferred or closed. Such issues are typically prioritized lower unless there is a mechanism where the Bug review Committee has representatives from the various locales; it is normally important that the Product Management of the product is sensitive to the various nuances.
Posted by Ashish Agarwal at 10/17/2010 11:54:00 PM 0 comments
Labels: Localization, Localization Engineering, Localization testing, Software Localization
Validation Phase - System Testing - Compatibility Testing and Recovery Testing
System testing concentrates on testing the complete system with a variety of techniques and methods. System testing comes after unit testing and integration testing.
Compatibility Testing
This testing concentrates on whether the application performs well with third-party tools, software, or hardware platforms. For example, a website should run in different kinds of web browsers. Similarly, an application developed on a particular platform should run well on other platforms as well. This is the main goal of compatibility testing.
Compatibility tests are also performed for various client/server based applications where the hardware changes from client to client. This testing is very crucial for organizations developing their own products. The products have to be checked for compatibility with competing third-party tools, hardware, and software platforms.
A good way to ensure compatibility is to have a few resources assigned along with their routine tasks to keep updated about such compatibility issues and plan for testing when and if the need arises.
Recovery Testing
It is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed. If recovery is automatic, then re-initialization, checkpointing mechanisms, data recovery, and restart should be evaluated for correctness. If recovery requires human intervention, the mean time to repair (MTTR) is evaluated to determine whether it is within acceptable limits.
Posted by Sunflower at 10/17/2010 03:00:00 PM 0 comments
Labels: Compatibility testing, Compatible, Integration testing, Platforms, Recovery Testing, Software, Software testing, System Testing, Techniques, Test cases, Unit Testing, Validation Phase
Saturday, October 16, 2010
Validation phase - Integration Testing - Top Down Integration and Bottom Up Integration
Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing. The objective is to take unit tested components and build a program structure that has been dictated by design. There are two methods of integration testing:
- Top-down integration approach
- Bottom-up integration approach
Top-down Integration Approach
It is an incremental approach to construction of program structure. Modules are integrated by moving downward through the control hierarchy beginning with the main control module. Modules subordinate to the main control module are incorporated into the structure in either a depth-first or breadth-first manner.
- The main control module is used as a test driver and stubs are substituted for all components directly subordinate to the main control module.
- Depending upon the integration approach selected (depth-first or breadth-first), subordinate stubs are replaced one at a time with actual components.
- Tests are conducted as each component is integrated.
- On completion of each set of tests, another stub is replaced with the real component.
- Regression testing may be conducted to ensure that new errors have not been introduced.
Bottom-Up Integration Approach
It begins construction and testing with atomic modules. Because components are integrated from bottom up, processing required for components subordinate to a given level is always available and the need for stubs is eliminated.
- Low level components are combined into clusters that perform a specific software sub function.
- A driver is written to coordinate test case input and output.
- The cluster is tested.
- Drivers are removed and clusters are combined moving upward in the program structure.
Posted by Sunflower at 10/16/2010 12:23:00 PM 0 comments
Labels: Approaches, Bottom Up, Components, Drivers, Function, Integration, Integration Testing, Levels, Phases, Software testing, Structure, Stubs, Techniques, Test cases, Top Down, Validation phase
Friday, October 15, 2010
Validation phase - Unit Testing - UTC Document, UTC Checklist, Defect Recording
UTC Document
The Unit Test Case document consists of test case number, test case purpose, the procedure, the expected result and the actual result. Columns like Pass/Fail and Remarks are also present in UTC.
UTC Checklist
A UTC checklist may be used while reviewing the UTC prepared by the programmer. Like any other checklist, it contains a list of questions which can be answered with yes or no. The 'aspects' list can be referred to while preparing the UTC checklist.
- Are test cases present for all form field validations?
- Are boundary conditions considered?
- Are error messages properly phrased?
Defect Recording
Defect recording can be done on the same UTC document, in the column for 'Expected results'. This column can be duplicated for the next iterations of unit testing. Defect recording can also be done using tools such as Bugzilla, in which defects are stored in a database. Defect recording needs to be done with care: it should indicate the problem in a clear, unambiguous manner, and it should be easy to reproduce the defect from the recorded information.
To conclude, exhaustive unit testing filters out the defects at an early stage in the development life cycle. It proves to be cost effective and improves quality of the software before the smaller pieces are put together to form an application as a whole. Unit testing should be done sincerely and meticulously.
Posted by Sunflower at 10/15/2010 12:00:00 PM 0 comments
Labels: Defect Recording, Defects, Development, Phase, Phases, SDLC, Software testing, Test cases, Unit Test case, UTC, UTC checklist, Validation
Thursday, October 14, 2010
Validation phase - Unit Testing - how to write Unit test cases
Preparing a Unit Test Case document, commonly referred to as a UTC, is an important task in the unit testing activity. Having a complete UTC with every possible test case leads to complete unit testing and thus gives an assurance of a defect-free unit at the end of the unit testing stage.
While preparing unit test cases the following aspects should be kept in mind-
Expected functionality
Write test cases against each functionality that is expected to be provided from the unit being developed. It is important that user requirements should be traceable to functional specifications which should be traceable to program specifications which should be traceable to unit test cases. Maintaining such traceability ensures that the application fulfills user requirements.
Input Values
- Write test cases for each of the inputs accepted by the unit. Every input has a certain validation rule associated with it; write test cases to validate this rule.
- There can be cross-field validations in which one field is enabled depending upon input of another field. Test cases for these should not be missed.
- Write test cases for the minimum and maximum values of input.
- Variables that hold data have their value limits. In case of computed fields, it is very important to write test cases to arrive at an upper limit value of the variables.
- Write test cases to check the arithmetic expressions with all possible combinations of values.
Output Values
- Write test cases to generate scenarios which will produce all types of output values that are expected from unit.
Screen Layout
Screen/report layout must be tested against the requirements. It should ensure that pages and screens are consistent.
Path Coverage
A unit may have conditional processing which results in various paths that control can traverse through. Test cases must be written for each of these paths.
Assumptions and Transactions
A unit may assume certain things for it to function. Test cases must be written to check that the unit reports an error if such assumptions are not met.
In case of database applications, test cases should be written to ensure that transactions are properly designed and in no way inconsistent data gets saved in the database.
Abnormal terminations and Error messages
Test cases should be written to test the behavior of unit in case of abnormal termination.
Error messages should be short, precise and self explanatory. They should be properly phrased and free of grammatical mistakes.
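As a small, purely hypothetical illustration of several of the aspects above (expected functionality, input validation, and boundary values), consider a unit that must accept an 'age' value between 18 and 60; the function name and limits below are illustrative only:

import unittest

def validate_age(value):
    # Hypothetical unit under test: accepts only integers from 18 to 60.
    return isinstance(value, int) and 18 <= value <= 60

class TestValidateAge(unittest.TestCase):
    def test_boundary_values(self):
        self.assertFalse(validate_age(17))     # just below the minimum
        self.assertTrue(validate_age(18))      # minimum value
        self.assertTrue(validate_age(60))      # maximum value
        self.assertFalse(validate_age(61))     # just above the maximum

    def test_invalid_input(self):
        self.assertFalse(validate_age("abc"))  # non-numeric input must be rejected

if __name__ == "__main__":
    unittest.main()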
Posted by Sunflower at 10/14/2010 01:50:00 PM 0 comments
Labels: Conditions, Coverage, Functionality, Inputs, Layout, Outputs, Paths, Phase, Phases, Report, Screen, Test cases, Unit, Unit testing, Validation, Values
Wednesday, October 13, 2010
Validation phase - Unit Testing - Stubs and Drivers
The validation phase comes into the picture after the software is ready, or while the code is being written. There are various techniques and testing types that can be appropriately used while performing the testing activities.
Unit Testing
A unit is allocated to a programmer for programming. The Functional Specifications document is used as an input for the programmer's work. The programmer prepares program specifications for his unit from the functional specifications. Program specifications describe the programming approach and coding tips for the unit's coding.
Using these program specifications as input, the programmer prepares a unit test cases document for that particular unit. A unit test cases checklist may be used to check the completeness of the unit test cases document. The program specifications and unit test cases are reviewed and approved by a quality assurance analyst or by a peer programmer. The programmer implements some functionality for the system to be developed, and the same is tested by referring to the unit test cases. If any defects are found while testing that functionality, they are recorded using whichever defect logging tool is applicable. The programmer fixes the bugs found and re-tests the code for any remaining errors.
Stubs and Drivers
A software application is made up of a number of units, where the output of one unit goes as an input of another unit. Due to such interfaces, independent testing of a unit becomes impossible. So here, we use stubs and drivers.
A driver is a piece of software that drives the unit being tested. A driver creates necessary inputs required for the unit and then invokes the unit.
A unit may reference another unit in its logic. A stub takes place of such subordinate unit during the unit testing. A stub is a piece of software that works similar to a unit which is referenced by the unit being tested but it is much simpler than the actual unit. A stub works as a stand-in for the subordinate unit and provides the minimum required behavior for that unit.
Programmers need to create such drivers and stubs for carrying out unit testing. Both the driver and the stub are kept at a minimum level of complexity, so that they do not induce any errors while testing the unit in question.
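A minimal sketch of a driver and a stub, using hypothetical names and a made-up tax rate, might look like this:

# Unit under test: computes an invoice total; it normally calls a separate tax unit.
def invoice_total(amount, tax_service):
    return amount + tax_service.tax_for(amount)

# Stub: stands in for the real, more complex tax unit and provides minimal behavior.
class TaxServiceStub:
    def tax_for(self, amount):
        return amount * 0.10            # fixed, simplified tax calculation

# Driver: creates the necessary inputs and invokes the unit being tested.
def driver():
    result = invoice_total(100.0, TaxServiceStub())
    assert result == 110.0, result
    print("unit test passed:", result)

if __name__ == "__main__":
    driver()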
Posted by Sunflower at 10/13/2010 01:04:00 PM 0 comments
Labels: Application, Drivers, Input, Output, Phase, Phases, Programming, Quality, Software, Software testing, Stubs, Unit, Unit Testing, Validation
Tuesday, October 12, 2010
Some Black box testing techniques...Graph based testing methods, Error guessing and Boundary Value Analysis.
Graph Based Testing Methods
Graph-based testing begins by creating a graph of important objects and their relationships, and then devising a series of tests that cover the graph, so that each object and its relationships are exercised and errors are uncovered. It starts with the definition of all nodes and node weights, i.e. the objects and attributes are identified.
Error Guessing
The purpose of error guessing is to focus the testing activity on areas that have not been handled by the other, more formal techniques, such as equivalence partitioning and boundary value analysis. It comes with experience of the technology and the project. There are no specific tools and techniques for error guessing; you write test cases depending on the situation. Error guessing requires the tester to think outside the box and involves the intuition and experience of the tester.
Boundary Value Analysis (BVA)
It is a test data selection technique in which the extreme values are chosen. It makes use of the fact that the inputs and outputs of the component under test can be partitioned into ordered sets with identifiable boundaries. A test engineer chooses values that lie along the data extremes. It is expected that if the system works correctly for these special values, then it will work correctly for all values in between. The boundary value analysis test cases are obtained by holding the values of all but one variable at their nominal values, and letting that one variable assume its extreme values.
There are two ways to generalize the BVA technique:
- By the number of variables: for n variables, BVA yields 4n+1 test cases (a small generation sketch follows this list).
- By the kinds of ranges.
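A rough sketch of how those 4n+1 test cases could be generated mechanically; the variable names and ranges are hypothetical, and the nominal value is simply taken as the midpoint:

def bva_cases(ranges):
    # ranges: dict mapping variable name -> (min, max); nominal is the midpoint.
    # Each variable in turn takes min, min+1, max-1 and max while the others stay
    # at nominal, plus one all-nominal case: 4n+1 cases for n variables.
    nominal = {var: (lo + hi) // 2 for var, (lo, hi) in ranges.items()}
    cases = [dict(nominal)]
    for var, (lo, hi) in ranges.items():
        for value in (lo, lo + 1, hi - 1, hi):
            case = dict(nominal)
            case[var] = value
            cases.append(case)
    return cases

# Two variables yield 4*2 + 1 = 9 test cases.
print(len(bva_cases({"age": (18, 60), "quantity": (1, 100)})))   # prints 9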
Advantages of Boundary Value Analysis
- Robustness Testing
- It checks the values min-1, min, min+1, nom, max-1, max, and max+1.
- It forces attention to exception handling.
Limitations of Boundary Value Analysis
BVA works best when the program is a function of several independent variables that represent bounded physical quantities.
Posted by Sunflower at 10/12/2010 12:29:00 PM 0 comments
Labels: Advantages, Analysis, Black box testing, Boundaries, Boundary Value Analysis, Defects, Error guessing, Errors, Graph based testing, Limitations, Maximum, Methods, Minimum, Techniques, Tests
Software Localization - some details in terms of how the process work - Part 6
In previous posts on the subject of localization, I have been writing about various processes and techniques employed in the process of localization, covering testing, and the processes used for tagging strings for localization. However, this post covers something different, the difference in teams between those who do overall product testing, and those who do the process of localization.
There is a big difference between the teams employed in the process of product testing and those who are involved in the process of localization. Teams involved in product testing are more in touch with functionality of the product, with the discussions related to the development of the functionality, the writing of test cases for these features, as well as the blackbox and whitebox testing of these features. The team is responsible for ensuring that the features work as well as they are supposed to and all bugs are shaken out of the system. It is the product team that finally certifies the product, and they typically do so for the English version of the product.
However, it is the localization team that is responsible for the certification of the various language versions of the product. The team does functional testing, but the reality is that most of the functional bugs are found in the English testing of the product, and it is mostly localization bugs such as string corrections, layout issues, and incorrect translations that are found by the localization testing team. They find bugs that mostly do not need to be fixed by the engineers on the product team, but instead need to be fixed by localization engineers. Thus, the localization process normally runs as a separate process from the product team's processes.
Posted by Ashish Agarwal at 10/12/2010 12:40:00 AM 0 comments
Labels: Localization, Localization Engineering, Localization testing, Processes, Testing
Monday, October 11, 2010
What is Black box testing and what are its advantages and disadvantages ?
Black box testing is a test design method. It treats the system as a "black box", so it does not explicitly use knowledge of the internal structure. In other words, the test engineer does not need to know the internal workings of the black box. Black box testing focuses on the functionality of the module. Black box testing is also known as opaque box and closed box testing. While the term black box testing is the more commonly used, many people prefer the terms "behavioral" and "structural" for black-box and white-box testing respectively.
There are bugs that cannot be found using only black box testing or only white box testing. If the test cases are extensive and the test inputs are drawn from a large sample space, then it is possible to find the majority of the bugs through black box testing.
The basic functional or regression testing tools capture the results of black box tests in a script format. Once captured, these scripts can be executed against future builds of an application to verify that new functionality has not disabled previous functionality.
Advantages of Black Box Testing
- It is not important for the tester to be technical. He can be a non-technical person.
- This testing is most likely to find the bugs that the user would have found.
- Testing helps to identify vagueness and contradictions in the functional specifications.
- Test cases can be designed as soon as the functional specifications are complete.
Disadvantages of Black Box Testing
- There are chances of repetition of tests that are already done by the programmer.
- The test inputs need to be drawn from a large sample space.
- It is difficult to identify all possible inputs in limited testing time, so writing test cases can be slow and difficult.
- There are chances of having unidentified paths during testing.
Posted by Sunflower at 10/11/2010 12:07:00 PM 0 comments
Labels: Advantages, Behavioral, Black box testing, Design, Disadvantages, Internal, Internal Structure, Software, Software testing, Structural testing, Testing tools, Tests, Tools
Sunday, October 10, 2010
What is Cyclomatic Complexity and how it is computed?
Cyclomatic complexity measures the amount of decision logic in a single software module. It provides a quantitative measure of the logical complexity of a program and gives the number of recommended tests for the software. When used in the context of the basis path testing method, the value computed for cyclomatic complexity defines the number of independent paths in the basis set of a program and provides an upper bound for the number of tests that must be conducted to ensure that all statements have been executed at least once.
An independent path is any path through the program that introduces at least one new set of processing statements or a new condition.
Control flow graphs describe the logic structure of a software module. Each flow graph consists of nodes and edges. Nodes are computational statements or expressions. Edges represent the transfer of control between nodes. Each possible execution path of a software module has a corresponding path from the entry to the exit node of the module's control flow graph.
Computing Cyclomatic Complexity
Cyclomatic complexity has a foundation in graph theory and provides us with an extremely useful software metric. Complexity is computed in one of three ways (a small worked example follows the list):
- The number of regions of the flow graph corresponds to the cyclomatic complexity.
- Cyclomatic complexity, V(G), for a flow graph, G is defined as:
V(G)= E-N+2
where E is the number of flow graph edges and N is the number of flow graph nodes.
- Cyclomatic complexity, V(G) for a flow graph, G is also defined as:
V(G)= P+1
where P is the number of predicate nodes contained in the flow graph G.
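As a small illustration on a hypothetical function (counting one node per statement or condition):

def classify(x):
    if x < 0:            # predicate node 1
        x = -x
    while x > 10:        # predicate node 2
        x -= 10
    return x

# Flow graph: N = 5 nodes (if-condition, x = -x, while-condition, x -= 10, return)
# and E = 6 edges, so V(G) = E - N + 2 = 6 - 5 + 2 = 3.
# With P = 2 predicate nodes, V(G) = P + 1 = 3 as well.
# Three independent paths therefore suffice, e.g. the paths taken for x = 5, x = -3, and x = 25.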
Posted by Sunflower at 10/10/2010 01:36:00 PM 0 comments
Labels: Computation, Control, Cyclomatic complexity, Edges, Flow graphs, Logical, Module, Nodes, Paths, Quality, Software, Software testing, Structure, Tests
Saturday, October 9, 2010
Loop testing - a white box testing technique and types of loop testing.
Loop testing is a kind of white box testing technique that focuses exclusively on the validity of loop constructs. Four classes of loops can be defined: Simple loops, Concatenated loops, Nested loops, and unstructured loops.
- Simple Loops: The following sets of tests can be applied to simple loops, where 'n' is the maximum number of allowable passes through the loop (a short enumeration sketch follows this list).
a) Skip the loop entirely.
b) Only one pass through the loop.
c) Two passes through the loop.
d) 'm' passes through the loop where m < n.
e) n-1,n,n+1 passes through the loop.
- Nested Loops: If we extend the test approach from simple loops to nested loops, the number of possible tests would grow geometrically as the level of nesting increases.
a) Start at the innermost loop. Set all other loops to minimum values.
b) Conduct simple loop tests for the innermost loop while holding the outer loops at their minimum iteration parameter values. Add other tests for out-of-range or excluded values.
c) Work outward, conducting tests for the next loop, but keep all other outer loops at minimum values and other nested loops at typical values.
d) Continue until all loops have been tested.
- Concatenated Loops: These loops can be tested using the approach defined for simple loops, if each of the loops is independent of the other. However, if two loops are concatenated and the loop counter for loop one is used as the initial value for loop two, then the loops are not independent.
- Unstructured Loops: Whenever possible, this class of loops should be redesigned to reflect the use of the structured programming constructs.
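As referenced under Simple Loops above, a minimal sketch of how the recommended pass counts could be enumerated for a simple loop with at most n allowable passes (n and the interior value m are arbitrary here):

def simple_loop_pass_counts(n, m=None):
    # Recommended numbers of passes to exercise: 0, 1, 2, m (with 2 < m < n),
    # n-1, n and n+1. Negative counts are dropped and duplicates collapsed.
    if m is None:
        m = n // 2
    counts = [0, 1, 2, m, n - 1, n, n + 1]
    return sorted(set(c for c in counts if c >= 0))

print(simple_loop_pass_counts(20))     # [0, 1, 2, 10, 19, 20, 21]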
Posted by Sunflower at 10/09/2010 03:17:00 PM 0 comments
Labels: Concatenated Loops, Loop testing, Loops, Nested loops, Quality, Simple Loops, Software testing, Techniques, Unstructured loops, White box testing
Friday, October 8, 2010
What are the limitations and tools used for white box testing ?
In white box testing, exhaustive testing of the code presents certain logistical problems. Even for small programs, the number of possible logical paths can be very large. For example, consider a hundred-line C program which, after some basic data declarations, contains two nested loops executing 1 to 20 times each depending on some initial input, and whose interior loop holds four if-then-else constructs. There are then approximately 10^14 logical paths to be exercised to test the program exhaustively, which means that a magic test processor able to develop a test case, execute it, and evaluate the results in one millisecond would still need about 3170 years of continuous work, which is certainly impractical. Exhaustive white box testing is impossible for large software systems, but that does not mean white box testing should be considered impractical. Limited white box testing, in which a limited number of important logical paths are selected and exercised and important data structures are probed for validity, is practical. White and black box testing techniques can be coupled to provide an approach that validates the software interface selectively while ensuring the correctness of the internal workings of the software.
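The arithmetic behind that estimate: 10^14 test cases at one millisecond each take 10^14 ms = 10^11 seconds, and since a year has roughly 3.15 x 10^7 seconds, 10^11 / (3.15 x 10^7) is approximately 3,170 years of continuous execution.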
Tools used for white box testing are :
A few test automation tool vendors offer white box testing tools which:
- Provide run-time error and memory leak detection.
- Record the exact amount of time the application spends in any given block of code for the purpose of finding inefficient code bottlenecks.
- Pinpoint areas of application that have and have not been executed.
Posted by Sunflower at 10/08/2010 03:26:00 PM 0 comments
Labels: Automate, Glass Box testing, Limitations, Purpose, Quality, Software testing, Structures, Techniques, Tests, Tools, White box testing
Thursday, October 7, 2010
What is white Box Testing and why we do it ?
White box testing involves looking at the structure of the code. When you know the internal structure of a product, tests can be conducted to ensure that the internal operations are performed according to the specifications and all the internal components have been adequately exercised. In other words, white box testing tends to involve the coverage of the specification in the code.
The control structure of the procedural design to derive test cases is used during white box testing. Using the methods of WBT, a tester can derive the test cases that guarantee that all independent paths within a module have been exercised at least once, exercise all logical decisions on their true and false values, execute all loops at their boundaries and within their operational bounds and exercise internal data structures to ensure their validity.
White box testing is done because black box testing alone is unlikely to uncover certain sorts of defects in the program. These defects are:
- Logic errors and incorrect assumptions, whose likelihood is inversely proportional to the probability that a program path will be executed. Errors tend to creep into our work when we design and implement functions, conditions or controls that are out of the mainstream of the program.
- The logical flow of the program is sometimes counter-intuitive, meaning that our unconscious assumptions about the flow of control and data may lead to design errors that are uncovered only when path testing starts.
- Typographical errors are random; some of them will be uncovered by syntax checking mechanisms, but others will go undetected until testing begins.
All we need to do in white box testing is to define all logical paths, develop test cases to exercise them and evaluate results i.e. generate test cases to exercise the program logic exhaustively. We need to know the program well, the specifications and the code to be tested, related documents should be available to us.
Posted by Sunflower at 10/07/2010 12:53:00 PM 0 comments
Labels: Bugs, Code, Code Coverage, Defects, Design, Errors. Specification, Glass Box testing, Internal, Paths, Procedural, Product, Software, Software testing, Structure, Tests, White box testing
Wednesday, October 6, 2010
How to choose a black box or a white box test?
White box testing is concerned only with testing the software product; it cannot guarantee that the complete specification has been implemented. Black box testing is concerned only with testing the specification; it cannot guarantee that all parts of the implementation have been tested. Thus, black box testing is testing against the specification and will discover faults of omission, indicating that part of the specification has not been fulfilled. White box testing is testing against the implementation and will discover faults of commission, indicating that part of the implementation is faulty. In order to completely test a software product both black and white box testing are required.
White box testing is much more expensive in terms of resources and time than black box testing. It requires the source code to be produced before the tests can be planned, and it is much more laborious in the determination of suitable input data and in determining whether the software output is correct or incorrect. It is advised to start test planning with a black box testing approach as soon as the specification is available. White box tests are to be planned as soon as the low level design (LLD) is complete. The low level design will address all the algorithms and coding style. The paths should then be checked against the black box test plan, and any additional required test cases should be determined and applied.
The consequences of test failure at the requirements stage are very expensive. A failure of a test case may result in a change, which requires all black box testing to be repeated and the white box paths to be re-determined. The cheaper option is to regard the process of testing as one of quality assurance rather than quality control. The intention is that sufficient quality is put into all previous design and production stages so that testing can be expected to confirm that very few faults are present, rather than testing being relied upon to discover the faults in the software, as is the case with quality control.
Posted by Sunflower at 10/06/2010 01:50:00 PM 0 comments
Labels: Black box testing, Bugs, Choice, Fault, Organization, Product, Quality, Resources, Software, Software testing, Specification, Test, White box testing
Tuesday, October 5, 2010
Software Localization - some details in terms of how the process works - Part 5
In the previous post in this series (how to get the right resources for localization purposes), we explored how to ensure that we have the right set of people for the translation process. In this post, we talk more about a specific type of testing that is required to determine whether the software is ready for translation.
In previous posts, we have talked about how the software localization process depends on ensuring that all the strings in the code are tagged in a way that allows them to be extracted and sent for translation. However, since this tagging needs to be done by the development team as and when the strings are added, it is very possible that some mistakes are made while adding the relevant tag information to the strings, and that these mistakes are discovered only much later in the cycle.
A testing process that can surface these problems much earlier in the cycle is called 'mocked testing', a process that ensures that the software is checked periodically to see whether there are strings that are not properly tagged. What happens is that the software is validated in the various languages and the dialogs are inspected to see whether the strings show up properly or not. Such a process helps ensure that any mistakes made while tagging the strings are caught early; otherwise, when the actual testing happens much later in the cycle, the problem is caught only then and it becomes far more expensive to make the fix.
However, this effort needs to be properly planned, since it is something that requires effort from the internationalization team as well as the testing team.
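The post does not describe any specific tooling, but a minimal sketch of the 'mocked testing' idea might look like the following, assuming purely for illustration that properly tagged strings are loaded from a resource table and that the mock locale wraps every tagged string in visible markers; any text that shows up in a dialog without the markers was hard-coded and missed by the tagging:

```python
# A minimal sketch of a "mocked" (pseudo-localized) build check.
# Assumptions for illustration only: tagged strings live in a dict loaded from a
# resource file, and the mock locale wraps every value in visible markers.

def mock_translate(resources):
    """Wrap every externalized string so it is visibly 'translated'."""
    return {key: f"[[{value}]]" for key, value in resources.items()}

def find_untagged_strings(visible_texts):
    """Any text rendered without the markers was hard-coded, not tagged."""
    return [text for text in visible_texts
            if not (text.startswith("[[") and text.endswith("]]"))]

if __name__ == "__main__":
    resources = {"greeting": "Welcome", "exit_label": "Quit"}
    mock = mock_translate(resources)

    # Simulated texts captured from dialogs in the mock-locale build:
    rendered = [mock["greeting"], "Save As...", mock["exit_label"]]

    print("Untagged (hard-coded) strings:", find_untagged_strings(rendered))
    # -> ['Save As...'] : this string bypassed the resource file and needs tagging.
```

A check along these lines can be wired into a periodic build so that untagged strings are flagged well before the actual localization testing starts.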
Posted by Ashish Agarwal at 10/05/2010 11:16:00 PM 0 comments
Labels: Localization, Localization Engineering, Localization testing, Software, Software Localization
Testing Types and Testing Techniques and the difference between them.
Testing types refer to different approaches to testing a computer program, system or product. The two basic types of testing are black box testing and white box testing. Another type, termed gray box testing or hybrid testing, combines the features of both black box and white box testing.
Testing Techniques
Testing techniques refer to different methods of testing particular features of a computer program, system or product. Each testing type has its own testing techniques, while some techniques combine the features of both types. Some techniques are:
- error and anomaly detection technique.
- interface checking.
- loop testing.
- basis path testing.
- physical units testing.
- domain testing.
- random testing.
- graph based testing.
- error guessing.
- control structure testing.
- boundary value analysis.
- instrumentation based testing.
- equivalence partitioning.
Difference between testing types and testing techniques
Testing types deal with what aspect of the computer software would be tested, while testing techniques deal with how a specific part of the software would be tested. That is, testing types determine whether we are testing the function or the structure of the software: we may test each function of the software to see if it is operational, or we may test the internal components of the software to check whether its internal workings follow the specifications.
On the other hand, testing techniques refer to the methods, procedures, or calculations that are applied to test a particular feature of the software, as in the sketch below.
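For example, two of the techniques listed above, equivalence partitioning and boundary value analysis, can be sketched roughly as follows for a hypothetical "age must be between 18 and 60" rule (the rule and the chosen values are assumptions for illustration only):

```python
# Equivalence partitioning and boundary value analysis for a hypothetical rule:
# "age must be between 18 and 60 (inclusive) to be valid".
def is_valid_age(age):
    return 18 <= age <= 60

# Equivalence partitions: one representative value per class.
partitions = {
    "below range (invalid)": 10,
    "within range (valid)": 35,
    "above range (invalid)": 75,
}

# Boundary value analysis: values at and just around each boundary.
boundaries = [17, 18, 19, 59, 60, 61]

if __name__ == "__main__":
    for name, value in partitions.items():
        print(f"{name}: age={value} -> valid={is_valid_age(value)}")
    for value in boundaries:
        print(f"boundary check: age={value} -> valid={is_valid_age(value)}")
```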
Posted by Sunflower at 10/05/2010 12:22:00 PM 0 comments
Labels: Differences, Organization, Process, Product, program, Quality, Software, Software testing, Techniques, Types
Monday, October 4, 2010
Verification Strategies - Overview of Inspections
Inspections are static analysis techniques that rely on visual examination of development products to detect errors, violations of development standards, and other problems. Types include:
- code inspection
- design inspection
- architectural inspections
- test ware inspections
The participants in an inspection include the inspection leader, recorder, reader, author, and inspectors. All participants in the review are inspectors. The author should not act as inspection leader, reader or recorder. Other roles may be shared among the team members, and individual participants may act in more than one role. Individuals holding management positions over any member of the inspection team shall not participate in the inspection.
Input Criteria includes:
- Statement of objectives for the inspection.
- The software product to be inspected.
- Documented inspection procedure.
- Inspection reporting forms.
- Current anomalies or issues list.
- Inspection checklists.
- Any regulations, standards, guidelines, plans, and procedures against which the software product is to be inspected.
- Hardware product specifications.
- Hardware performance data.
- Anomaly categories.
The individuals responsible for the software product may make additional reference material available when requested by the inspection leader.
The purpose of the exit criteria is to bring an unambiguous closure to the inspection meeting. The exit decision shall determine if the software product meets the inspection exit criteria and shall prescribe any appropriate re-work and verification. Specifically, the inspection team shall identify the software product disposition as one of the following:
- Accept with no or minor re-work : The software product is accepted as is or with only minor re-work.
- Accept with re-work verification : The software product is to be accepted after the inspection leader or a designated member of the inspection team verifies re-work.
- Re-inspect : Schedule a re-inspection to verify rework. At a minimum, a re-inspection shall examine the software product areas changed to resolve anomalies identified in the last inspection.
Posted by Sunflower at 10/04/2010 02:36:00 PM 0 comments
Labels: Development, Inspections, Participants, Problems, Product, Software, Software testing, Strategies, Strategy, Techniques, Verification, Verify
Sunday, October 3, 2010
Verification Strategies - Overview of Walkthroughs
A walkthrough is a static analysis technique in which a designer or programmer leads members of the development team and other interested parties through a segment of documentation or code, and the participants ask questions and make comments about possible errors, violations of development standards, and other problems.
The objectives of a walkthrough can be summarized as follows:
- Detect the errors early.
- Train and exchange technical information among project teams which participate in the walkthrough.
- Increase the quality of the project, thereby improving morale of the team members.
The participants in a walkthrough assume the roles of walk-through leader, recorder, author, and team member.
To be considered a systematic walk-through, a team of at least two members shall be assembled. Roles may be shared among the team members. The walk-through leader or the author may serve as the recorder, and the walk-through leader may be the author.
Individuals holding management positions over any member of the walk-through team shall not participate in the walk-through.
Input to the walk-through includes:
- A statement of objectives for the walk-through.
- The software product being examined.
- Standards that are in effect for the acquisition, supply, development, operation and/or maintenance of the software product.
- Any regulations, standards, guidelines, plans, and procedures against which the software product is to be inspected.
- Anomaly categories.
The walk-through shall be considered complete when:
- The entire software product has been examined.
- Recommendations and required actions have been recorded.
- The walk-through output has been completed.
Posted by Sunflower at 10/03/2010 11:58:00 AM 0 comments
Labels: Analysis, Code, Inputs, Programmer, Software, Software testing, Strategies, Strategy, Technical Reviews, Verification, Walkthroughs
Friday, October 1, 2010
Verification Strategies - Reviews - Design Review and Code Review
Design Review: A process or meeting during which a system, hardware, or software design is presented to project personnel, managers, users, customers, or other interested parties for comment or approval. Types include critical design review, preliminary design review and system design review. A quality assurance team member leads the design review; members from the development team and the QA team participate in the review.
Input Criteria: Design document is an essential document for the review. A checklist can be used for the review.
Exit Criteria: These include the filled and completed checklist with the reviewers' comments and suggestions, and re-verification of whether they have been incorporated into the documents.
Code Review: A meeting at which software code is presented to project personnel, managers, users, customers, or other interested parties for comment or approval. A QA team member leads the code review (in case the QA team is involved only in black box testing, the development team lead chairs the review instead); members from the development team and the QA team participate in the review.
Input Criteria: The Coding Standards Document and the Source file are the essential documents for the review. A checklist can be used for the review.
Exit Criteria: These include the filled and completed checklist with the reviewer's comments and suggestions, and re-verification of whether they have been incorporated into the documents.
Posted by Sunflower at 10/01/2010 11:02:00 AM 0 comments
Labels: Bugs, Code, Code Review, Criteria, Defects, Design, Design Review, Organization, Process, Reviews, Software, Software testing, Strategies, Verification