

Showing posts with label Verification. Show all posts

Friday, February 15, 2013

What are different Web Functional/Regression Test Tools?


Functional and regression testing are as important for web applications as they are for any other software system. A number of tools are currently available for web functional and regression testing.

In this article we discuss several such tools:

  1. ManageEngine QEngine: This tool is meant for functional and load testing of web applications. It lets you carry out GUI testing in minutes, and some of its features are:
     - Portability: Scripts recorded on Windows can be played back on Linux without being recreated.
     - Scripting capabilities: Simplified script creation, keyword-driven testing, data-driven testing, an object repository, and Unicode support.
     - Playback options: Playback synchronization, chained scripts, multiple playback options, etc.
     - Validation and verification: The tool comes with a rich library of built-in functions for constructing function calls for requirements such as dynamic property handling, database handling, screen handling, and so on.
     - AJAX testing.
     - Reporting capabilities: The tool provides clear and powerful reports indicating the status of test execution.

  2. SeleniumHQ: Selenium is a collection of smaller projects that combine into a testing environment suited to your needs:
     - Selenium IDE: An add-on for Firefox that can be used to record and replay tests within the browser.
     - Selenium Remote Control: A client/server system that controls web browsers, located on either a local or a remote host.
     - Selenium Grid: Similar to Remote Control, but able to handle multiple servers at a time.
     - Selenium Core: A JavaScript-based testing system.
     - Further Selenium projects have been developed specifically for Ruby, Eclipse, and Rails.

  3. Rapise: Developed by Inflectra Inc., this tool has an extensible architecture and cross-browser testing capabilities. It supports various versions of Mozilla Firefox, Chrome, MS Internet Explorer, and so on, and comes with built-in support for AJAX, YUI, GWT, AIR, Silverlight, Flash/Flex, etc. It can be used for keyword-driven as well as data-driven testing through Excel spreadsheets, identifies objects based on CSS and XPath, and includes built-in OCR (optical character recognition) for bitmaps. It uses JavaScript for scripting and therefore also provides a JavaScript editor.
  4. FuncUnit: An open source web application testing framework whose API is based on jQuery. Almost all modern browsers are supported on Linux and Mac, and Selenium can be used to execute the tests. It can simulate various user input events such as clicking, typing, and dragging the mouse.
  5. QUnit: This tool can test any generic JavaScript code. It is somewhat similar to JUnit but operates on JavaScript features.
  6. Env.js: A simulated browser environment and an open source tool whose code is written in JavaScript.
  7. QF-Test: Developed by Quality First Software as a tool for cross-platform testing and cross-browser automation of web applications. It can test web applications based on HTML, AJAX, GWT, Ext JS, RichFaces, Qooxdoo, Java, and so on. The tool offers small-scale test management capabilities, an intuitive user interface, extensive documentation, a capture/playback mechanism, component recognition, and so on. It can handle both custom and complex GUI objects, and has customizable reporting and an integrated test debugger.
  8. Cloud testing service: Developed by Cloud Testing Ltd., this service lets cloud capabilities be used for web testing. Web functionality is recorded via the Selenium IDE and a web browser, and the scripts are then uploaded to the Cloud Testing website.
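Several of the tools above (QEngine, Rapise) support data-driven testing from a spreadsheet. A minimal sketch of that pattern, with a CSV string standing in for the Excel sheet a tool would load, and a hypothetical `login` function in place of a real browser action:

```python
import csv
import io

# Hypothetical application action; a real tool would drive the browser here.
def login(username, password):
    return username == "admin" and password == "secret"

# Each row of the data source becomes one test case: inputs plus the
# expected outcome. This CSV string stands in for an external sheet.
DATA = """username,password,expected
admin,secret,pass
admin,wrong,fail
guest,secret,fail
"""

def run_data_driven_tests(data):
    results = []
    for row in csv.DictReader(io.StringIO(data)):
        outcome = "pass" if login(row["username"], row["password"]) else "fail"
        results.append(outcome == row["expected"])  # True when actual matches expected
    return results

print(run_data_driven_tests(DATA))  # [True, True, True]: all rows behave as expected
```

Adding a new test case is then just adding a row to the data source, with no change to the script itself, which is the main appeal of the data-driven style.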




Wednesday, January 16, 2013

What kinds of functions are used by Cleanroom Software Engineering approach?


In the 1980s, Harlan Mills and his colleagues Linger, Poore, and Dyer developed a software process at IBM that promised to build zero-defect software. This process is now popularly known as Cleanroom software engineering; it was named by analogy with the manufacturing process of semiconductors.

The Cleanroom software engineering process makes use of statistical process control. Software systems and applications produced this way have certified reliability, and productivity also increases because the software has no defects at delivery.
Below are some key features of the Cleanroom software engineering process:
  1. Usage scenarios
  2. Incremental development
  3. Incremental release
  4. Statistical modeling
  5. Separate development
  6. Acceptance testing
  7. No unit testing
  8. No debugging
  9. Formal reviews with verification conditions
Basic technologies used by the CSE approach are:
  1. Incremental development
  2. Box structured specifications
  3. Statistical usage testing
  4. Function theoretic verification
- In the incremental development phase of CSE the increments overlap, and each increment takes around 12-18 weeks from the beginning of specification to the end of test execution.
- Partitioning the work into increments is both critical and difficult.
Formal specification of the CSE process involves the following:
  1. Box-structured design: Three types of boxes are identified, namely the black box, the state box, and the clear box.
  2. Verification properties of the structures, and
  3. Program functions: one kind of function used by the Cleanroom approach.
- State boxes describe the state of the system in terms of data structures such as sequences, sets, lists, records, relations, and maps.
- They also include the specification of operations and state invariants.
- Every operation that is carried out must preserve the invariant.
- In Cleanroom, a constructed program is checked for syntax errors by a parser but is never run by the developer.
- Verification is performed by a team review driven by a number of verification conditions.
- Verification increases productivity by 3-5 times compared to debugging.
- Formally proving the program is always an option for the developers, but it calls for a lot of math-intensive work.
- As an alternative, the Cleanroom approach prefers a team code inspection in terms of two things, namely:
  1. Program functions and
  2. Verification conditions
- After this, an informal review confirms whether all the conditions have been satisfied.
- Program functions are simply the functions describing a prime program's function.

- The functional verification steps are:
1.    Specify the program by pre- and post-conditions.
2.    Parse the program into prime programs.
3.    Determine the program functions for the SESE (single-entry, single-exit) regions.
4.    Define the verification conditions.
5.    Inspect all the verification conditions.
- Program functions also define the conditions under which a program can legally be executed; these are called pre-conditions.
- Program functions can also express the effect that program execution has on the state of the system; these are called post-conditions.
- Program functions are mostly expressed in terms of the program's input arguments, instance variables, and return values.
- They cannot, however, be expressed in terms of local program variables.
- The concept of nested blocks is supported by many modern programming languages, and structured programs always require proper nesting.
- Determining the SESE regions also involves parsing, rather than just program functions.
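The pre-/post-condition idea above can be sketched as executable checks. This is only an illustration of the concept (Cleanroom itself verifies by review, not by running assertions); the function and its conditions are invented for the example:

```python
# A "program function" maps an input state to an output state. The
# precondition says when the program may legally run; the postcondition
# describes the effect of execution on the state.

def integer_sqrt(n):
    # Precondition: n is a non-negative integer.
    assert isinstance(n, int) and n >= 0, "precondition violated"
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    # Postcondition: r is the largest integer with r*r <= n.
    assert r * r <= n < (r + 1) * (r + 1), "postcondition violated"
    return r

print(integer_sqrt(10))  # 3
```

In Cleanroom the reviewers would argue, for each prime program, that the code's program function satisfies exactly these conditions, instead of executing the checks.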


Tuesday, January 8, 2013

What is Cleanroom Software Engineering?


Cleanroom software engineering is one of the fastest-emerging software development processes and is designed to produce software systems and applications with certified reliability. The credit for developing this process goes to Harlan Mills and several of his colleagues, among them Alan Hevner, at the IBM Corporation.

What is Cleanroom Software Engineering?

- Cleanroom software engineering focuses on defect prevention rather than defect removal.
- The process was named after the cleanrooms used by the electronics industry to prevent defects from entering semiconductors during fabrication.
- The Cleanroom process was first used in the late 1980s.
- It began to be used for military demonstration projects in the early 1990s.

Principles of Cleanroom Approach

The Cleanroom process has its own principles, which we discuss below:
  1. Development of software systems and applications based on formal methods: The box structure method is what Cleanroom development uses to specify and design a software product. A team review is then used to verify that the design has been implemented correctly.
  2. Statistical quality control through incremental implementation: Cleanroom follows an iterative approach in which the software system evolves through increments of gradually increasing functionality. The quality of each increment is measured against pre-established standards to verify that the process is making acceptable progress. If an increment fails to meet the quality standards, testing of that increment is stopped and the process returns to the design phase.
  3. Statistically sound testing: In the Cleanroom process, software testing is carried out as a statistical experiment. A representative subset of the software's input/output trajectories is selected and subjected to testing. The resulting sample is then analyzed statistically to estimate the software's reliability and the level of confidence in that estimate.
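The statistical testing principle can be sketched in a few lines: operations are sampled according to an assumed usage profile, executed, and the observed success rate of the sample estimates reliability. The profile, operation names, and weights here are purely illustrative:

```python
import random

# Assumed usage profile: how often each operation occurs in real use.
USAGE_PROFILE = {"browse": 0.7, "search": 0.2, "checkout": 0.1}

def run_operation(name):
    # Stand-in for executing a real operation; returns True on success.
    return True

def estimate_reliability(trials, rng):
    ops = list(USAGE_PROFILE)
    weights = [USAGE_PROFILE[o] for o in ops]
    # Sample operations per the profile and count how many succeed.
    successes = sum(
        run_operation(rng.choices(ops, weights=weights)[0])
        for _ in range(trials)
    )
    return successes / trials

print(estimate_reliability(1000, random.Random(42)))  # 1.0: every stub succeeds
```

Because the sample mirrors expected usage, the resulting estimate reflects the reliability a typical user would actually experience, which is the point of the technique.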

Features of Cleanroom Software Engineering

Software products developed using the Cleanroom software engineering process have zero defects at delivery time. Below are some characteristic features of Cleanroom software engineering:
  1. Statistical modeling
  2. Usage scenarios
  3. Incremental development and release
  4. Separate acceptance testing
  5. No requirement of unit testing and debugging
  6. Formal reviews with verification conditions
The recorded defect rates were:
  • Fewer than 3.5 per delivered KLOC.
  • 2.7 per KLOC between first execution and first delivery.
The basic technologies used can be listed as:
    1. Incremental development: Each increment is carried out end to end, and in some cases the development of increments overlaps. The whole process takes around 12-18 weeks, and partitioning, though critical, proves difficult.
    2. Function-theoretic verification: A parser may check the constructed program for syntax errors, but the program is never executed by its developer. Verification conditions drive the team review for verification, and verification improves productivity by 3-5 times compared with debugging. Formal inspections also fall under this category.
    3. Formal specifications: This further includes:
    a)    Box-structured specification, which includes 3 types of boxes, namely:
       - Black box
       - State box
       - Clear box
    b)    Verification properties
    c)    Program functions
    4. Statistical usage testing: It helps implement a cost-effective orientation and process control, and provides a stratification mechanism to deal with critical situations.


    Sunday, December 16, 2012

    What are Six Best Practices in Rational Unified Process?


    The IBM Rational Unified Process is a means of commercial deployment of approaches and practices that have been proven for the development of software systems and applications. It is based on the following six best practices:
    1. Iterative development of the software system or application
    2. Management of requirements
    3. Use of component-based architectures
    4. Visual modeling of the software system
    5. Verification of the software system's quality
    6. Controlled changes to the software system or application
    These practices are called best practices not because their value can be precisely quantified, but because they are in common use by most successful and reputable organizations in the software industry.
    In the Rational Unified Process, each member of the team gets the templates, guidelines, and tools necessary for the whole team to reap the full advantage of these practices.

    Basic Practices In Rational Unified Process in Detail

    Iterative development of the software system or application:
    Software systems and applications are sophisticated enough that it is impossible to work on the problem purely in sequence.
    - By "in sequence" we mean first defining the whole problem, then designing a solution, then building the software system or application, and finally testing it.
    - Such software systems require an iterative approach, so that understanding of the problem can grow through a series of successive refinements.
    - This also helps develop an effective solution in increments over multiple iterations.

    Management of the Requirements:
    The Rational Unified Process describes:
    - how to elicit, organize, and document the required functionality and constraints,
    - how to track and document trade-offs and decisions, and
    - how to capture and communicate business requirements.

    Use of component-based architectures:
    - The development process focuses on the early development and baselining of a robust, executable architecture.
    - It describes how to build a resilient architecture that is flexible, accommodates change easily, is readily understood, and effectively promotes the reuse of existing software artifacts.
    - The Rational Unified Process provides strong support for component-based development.
    - By components we mean subsystems and non-trivial elements that fulfill a clear function.

    Visual modeling of the software system:
    - The Rational Unified Process shows how a software system or application can be visually modeled, capturing the structure and behavior of its architectural components.
    - This lets you hide details and develop the code using graphical building blocks.
    - Such visual abstractions help communicate the different aspects of the software system or application.

    Verification of the software system:
    - Poor reliability dramatically cuts the chances of a software system or application being accepted.
    - It is therefore important to review quality with respect to factors such as functionality, reliability, system performance, and application performance.

    Controlling the changes to the software system or application:
    - The ability to manage and track changes is critical to the success of any software system or application.
    - The Rational Unified Process helps you cope with these issues as well.




    Thursday, December 6, 2012

    What is script assure technology in IBM rational functional tester?


    ScriptAssure is another key technology used by IBM Rational Functional Tester, an automated functional testing tool developed by IBM's Rational software division.

    - Rational Functional Tester is usually employed by quality assurance staff to carry out regression testing.
    - Test scripts are created with a sophisticated test recorder that captures the user's actions against the AUT, or application under test.
    - From these captured actions, the recording mechanism creates a test script based on .NET or Java applications.
    - Since version 8.1 of Rational Functional Tester, scripts can also be represented as a series of screenshots in a visual storyboard.
    - The created script can be further enhanced using the standard commands and syntax of the scripting language.
    - These scripts are then run to validate the functionality of the software system or application.
    - Test scripts can be executed in batch mode, so that they can be grouped together and executed unattended.
    - During recording, the user inserts verification points to capture the expected state of the system.
    - Any information regarding bugs is stored in the Rational Functional Tester logs.
    - During playback, Rational Functional Tester uses an object map to find and act against the application's interface.
    - However, objects may change during development between the time the script was recorded and the time it is executed.
    - ScriptAssure allows Rational Functional Tester to ignore discrepancies between the object definitions captured during recording and those found during playback, ensuring uninterrupted execution of the test scripts.
    - A user-settable factor called ScriptAssure sensitivity determines how large an object-map discrepancy is acceptable.
    - Testers who are used to IBM Rational Functional Tester still find it difficult to develop automated scripts for regression testing of dynamic web page content, such as GUI applications.
    - Testers should aim to develop resilient scripts that test the values of dynamic object properties; objects without sufficiently unique properties lead to recognition problems and thus to failures and errors.

    If you properly understand how IBM Rational Functional Tester works, what its advantages are, and how objects are recognized at run time, you can develop scripts that cope with changes and provide regression testing results that are informative enough.
    Many testers who are newly introduced to Rational Functional Tester find it difficult to create resilient scripts while automating web-based applications.

    With the help of ScriptAssure, scripts can be played back in Rational Functional Tester while controlling the object-matching sensitivity.
    Object matching relies on a number of factors to recognize the objects present in the application: the properties recorded in the object map must match the run-time object properties for Rational Functional Tester to recognize the objects.

    By default, Rational Functional Tester may still recognize an object even if some properties do not match. If no acceptable match is found between the properties, the object in the application cannot be recognized by Rational Functional Tester.
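The matching behaviour described above can be sketched as weighted property comparison. This is only an illustration of the general idea, not ScriptAssure's actual algorithm; the property names, weights, and threshold are invented for the example:

```python
# Each recorded property carries a weight; mismatches subtract their
# weight from the score, and the object is accepted when the score
# stays at or above the sensitivity threshold.
WEIGHTS = {"class": 0.5, "name": 0.3, "label": 0.2}

def match_score(recorded, runtime):
    score = 1.0
    for prop, weight in WEIGHTS.items():
        if recorded.get(prop) != runtime.get(prop):
            score -= weight  # penalize each mismatched property
    return score

def is_recognized(recorded, runtime, sensitivity=0.7):
    return match_score(recorded, runtime) >= sensitivity

recorded = {"class": "Button", "name": "submit", "label": "Submit"}
changed = {"class": "Button", "name": "submit", "label": "Send"}
print(is_recognized(recorded, changed))  # True: only the low-weight label changed
```

Raising the sensitivity threshold makes playback stricter (fewer discrepancies tolerated); lowering it makes scripts survive larger UI changes at the risk of acting on the wrong object.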


    Wednesday, August 22, 2012

    When do you use Verify/Debug/Update Modes? (in Winrunner)


    After you have developed your test scripts and finalized your test case, the next step is to run that test to check the behaviour of your software system or application. Whenever a test is executed in WinRunner, the whole test is interpreted line by line.
    As the TSL statements are interpreted line by line, each is marked by an execution arrow visible in the left margin of the test script. As the test continues to execute, your software system or application runs as if it were being controlled by a person.

    WinRunner provides 3 modes for running your tests, namely:
    1. Verify run mode
    2. Debug run mode
    3. Update run mode
    In this article we discuss these three WinRunner run modes.
    - The first mode, verify run mode, checks the application.
    - The second, debug run mode, debugs the test scripts.
    - The third, update run mode, updates the expected results.
    - Only two modes, debug and verify, are available when you are using the WinRunner run-time.
    - Any of these modes can be chosen from the list on the test toolbar.
    - Verify run mode is the default run mode in WinRunner.
    - You can run either the entire test or just a portion of it using the Test and Debug menu commands.
    - Always make sure all the necessary GUI map files have been loaded before you start a context-sensitive test.
    - You also have the choice of running individual tests or a group of tests as a batch test.
    - A batch test is quite useful when you have very long tests to execute and need an overnight run.

    Now we will discuss all three run modes in detail, one by one:
    Verify run mode:
    In this mode, WinRunner compares the current response of your software system or application to the expected response.
    - The results of this run mode are called verification results and list all the discrepancies observed between the current response and the expected response.
    - When test execution stops, the verification results window opens by default for the user to view.
    - As many sets of verification results as required can be kept.
    - However, you should always have the expected results ready for the checkpoints you created earlier.
    - If the expected results need updating, you just need to run the test in update mode.

    Debug run mode:
    - This mode helps root out many of the bugs that may reside in a test script.
    - Executing a test in debug mode is almost the same as in verify mode; the only difference is the folder in which the results are saved.
    - In this case the test results are saved in the debug folder.
    - Only one set of debug results is stored, so the folder does not open automatically for the user to view.
    - In this mode, take care to change the timeout variables to zero while debugging the test scripts.

    Update run mode:
    - This mode helps update the expected results as well as create a new expected results folder.
    - Results for a GUI checkpoint can also be updated, and an additional set of expected results can be created.
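The relationship between verify and update modes can be sketched in a few lines. This is a conceptual illustration, not WinRunner's TSL or API; the checkpoint name and values are invented:

```python
# Stored expected results, as a verify run would read them from disk.
expected_results = {"login_title": "Welcome"}

def run_checkpoint(name, actual, mode="verify"):
    if mode == "update":
        # Update mode overwrites the stored expectation with the actual value.
        expected_results[name] = actual
        return True
    # Verify mode compares the actual value against the stored expectation.
    return expected_results.get(name) == actual

print(run_checkpoint("login_title", "Welcome"))          # True: matches expectation
print(run_checkpoint("login_title", "Hello", "verify"))  # False: discrepancy reported
run_checkpoint("login_title", "Hello", "update")         # refresh the expectation
print(run_checkpoint("login_title", "Hello"))            # True after the update
```

The sketch shows why update mode must be used deliberately: it silently turns today's actual behaviour into tomorrow's expected behaviour.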


    Tuesday, July 24, 2012

    What are virtual users? For what purpose are virtual users created?


    For software developers it is quite frustrating to see their software systems and applications crash soon after being installed.
    What impression will such a failure leave on the user or the customer?
    Customers and users will naturally think that the software product did not go through sufficient testing before it was released to them.
    It has become a standard in software engineering that any organization developing a software system or application must follow a defined procedure for testing that product, to ensure its reliability and quality before it is shipped to users or customers. This whole testing process belongs to quality assurance.

    For every software product there may be a hundred ways in which a user might use it. The software system or application must therefore be checked in all these ways to verify whether or not it works correctly.
    There are a number of ways to make a software testing process more effective than it currently is.
    - One such way is to use an automated testing tool, in which suites of tests as well as specific tests can be set up and run by the computer.
    - Hours of drudgery are reduced by a huge margin, along with time and money.
    - The saved energy and time can be refocused on tasks that call for more human interaction.
    - This whole process makes the software product much more reliable.
    - It is always a good idea to verify the quality of the software artifact before it is shipped.

    One such automated testing tool is the virtual user, which can help in this regard. In this article we discuss the virtual user and the purpose for which this tool was created.

    What is a Virtual User and Purpose of Virtual User?


    - Virtual user, abbreviated VU, is a kind of tool that helps a computer emulate a human user, i.e., it performs actions such as typing words and commands, clicking the mouse, and so on.
    - The computer on which the virtual user is installed acts as a host and takes control of the other machines, just as a human tester would.
    - One of the targeted computer systems acts as an agent and receives instructions from the host computer.
    - The virtual user's environment is constituted by an application whose job is to compile and run the scripts.
    - Virtual user scripts cannot be edited in common editors; they must be edited in editors such as MPW or BBEdit.
    - Through the virtual user, all the computers are linked over a network.
    - The minimum requirements for a virtual user setup are:
    1. A virtual user software package, and
    2. Two systems, one as host and the other as target.
    - Today many firms have launched virtual user software packages.
    - Though the virtual user sounds like a very good automation tool, it has several drawbacks:

    • It cannot tell you when something looks right on the test screen, how a particular icon looks, or what position it is at.
    • The biggest drawback is that the virtual user has no intelligence of its own: if a crash occurs on the machine, the virtual user will keep trying to run the script regardless of the crash.
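The host/agent arrangement described above can be sketched with an in-process stub. A real VU tool would send these events to a target machine over the network; the class, event names, and replies here are invented for illustration:

```python
# The "agent" receives scripted user events and replays them on the
# target; here it is an in-process stub that just logs each event.
class Agent:
    def __init__(self):
        self.log = []

    def perform(self, event):
        self.log.append(event)  # stand-in for replaying a click/keystroke
        return "ok"

def host_run_script(agent, script):
    # The host sends each scripted event and collects the agent's replies.
    return [agent.perform(event) for event in script]

script = [("click", "File"), ("click", "Open"), ("type", "report.txt")]
agent = Agent()
print(host_run_script(agent, script))  # ['ok', 'ok', 'ok']
```

Note that the stub happily returns "ok" for every event regardless of what the target actually shows, which is exactly the lack-of-intelligence drawback the bullet points describe.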


    Friday, June 22, 2012

    What are different tools used for smoke testing?


    The success of a software testing methodology depends a lot on the quality and type of tools used to carry out that particular test.
    In this article we discuss the tools used for smoke testing. Many tools are available for automating smoke tests.

    Functions performed by these tools


    Using these tools, many functions can be performed, a few of which are mentioned below:
    1. With these smoke testing tools one can easily and quickly write test cases and scripts and automate them, and they require no programming.
    2. With these smoke testing tools, every build can easily be validated before any changes are introduced into it.
    3. Nothing comes in as handy as these smoke testing tools when one needs to stabilize the whole build process.
    4. The available tools make it easy to verify a build's readiness for further full-scale testing.
    5. Smoke testing tools help conduct lightning-fast tests to determine whether or not the major functionalities are intact and still working as expected.
    6. The tools reduce the drudgery of testers and developers by helping detect problems and errors early.
    7. These tools help cut overall development costs by significant margins, giving the team more time for developing the software system or application.

    Tools for Smoke Testing


    - Many tools are available to perfect your tests, such as an advanced web recorder, an object recorder, an editor, and so on.
    - The tools provide many actions for editing the tests if changes are required in the software system or application.
    - Such a tool-based approach to testing allows flexibility between the tests and the testers and eliminates the need to create new tests from scratch.
    - There are tools called "workflow designers" that come with an intuitive graphical interface, with which you can easily create and manage high-level workflows for your entire suite of smoke tests.
    - Testing tools are available that help you introduce checkpoints into the application so that your software application's functionality can be further verified at any point.
    - One widely sought-after kind of testing tool lets you convert a test script into executable code and then run it on multiple systems without installing the testing application on each machine.
    - This feature facilitates a degree of automation that never existed before.
    - Any test script can be run with any inputs, operating from any database or application software, in almost any kind of environment.
    - Some reporting and log tools have been introduced that possess advanced reporting features such as:
    1. Timelines
    2. Report customization tools that are quite easy to use
    3. Report dashboards
    4. Audit trail capability (this feature provides a complete database of all the events that took place in the automated testing environment)
    5. Visual logs
    - With a tool having all these features, a tester can easily analyze the reports with visual aids and fix problems quickly.
    - These tools give testers a more complete view of the smoke test runs.
    - Though smoke testing can also be carried out manually, it is advisable to use automated test tools that initiate tests through a process similar to the one that generates the build itself.
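A minimal smoke test harness, independent of any particular tool, is just a short list of fast checks on the major functions, with the build rejected if any check fails. The checked functions here are stubs standing in for a build's core features:

```python
# Each check is a fast yes/no probe of one major function of the build.
def can_start():      return True
def can_open_page():  return True
def can_save():       return True

SMOKE_CHECKS = [can_start, can_open_page, can_save]

def run_smoke_tests(checks):
    # Collect the names of all failing checks; an empty list means the
    # build is stable enough to hand to the testing team.
    return [c.__name__ for c in checks if not c()]

print(run_smoke_tests(SMOKE_CHECKS))  # []: the build passes the smoke test
```

In a real pipeline this harness would run automatically right after the build is produced, which is the "initiated by a similar process that generates the build" idea above.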


    Tuesday, June 19, 2012

    What are different characteristics of build verification test?


    Build verification test, often abbreviated BVT, can be defined as a set of tests carried out on every newly created build to verify that it is testable before it is handed over to the testing team for further testing.
    Generally, the test cases used in a build verification test are core functionality test cases, used to check the stability of the software system or application and to regulate its thorough testing.
    The whole build verification process takes a great deal of effort and time if carried out manually, and is therefore usually automated. If a build fails the build verification test, it is returned to the developer to fix the faults.

    There are other names also by which the build verification test is known as mentioned below:
    1. Smoke testing
    2. Build acceptance testing or BAT
    In a typical build verification test, there are two aspects that are exclusively tested and are mentioned below:
    1. Build acceptance
    2. Build validation

    Few basics of Build Verification Tests


    1. Build verification tests are a subset of tests used to verify the main functionalities.
    2. Build verification tests are typically run daily; builds that fail the build verification test are rejected and returned to the developer for fixes, and once the fixes are done, a new build is released and again subjected to the build verification test.
    3. The build verification test has the advantage of saving the testing team the effort of setting up and testing a build whenever there is a break in the build's major functionality.
    4. The test cases of a build verification test should be designed very carefully so that they give the maximum possible coverage of the build's basic functionality.
    5. A typical build verification test runs for 30 minutes at most, never longer than this limit.
    6. Build verification testing can also be considered a type of regression testing performed on each new build.

    Aim of Build Verification Test


    - The primary aim of the build verification test is to check the integrity of the whole software system or application in terms of its build, or we can say its modules.
    - When several development teams work together on the same project, it is very important that the modules they develop individually integrate well with each other.
    Many cases have been recorded in which a whole project failed miserably due to a lack of integration among the modules; in the worst cases, the whole project was scrapped just because of failures in module integration.
    - A build release has one main task, file check-in: including all the modified as well as new project files associated with the corresponding build.
    - Checking the initial health of the build was earlier considered the main task of the build verification test.
    - This is called the "initial build health check" and it includes:

    1. Have all the files been included in the release?
    2. Are all the files in their proper format?
    3. Have all the file versions and languages been included?
    4. Have the appropriate flags been associated with each file?
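The health-check questions above translate naturally into automated checks over the build's file manifest. A minimal sketch, in which the required file names and manifest fields are invented for illustration:

```python
# Files every release must contain (illustrative names).
REQUIRED_FILES = {"app.exe", "strings.en.dll", "config.xml"}

def check_build(manifest):
    """Return a list of problems; an empty list means the build is healthy."""
    problems = []
    names = {f["name"] for f in manifest}
    missing = REQUIRED_FILES - names
    if missing:  # question 1: are all files included?
        problems.append("missing files: %s" % sorted(missing))
    for f in manifest:
        if not f.get("version"):  # question 3: version information present?
            problems.append("no version: %s" % f["name"])
        if not f.get("flags_ok", True):  # question 4: appropriate flags set?
            problems.append("bad flags: %s" % f["name"])
    return problems

good = [{"name": n, "version": "1.0"} for n in REQUIRED_FILES]
print(check_build(good))  # []: the build passes the health check
```

A build that fails any check would be rejected before the testing team spends any time on it, which is exactly the effort-saving property listed in the basics above.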

