Showing posts with label Verify.

Tuesday, August 28, 2012

What are all the default codes WinRunner generates when you start an application?


Automation can be quite a headache, but it cuts execution time by a great margin and, in doing so, saves a large number of precious man-hours. Once you are done with the default installation of WinRunner, you will observe that a WinRunner directory has been created under Programs. 
This folder consists of the following:
  1. WinRunner executable
  2. Uninstall WinRunner
  3. Softkey configuration
  4. Read me
  5. Fonts expert
  6. Sample application folder and
  7. Documentation folder
WinRunner has been designed for performing functional testing and regression testing. Take a moment to study a sample application saved in the “sample application” folder. Now let us take a look at the WinRunner application itself! 
- There is a toolbar which contains options such as New, Open and Save, and a list box for the run modes, namely:
  1. Verify
  2. Debug and
  3. Update
- There is also a red button for recording, a green arrow for running the test script from line 1, and a purple line pointer for running the script from the current line. 
- Apart from this, there are start, stop and pause buttons. 
- The rest of the options on this toolbar are for debugging purposes.  

Now let us begin with the recording of a simple script! 
- Open the application under test as well as WinRunner. 
- Re-size both windows so that both are visible to you and do not overlap. 
- Start recording by clicking on the record button and perform some specific operations on the start-up screen of the application (such as entering a user id and password, if the start-up page is a login page). 
- After performing some 3-4 operations, click on the stop button of the WinRunner window and the recording will be stopped. 
- After this, the script can be played back, and you will observe that WinRunner has generated code for you. 
- Now, after saving the script, take a look at the code, which looks somewhat like this:
        # startup window
        set_window ("login", 3);
        edit_set ("user id", "active");
        password_edit_set ("password", "*****");
        button_press ("sign in");
        # ... and the code continues

- Default code like the above is generated by WinRunner upon the start-up of an application.
- The symbol “#” marks a comment in WinRunner. 
- The code may or may not contain a couple of comments. 
- The comments are inserted by WinRunner during the recording phase itself. 
- The second argument, i.e. the number in a statement, represents the time lag between two statements while the script is being recorded.
- Whenever a script is saved through WinRunner, a folder (with either a default name or a name given by the user, as the case may be) is created, which contains a file named “script”. 
- This file contains the generated code as plain ASCII text. 
- This code can be run in two modes, namely the verify mode and the debug mode. 
- During the development phase, the debug mode is used. 
- Finally, the verify mode is used. 
- The set_window() statement in the code sets the focus on the window. 
- This helps in performing a particular operation on a specific window. 
- When you type in the user id, it is identified by WinRunner and the edit_set statement is generated accordingly. 
- The next statement, i.e. the password_edit_set statement, is similar to the edit_set statement; the only difference is that this field contains encrypted data which, for security reasons, is displayed as a series of asterisk “*” marks. 


Wednesday, August 22, 2012

When do you use Verify/Debug/Update Modes? (in Winrunner)


After you have developed the test scripts and finalized your test case, your next step is to run that test in order to check the behaviour of your software system or application. Whenever a test is executed using WinRunner, the whole test is interpreted line by line.  
As the TSL statements are interpreted line by line, they are marked by an execution arrow visible in the left margin of the test script. As the test continues to execute, your software system or application is driven as if it were being controlled by a person. 

WinRunner provides three modes for running your tests, namely:
  1. Verify run mode
  2. Debug run mode
  3. Update run mode
In this article we talk about these three WinRunner run modes. 
- The first mode, the verify run mode, checks the application. 
- The second one, the debug run mode, debugs the test scripts.
- The third one, the update run mode, updates the expected results. 
- Only two modes, the debug run mode and the verify run mode, are available when you are using the WinRunner run-time version.
- Any one of these modes can be chosen from the list on the test toolbar. 
- The verify run mode is the default run mode in WinRunner. 
- You can either run the entire test or just a portion of it using the Test and Debug menu commands. 
- Always make sure all the necessary GUI map files have been loaded before you start a context-sensitive test. 
- You also have the choice of running individual tests or a group of tests using a batch test. 
- A batch test is quite useful when you have very long tests to execute and you need an overnight run. 

Now we will discuss all three run modes in detail, one by one:
Verify run mode: 
In this mode the current response of your software system or application is compared to the expected response by WinRunner.
- The results of this run mode are called verification results and list all the discrepancies observed between the current response and the expected response. 
- When the execution of the test stops, the verification results window is opened by default for the user to see. 
- As many sets of verification results as required can be obtained. 
- However, you should always be ready with the expected results for the checkpoints that you created earlier. 
- If there is any requirement for updating the expected results, you just need to run the test in update mode.

Debug run mode: 
- This mode also helps in rooting out many of the bugs that might be residing in a test script. 
- The execution of a test in verify mode and in debug mode is almost the same; the only difference lies in the folder in which the results are saved. 
- In this case the test results are saved in the debug folder. 
- Also, only one set of debug results is stored here, so the folder does not open automatically for the user to view. 
- In this mode, the thing to be taken care of is that the timeout variables must be changed to zero while the test scripts are being debugged.

Update run mode: 
- This mode helps in updating the expected results as well as in creating a new expected results folder.
- Results for a GUI checkpoint can also be updated, and an additional set of expected results can also be created (a conceptual sketch of the verify/update cycle follows).
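
WinRunner manages expected results internally, but the idea behind the verify and update modes is essentially a golden-file comparison. The sketch below is purely conceptual Python, not WinRunner TSL or any WinRunner API; the file name, the run_application() stand-in and the mode names are assumptions made only for illustration. Verify compares the current response against stored expected results, update overwrites the stored results, and debug just runs without producing a comparison report.

    # Conceptual sketch of verify / debug / update as a golden-file comparison.
    import json
    from pathlib import Path

    EXPECTED = Path("exp/expected_results.json")   # hypothetical "expected results" store

    def run_application() -> dict:
        # Stand-in for driving the application under test and capturing its response.
        return {"title": "login", "fields": ["user id", "password"], "buttons": ["sign in"]}

    def run_test(mode: str = "verify") -> None:
        actual = run_application()
        if mode == "update" or not EXPECTED.exists():
            # Update mode: (re)create the expected results.
            EXPECTED.parent.mkdir(parents=True, exist_ok=True)
            EXPECTED.write_text(json.dumps(actual, indent=2))
            print("expected results updated")
        elif mode == "verify":
            # Verify mode: report discrepancies between actual and expected responses.
            expected = json.loads(EXPECTED.read_text())
            diffs = {k: (expected.get(k), actual.get(k))
                     for k in expected.keys() | actual.keys()
                     if expected.get(k) != actual.get(k)}
            print("mismatches:", diffs or "none")
        else:
            # Debug mode: just run; results are kept separately, no comparison report.
            print("debug run finished:", actual)

    run_test("update")
    run_test("verify")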


Saturday, April 28, 2012

What is meant by production verification testing?


Production verification is an important part of the software testing life cycle, like the other software testing methodologies, but it is much less heard of! Therefore we have dedicated this article entirely to the discussion of what production verification testing is. 

This software testing methodology is carried out after the user acceptance testing phase has been completed successfully. Production verification testing is aimed at simulating the cut-over to the whole production process as closely to the real thing as possible. 

This software testing methodology has been designed for the verification of the below mentioned aspects:
  1. Business process flows
  2. Proper functioning of the data entry functions
  3. Proper running of any batch processes against the actual data values of the production process.

About Production Verification Testing


- Production verification testing can be thought of as an opportunity to conduct a full dress rehearsal of the changes in the business requirements, if any. 
- Production verification is not to be confused with parallel testing, since the goals differ.
- That is, the goal of production verification testing is to verify that the data is being processed properly by the software system or application, rather than comparing the results of the new system's data handling with those of the current one, as is done in parallel testing. 
- For production verification testing to commence, it is important that the documentation of the previous testing phases is produced and that the issues and faults discovered then are fixed and closed.
- If there is a final opportunity for determining whether or not the software system or application is ready for release, it is production verification testing. 
- Apart from just the simulation of the actual production cut-over, the real business activities are also simulated during the production verification testing phase. 
- Since it is a full rehearsal of the production phase and the business activities, it should serve to identify unexpected changes or anomalies in the existing processes caused by introducing the new software system or application currently under test. 
- The importance of this software testing technique cannot be overstated in the case of critical software applications.
- For production verification testing, the testers need to remove or uninstall the software system or application from the testing environment and reinstall it exactly as it will be installed during the production implementation.
- This is done to carry out a mock test of the whole production process, since such mock tests help a lot in verifying the interfaces and the existing business flows. 
- The batch processes continue to execute alongside those mock tests. 
- This is entirely different from parallel testing, in which both the new and the old systems run alongside each other.
- Therefore, in parallel testing, mock testing is not an option for providing accurate results on data handling issues, since access to the source data or database is limited. 

Entry and Exit Criterion for Production Verification Testing


Here we list some of the entry and exit criteria of the production verification testing:
Entry criteria:
  1. User acceptance testing is complete and has been approved by all the involved parties.
  2. The documentation of the known defects is ready.
  3. The documentation of the migration package has been completed, reviewed and approved by all the parties, including, without fail, the production systems manager.
Exit Criteria:
  1. The processing of the migration package is complete.
  2. The installation testing has been performed and its documentation is ready and signed off.
  3. The documentation of the mock testing has been approved and reviewed.
  4. A record of the system changes has been prepared and approved.


Tuesday, December 13, 2011

What are different characteristics of performance testing?

Performance testing means a lot more than just measuring how fast a software system or application runs. It covers a wide range of software engineering concepts and functionalities.

In performance testing, a software system is not merely tested on the basis of its functionalities, specifications and requirements; it is also tested on the basis of the software system’s or application’s final performance characteristics, which are measurable.

- Performance testing is both quantitative and qualitative kind of testing.
- In the field of software engineering, performance testing is typically done to determine the effectiveness and speed of a software system, hardware system, computer or device etc.

- Being a quantitative process, performance testing involves some lab tests like measurement of response time and MIPS (short form for “millions of instructions per second”) at which a software system performs.

- It also involves tests for the qualitative attributes of a system such as scalability, reliability and interoperability.

- Often performance testing and stress testing are performed in conjunction.

- It is a general kind of testing done to determine the behavior of a system, whether hardware or software, in terms of stability and responsiveness when the system is subjected to a significant workload.

- It is also carried out to measure, validate, verify and investigate the qualitative attributes of the system like resilience and resource usage.

- Performance testing is a sub-category of performance engineering.
- Performance engineering aims to incorporate performance into the architecture and design of a software or hardware system.
- It basically begins before the actual coding of the program.

Performance testing consists of many sub-categories of testing. A few are discussed in detail below:

1. Stress testing:
This testing is done to determine the limits of the capacity of the software application. Basically, this is done to check the robustness of the application software. Robustness is checked against heavy loads, i.e. loads above the maximum expected limit.

2. Load testing:
This is the simplest of all these tests. It is usually done to check the behavior of the application software or program under different amounts of load. The load can either be several users using the same application, or the difficulty level or length of the task. A time is set for task completion and the response time is recorded simultaneously. This test can also be used to test databases and network servers. A minimal sketch of the idea follows.
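
The sketch below is a generic, hypothetical Python illustration of a load test, not tied to any particular tool mentioned in this post: the same operation is run by an increasing number of concurrent "users" and the response times are recorded at each load level. perform_transaction() is a stand-in you would replace with the real request or task.

    # Minimal load-test sketch: run the same operation from several concurrent
    # "users" and record response times at each load level.
    import time
    import statistics
    from concurrent.futures import ThreadPoolExecutor

    def perform_transaction() -> float:
        start = time.perf_counter()
        time.sleep(0.05)              # stand-in for the real request or task
        return time.perf_counter() - start

    def run_load_level(users: int, requests_per_user: int = 20) -> None:
        with ThreadPoolExecutor(max_workers=users) as pool:
            timings = list(pool.map(lambda _: perform_transaction(),
                                    range(users * requests_per_user)))
        timings.sort()
        print(f"{users:>3} users: avg={statistics.mean(timings):.3f}s "
              f"p95={timings[int(0.95 * len(timings)) - 1]:.3f}s")

    for level in (1, 5, 10):          # increasing amounts of load
        run_load_level(level)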

3. Spike testing:
This testing is carried out by suddenly spiking the load (for example, the number of concurrent users) and observing the behavior of the concerned application software in each case, i.e. whether it is able to take the load or whether it fails.

4. Endurance testing:
As the name suggests, this test determines whether the application software can sustain a specific load for a certain time. This test also checks for memory leaks, which can lead to application damage. Care is taken to watch for performance degradation. Throughput is checked at the beginning, at the end and at several points in time during the test. This is done to see whether the application continues to behave properly under sustained use or crashes.

5. Isolation testing:
This test is basically done to check for the faulty part of the program or the application software.

6. Configuration testing:
This testing checks the configuration of the software application. It also checks the effects of configuration changes on the application and its performance.

Before carrying out performance testing, some performance goals must be set, since performance testing helps in many ways. For example:

- It tells us whether the application software meets the performance criteria or not.
- It can compare the performance of two software applications.
- It can find faulty parts of the program.


Thursday, November 24, 2011

What are differences between verification and validation?

Verification and validation together can be defined as a process of reviewing, testing and inspecting the software artifacts to determine whether the software system meets the expected standards.

Though verification and validation processes are frequently grouped together, there are plenty of differences between them:

- Verification is a process which controls quality and is used to determine whether the software system meets the expected standards or not. Verification can be done during the development phase or during the production phase. In contrast to this, validation is a process which assures quality: it gives an assurance that the software artifact or system is successful in accomplishing what it is intended to do.

- Verification is an internal process whereas validation is an external process.

- Validation refers to the needs of the users, while verification refers to the correctness of the implementation of the specifications by the software system or application.

- The verification process consists of the following: installation qualification, operational qualification and performance qualification, whereas validation is categorized into:
prospective validation
retrospective validation
full-scale validation
partial-scale validation
cross validation
concurrent validation


- Verification ensures that the software system provides all the required functionality, whereas validation ensures that the functionalities exhibit the intended behavior.

- Verification takes place first and then validation is done. Verification checks documentation, code, plans, specifications and requirements, while validation checks the whole product.

- Input for verification includes issue lists, checklists, inspection meetings and reviews. Input for validation includes the software artifact itself.

- Verification is done by developers of the software product whereas validation is done by the testers and it is done against the requirements.

- Verification is a kind of static testing where the functionalities of a software system are checked for correctness without executing the code; it includes techniques like walkthroughs, reviews and inspections. In contrast to verification, validation is a dynamic kind of testing where the software application is checked through its actual execution (see the sketch after this list).

- Mostly reviews form a part of verification process whereas audits are a major part of validation process.
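
As a rough illustration of this static/dynamic split, here is a small, hypothetical Python sketch (not drawn from this post): the "verification" step inspects the work product without running it, for example checking it against a documentation standard, while the "validation" step actually executes it against a stated requirement.

    # Illustrative sketch: static verification vs. dynamic validation of a tiny work product.
    import ast

    SOURCE = '''
    def add_item(cart, item):
        "Add an item to the shopping cart."
        cart.append(item)
        return cart
    '''

    def verify(source):
        # Static "verification": inspect the artifact without executing it.
        issues = []
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.FunctionDef) and ast.get_docstring(node) is None:
                issues.append(node.name + ": missing docstring (documentation standard)")
        return issues

    def validate(source):
        # Dynamic "validation": execute the artifact against a requirement.
        namespace = {}
        exec(source, namespace)
        # Requirement: adding an item puts exactly that item into the cart.
        return namespace["add_item"]([], "book") == ["book"]

    print("verification issues:", verify(SOURCE))
    print("validation passed:", validate(SOURCE))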



Wednesday, November 23, 2011

What are different methods of verification and validation?

Verification and validation together can be defined as a process of reviewing, testing and inspecting the software artifacts to determine whether the software system meets the expected standards. There are various methodologies for verifying different kinds of data in software applications. The different methods are discussed below:

- File verification
It is used to check the integrity and the level of correctness of a file, and to detect errors in the file (see the checksum sketch after this list).
- CAPTCHA
It is a kind of mechanism used to verify that the user of a website is a human being and not some automated program intended to compromise the security of the system.
- Speech verification
This kind of verification is used to check the correctness of spoken statements and sentences.
- The VERIFY command in DOS.
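
A common way to do file verification is to compare a file's checksum against a previously recorded value. The Python sketch below is a generic illustration; the file path and the stored digest are hypothetical placeholders.

    # Minimal file-verification sketch: compare a file's SHA-256 digest against a
    # previously recorded value to detect corruption or tampering.
    import hashlib
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        digest = hashlib.sha256()
        with path.open("rb") as handle:
            for chunk in iter(lambda: handle.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    artifact = Path("build/release.zip")      # file whose integrity we want to verify
    expected_digest = "d2c8..."               # digest recorded when the file was produced

    if artifact.exists():
        ok = sha256_of(artifact) == expected_digest
        print("file verification:", "passed" if ok else "FAILED (file changed or corrupted)")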

Apart from verification techniques for software applications, there are several other techniques for verification during the development of software. They are discussed below:

- Intelligent verification
This type of verification is used to automatically adapt the test bench to changes in the RTL.
- Formal verification
It is used to verify the correctness of a program's algorithms by means of mathematical techniques.
- Run-time verification
Run-time verification is carried out during execution. It is done to determine whether the program executes properly and within the specified time (see the sketch after this list).
- Software verification
This verification type uses several methodologies for the verification of the software.
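
As a small illustration of the run-time verification idea, the hypothetical Python sketch below monitors each call to a function at run time, checking both a timing deadline and a property of the result; the deadline and the property here are made-up examples, not part of the original post.

    # Minimal run-time verification sketch: a monitor wraps a function and checks,
    # during execution, that it finishes within a deadline and that a stated
    # property of its result holds.
    import time
    from functools import wraps

    def monitor(deadline_s, prop):
        def decorator(func):
            @wraps(func)
            def wrapper(*args, **kwargs):
                start = time.perf_counter()
                result = func(*args, **kwargs)
                elapsed = time.perf_counter() - start
                assert elapsed <= deadline_s, f"{func.__name__} exceeded {deadline_s}s ({elapsed:.3f}s)"
                assert prop(result), f"{func.__name__} violated its run-time property"
                return result
            return wrapper
        return decorator

    @monitor(deadline_s=0.5, prop=lambda xs: xs == sorted(xs))
    def sort_numbers(values):
        return sorted(values)

    print(sort_numbers([3, 1, 2]))   # passes both run-time checks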

There are several other techniques used for verification in circuit development:
- Functional verification
- Physical verification
- Analog verification



Tuesday, March 29, 2011

Formal Technical Review - Fagan's Inspection Method

Fagan's inspection method was introduced by Michael Fagan. Apart from checking program code, it is used to check other work products such as technical documents, model elements, and data and code designs. It follows certain procedural rules that each member should follow:
- The time limit for an inspection meeting is two hours.
- Inspections are led by a trained moderator.
- Inspections are carried out at a number of points in the process of project planning and systems development.
- All classes of defects in documentation and work product are inspected.
- Inspection is carried out by colleagues at all levels of seniority except the big boss.
- Inspectors are assigned specific roles to increase effectiveness.
- Statistics on types of errors are key, and used for reports which are analyzed in a manner similar to financial analysis.

Different activities that are involved in conducting inspections are:
- Planning is very important and in this case the moderator is asked to build up a plan.
- A presentation should be given which provides an overall overview.
- Each inspector is given 1 to 2 hours alone to inspect the work product.
- Meeting should be held in which participants of the meeting are the inspectors, moderator and the developer of the work product.
- The defect list is given for repair.
- Follow up with the repair work.
- A causal analysis meeting is held where inspectors are given a chance to express their personal views on errors and improvements.


What are Formal Technical Reviews (FTR)? What is the aim and guidelines for formal technical reviews?

When tasks are performed in software process, the result is a work product. These results contribute to the development of quality software.
A formal technical review (FTR) is a software quality assurance activity performed by software engineers with the following objectives:
- uncover errors in function, logic or implementation of the software.
- verify that the software meets its requirements.
- ensure that the software has been developed according to the standards.
- achieve uniform software.
- make projects manageable.

Each formal technical review is conducted as a meeting and is considered successful only if it is properly planned, controlled and attended.
A formal technical review also serves as a training ground for junior engineers and promotes backup and continuity.
Constraints of a formal technical review meeting include: involvement of 3-5 people, advance preparation of not more than 2 hours per person, a meeting duration of less than 2 hours, and a focus on a specific part of a software product.

There are few guidelines while conducting formal technical reviews. They are:
- Work product should be reviewed and not the developer.
- Make a practice to write down notes while conducting reviews.
- Agenda should be planned.
- Minimize the debate and discussions.
- Keep the number of participants to a minimum and insist on preparing for the review.
- The defect areas should be pointed out, but no solution should be provided.
- A checklist that is to be reviewed is provided.
- Schedule the reviews as part of the software process and ensure that resources are provided for each reviewer.
- Check the effectiveness of review.


Wednesday, December 15, 2010

Overview of Reporting on response time at various levels of load, Fail-over Tests, Fail-back Testing

REPORTING ON RESPONSE TIME AT VARIOUS LEVELS OF LOAD
Expected output from a load test often includes a series of response time measures at various levels of load. When determining the response time at any particular level of load, it is important that the system has run in a stable manner for a significant amount of time before measurements are taken.
For example, a ramp-up to 500 users may take ten minutes, but another ten minutes may be required to let the system activity stabilize. Taking measurements over the next ten minutes would then give a meaningful result. The next measurement can be taken after ramping up to the next level, waiting a further ten minutes for stabilization, then measuring for ten minutes, and so on for each level of load requiring detailed response time measures. A small sketch of this ramp/stabilize/measure loop follows.
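
The following Python sketch only illustrates the measurement discipline described above; the durations, the ramp_up() stub and perform_transaction() are hypothetical placeholders, not values from this article. At each load level it ramps up, waits for the activity to stabilize, and only then records response times.

    # Sketch of measuring response time at successive load levels:
    # ramp up, stabilize, then measure during a fixed window.
    import time
    import statistics

    def ramp_up(users):
        # Placeholder: bring the given number of virtual users online.
        pass

    def perform_transaction():
        start = time.perf_counter()
        time.sleep(0.02)                  # stand-in for the real request
        return time.perf_counter() - start

    def measure_level(users, stabilize_s=2.0, measure_s=5.0):
        ramp_up(users)                    # ramp up to the next level of load
        time.sleep(stabilize_s)           # let system activity stabilize first
        samples = []
        end = time.time() + measure_s
        while time.time() < end:
            samples.append(perform_transaction())
        print(f"{users} users: mean response {statistics.mean(samples):.3f}s "
              f"over {len(samples)} samples")

    for level in (100, 200, 500):
        measure_level(level)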

FAIL-OVER TESTS
Failover tests verify redundancy mechanisms while the system is under load. This is in contrast to load tests, which are conducted under anticipated load with no component failure during the course of a test. For example, in a web environment, failover testing determines what will happen if multiple web servers are being used under peak anticipated load and one of them dies.
Failover testing allows technicians to address problems in advance, in the comfort of a testing situation, rather than in the heat of a production outage. It also provides a baseline of failover capability, so that a sick server can be shut down with confidence, in the knowledge that the remaining infrastructure will cope with the surge of failover load.

FAIL-BACK TESTING
After verifying that a system can sustain a component outage, it is also important to verify that, when the component is back up, it is available to take load again and that it can sustain the influx of activity when it comes back online. A small simulation sketch of the fail-over/fail-back drill follows.
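
The Python sketch below is a purely illustrative simulation of such a drill (the server names and request counts are made up): requests are spread over a pool of servers, one server is marked down mid-test (fail-over) and later restored (fail-back), and failures are counted in each phase.

    # Fail-over / fail-back drill simulation: route requests over the healthy
    # servers, take one server down, then bring it back and observe the pool.
    import itertools

    servers = {"web1": True, "web2": True, "web3": True}   # hypothetical pool, True = healthy

    def send_request(server):
        # Stand-in for a real request; it fails if the target server is down.
        return servers[server]

    def run_phase(name, requests=300):
        healthy = [s for s, up in servers.items() if up]
        rotation = itertools.cycle(healthy)                # load balancer over healthy servers
        failures = sum(0 if send_request(next(rotation)) else 1 for _ in range(requests))
        print(f"{name}: {requests} requests spread over {healthy}, failures={failures}")

    run_phase("baseline load")
    servers["web2"] = False        # fail-over: one server dies while under load
    run_phase("fail-over (web2 down)")
    servers["web2"] = True         # fail-back: the server rejoins the pool
    run_phase("fail-back (web2 restored)")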


Monday, October 4, 2010

Verification Strategies - Overview of Inspections

Inspections are static analysis techniques that rely on visual examination of development products to detect errors, violations of development standards, and other problems. Types include:
- code inspection
- design inspection
- architectural inspections
- test ware inspections

The participants in inspections include the inspection leader, recorder, reader, author and inspectors. All participants in the review are inspectors. The author should not act as inspection leader, reader or recorder. Other roles may be shared among the team members, and individual participants may act in more than one role. Individuals holding management positions over any member of the inspection team shall not participate in the inspection.

Input criteria include:
- Statement of objectives for the inspection.
- The software product to be inspected.
- Documented inspection procedure.
- Inspection reporting forms.
- Current anomalies or issues list.
- Inspection checklists.
- Any regulations, standards, guidelines, plans, and procedures against which the software product is to be inspected.
- Hardware product specifications.
- Hardware performance data.
- Anomaly categories.
The individuals responsible for the software product may make additional reference material available when requested by the inspection leader.

The purpose of the exit criteria is to bring an unambiguous closure to the inspection meeting. The exit decision shall determine if the software product meets the inspection exit criteria and shall prescribe any appropriate re-work and verification. Specifically, the inspection team shall identify the software product disposition as one of the following:
- Accept with no or minor re-work : The software product is accepted as is or with only minor re-work.
- Accept with re-work verification : The software product is to be accepted after the inspection leader or a designated member of the inspection team verifies re-work.
- Re-inspect : Schedule a re-inspection to verify rework. At a minimum, a re-inspection shall examine the software product areas changed to resolve anomalies identified in the last inspection.


Thursday, September 30, 2010

Verification Strategies - Reviews - Technical Reviews and Requirement Review

Technical reviews confirm that the product conforms to specifications, adheres to regulations, standards, guidelines and plans, that changes are properly implemented, and that changes affect only those system areas identified by the change specification.
The main objectives of technical reviews are as follows:
- Ensure that the software conforms to the organization's standards.
- Ensure that any changes in the development procedures are implemented as per the organization's pre-defined standards.

In technical reviews, the following software products are reviewed:
- Software requirements specification.
- Software design description.
- Software test documentation.
- Software user documentation.
- Installation procedure.
- Release notes.
The participants of the review play the roles of decision-maker, review leader, recorder and technical staff.

Requirement Review: A process or meeting during which the requirements for a system, hardware item or software item are presented to project personnel, managers, users, customers or other interested parties for comment or approval. Types include the system requirements review and the software requirements review. Product management leads the requirement review. Members from every affected department participate in the review.

Input Criteria: The software requirement specification is the essential document for the review. A checklist can be used for the review.
Exit Criteria: This includes the filled-in and completed checklist with the reviewers' comments and suggestions, and re-verification of whether they have been incorporated in the documents.

