
Saturday, July 31, 2010

High-level Best Practice Six(6) in Software Configuration Management

There are six general areas of SCM deployment, and some best practices within each of those areas. The first five areas and their practices have already been discussed.


- Track change packages. Even though each file in a codeline has its own revision history, each revision is only useful in the context of a set of related files. Change packages, not individual file changes, are the visible manifestation of software development. Some SCM systems track change packages for you; if yours doesn't, write an interface that does.
- Tracking change packages also makes it easy to propagate logical changes from one codeline branch to another. However, it's not enough to simply propagate change packages across branches; you must keep track of which change packages have been propagated, which propagations are pending, and which codeline branches are likely donors or recipients of propagations.
- The SCM process should be able to distinguish between "what to do" and "what was done".
- Every process, policy, document, product, component, codeline, branch, and task in your SCM system should have an owner. Owners give life to these entities by representing them; an entity with an owner can grow and mature.
- The policies and procedures you implement should be described in living documents; that is, your process documentation should be as readily available and as subject to update as your managed source code.
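The first practice above suggests writing your own change-package tracking if your SCM system lacks it. A minimal sketch of the idea in Python (all names and revision numbers here are hypothetical, not any particular tool's API):

```python
# Sketch: group related per-file revisions under a single logical change,
# on top of a file-revision-oriented SCM. All names are illustrative.

class ChangePackage:
    """Groups related file revisions under one logical change."""
    def __init__(self, change_id, description, owner):
        self.change_id = change_id
        self.description = description
        self.owner = owner          # every SCM entity should have an owner
        self.file_revisions = {}    # path -> revision number

    def add_revision(self, path, revision):
        self.file_revisions[path] = revision

packages = {}

def submit_change(change_id, description, owner, revisions):
    """Record a set of related file revisions as one change package."""
    pkg = ChangePackage(change_id, description, owner)
    for path, rev in revisions:
        pkg.add_revision(path, rev)
    packages[change_id] = pkg
    return pkg

pkg = submit_change(101, "Fix login timeout", "alice",
                    [("src/auth.c", 7), ("src/auth.h", 3)])
print(len(pkg.file_revisions))  # 2
```

The key design point is that the change ID, not the file revision, becomes the unit you report on, review, and propagate.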

High-level Best Practice Five(5) in Software Configuration Management

There are six general areas of SCM deployment, and some best practices within each of those areas. The first four areas and their practices have already been discussed.


Builds are necessary to construct software from source files. The high-level practices for this area are:
- The only inputs a build needs are the source files and the tools used to build them. There is no place for ad-hoc procedures or yellow stickies. If setup procedures are needed, automate them in scripts; if manual setup is unavoidable, document it in the build instructions.
- If the software cannot be rebuilt from the same inputs, the input list is probably incomplete, so verify the input list by rebuilding from scratch.
- When you are organizing original source files in a directory, you need to ensure that already built objects are kept away and do not contaminate the source files. Built objects (those files that get created during the building process) should be located in a different directory, away from the original source files (which are the files that have been created through a tool such as a code editor, or through notepad, or through code generation tools).
- Developers, test engineers, and release engineers should all use the same, easily available build tools.
- Build often: frequent end-to-end builds with regression testing ("sanity" builds) reveal integration problems introduced by check-ins, and they produce link libraries and other built objects that developers can use.
- Build outputs and logs, including source file versions, tools and OS version info, compiler outputs, intermediate files, built objects, and test results should be kept for future reference.
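The last two practices, verifying the input list and keeping build outputs and logs, can be combined by writing a machine-readable manifest alongside every build. A hedged sketch (the manifest fields are illustrative, not a standard format):

```python
# Sketch: record every build's inputs (source versions, tool versions,
# OS info) and outputs so the build can be audited and reproduced later.
import json
import platform
import sys

def build_manifest(source_versions, tool_versions, outputs, test_results):
    """Collect everything needed to reproduce or audit this build."""
    return {
        "os": platform.platform(),          # OS version info
        "python": sys.version.split()[0],   # example tool version
        "sources": source_versions,         # e.g. {"main.c": "rev 12"}
        "tools": tool_versions,             # e.g. {"gcc": "9.4.0"}
        "outputs": outputs,                 # built objects
        "tests": test_results,              # regression ("sanity") results
    }

manifest = build_manifest(
    {"main.c": "rev 12", "util.c": "rev 4"},
    {"gcc": "9.4.0"},
    ["build/app"],
    {"sanity": "pass"},
)
print(json.dumps(manifest, indent=2))
```

If a clean rebuild from only the listed sources and tools fails, the manifest's input list was incomplete, which is exactly the check the second practice asks for.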

Friday, July 30, 2010

High-level Best Practice Four(4) in Software Configuration Management

There are six general areas of SCM deployment, and some best practices within each of those areas. The first, second, and third areas and their practices have already been discussed.

Change Propagation

Propagating file changes across branches needs to be managed. The practices for this area are:
- Do not delay propagating a change. When it is feasible to propagate a change from one branch to another (that is, when the change would not violate the target branch's policy), do it sooner rather than later.
- It is much easier to merge a change from a file that is close to the common ancestor than from a file that has diverged considerably, because the change in the diverged file may be built upon changes that are not being propagated, and those unwanted changes can confound the merge process.
- Changes can be propagated by the owner of the target files, the person who made the original changes, or someone else.
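The bookkeeping described above, which changes have been propagated, which are pending, and for which target branches, fits a simple status table. A hedged sketch (change IDs and branch names are hypothetical):

```python
# Sketch: track which change packages have been propagated to which
# branches, so pending merges stay visible instead of being forgotten.

propagations = {}   # (change_id, target_branch) -> "pending" | "done"

def request_propagation(change_id, target_branch):
    """Record that a change still needs to be merged into a branch."""
    propagations[(change_id, target_branch)] = "pending"

def mark_propagated(change_id, target_branch):
    """Record that the merge has been completed."""
    propagations[(change_id, target_branch)] = "done"

def pending_for(target_branch):
    """Changes still waiting to be merged into the given branch."""
    return [cid for (cid, br), state in propagations.items()
            if br == target_branch and state == "pending"]

request_propagation(101, "release-1.0")
request_propagation(102, "release-1.0")
mark_propagated(101, "release-1.0")
print(pending_for("release-1.0"))  # [102]
```

Reviewing `pending_for` on each likely recipient branch is what keeps propagations happening "sooner rather than later".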

Wednesday, July 28, 2010

High-level Best Practice Three(3) in Software Configuration Management

There are six general areas of SCM deployment, and some best practices within each of those areas. The first and second areas and their practices have already been discussed.

In Software Configuration Management (SCM) systems, branching allows development to proceed simultaneously along more than one path while maintaining the relationships between the different paths. A branching strategy consists of the guidelines within an environment for the creation and application of codeline policies. There are different tools that support branching. Creating a branching strategy consists of:
- identifying the categories of development that can be easily characterized,
- defining the differences and similarities between them,
- defining how they relate to each other, and
- expressing all of this information as codeline policies and branches.

The high-level practices associated with branching are:
- Branch only when necessary.
- Don’t copy when you mean to branch.
- Branch on incompatible policy.
- To minimize the number of changes that need to be propagated from one branch to another, put off creating a branch as long as possible.
- Branch instead of freeze.
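The "branch on incompatible policy" rule can be made concrete: before accepting a check-in, compare it against the codeline's policy, and create a branch only when the work genuinely violates that policy. A hedged sketch (policies and change kinds are illustrative):

```python
# Sketch: represent codeline policies and decide whether a proposed
# check-in fits the current codeline or calls for a new branch.

policies = {
    "main": {"allows": {"feature", "bugfix"}},       # active development
    "release-1.0": {"allows": {"approved-bugfix"}},  # stabilization only
}

def needs_branch(codeline, change_kind):
    """True if the change violates the codeline policy: branch instead
    of checking in (or of freezing the codeline)."""
    return change_kind not in policies[codeline]["allows"]

print(needs_branch("main", "feature"))         # False: check in here
print(needs_branch("release-1.0", "feature"))  # True: branch instead
```

This also captures "branch instead of freeze": rather than forbidding all check-ins to stabilize a release, the incompatible work moves to a branch whose policy allows it.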

Tuesday, July 27, 2010

High-level Best Practice Two(2) in Software Configuration Management

There are six general areas of SCM deployment, and some best practices within each of those areas. The first area and its practices are already discussed.

The Codeline
A codeline is the set of source files required to develop a software product. Codelines are branched, and the branches evolve into variant codelines embodying different releases.
The best practices are :
- Give each codeline a policy : the policy is the essential user's manual for codeline SCM. For example, a development codeline's policy should state that it is not for release, while a release codeline's policy should limit check-ins to approved bug fixes. The policy specifies what may be checked in to the codeline.
- Give each codeline an owner : even after a policy is defined for a codeline, there will be situations where the policy is inapplicable or ambiguous. When developers face these ambiguities, they turn to the person in charge of the codeline. A codeline must have an owner, because without one, developers will invent their own workarounds and leave them undocumented.
- Have a mainline : A mainline is the branch of a codeline that evolves forever. A mainline provides an ultimate destination for almost all changes – both maintenance fixes and new features – and represents the primary, linear evolution of a software product.

Monday, July 26, 2010

High-level Best Practice One(1) in Software Configuration Management

Software configuration management (SCM) is a set of activities that are designed to control change by identifying the work products that are likely to change, establishing relationships among them, defining mechanisms for managing different versions of these work products, controlling changes that are imposed, and auditing and reporting on the changes that are made.
Software Configuration Management Best Practices are the techniques, policies and procedures for ensuring the integrity, reliability and reproducibility of developing software products.
When implementing SCM tools and processes, you must define what practices and policies to employ to avoid common configuration problems and maximize team productivity. There are six areas of SCM deployment, and some coarse-grained best practices within each of those areas.
- Workspaces: a workspace is the area where engineers edit source files, build the software components they're working on, and test and debug what they have built. The best practices for workspaces include:
· Don't share workspaces.
· Don't work outside of managed workspaces.
· Don't use jello views: a file in your workspace should not change unless you explicitly cause the change. A "jello view" is a workspace where file changes are caused by external events beyond your control.
· Stay in sync with the codeline.
· Check in often: integrating your development work with other people's work requires you to check in your changes as soon as they are ready.

Saturday, July 24, 2010

Load Test Process and why to use JMeter?

Why use JMeter?

Easy to install and use:
- Free!
- Java-based; runs on most platforms.
- GUI and command-line modes.
- Just download and run.
Feature-rich:
- Post forms.
- Record from browser.
- Load test data from files.
- Add logic, variables & functions.
- Run one test from multiple machines.
- Test many protocols, not just HTTP.

Load Test Process

- System Analysis : convert the customer's goals and requirements into a successful test script. JMeter's carefully designed UI lets you create your own test scripts easily and consistently.
- Creating Virtual User Scripts : the tester must emulate real users by driving the real application as a client. JMeter supports this through the thread group element, which specifies the number of users to simulate, how often they send requests, how many requests they send, and what kind of requests (FTP, HTTP, or JDBC); you can then validate that your application returns the results you expect.
- Defining User Behavior : JMeter lets you simulate human actions more closely by controlling how long the JMeter engine delays between each sample, so you can define the way the script runs.
- Creating a Load Test Scenario : JMeter supports this step by assigning scripts to individual virtual users. The tester can define any number of virtual users, simulate concurrent connections to the server application, create multiple threads (virtual users) executing different test plans, and increase the number of virtual users in a controlled fashion.
- Running Load Test Scenario : With JMeter you can run your tests in a very easy way.
- Analyzing Results : JMeter supports this step by:
. displaying the data visually (Graph Results),
. saving data to a file,
. letting the user see one or more views of the data,
. displaying the response from the server,
. showing the URL of each sample taken,
. providing listeners that show different sets of data, and
. sending email based on test results.
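The thread-group idea from the steps above, N virtual users each sending a configured number of requests with a delay between samples, can be sketched in plain Python. This is only an illustration of the concept, not JMeter's actual implementation, and the "request" is a stand-in function rather than a real HTTP call:

```python
# Sketch of JMeter's thread-group concept: 5 virtual users, each
# sending 3 requests and recording status and response time.
import random
import threading
import time

results = []
lock = threading.Lock()

def fake_request():
    """Stand-in for an HTTP/FTP/JDBC sampler."""
    time.sleep(random.uniform(0.001, 0.005))
    return 200

def virtual_user(user_id, num_requests, think_time=0.001):
    for _ in range(num_requests):
        start = time.time()
        status = fake_request()
        elapsed = time.time() - start
        with lock:                      # listeners share one result log
            results.append((user_id, status, elapsed))
        time.sleep(think_time)          # delay between samples

threads = [threading.Thread(target=virtual_user, args=(uid, 3))
           for uid in range(5)]         # the "thread group": 5 users
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results))  # 15 samples collected
```

Ramping the user count up in a controlled fashion, as the scenario step describes, would simply mean starting these threads gradually rather than all at once.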

Friday, July 23, 2010

Apache JMeter : Important Features and Capabilities

Due to the immense popularity of web-based applications, testing these applications has gained a lot of importance. Performance testing of applications such as e-commerce and web services is of paramount importance, as multiple users access the service simultaneously.
- The Apache JMeter is an open source testing tool used to test the performance of the application when it is under heavy load.
- It puts heavy load on the server, tests the performance and analyzes the results when many users access the application simultaneously.
- Apache JMeter is a Java application designed to load test applications.
- JMeter is one of the Java tools used for load testing client/server applications.
- An important capability of JMeter is that it can simulate a heavy load not only on a server but also on a network or object, to test its strength under different load types.
- JMeter can be used as a unit test tool for JDBC database connections, FTP, LDAP, Web services, JMS, HTTP and generic TCP connections.
- JMeter can also be configured as a monitor, although this is typically considered an ad-hoc solution in lieu of advanced monitoring solutions.

Thursday, July 22, 2010

Drawbacks of Manual Performance Testing, and how does LoadRunner work?

Working of LoadRunner

- The Controller is the central console from which the load test is managed and monitored.
- Thousands of virtual users perform real-life transactions on a system to emulate traffic.
- Real time monitors capture performance data across all tiers, servers and network resources and display information on the Controller.
- Results are stored in the database repository, allowing users to generate the reports and perform analysis.

The LoadRunner automated solution addresses the drawbacks of manual performance testing:
• LoadRunner reduces the personnel requirements by replacing human users with virtual users or Vusers. These Vusers emulate the behavior of real users operating real applications.
• Because numerous Vusers can run on a single computer, LoadRunner reduces the hardware requirements.
• The LoadRunner Controller allows you to easily and effectively control all the Vusers from a single point of control.
• LoadRunner monitors the application performance online, enabling you to fine-tune your system during test execution.
• LoadRunner automatically records the performance of the application during a test.
• LoadRunner checks where performance delays occur: network or client delays, CPU performance, I/O delays, database locking, or other issues at the database server.
• LoadRunner monitors the network and server resources to help you improve performance.

Wednesday, July 21, 2010

Test Scripting Language : 4Test used in SilkTest

SilkTest is Segue Software’s offering for automated functional testing. It supports the testing of client-server and browser-based applications, as well as standalone programs.
- In SilkTest, users modify automated test scripts using a proprietary, object-based programming language called 4Test.
- 4Test is an object-oriented 4GL that offers enough built-in functionality to simplify test development.
- SilkTest scripts can also directly access databases to execute SQL commands and perform verification of database content.

There are basically four classes to which the methods belong:
- AgentClass
- AnyWin
- ClipboardClass
- CursorClass
Important Methods in test scripts are :
- Accept (): closes the dialog box and accepts values specified there.
- ClearText (): removes all text from text field.
- Click (): Clicks a mouse button on the push button.
- Close (): Closes the window.
- CloseWindows (): Closes all windows of the application except the main window.
- Exit (): Terminates the application.
- GetActive (): Returns the active window in the application.
- GetCloseWindows (): Returns the windows that must close to return the application to base state.
- GetFocus (): Returns the control with the input focus.
- Invoke (): Invokes the application.
- SetActive (): Makes the window active.
- Wait (): Waits for the specified cursor and returns the value of elapsed time.
- VerifyActive (): Verifies that the window is active.
- Start (): Invokes the application and waits for the main window to appear.
- Size (): Resizes the window.

Tuesday, July 20, 2010

Parts and General Syntax Rules of Test Script Language (TSL)

TSL stands for Test Script Language. It was created by Mercury Interactive.

Syntax Rules of TSL

- Semi‐colons mark the end of a simple statement.
- Statements that contain other statements are called compound statements. Compound statements use curly braces and not semi‐colons.
- TSL is a case sensitive language. You need to be extra careful with the case of the statements. Most of the identifiers, operators and functions within the language utilize lower case so use lower case when you are not sure.

What Constitutes TSL?

Comments :
Use a # symbol to write a comment. Statements written as comments are ignored by the interpreter. Comments provide the reader of the test script with useful information about the test. TSL does not have multi-line comments.
Naming Rules :
Every language has rules that define what names are acceptable in the language and which names are not. Naming rules of TSL are :
- Must begin with a letter or the underscore.
- Cannot contain special characters except the underscore.
- Cannot match a reserved word.
- Must be unique within a context.
- Case Sensitive.
Data Types :
When a script is executing, different kinds of data get stored. TSL simply uses the context in which the data is accessed to determine its type. There are two data types in TSL:
- String : stores alphanumeric data.
- Number : Stores numeric data.
Data Storage :
This refers to how information is stored during test execution. TSL uses three different types of storage vehicles. These include :
- Variables : for dynamic storage.
- Constants : for fixed values.
- Arrays : for sets.
Variables :
Variables temporarily store information that will be needed later. In TSL, variable declaration is optional, so you can create a variable simply by using it in your code. The only exception to this is in functions.
Constants :
Constants are very similar to variables because they store values also. The major difference between these two is that the value of a constant is fixed and cannot be changed during code execution.
Arrays :
Arrays are used to store several pieces of related data such as names. TSL provides a better mechanism for use when dealing with a collection of similar items. This concept, known as arrays, allows you to store several values of related information into the same variable.
Operations :
Operations are the many forms of computations, comparisons etc that can be performed in TSL. The symbols used in an operation are known as operators and the values involved in these operations are known as operands.
Branching :
Branching statements allow your code to avoid executing certain portions of code unless a specified condition is met. TSL uses two kinds of branching statements i.e. the if statement and the switch statement.
Loops :
Loops provide the ability to repeat execution of certain statements. The statements in a loop body are executed repeatedly for as long as the loop condition holds. TSL supports several loop constructs: the while, do..while, and for loops.
Functions :
Functions are blocks of code containing one or more TSL statements. These code blocks are very useful in performing activities that need to be executed from several different locations in your code. TSL provides many built in functions as well as provides you the ability to create your own user‐defined functions.

Monday, July 19, 2010

Some reasons why test automation projects fail, and some precautions to take to reduce these chances

One of the most important reasons why test automation projects fail is that not enough planning goes into them. In many organizations, people talk about automation and decide that bringing in automated software testing will solve things for them. A prime cause of failure is the drive towards test automation for its benefits, without keeping in mind that the organization also needs to plan carefully for it.
So, for example, when a company brings in software automation, it would need to do the following steps, else, it will face problems during the implementation, and could actually reach a situation where the organization considers that the project is a failure.
- Plan the required resources that you need for implementation of software automation, which can be different from the profile needed for testing purposes
- In the initial stages, the amount of effort needed for software testing automation is higher, since the company needs to continue its regular manual testing while also building up the effort needed to automate its test cases and test plans
- In many cases, there is a piecemeal implementation, in terms of doing conversion of cases one by one to become automated, but a comprehensive framework is not employed for this purpose. Without this level of framework implementation, as the amount of automation increases, the maintenance of such test cases becomes more complicated and intensive
- In many cases, the organizations have not really thought through the needs for implementation in terms of creating a new sub-structure within the testing team for automation, and this creates strains since the black box testing and automation teams have different needs
- For doing automation, there is a need to even modify existing testing processes (including the plan for creating test plans and test cases), so that creating automation becomes an intrinsic part of the testing process
- Set expectations of the executives with respect to the time frames and effort needed for automation; I know a case where the senior manager wanted the implementation of an automation project without increases in resources and without existing testing getting impacted. This is an impossible task.

Sunday, July 18, 2010

Test Script Languages (TSL)

Test Script Language (TSL) is a scripting language with syntax similar to the C language. TSL is the script language used by WinRunner for recording and executing scripts.

Features Of TSL :
- The TSL language is very compact, containing only a small number of operators and keywords.
- TSL is a script language and, as such, does not have many of the complex syntax structures you may find in programming languages.
- On the other hand, as a script language TSL has considerably fewer features and capabilities than a full programming language.
- Comments : Allows users to enter human readable information in the test script that will be ignored by the interpreter.
- Naming Rules : Rules for identifiers within TSL.
- Data Types : The different types of data values that are supported by the language.
- Data Storage : Constructs that can be used to store information.
- Operations : Different type of computational operations.
- Branching : Avoids executing certain portions of the code unless a condition is met.
- Loops : Repeats execution of a certain section of code.
- Functions : Group of statements used to perform some useful functionality.

There are four categories of TSL functions. Each category of functions is for performing specific tasks. These categories are as follows:
- Analog Functions
These functions are used when you record in Analog mode, a mode that records the exact screen coordinates traveled by the mouse. When you record in Analog mode, these functions depict mouse clicks, keyboard input, and the exact coordinates traveled by the mouse. The various analog functions available are Bitmap Checkpoint Functions, Input Device Functions, Synchronization Functions, Table Functions, and Text Checkpoint Functions.
- Context Sensitive Functions
These functions are used where exact coordinates are not required. In Context Sensitive mode, each time you record an operation on the application under test (AUT), a TSL statement describing the object selected and the action performed is generated in the test script. The various context-sensitive functions include Active Bar Functions, ActiveX/Visual Basic Functions, Bitmap Checkpoint Functions, Button Object Functions, Calendar Functions, Database Functions, Data-driven Test Functions, GUI-related Functions, etc.
- Customization Functions
These functions allow the user to extend the testing tool by adding functions to the Function Generator. The various customization functions are custom record functions, custom user interface functions, function generator functions and GUI checkpoint functions.
- Standard Functions
These functions include all the basic elements of programming language like control flow statements, mathematical functions, string related functions etc. The various standard functions are arithmetic functions, array functions, call statements, compiled module functions, I/O functions, load testing functions, operating system functions, etc.

Saturday, July 17, 2010

What can be white box testing used for, tools used for white box testing.

White box testing (WBT) is also called Structural or Glass box testing. It deals with the internal logic and structure of the code. A software engineer can design test cases that exercise independent paths within a module or unit, exercise logical decisions on both their true and false side, execute loops at their boundaries and within their operational bounds and exercise internal data structures to ensure their validity. White Box testing can be used for :
- looking into the internal structures of a program.
- test the detailed design specifications prior to writing actual code using the static analysis techniques.
- organizing unit and integration test processes.
- testing the program source code using static analysis and dynamic analysis techniques.
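The core white-box idea above, exercising logical decisions on both their true and false sides and loops at their boundaries, can be shown with a tiny unit and tests derived from its structure. A minimal illustration (the function under test is invented for the example):

```python
# White-box example: test cases are chosen from the code's internal
# structure (its decisions and boundaries), not just its specification.

def classify(age):
    if age < 0:          # decision 1
        return "invalid"
    if age < 18:         # decision 2
        return "minor"
    return "adult"

# Exercise both sides of each decision, plus the boundary:
assert classify(-1) == "invalid"   # decision 1 true
assert classify(10) == "minor"     # decision 1 false, decision 2 true
assert classify(30) == "adult"     # decision 2 false
assert classify(17) == "minor"     # boundary just below 18
assert classify(18) == "adult"     # boundary at 18
print("all branch and boundary cases pass")
```

Black-box testing from the specification alone might miss one side of a decision; designing cases from the source guarantees every branch is executed at least once.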

Tools used for White Box testing:
- Provide run-time error and memory leak detection.
- Record the exact amount of time the application spends in any given block of code for the purpose of finding inefficient code bottlenecks.
- Pinpoint areas of the application that have and have not been executed.

The first step in white box testing is to comprehend and analyze the available design documentation, source code, and other relevant development artifacts, so knowing what makes software secure is a fundamental requirement. Second, to create tests that exploit the software, a tester must think like an attacker. Third, the tester must turn this understanding into tests that exercise the code effectively.

Friday, July 16, 2010

QuickTest Professional : Phases of QuickTest Testing Process

Automated testing with QuickTest addresses the drawbacks of manual testing by dramatically speeding up the testing process. The QuickTest testing process consists of 7 main phases:
Phase 1: Preparing to record
Before you record a test, confirm that your application and QuickTest are set to match the needs of your test. The application should display the elements that you want to record.
Phase 2: Recording a session on your application
QuickTest graphically displays each step you perform as a row in the Keyword View as you navigate through the application.
Phase 3: Enhancing your test
- To check whether your application is working correctly, insert checkpoints that verify a specific value of a page, object, or text string.
- Broaden the scope of your test by replacing fixed values with parameters; this lets you check how your application performs the same operations with multiple sets of data.
- Adding logic and conditional or loop statements enables you to add sophisticated checks to your test.
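The checkpoint and parameterization ideas in Phase 3 can be sketched generically. This is plain Python illustrating the pattern, not QTP's actual VBScript API, and the application function and data values are invented for the example:

```python
# Sketch of test enhancement: a checkpoint verifies one expected value,
# and parameterization re-runs the same steps with multiple data sets.

def checkpoint(actual, expected, name):
    """Like a checkpoint: record pass/fail for one expected value."""
    return (name, actual == expected)

def login_page_title(username):
    """Stand-in for the application under test."""
    return "Welcome, %s" % username

data_sets = ["alice", "bob"]        # parameterized input data
report = [checkpoint(login_page_title(u), "Welcome, %s" % u, u)
          for u in data_sets]

print(report)  # [('alice', True), ('bob', True)]
```

Replacing a fixed value with the `data_sets` parameter is exactly what turns one recorded run into a data-driven test.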
Phase 4: Debugging your test
Test is debugged to ensure that it is operating smoothly.
Phase 5: Running your test
Test is run to check the behavior of the application or website. QuickTest opens the application, or connects to the Web site, and performs each step in your test.
Phase 6: Analyzing the test results
Test results are examined to pinpoint defects in your application.
Phase 7: Reporting defects
Defects discovered can be reported to a database if Quality Center is installed.

Thursday, July 15, 2010

Overview of QuickTest Professional and its features

Mercury Interactive's QuickTest Professional (QTP) is a very sophisticated testing tool for carrying out functional/regression testing of a variety of applications, and it is very easy to learn and use. Learning QTP becomes easy if you already know WinRunner: QTP is much more powerful, and one can migrate to it very easily.
It is used to generate various test cases and run them automatically. Its important features are :
- It has the record/replay provision to record the user interactions with the application software. You can record your keyboard entries and mouse clicks of the application GUI. QTP automatically generates the test script. Test script can be run repeatedly for regression testing of your application.
- This testing tool has a recovery manager: if the application halts due to an error, QTP automatically recovers, which is very useful for unattended testing.
- QTP provides checkpoints option.
- It uses VB script as the scripting language and its syntax is very similar to Visual Basic, hence learning this scripting language is very easy.
- It provides a facility for synchronization of test cases.
- Its auto-documentation feature creates test documentation automatically.
- Test report data is stored in documented XML format. This facilitates transferring the report data to another third party tool or into HTML web page.
- It supports Unicode and hence you can test applications written for any of the world languages.
- Using special add-in modules, QTP can be used for testing a variety of applications, such as:

- ERP/CRM packages such as SAP, Siebel, PeopleSoft, Oracle.
- .NET WebForms, WinForms, and .NET Controls.
- Web service applications and protocols including XML, SOAP, WSDL, J2EE and .NET.
- Multimedia applications such as RealAudio/Video and Flash.

Wednesday, July 14, 2010

The four phases of testing management process in Test Director - Part II

TestDirector offers an organized framework for testing applications before they are deployed. While using TestDirector, the testing management process can be defined using the following four steps :

- Specifying the test requirements.
- Planning the tests.
- Running the tests in manual or automatic mode.
- Analyzing the defects.

Running the tests in manual or automatic mode

This phase is the most crucial phase of testing process. A test set is a group of tests in a TestDirector project designed to achieve specific testing goals. TestDirector enables you to control the execution of tests in a test set by setting conditions and scheduling the date and time for executing your tests. After you define test sets, you can begin to execute your tests. When you run a test automatically, TestDirector opens the selected testing tool, runs the test, and exports the test results to TestDirector. It includes defining test sets, adding tests to a test set, scheduling test runs, running tests manually, running tests automatically.

Analyzing the defects

Locating and repairing defects is an essential phase in application development. When a defect is submitted, it is tracked through the new, open, fixed, and closed stages. A defect may also be rejected, or reopened after it is fixed. It includes :
- How to Track Defects
- Adding New Defects
- Matching Defects
- Updating Defects
- Mailing Defects
- Associating Defects with Tests
- Creating Favorite Views

Tuesday, July 13, 2010

Product Cycle: Setting a milestone to freeze all the UI elements of your product

In a typical Product Development model, the focus is on getting the newer features into the product. So, in the normal model, you have a list of new features that is decided in collaboration with Product Management and these are implemented over a period of time. The timeframe for this implementation can vary depending on the type of product development model being followed such as Scrum, Iterative, Waterfall, etc.
However, whatever be the cycle you may be following, it is critically important to have a specific milestone in your entire schedule where you no longer make a change to the UI components of your product. This could mean the layout of dialogs, the artwork such as icons and others used in your application, the error messages and text on dialogs, and any other such UI elements.
And why do you need such a milestone in your schedule? Aren't you limiting the amount of change you can do if there is a time after which you can no longer make changes? Well, there are certain dependencies that most products have. For a product being sold internationally, there is a need to ensure that it is translated into the languages in which it is being sold, and this translation and testing of the different language versions takes time. If you keep changing elements of your UI, the process of producing the international versions becomes very inefficient and costs increase. The simpler solution is to freeze your UI elements at some point, which leaves enough time for translation and also gives product QE a final set of specifications and design to test against.

Saturday, July 10, 2010

The four phases of testing management process in Test Director - Part I

TestDirector offers an organized framework for testing applications before they are deployed. Using TestDirector, the testing management process can be defined in the following four steps:

- Specifying the test requirements.
- Planning the tests.
- Running the tests in manual or automated mode.
- Analyzing the defects.

Specifying the test requirements

The testing process starts by specifying the testing requirements in TestDirector's Requirements module. This provides the test team with the foundation on which the entire testing process is based. A requirements tree is created to define the requirements: a graphical representation of your requirements specification, displaying your requirements hierarchically. After you create tests in the Test Plan module, you can link requirements to tests. Your testing needs can then be tracked at all stages of the testing process, which includes defining, viewing, modifying, and converting requirements.

Planning the tests

Testing goals can be determined after the requirements are defined. A test plan tree is then built to divide the application into testing units, or subjects, and tests are defined for each subject. For each test step, the actions to perform on the application are specified. Links can be added to keep track of the relationship between your tests and your requirements. After you design your tests, you can decide which to automate and generate test scripts for them. This phase includes developing a test plan tree, designing test steps, copying test steps, calling tests with parameters, creating and viewing requirements coverage, and generating automated test scripts.
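The test plan tree described above is just a hierarchy of subjects, each holding tests that link back to requirement IDs. As a minimal illustration (the subject and requirement names here are invented, and this is not TestDirector's schema):

```python
# A sketch of a test plan tree: subjects contain tests and nested
# sub-subjects, and each test links back to requirement IDs.
# All names here are hypothetical examples.
test_plan = {
    "Login": {
        "tests": [{"name": "valid_login", "requirements": ["REQ-1"]}],
        "subjects": {},
    },
    "Orders": {
        "tests": [{"name": "create_order", "requirements": ["REQ-2", "REQ-3"]}],
        "subjects": {
            "Checkout": {
                "tests": [{"name": "pay_by_card", "requirements": ["REQ-3"]}],
                "subjects": {},
            },
        },
    },
}

def covered_requirements(tree):
    """Collect every requirement ID linked from any test in the tree."""
    reqs = set()
    for subject in tree.values():
        for test in subject["tests"]:
            reqs.update(test["requirements"])
        reqs.update(covered_requirements(subject["subjects"]))
    return reqs
```

Walking the tree like this is how "requirements coverage" views are produced: any requirement not in the collected set is untested.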

What are the four phases of TestDirector ?

TestDirector use can be divided into four phases:
- Test Requirements Management
Requirements Manager is used to link the requirements with the tests to be carried out. Each requirement in the SRS has to be tested at least once. In SRS, the functional and performance requirements are specified. Functional requirements are generated from use-case scenarios. Performance requirements are dependent on the application.
- Test Planning
In test planning, the QA manager does a detailed planning and addresses the following issues:
- Hardware and software platforms.
- Various tests to be formed.
- Time schedule to conduct the tests.
- Roles and responsibility of the persons associated with the project.
- Procedure for running the test.
- Various test cases to be generated.
- Procedure for tracking the progress of testing.
- Documents to be generated during testing process.
- Criteria for completion of testing.
Test design is done during the test planning phase; it involves defining the sequence of steps to execute a test in manual testing. The test plan is communicated to all the test engineers and also to the development team.
- Test Execution
The actual testing is carried out based on the test cases generated, either manually or automatically. In the case of automated testing, test scheduling is done as per the plan. A history of all test runs is maintained, and an audit trail, to trace the history of tests and test runs, is also kept. Test sets, i.e., sets of test cases, are created. In addition, execution logic is set; the logic specifies what to do when a test fails in a series of tests.
- Test Results Analysis
In this phase, test results are analyzed, i.e., which tests passed and which failed. For the tests that failed, an analysis is carried out as to why they failed. Bugs are classified by severity. A simple classification is critical, major, minor. A more detailed classification is:
- Cosmetic or GUI related.
- Inconsistent performance of application.
- Loss of functionality.
- System crash.
- Loss of data.
- Security Violation.
The bug report is stored in a database. The privileges to read, write, and update the database need to be decided by the QA manager. Based on the bug tracking and analysis tools, the QA manager and the project manager can decide whether the software can be released to the customer or whether more testing is required.
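The execution logic mentioned in the Test Execution phase, deciding what happens when a test in a set fails, can be sketched in a few lines. This is an assumed, simplified runner with two invented on-fail rules ("continue" and "stop"), not TestDirector's actual behavior:

```python
# Illustrative run of a test set with per-test on-fail rules:
# "continue" moves on to the next test, "stop" aborts the rest.
# The rule names and runner are assumptions for this sketch.
def run_test_set(tests):
    """tests: list of (name, passed, on_fail) tuples. Returns the run log."""
    log = []
    for name, passed, on_fail in tests:
        log.append((name, "passed" if passed else "failed"))
        if not passed and on_fail == "stop":
            break  # abort remaining tests in the set
    return log
```

Real tools offer richer rules (rerun, skip dependents, and so on), but the principle is the same: each test carries a policy that the scheduler consults on failure.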

Friday, July 9, 2010

Test Director : Important Features and Capabilities of test management tools

To deliver a quality product, the testing process has to be managed efficiently, and testing management tools are very useful here. Mercury Interactive's TestDirector is an excellent tool for managing the testing process. Being web based, it is easy to use: even if the development team and the testing team are located in different places, the testing process can be managed effectively.

Features of TestDirector
- A web-based tool and hence it facilitates distributed testing.
- Since testing is driven by the software's requirements, it provides the feature of linking the software requirements to the testing plan.
- It provides the features to document the testing procedures.
- It provides the feature of scheduling manual and automated tests, so testing can be done at night or when system load is low.
- Keeps the history of all test runs.
- The audit trail feature allows keeping track of changes in the tests and test runs.
- It provides the feature of creating different users with different privileges.
- It keeps a log of all defects found and the status of each bug can be changed by authorized persons only.
- It provides the features of setting groups of machines to carry out testing.
- It generates test reports and analysis for the QA manager to decide when the software can be released to market.

Thursday, July 8, 2010

What are different terminologies used in LoadRunner ?

Application performance testing requirements are divided into scenarios using LoadRunner.
- A scenario defines the events that occur during each testing session.
- A scenario defines and controls the number of users to emulate, the actions that they perform, and the machines on which they run their emulations.

LoadRunner works by creating virtual users (Vusers) who take the place of real users operating client software. Vusers emulate the actions of human users working with your application. A scenario can contain tens, hundreds, or even thousands of Vusers.

Vuser Scripts
The actions that a Vuser performs during the scenario are described in a
Vuser script. When you run a scenario, each Vuser executes a Vuser script. Vuser scripts include functions that measure and record the performance of the server during the scenario.

Transactions
Transactions are defined to measure the performance of the server. Transactions measure the time that it takes for the server to respond to tasks submitted by Vusers.
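A transaction is just a named bracket around a task whose elapsed time is recorded. As a rough Python analogue of the idea (this is a sketch, not LoadRunner's API):

```python
import time
from contextlib import contextmanager

# A Python analogue of a transaction: bracket a server task and record
# its elapsed time under a name. Illustrative only.
timings = {}

@contextmanager
def transaction(name):
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = time.perf_counter() - start
```

Usage would look like `with transaction("login"): call_the_server()`, after which `timings["login"]` holds the response time for that task.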

Rendezvous Points
Rendezvous points are inserted into Vuser scripts to emulate heavy user load on the server. Rendezvous points instruct multiple Vusers to perform tasks at exactly the same time.
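A rendezvous point is essentially a barrier: each Vuser blocks at it until all have arrived, and then all proceed together, producing a load spike. Python's `threading.Barrier` models the same idea (an illustrative analogue, not LoadRunner's implementation):

```python
import threading

# A rendezvous point holds each virtual user until all have arrived,
# then releases them simultaneously - threading.Barrier models this.
NUM_VUSERS = 5
rendezvous = threading.Barrier(NUM_VUSERS)
results = []

def vuser(vuser_id):
    # ... per-user setup would happen here ...
    rendezvous.wait()          # block until all NUM_VUSERS arrive
    results.append(vuser_id)   # all users hit the server "at once"

threads = [threading.Thread(target=vuser, args=(i,)) for i in range(NUM_VUSERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Without the barrier, user arrivals would be staggered by their individual setup times; with it, the server sees the concurrent burst the test is meant to measure.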

LoadRunner Controller is used to manage and maintain your scenarios. Using the Controller, you control all the Vusers in a scenario from a single workstation.

The LoadRunner Controller distributes each Vuser in the scenario to a host when the scenario is executed. The host is the machine that executes the Vuser script, enabling the Vuser to emulate the actions of a human user.

Performance Analysis
Vuser scripts include functions that measure and record system performance during load-testing sessions. During a scenario run, you can monitor the network and server resources. Following a scenario run, you can view performance analysis data in reports and graphs.

Wednesday, July 7, 2010

Overview of Mercury Interactive's LoadRunner and its Components

To test the performance requirements such as transaction response time of a database application or response time in the case of multiple users accessing a web site, LoadRunner is an excellent tool that reduces the infrastructure and manpower costs.
Mercury Interactive's LoadRunner is used to test the client/server applications such as databases and websites. Using LoadRunner, with minimal infrastructure and manpower, performance testing can be carried out.
- LoadRunner simulates multiple transactions from the same machine, creating a scenario of multiple simultaneous accesses to the application. So, instead of real users, virtual users are simulated. With virtual users simultaneously accessing the application, LoadRunner accurately measures and analyzes the performance of the client/server application.
In LoadRunner, we divide the performance testing requirements into various scenarios. A scenario is a series of actions that are to be tested. LoadRunner creates virtual users (Vusers) that submit requests to the server. A Vuser script is generated, and this script is executed to simulate multiple users.

LoadRunner Components

LoadRunner contains the following components:
- The Virtual User Generator captures end-user business processes and creates
an automated performance testing script, also known as a virtual user script.
- The Controller organizes, drives, manages, and monitors the load test.
- The Load Generators create the load by running virtual users.
- The Analysis helps you view, dissect, and compare the performance results.
- The Launcher provides a single point of access for all of the LoadRunner components.

Tuesday, July 6, 2010

Overview of SQA Robot and its features.

IBM Rational SQA Robot is a powerful functional/regression testing tool. SQA Robot is part of a test suite that contains:
- SQA Manager, to manage testing processes.
- SQA Robot, for functional/regression testing of applications written in VB, Delphi, C++, Java, etc., ERP packages, and applications built with Integrated Development Environments (IDEs) such as Visual Studio, Visual Age, and JBuilder.
- Load Test, to test networking and web applications.
- SiteCheck, to check websites.
- PurifyPlus, to check code coverage for C and C++ and to analyze the performance of the code as well as to detect bottlenecks in the code.
SQA Robot can be used to record test cases. It automatically generates a test script which can be stored and executed. Test cases can be synchronized, and GUI and database checkpoints can be introduced.

Synchronization of Test Procedures

SQA Robot uses a default delay time of 20 seconds, i.e., it automatically waits for 20 seconds before executing the next statement in the test procedure. Hence, SQA Robot automatically implements synchronization. The delay time can be changed manually; if it is, SQA Robot waits for the specified time before executing the next statement, but this slows down the entire testing process. If a particular action requires waiting longer than the default time, it is better to synchronize only that action rather than increase the delay interval globally. This can be done by creating a "Positive Wait State" or a "Negative Wait State".
When a "Positive Wait State" is defined, SQA Robot waits until the selected region matches the corresponding area in the application under test. If a match occurs before the timeout, the next statements are executed; otherwise an error message is displayed.
When a "Negative Wait State" is defined, SQA Robot waits until the selected region no longer matches the area in the application under test.
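Behind both kinds of wait state is the same polling loop: check a condition repeatedly until it matches (positive) or stops matching (negative), giving up after a timeout. A hedged sketch of that loop, with a function name and parameters invented for illustration (not SQA Robot's API):

```python
import time

# Illustrative polling loop behind a "wait state": poll a condition
# until it equals the desired sense (positive/negative) or the
# timeout expires. Name and parameters are assumptions.
def wait_state(condition, timeout=20.0, interval=0.1, positive=True):
    """Return True once condition() == positive, else False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition() == positive:
            return True
        time.sleep(interval)
    return False
```

Polling only the action that needs it, with its own timeout, is exactly why this beats raising the global delay interval: the rest of the script keeps running at full speed.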

Monday, July 5, 2010

What is the architecture of SilkTest ?

When you test an application's GUI, you manipulate GUI objects such as windows, menus, and buttons using mouse clicks and keyboard input. SilkTest interprets these objects and recognizes them based on the class, properties, and methods that uniquely identify them. SilkTest simulates the operations performed by the user (mouse clicks and keyboard entries) and verifies the expected results automatically.
SilkTest has two components that execute as separate processes:
- Host Software
- Agent Software
The machine on which the host software runs is called the host machine. The host software is the component used to develop test plans and test scripts: scripts can be created, edited, compiled, run, and debugged there.
The agent software is the component that interacts with the GUI of your application. It drives and monitors the Application Under Test (AUT), translating the commands in the 4Test script into GUI-specific commands. The AUT and the agent software have to run on the same machine. Each GUI object has a unique corresponding 4Test object.
SilkTest testing process involves four steps:
- Creating a test plan.
- Recording a test frame.
- Creating test cases.
- Running test cases and interpreting their results.

Sunday, July 4, 2010

What is Silk Test and what are its features?

Segue Software's SilkTest can be used for testing a variety of applications such as
- standalone Java, Visual Basic, and Win32 applications.
- PowerBuilder applications.
- Databases.
- Dynamic Linked Libraries (DLLs).
- Web Sites written in DHTML, HTML, XML, JavaScript, ActiveX, Applet, using Internet Explorer and Netscape Navigator.
- AS/400 and 3270/5250 applications.

Important Features of SilkTest

- To facilitate unattended testing, SilkTest has a built-in customizable recovery system. So, while automated testing is in progress, even if the application fails midway, testing automatically continues without halting.
- It has an object-oriented scripting language called 4Test; using the scripts written in 4Test, an application can be tested on different platforms.
- It can access the database and validation can be done.
- For test creation and customization, workflow elements are available.
- Test Planning, management and reporting can be done by integration with other tools of Segue Software.

Saturday, July 3, 2010

What is WinRunner and what are its important aspects ?

Mercury Interactive's WinRunner is a functional/regression testing tool. Using WinRunner, GUI operations can be recorded; while recording, WinRunner automatically creates a test script. This test script can be run later to carry out unattended testing.
Important features of WinRunner are :
- Functional/Regression testing of a variety of application software written in programming languages such as PowerBuilder, Visual Basic, C/C++, and Java can be done.
- Testing of ERP/CRM software packages can also be carried out using WinRunner.
- Testing in all flavors of Windows operating systems and different browser environments such as Internet Explorer and Netscape Navigator can be done.
- GUI operations can be recorded in the record mode. WinRunner automatically creates a test script. This test can be modified if required and can be executed later on in unattended mode.
- Checkpoints can be added to compare actual and expected results. The checkpoints can be GUI checkpoints, bitmap checkpoints and web links.
- Facility for synchronization of test cases is provided.
- Data Driver Wizard provides the facility to convert a recorded test into a data driven test.
- Database checkpoints are used to verify data in a database during automated testing.
- The Virtual Object Wizard of WinRunner is used to teach WinRunner to recognize, record, and replay custom objects.
- The reporting tools provide the facility to generate automatically the test reports and analyze the defects.
- WinRunner can be integrated with the testing management tool TestDirector to automate many of the activities in the testing process.

Friday, July 2, 2010

What are different conditions in which you can select a testing tool?

The requirement of a testing tool can be judged if you analyze the existing scenario in your organization.
- Are your customers happy with the software delivered by you? If they are unhappy and report bugs they have faced, then the testing process needs to be improved, and the organization should consider using a testing tool.
- If the customers are happy with the product delivered, then find out the effort, time and money spent on the testing phase. The productivity of people can be increased by using these tools.
- If the manpower attrition rate is very high, a likely reason is that people are bored doing repetitive things. Testing tools remove much of the repetitive manual work, making test engineers' jobs more engaging.
- If performance testing has to be done on your software, tools come in handy.
- If the organization believes in a process-oriented approach to testing, testing tools will be of immense use.
- If project teams are located at different locations, a web based management tool will be very effective for efficient management of testing process.
Once the decision of using a testing tool is made, you need to make a choice of which testing tool should be used based on:

- Most application software packages need to be tested for functionality, and as software is modified, regression testing is also a must. Hence, functional/regression testing tools are a must.
- If the software is a client/server or web application, load testing is needed, and a performance testing tool should also be used.
- Testing management tools can be integrated with the above tools also.
- Once the types of testing tools required are decided, obtain the tools for the environment in which the software runs.
- Testing tool vendors also give tools that support testing for software packages such as Siebel, SAP etc.
- To test the source code, you need to buy source code testing tools.
- Bug tracking tools need to be used for tracking bugs in large software development projects.
- Testing management tools are a must if the organization is keen on implementing process-oriented testing.
- Web enabled testing tool is of great use if the project teams are at different places.

Thursday, July 1, 2010

Testing Management Tools and Source Code Testing Tools

Testing Management Tools

A rigorous testing process is followed. This process involves working out a test plan and test cases, deciding the schedule for running various tests, generating and analyzing the test reports, tracking bugs, checking whether the bugs have been removed, and doing regression testing. Many managerial activities need to be done to manage the testing process effectively.
Nowadays software companies have offices around the world, with different groups at different locations working on the same project; the testing and development teams may also be in different places. In such cases, testing management tools can be used effectively for managing the testing process. Many testing management tools are web based: a test engineer can log in to the web site and update the defect report, and the QA manager, located elsewhere, can log in, check the status, and assign the bug removal work to a developer. The bug status is updated to 'corrected' after the developer removes the bug.
Testing management tools facilitate all these process-oriented activities to be done systematically. Mercury Interactive's TestDirector can be used for managing the testing process.

Source Code Testing Tools
These tools are specific to the programming language used for developing the software. Different tools used are:
- Lint, a utility used to test the portability of the code.
- Line profilers, used to do time analysis. These profilers find the execution time for the entire program as well as for individual function calls.
- SCCS and RCS, source code configuration management tools.
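The profilers listed above are C-oriented utilities, but Python's standard `cProfile` module illustrates the same idea, per-function call counts and execution times for a whole program run:

```python
import cProfile
import io
import pstats

# Python's cProfile illustrates what a line/function profiler reports:
# call counts and cumulative time for the whole run and per function.
def busy(n):
    return sum(i * i for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
busy(100_000)
profiler.disable()

# Render the timing report, sorted by cumulative time, into a string.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats()
report = stream.getvalue()
```

The resulting report lists each function with its call count and time, which is the per-function-call breakdown the description above refers to.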
