- WinRunner executable
- Uninstall WinRunner
- Soft key configuration
- Read me
- Fonts expert
- Sample application folder and
- Documentation folder, etc.
- Verify
- Debug
- Update
Posted by Sunflower at 8/28/2012 11:41:00 AM | 0 comments
Labels: Application, Automated Software Testing, Automation, Code, Commands, Debug, Debugging, Functional testing, Modes, Recording, Regression Testing, Statements, Test Scripts, Time, Toolbar, update, Verify, WinRunner
Performance testing involves much more than simply checking how fast a software system or application runs; it covers a wide range of software engineering concepts and functionalities.
In performance testing, a software system is not tested merely on the basis of its functionalities, specifications, and requirements; it is also tested on the basis of the measurable performance characteristics the system or application finally exhibits.
- Performance testing is both a quantitative and a qualitative kind of testing.
- In the field of software engineering, performance testing is typically done to determine the effectiveness and speed of a software system, hardware system, computer, or device.
- Being a quantitative process, performance testing involves lab measurements such as the response time and the MIPS (millions of instructions per second) at which a software system performs.
- It also involves tests of the qualitative attributes of a system, such as scalability, reliability, and interoperability.
- Performance testing and stress testing are often performed in conjunction with each other.
- It is a general kind of testing done to determine the behavior of a system, whether hardware or software, in terms of stability and responsiveness when the system is placed under a significant workload.
- It is also carried out to measure, validate, verify, and investigate qualitative attributes of the system such as resilience and resource usage.
- Performance testing is a sub-category of performance engineering.
- Performance engineering, in turn, aims to incorporate performance into the architecture and design of a software or hardware system, and is basically carried out before the actual coding of the program.
Performance testing consists of many sub-categories of testing. A few of them are discussed in detail below:
1. Stress testing:
This testing is done to determine the limits of the capacity of the software application. Basically, it checks the robustness of the application software against heavy loads, i.e., loads above the specified maximum limit.
2. Load testing:
This is the simplest of all performance tests. It is usually done to check the behavior of the application software or program under different amounts of load. The load can be several users using the same application at once, or the difficulty level or length of the task being performed. A time limit is set for task completion and the response times are recorded simultaneously. This test can also be used to test databases and network servers. A minimal load-test sketch is shown after this list.
3. Spike testing:
This testing is carried out by suddenly increasing (spiking) the load on the application and observing, in each case, whether the concerned application software is able to take the load or fails.
4. Endurance testing:
As the name suggests, this test determines whether the application software can sustain a specific load for a certain period of time. The test also checks for memory leaks, which can lead to application damage, and watches for performance degradation. Throughput is checked at the beginning, at the end, and at several points of time in between. This is done to see whether the application continues to behave properly under sustained use or crashes.
5. Isolation testing:
This test is basically done to locate the faulty part of the program or the application software.
6. Configuration testing:
This testing examines the configuration of the software application. It also checks the effects of configuration changes on the application and its performance.
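The following Python sketch illustrates the load-testing idea in its simplest form: it fires an increasing number of concurrent requests at an endpoint and records the response times at each level. The target URL, the user counts, and the statistics reported are hypothetical placeholders, not part of any particular tool or test plan.

```python
# Minimal load-test sketch: measure response times at increasing levels of
# concurrent load. The target URL and user counts below are placeholders.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

TARGET_URL = "http://localhost:8080/health"   # hypothetical endpoint

def timed_request(url):
    """Issue one request and return its response time in seconds."""
    start = time.perf_counter()
    with urlopen(url, timeout=10) as response:
        response.read()
    return time.perf_counter() - start

def run_load_level(users):
    """Simulate 'users' concurrent requests and report simple statistics."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        timings = sorted(pool.map(timed_request, [TARGET_URL] * users))
    average = sum(timings) / len(timings)
    p95 = timings[int(0.95 * (len(timings) - 1))]
    print(f"{users:>4} users: average={average:.3f}s  95th percentile={p95:.3f}s")

if __name__ == "__main__":
    for level in (10, 50, 100):   # increasing levels of load
        run_load_level(level)
```

In a real load test a dedicated tool would replace this loop, but the structure, increasing the load level and recording response times at each step, stays the same.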
Before carrying out performance testing, some performance goals must be set, since performance testing helps in many ways:
- It tells us whether the application software meets the performance criteria or not.
- It can compare the performance of two software applications.
- It can locate the faulty parts of the program.
Posted by Sunflower at 12/13/2011 07:29:00 PM | 0 comments
Labels: Application, Behavior, Characteristics, Defects, Effectiveness, Errors, Functionality, Hardware, Performance, Performance testing, Quality, Requirements, Specification, Tests, Validate, Verify
Verification and validation together can be defined as the process of reviewing, inspecting, and testing software artifacts to determine whether the software system meets the expected standards.
Though verification and validation processes are frequently grouped together, there are plenty of differences between them:
- Verification is a quality-control process used to determine whether the software system meets the expected standards or not; it can be done during the development phase or during the production phase. Validation, in contrast, is a quality-assurance process: it gives an assurance that the software artifact or system is successful in accomplishing what it is intended to do.
- Verification is an internal process whereas validation is an external process.
- Validation refers to the needs of the users, while verification refers to the correctness of the implementation of the specifications by the software system or application.
- The verification process consists of installation qualification, operational qualification, and performance qualification, whereas validation is categorized into prospective validation, retrospective validation, full-scale validation, partial-scale validation, cross-validation, and concurrent validation.
- Verification ensures that the software system provides all the required functionality, whereas validation ensures that each functionality exhibits the intended behavior.
- Verification takes place first and then validation is done. Verification checks for documentation, code, plans, specifications and requirements while validation checks the whole product.
- Input for verification includes issues lists, checklists, inspection meetings, and reviews. Input for validation is the software artifact itself.
- Verification is done by the developers of the software product, whereas validation is done by the testers and is carried out against the requirements.
- Verification is a kind of static testing in which the functionalities of a software system are checked for correctness without executing the code; it includes techniques such as walkthroughs, reviews, and inspections. Validation, in contrast to verification, is a dynamic kind of testing in which the software application is checked by actually executing it. A small sketch contrasting the two appears after this list.
- Mostly reviews form a part of verification process whereas audits are a major part of validation process.
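To make the static/dynamic distinction concrete, here is a small, hypothetical Python sketch: the "verification" step inspects the source of a function without running it (a crude stand-in for a review or static check), while the "validation" step actually executes the function and checks its behavior. The function and the checks are illustrative assumptions, not taken from any particular project.

```python
# Hypothetical illustration of static verification vs. dynamic validation.
import ast
import inspect

def discount_price(price: float, percent: float) -> float:
    """Apply a percentage discount to a price."""
    return price * (1 - percent / 100)

# "Verification" (static): examine the code without executing it, e.g.
# confirm the function is documented and actually returns a value.
source = inspect.getsource(discount_price)
func = ast.parse(source).body[0]
has_docstring = ast.get_docstring(func) is not None
has_return = any(isinstance(node, ast.Return) for node in ast.walk(func))
print("static check passed:", has_docstring and has_return)

# "Validation" (dynamic): execute the function and check that its behavior
# matches what the user expects.
assert abs(discount_price(200.0, 10) - 180.0) < 1e-9
print("dynamic check passed: discount behaves as expected")
```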
Posted by Sunflower at 11/24/2011 08:22:00 PM | 0 comments
Labels: Applications, Control, Development, Differences, Functional, Methodology, Methods, Physical, Processes, program, Quality, Review, Software testing, Speech, Types, Validation, Verification, Verify
Verification and validation together can be defined as the process of reviewing, inspecting, and testing software artifacts to determine whether the software system meets the expected standards. There are various methodologies for verifying different kinds of data in software applications. The different methods are discussed below:
- File verification
It is used to check the integrity and correctness of a file and to detect errors in it. A small checksum-based sketch is shown after this list.
- CAPTCHA
It is a challenge used to verify that the user of a website is a human being and not an automated program intended to compromise the security of the system.
- Speech verification
This kind of verification is used to check the correctness of the spoken statements and sentences.
- The Verify command in DOS, which checks that files are written to disk correctly.
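As a simple illustration of file verification, the Python sketch below computes a SHA-256 checksum of a file and compares it against an expected value; the file name and the expected digest are hypothetical placeholders.

```python
# Minimal file-verification sketch: compare a file's SHA-256 checksum
# against a known-good value. File name and expected digest are placeholders.
import hashlib

def file_checksum(path: str) -> str:
    """Return the SHA-256 hex digest of the file at 'path'."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

EXPECTED = "0123abcd..."  # placeholder for the published checksum

if file_checksum("release.zip") == EXPECTED:
    print("File verified: contents match the expected checksum.")
else:
    print("Verification failed: the file is corrupt or has been altered.")
```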
Apart from the verification techniques for data described above, there are several other techniques used for verification during the development of software. They are discussed below:
- Intelligence verification
This type of verification automatically adapts the test bench to changes in the RTL.
- Formal verification
It is used to verify the correctness of the program's algorithms by means of mathematical techniques.
- Run time verification
Run-time verification is carried out while the program executes. It determines whether the program executes properly and within the specified time; a small sketch follows this list.
- Software verification
This verification type uses several methodologies for the verification of the software.
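As a loose illustration of run-time verification, the sketch below wraps a function so that, every time it runs, its execution time is checked against a deadline and its output against a simple post-condition. The function, deadline, and condition are hypothetical examples, not a standard run-time verification tool.

```python
# Hypothetical run-time verification sketch: check a timing budget and a
# post-condition every time the wrapped function executes.
import time
from functools import wraps

def runtime_verified(max_seconds, postcondition):
    """Decorator that monitors execution time and output at run time."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = func(*args, **kwargs)
            elapsed = time.perf_counter() - start
            if elapsed > max_seconds:
                raise RuntimeError(f"{func.__name__} exceeded {max_seconds}s")
            if not postcondition(result):
                raise RuntimeError(f"{func.__name__} violated its post-condition")
            return result
        return wrapper
    return decorator

@runtime_verified(max_seconds=0.5, postcondition=lambda xs: xs == sorted(xs))
def sort_numbers(numbers):
    return sorted(numbers)

print(sort_numbers([3, 1, 2]))   # passes both run-time checks
```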
There are several other techniques used for verification in circuit development:
- Functional verification
- Physical verification
- Analog verification
Posted by Sunflower at 11/23/2011 08:14:00 PM | 0 comments
Labels: Algorithms, Analog, Applications, Development, files, Functional, Methodology, Methods, Physical, program, Review, Software testing, Speech, Types, Validation, Verification, Verify
Fagan's Inspection Method was introduced by Michael Fagan. Apart from checking program code, it is used to check other work products such as technical documents, model elements, data and code designs, etc. It follows certain procedural rules that each member should observe:
- The time limit for an inspection meeting is two hours.
- Inspections are led by a trained moderator.
- Inspections are carried out at a number of points in the process of project planning and systems development.
- All classes of defects in documentation and work product are inspected.
- Inspections are carried out by colleagues at all levels of seniority, with the exception of senior management.
- Inspectors are assigned specific roles to increase effectiveness.
- Statistics on the types of errors are kept and used for reports, which are analyzed in a manner similar to financial analysis.
Different activities that are involved in conducting inspections are:
- Planning is very important and in this case the moderator is asked to build up a plan.
- Presentation should be given which gives an overall overview.
- Each inspector is given one to two hours alone to inspect the work product.
- Meeting should be held in which participants of the meeting are the inspectors, moderator and the developer of the work product.
- The defect list is given for repair.
- Follow up with the repair work.
- A causal analysis meeting is held where inspectors are given a chance to express their personal views on errors and improvements.
Posted by Sunflower at 3/29/2011 01:53:00 PM | 0 comments
Labels: Errors, Fagan technical Review, Formal Technical reviews, Guidelines, Inspections, Meetings, Methods, Objectives, Quality, Requirements, Reviews, Software, Standards, Technical Reviews, Verify
When tasks are performed in the software process, the results are work products. These work products contribute to the development of quality software.
A formal technical review (FTR) is a software quality assurance activity performed by software engineers with the following objectives:
- uncover errors in function, logic or implementation of the software.
- verify that the software meets its requirements.
- ensure that the software has been developed according to the standards.
- achieve software that is developed in a uniform manner.
- make projects manageable.
Each formal technical review is conducted as a meeting and is considered successful only if it is properly planned, controlled and attended.
The formal technical review also serves as a training ground for junior engineers and helps promote backup and continuity.
Constraints on a formal technical review meeting include: involvement of 3 to 5 people, advance preparation of no more than 2 hours per person, a meeting duration of less than 2 hours, and a focus on a specific part of the software product.
There are a few guidelines for conducting formal technical reviews:
- The work product should be reviewed, not the developer.
- Make it a practice to take notes while conducting reviews.
- Agenda should be planned.
- Minimize the debate and discussions.
- Keep the number of participants to a minimum and insist on preparing for the review.
- The defect areas should be pointed out, but no solution should be provided.
- A checklist that is to be reviewed is provided.
- Schedule the reviews as part of the software process and ensure that resources are provided for each reviewer.
- Check the effectiveness of review.
Posted by Sunflower at 3/29/2011 01:31:00 PM | 0 comments
Labels: Aim, Constraints, Errors, Formal Technical reviews, Guidelines, Meetings, Objectives, Quality, Requirements, Reviews, Software, Standards, Technical Reviews, Verify
REPORTING ON RESPONSE TIME AT VARIOUS LEVELS OF LOAD
Expected output from a load test often includes a series of response time measures at various levels of load. When determining the response time at any particular level of load, it is important that the system has been running in a stable manner for a significant amount of time before measurements are taken.
For example, a ramp-up to 500 users may take ten minutes, but another ten minutes may be required to let the system activity stabilize. Taking measurements over the next ten minutes would then give a meaningful result. The next measurement can be taken after ramping up to the next level and waiting a further ten minutes for stabilization and ten minutes for the measurement period and so on for each level of load requiring detailed response time measures.
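A minimal sketch of that ramp-up / stabilize / measure cycle is given below. The load levels and durations are shortened, hypothetical placeholders (a real test would use the ten-minute windows described above), and the response-time sampler is a stand-in for a real measurement.

```python
# Sketch of the ramp-up / stabilize / measure cycle for each load level.
# Durations are shortened placeholders for the ten-minute windows in the text.
import random
import time

LOAD_LEVELS = [100, 250, 500]   # hypothetical numbers of virtual users
RAMP_UP_SECONDS = 3             # stands in for the ten-minute ramp-up
STABILIZE_SECONDS = 3           # stands in for the ten-minute settling period
MEASURE_SECONDS = 3             # stands in for the ten-minute measurement window

def sample_response_time() -> float:
    """Placeholder for one real response-time measurement, in seconds."""
    return random.uniform(0.2, 1.5)

for users in LOAD_LEVELS:
    print(f"Ramping up to {users} users...")
    time.sleep(RAMP_UP_SECONDS)
    print("Waiting for system activity to stabilize...")
    time.sleep(STABILIZE_SECONDS)

    samples = []
    deadline = time.time() + MEASURE_SECONDS
    while time.time() < deadline:
        samples.append(sample_response_time())
        time.sleep(0.2)

    average = sum(samples) / len(samples)
    print(f"{users} users: average response time over the window = {average:.2f}s")
```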
FAIL-OVER TESTS
Failover tests verify the system's redundancy mechanisms while the system is under load. This is in contrast to load tests, which are conducted under anticipated load with no component failure during the course of a test. For example, in a web environment, failover testing determines what will happen if multiple web servers are being used under peak anticipated load and one of them dies.
Failover testing allows technicians to address problems in advance, in the comfort of a testing situation, rather than in the heat of a production outage. It also provides a baseline of failover capability, so that a sick server can be shut down with confidence, in the knowledge that the remaining infrastructure will cope with the surge of failover load.
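As a loose illustration of the failover idea, the sketch below tries a primary server first and falls back to a secondary when the primary is unreachable; the host names and the health-check endpoint are hypothetical.

```python
# Hypothetical failover sketch: send a request to the primary server,
# and fall back to the secondary if the primary is unreachable.
from urllib.error import URLError
from urllib.request import urlopen

SERVERS = [
    "http://primary.example.com/health",     # placeholder primary
    "http://secondary.example.com/health",   # placeholder backup
]

def fetch_with_failover(urls):
    """Return the first successful response, trying each server in order."""
    for url in urls:
        try:
            with urlopen(url, timeout=5) as response:
                return response.read()
        except URLError:
            print(f"Server unavailable, failing over: {url}")
    raise RuntimeError("All servers failed; failover capacity exhausted")

if __name__ == "__main__":
    print(fetch_with_failover(SERVERS))
```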
FAIL-BACK TESTING
After verifying that a system can sustain a component outage, it is also important to verify that, when the component comes back up, it is available to take load again and can sustain the influx of activity that arrives when it comes back online.
Posted by Sunflower at 12/15/2010 02:50:00 PM | 0 comments
Labels: Components, Fail-back testing, Fail-over testing, Failure, Load, Load Test, Load Testing, LoadRunner, Quality, Software testing, Stress, Test cases, Verify
Inspections are static analysis techniques that rely on visual examination of development products to detect errors, violations of development standards, and other problems. Types include:
- code inspection
- design inspection
- architectural inspections
- testware inspections
The participants in inspections include the inspection leader, recorder, reader, author, and inspector. All participants in the review are inspectors. The author should not act as inspection leader, reader, or recorder. Other roles may be shared among the team members, and individual participants may act in more than one role. Individuals holding management positions over any member of the inspection team shall not participate in the inspection.
Input Criteria includes:
- Statement of objectives for the inspection.
- The software product to be inspected.
- Documented inspection procedure.
- Inspection reporting forms.
- Current anomalies or issues list.
- Inspection checklists.
- Any regulations, standards, guidelines, plans, and procedures against which the software product is to be inspected.
- Hardware product specifications.
- Hardware performance data.
- Anomaly categories.
The individuals responsible for the software product may make additional reference material available when requested by the inspection leader.
The purpose of the exit criteria is to bring an unambiguous closure to the inspection meeting. The exit decision shall determine if the software product meets the inspection exit criteria and shall prescribe any appropriate re-work and verification. Specifically, the inspection team shall identify the software product disposition as one of the following:
- Accept with no or minor re-work : The software product is accepted as is or with only minor re-work.
- Accept with re-work verification : The software product is to be accepted after the inspection leader or a designated member of the inspection team verifies re-work.
- Re-inspect : Schedule a re-inspection to verify rework. At a minimum, a re-inspection shall examine the software product areas changed to resolve anomalies identified in the last inspection.
Posted by Sunflower at 10/04/2010 02:36:00 PM | 0 comments
Labels: Development, Inspections, Participants, Problems, Product, Software, Software testing, Strategies, Strategy, Techniques, Verification, Verify
Technical reviews confirm that the product conforms to specifications; adheres to regulations, standards, guidelines, and plans; that changes are properly implemented; and that changes affect only those system areas identified by the change specification.
The main objectives of technical reviews are as follows:
- Ensure that the software conforms to the organization's standards.
- Ensure that any changes in the development procedures are implemented as per the organization's pre-defined standards.
In technical reviews, the following software products are reviewed:
- Software requirements specification.
- Software design description.
- Software test documentation.
- Software user documentation.
- Installation procedure.
- Release notes.
The participants of the review play the roles of decision-maker, review leader, recorder, and technical staff.
Requirement Review: A process or meeting during which the requirements for a system, hardware item, or software item are presented to project personnel, managers, users, customers, or other interested parties for comment or approval. Types include the system requirements review and the software requirements review. Product management leads the requirement review, and members from every affected department participate in it.
Input Criteria: Software requirement specification is the essential document for the review. A checklist can be used for the review.
Exit Criteria: These include the filled-in and completed checklist with the reviewers' comments and suggestions, and re-verification of whether they have been incorporated into the documents.
Posted by Sunflower at 9/30/2010 10:41:00 AM | 1 comment
Labels: Bugs, Defects, Documentation, Organization, Process, Requirement Review, Requirements, Reviews, Software, Software testing, Strategies, Technical Reviews, Verification, Verify