- Preventive approach to testing
- Reactive approach to testing
What is the preventive approach to testing?
What is the reactive approach to testing?
Importance of both approaches
- Reviews
- Static analysis
Posted by Sunflower at 6/27/2012 10:45:00 PM | 0 comments
Labels: Application, Approaches, Code, Customers, Design, Developers, Faults, Performance, Preventive, Quality, Reactive, Requirements, SDLC, Software System, Stages, Test Strategy, Testers, Testing, Tests, Validation
Posted by Sunflower at 5/23/2012 03:05:00 PM | 0 comments
Labels: Application, Architecture, Baselines, Components, Construction, Executable, Goals, Implementation, Phases, Primary, Project, Requirements, Risk factors, Risks, Software Systems, Stages, Unified Process, UP, Validation
Posted by Sunflower at 5/22/2012 11:55:00 PM | 0 comments
Labels: Application, Architecture, Baselines, Components, Elaboration, Executable, Goals, Implementation, Phases, Primary, Project, Requirements, Risk factors, Risks, Software Systems, Stages, Unified Process, UP, Validation
Posted by Sunflower at 5/11/2012 11:57:00 PM | 0 comments
Labels: Agile, Agile Model Driven Development, AMDD, Automated, Code, Comparisons, Development, Functionality, Re-factoring, Requirements, Software development, Stages, Steps, TDD, Test Driven Development, Tests, Validation
Posted by Sunflower at 4/25/2012 11:56:00 PM | 0 comments
Labels: Application, attacks, Bugs, Code, Data, Defects, Directory structure, Directory Traversal attacks, End users, files, Operating Systems, Password, Quality, Security, Software Systems, Threats, Traverse, User, Validation
Regression testing is one of the most common software testing methodologies, and its importance is well established. It forms part of the software testing life cycle of every software system or application, and no project is finalised without being put through regression testing at least once.
Like other software testing methodologies, regression testing also defines entry and exit criteria that a software system or application needs to satisfy.
But first, a brief discussion of regression testing itself will make it easier to recognize its entry and exit criteria.
Posted by Sunflower at 3/30/2012 01:31:00 PM | 0 comments
Labels: Application, Bugs, Changes, Components, Criterion, Defects, Entry, Errors, Exit, Failure, Faults, Functionality, Quality, Regression, Regression Testing, Software testing, STLC, Validation
The term GUI testing is largely self-explanatory: it is testing that checks the graphical user interface of a software system or application against the specifications and requirements laid down in the specification documents.
This article is about testing some of the aspects of a graphical user interface. The various aspects of a GUI are tested using a whole range of testing techniques.
The test cases are written by a test designer who knows the application thoroughly, and the tests are designed so that they cover the entire functionality of the system.
1. Steps for Testing of the Text Box
- First, the requirements for the text box are identified, and the default values of the text box and its associated button are tested with the text box left empty.
- Second, the NULL condition is checked, i.e., whether or not a text of NULL value can be saved.
- Third, the space condition is checked, which ensures that the text box can save a space character.
- Fourth, the boundary value condition is checked, in which the minimum and maximum amount of text the text box can hold is tested.
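These checks lend themselves to automation with a UI driver. Below is a minimal sketch using Selenium WebDriver; the URL, the element ids and the maximum length are assumptions made for illustration, not details taken from this post.

```python
# Minimal sketch of text box checks: empty, space, boundary and over-boundary values.
from selenium import webdriver
from selenium.webdriver.common.by import By

MAX_LEN = 50  # assumed maximum length from a hypothetical specification

driver = webdriver.Chrome()
driver.get("http://localhost:8000/form")           # hypothetical form under test
text_box = driver.find_element(By.ID, "name")      # hypothetical element id
save_button = driver.find_element(By.ID, "save")   # hypothetical element id

for value in ["", " ", "a" * MAX_LEN, "a" * (MAX_LEN + 1)]:
    text_box.clear()
    text_box.send_keys(value)
    save_button.click()
    # The expected outcome for each value (accepted, rejected, truncated) comes
    # from the text box requirements identified in the first step.
    print(repr(value), "->", driver.find_element(By.ID, "status").text)

driver.quit()
```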
2. Steps for Testing of the Radio Buttons
- Radio buttons are used for making a selection from a list of mutually exclusive options, of which exactly one must be selected.
- In radio button testing it is checked whether or not clicking on one of the options deselects the previously selected one.
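A minimal sketch of this mutual-exclusion check, again using Selenium WebDriver; the URL and the radio button ids are assumptions made for illustration.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("http://localhost:8000/form")            # hypothetical form under test
option_a = driver.find_element(By.ID, "option-a")   # hypothetical radio button ids
option_b = driver.find_element(By.ID, "option-b")

option_a.click()
assert option_a.is_selected() and not option_b.is_selected()

option_b.click()
# Selecting the second option must deselect the first one.
assert option_b.is_selected() and not option_a.is_selected()

driver.quit()
```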
3. Steps for Testing of the Command Buttons
- The working of the command button is tested, i.e., whether or not it carries out the command that has been invoked. The execution of the command itself is also tested.
4. Steps for Testing of the Aesthetic Conditions
Aesthetic conditions such as the background colour, resizing of the screen, the colour of the field prompts, the font of the text, the spelling of the prompts, etc. are tested using appropriate testing methodologies.
5. Steps for Testing of the Validation Conditions
This testing tests the validation process of the GUI using various white box testing techniques.
6. Steps for Testing of the Usability Conditions
- Usability conditions of an application's GUI are tested by the users themselves, since direct input is obtained from them; the focus is on how well the product matches the standards specified by the user.
- Usability refers to the ease of use of a software artefact; it gives a measure of how easily the interaction between the user and the application takes place.
- Testing the usability conditions often helps the developers understand the needs of the customers and helps focus the whole development process on those needs.
- Sales and task completion rates increase, and the number of enquiries to the call centre is also reduced.
- Usability work is also an effective way to improve the end user's experience, making the application easier to understand and more intuitive.
- Usability testing can also be carried out by expert evaluators instead of the users themselves. The following aspects are tested:
(a) Performance
(b) User friendliness
(c) Efficiency
(d) Visual design
(e) Consistency
7. Steps for Testing of Data Integrity Conditions
- With the increasing demands for accountability and mobility, data integrity conditions need to be tested.
- This is done by using different white box testing techniques for different conditions.
8. Steps for Testing of the Data field checks and Alpha field checks
It is tested whether or not the data fields accept only data of their specified type; for example, an alpha field should reject digits and a numeric field should reject letters.
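A minimal sketch of such a data-field type check; the field names and rules are assumptions made for illustration.

```python
import re

# Hypothetical field rules: a numeric field and an alpha field.
FIELD_RULES = {
    "age": re.compile(r"\d{1,3}"),       # numeric field: one to three digits
    "name": re.compile(r"[A-Za-z ]+"),   # alpha field: letters and spaces only
}

def field_accepts(field: str, value: str) -> bool:
    """Return True if the value matches the field's specified type."""
    return FIELD_RULES[field].fullmatch(value) is not None

# Data field and alpha field checks.
assert field_accepts("age", "42")
assert not field_accepts("age", "forty")
assert field_accepts("name", "Ada Lovelace")
assert not field_accepts("name", "Ada123")
```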
Posted by Sunflower at 3/29/2012 01:28:00 PM | 0 comments
Labels: Aesthetic, Application, Command Button, Conditions, Data Integrity, Graphical User Interface, GUI, GUI testing, Interface, Radio Button, Software Systems, Test cases, Text Box, Usability, Validation
It is a well established fact that software testing is very effective in improving the satisfaction of clients and customers by delivering them a software product that is largely free of defects, errors and bugs.
Only stable, error-free and reliable software systems and applications are accepted into the business; there is no place for unstable, buggy software systems.
Posted by Sunflower at 3/16/2012 04:36:00 PM | 0 comments
Labels: Application, Benefits, Bugs, Clients, Compatibility, Customers, Defects, efficient, Effort, Errors, Failure, Purpose, Requirements, Software Systems, Software testing, Time, Validation, Verification
A software requirements specification is often called an "SRS" for short. Simply put, it is the specification of the requirements for a software system or application.
PURPOSE OF SOFTWARE REQUIREMENTS SPECIFICATION
- The requirements specification gives a view of what exactly the system should do and how it should behave.
- The software requirements specification comes with a set of cases describing all the possible interactions of the users with the software system or application.
- Such cases are termed "use cases".
- The software requirements specification contains both kinds of requirements, i.e., functional requirements as well as non-functional requirements.
- Software requirements specification is an important sub field under the software engineering field.
- The software requirement specification provides a way to enlist all the requirements necessary for the development of the software project in one place.
- It deals with the following issues with regard to the software system or application:
1. Specification
2. Elicitation
3. Analysis and
4. Validation
OVERVIEW OF SOFTWARE REQUIREMENT SPECIFICATION
1. Introduction
This mentions the purpose of the software along with its scope, definitions, a brief overview and a set of references.
2. Overall Description
This section of the SRS describes the perspective, functions, user characteristics, constraints, dependencies and assumptions of the software application.
3. Specific Requirements
This is the most important part of an SRS and includes description of functions, interfaces, logical data base requirements, performance requirements, design constraints and key features.
HOW TO PRODUCE AN EFFECTIVE SOFTWARE REQUIREMENTS SPECIFICATION?
- For producing an effective software requirements specification, you need to test it.
- Therefore, a form of testing for the software requirements specification has been devised.
- It is popularly known as requirements analysis.
- Requirements analysis covers the tasks that help in identifying the requirements of the software system, as well as the analysis of the requirements specification itself.
- It forms the initial stage of requirements engineering, which is concerned with the activities listed above.
- The analysis of the requirements specification is crucial for any software system or application.
- All the identified requirements should have the following properties:
1. Actionable
2. Can be documented
3. Testable
4. Traceable
5. Measurable and
6. Defined
PHASES OF SOFTWARE REQUIREMENTS SPECIFICATION TESTING
The software requirements specification testing comprises the following three phases:
#1. Elicitation of the Requirements:
- This phase involves the identification of the requirements of the consumers.
- This process of communicating with the users and gathering requirements is very well known as requirements gathering.
#2. Analysis of the Requirements:
- This phase involves determining whether the requirements are clear, complete, unambiguous and free of contradictions.
- If issues are found, they are resolved.
#3. Recording of the Requirements
- There are various ways in which the requirements might be documented.
- Whatever the form may be, it should be clear and concise.
- Some commonly used methods are: natural language documents, user stories, process specifications and use cases.
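As an illustration of recording requirements in a structured way, here is a minimal sketch of a requirement record, written as a user story, that keeps the properties listed earlier (testable, traceable, documented) explicit; the fields and example values are assumptions made for illustration, not part of any standard.

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str                   # unique id, so the requirement is traceable
    description: str              # documented in clear, concise language
    acceptance_criteria: list = field(default_factory=list)  # makes it testable and measurable
    source: str = ""              # where the requirement was elicited from

# A hypothetical functional requirement recorded as a user story.
login_req = Requirement(
    req_id="REQ-042",
    description="As a registered user, I want to log in with my email and "
                "password so that I can access my account.",
    acceptance_criteria=[
        "Valid credentials open the account dashboard",
        "Invalid credentials show an error and do not log the user in",
    ],
    source="Customer workshop during the elicitation phase",
)
print(login_req.req_id, "-", login_req.description)
```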
Posted by Sunflower at 2/29/2012 02:11:00 PM | 0 comments
Labels: Analysis, Application, Constraints, Development, Phases, Purpose, Requirements, Scope, Software Requirements Specification, Software Systems, Specification, SRS, Use cases, Users, Validation
Conventional testing and unconventional testing are testing methodologies that are less commonly heard of. Before we discuss these two methods, let's discuss the "quality management system", or QMS as it is called for short.
QUALITY MANAGEMENT SYSTEM
- Quality management system can be thought of as an organizational structure which states and manages the processes, procedures and resources to be implemented for better quality management.
- Earlier, random sampling and simple statistics were used to predict the outcome of tests on a production line.
- But eventually, by the end of the 19th century, entering the data for these test cases manually came to be considered a costly method.
- Later, in the 21st century, the quality management system succeeded in overcoming this problem.
- It came with a new transparent and sustainable approach which gradually achieved wide customer satisfaction.
CONVENTIONAL AND UNCONVENTIONAL TESTING
- Conventional testing is essentially a similar initiative arising from the quality management system.
- Conventional testing is based entirely upon the standards and conventions of testing as defined by the quality management system.
- It is a way of maintaining the testing standards.
- Since conventional testing is guided by conventions, it is named accordingly.
- Unconventional testing, as the name suggests, does not follow any such conventions.
DIFFERENCES BETWEEN CONVENTIONAL & UNCONVENTIONAL TESTING
DIFFERENCE #1:
- In conventional testing, only features and functionality of a software system or application are tested by the engineer in charge of the testing cycle.
- In unconventional testing, only the documentation is verified on the basis of the quality assurance principles.
DIFFERENCE #2:
- In unconventional testing, the documentation is tested from the starting phase of SDLC (systems development life cycle) by the testers of quality assurance.
- Conventional testing comes into play only during the testing phase of the systems development life cycle.
DIFFERENCE #3:
- In conventional testing, the developed components of the application are checked by the tester to see whether they work according to the expectations of the customers.
- A typical unconventional testing starts from the coding phase of the systems development life cycle.
DIFFERENCE #4:
- Unconventional testing keeps a track of the whole software development process on the basis of the quality assurance principles i.e. whether or not the software has been developed according to the guidelines and specifications provided by the client company.
- The conventional testing is focused more up on the functionality of the software system or application.
USES & LIMITATIONS OF CONVENTIONAL TESTING
- Conventional testing is being used in migration projects these days.
- It sometimes happens that the testers performing the conventional testing have very little knowledge of the particular application they are testing. In that case, comparison testing is employed.
- This is preferred because the tester is not required to know what the outcome will be.
- The problem can also be solved if the development team has already prepared the test scripts for the tester.
- Conventional testing is a bit expensive as it requires a lot of time to test, verify and validate each and every test script.
- A program will obviously contain a certain number of errors, and how many depends upon the degree of complexity of the program.
- Errors also depend on the kind of migration process being followed in the project.
- The aspects of the software system or application failing these tests have to be corrected and retested accordingly.
Posted by Sunflower at 2/27/2012 11:59:00 PM | 0 comments
Labels: Application, Conventional Testing, Differences, Guidelines, Procedures, Quality, Resources, SDLC, Software Systems, Software testing, Test cases, Tester, Tests, Unconventional testing, Validation
None of the processes in the world are carried out without a purpose. Every kind of software testing has got some objectives. This article discusses such objectives only.
Software testing has got several objectives. Objectives are decided on the basis of the expectations of the software developers.
- Software testing is expected to distinguish the valid behaviour of the software system from the defects present in it.
- It is necessary to describe the principles on which the software works and processes the data.
- It is also important for us to know the principles of different kinds of testing.
- Before creating the test cases, one should decide on a proper strategy to follow so as to achieve the desired objectives.
- The tester should understand the characteristics and behavior of the tools that are being used for the test automation.
- Before testing any software system, the tester should know the problems that can cause the system to fail; otherwise it will be difficult to prevent the potential harm.
- When the software developer pens down the objectives of the testing, he should keep in mind the requirements of the customers.
- Apart from the requirements, it is also necessary to know about the non-requirements of the users.
Both the requirements and the non-requirements form a major and important part of the objectives of testing. In fact, we can say that 95% of the objectives of testing are based on the requirements and non-requirements of the user.
There is one more kind of requirement, called the missing requirements.
- Missing requirements are requirements that are needed but are absent from both the customer's requirements list and the non-requirements list.
- Only the software developer or the tester can figure out the missing requirements.
- These missing requirements also form a small part of the objectives of the testing.
- There are some requirements needed for the software system but they are almost impossible to implement.
The objectives of software testing are stated below clearly and in detail:
- To check whether the system is working as required or not.
- To prove that the software system is free of any errors.
- To certify that the particular software system is correct to the best knowledge of the programmer and the tester.
- To certify that the software can be used without any fear of losing data or damage.
For achieving the objectives, testing can be done in following two ways:
Negative testing
- This testing tests for the abnormal or negative operations in the software system.
- This is carried out by using illegal or invalid data.
- In negative testing, the tester intentionally tries to make things go wrong and checks what happens then.
- Based on the observations, further improvements are made. It checks whether the program crashes, whether the program does anything unexpected, and whether the software system still achieves its target.
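As a minimal sketch of negative testing in code, the pytest example below feeds intentionally invalid data to a hypothetical parse_age() function and checks that it fails in a controlled way; the function and its rules are assumptions made for illustration.

```python
import pytest

def parse_age(value: str) -> int:
    """Hypothetical function under test: accepts ages 0-120 given as digit strings."""
    if not value.isdigit():
        raise ValueError(f"not a number: {value!r}")
    age = int(value)
    if age > 120:
        raise ValueError(f"out of range: {age}")
    return age

# Negative tests: illegal or invalid data must be rejected, not silently accepted.
@pytest.mark.parametrize("bad_input", ["", "abc", "-5", "12.5", "999"])
def test_parse_age_rejects_invalid_input(bad_input):
    with pytest.raises(ValueError):
        parse_age(bad_input)
```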
Positive testing
- In this kind of testing, the software system is operated normally with correct data input values.
- Proper test cases are used.
- This testing methodology includes testing the software system at the boundaries of the program.
- This is done to determine the correctness of the program. The actual result is compared with the expected result, and it is determined whether the program is behaving normally, whether the results coincide with the expected results, and whether the software system is still functioning properly.
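A corresponding sketch of positive testing exercises the same hypothetical parse_age() function with correct data, including the boundary values, and compares actual results with expected ones.

```python
import pytest

def parse_age(value: str) -> int:
    """Same hypothetical function as in the negative testing sketch above."""
    if not value.isdigit() or int(value) > 120:
        raise ValueError(value)
    return int(value)

# Positive tests: valid inputs, including the boundaries 0 and 120,
# must produce the expected results.
@pytest.mark.parametrize("good_input, expected", [("0", 0), ("18", 18), ("120", 120)])
def test_parse_age_accepts_valid_input(good_input, expected):
    assert parse_age(good_input) == expected
```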
Not too surprisingly, negative testing has a positive side: it catches the flaws, errors and discrepancies before they show up in front of the user. After all, a test is regarded as good only if it manages to fail the software system.
Posted by Sunflower at 11/30/2011 02:00:00 AM | 0 comments
Labels: Data, Defects, Errors, Failures, Missing Requirements, Negative Testing, Objectives, Positive Testing, Principles, Problems, Requirements, Software testing, Strategy, Tools, Validation
Verification and validation together can be defined as a process of reviewing and testing and inspecting the software artifacts to determine that the software system meets the expected standards.
Though verification and validation processes are frequently grouped together, there are plenty of differences between them:
- Verification is a process which controls the quality and is used to determine whether the software system meets the expected standards or not. Verification can be done during development phase or during production phase. In contrast to this, validation is a process which assures quality. It gives an assurance that the software artifact or the system is successful in accomplishing what it is intended to do.
- Verification is an internal process whereas validation is an external process.
- Verification refers to the correct implementation of the specifications by the software system or application, while validation refers to whether the system meets the needs of the users.
- The verification process consists of the following activities: installation qualification, operational qualification, and performance qualification, whereas validation is categorized into: prospective validation, retrospective validation, full scale validation, partial scale validation, cross validation and concurrent validation.
- Verification ensures that the software system includes all the required functionality, whereas validation ensures that the functionality exhibits the intended behavior.
- Verification takes place first and then validation is done. Verification checks for documentation, code, plans, specifications and requirements while validation checks the whole product.
- Input for verification includes issues lists, inspection meetings, checklists, meetings and reviews. Input for validation includes the software artifact itself.
- Verification is done by developers of the software product whereas validation is done by the testers and it is done against the requirements.
- Verification is a kind of static testing where the functionalities of a software system are checked whether they are correct or not and it includes techniques like walkthroughs, reviews and inspections etc. In contrast to verification, validation is a dynamic kind of testing where the software application is checked against its proper execution.
- Mostly reviews form a part of verification process whereas audits are a major part of validation process.
Posted by Sunflower at 11/24/2011 08:22:00 PM | 0 comments
Labels: Applications, Control, Development, Differences, Functional, Methodology, Methods, Physical, Processes, program, Quality, Review, Software testing, Speech, Types, Validation, Verification, Verify
Verification and validation together can be defined as a process of reviewing, testing and inspecting the software artifacts to determine that the software system meets the expected standards. There are various methodologies for verifying different kinds of data in software applications. The different methods are discussed below:
- File verification
It is used to check the integrity and correctness of a file and to detect errors in it, typically by comparing a checksum or hash of the file's contents against a known good value (see the sketch after this list).
- CAPTCHA
It is a mechanism used to verify that the user of a website is a human being and not an automated program intended to compromise the security of the system.
- Speech verification
This kind of verification is used to check the correctness of the spoken statements and sentences.
- Verify command in DOS.
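A minimal sketch of file verification by checksum, using Python's standard hashlib module; the file name and the expected digest are assumptions made for illustration.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 checksum of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# File verification: the computed checksum must match the published one;
# any corruption or tampering changes the digest.
expected = "<published sha-256 digest>"      # hypothetical known good value
if sha256_of("release.zip") == expected:     # hypothetical file name
    print("file verified")
else:
    print("file is corrupted or has been modified")
```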
Apart from verification techniques for software applications there are several other techniques for verification during the development of software. They have been discussed below:
- Intelligence verification
This type of verification is used to adapt the test bench automatically to changes in the RTL.
- Formal verification
It is used to verify the algorithms of the program for their correctness by some mathematical techniques.
- Run time verification
Run time verification is carried out during execution. It is done to determine whether the program executes properly and within the specified time.
- Software verification
This verification type uses several methodologies for the verification of the software.
There are several other techniques used for verification in circuit development:
- Functional verification
- Physical verification
- Analog verification
Posted by Sunflower at 11/23/2011 08:14:00 PM | 0 comments
Labels: Algorithms, Analog, Applications, Development, files, Functional, Methodology, Methods, Physical, program, Review, Software testing, Speech, Types, Validation, Verification, Verify
Validation tries to uncover errors, but the focus is at the requirements level, i.e. on the things that will be immediately apparent to the end user. It begins at the end of integration testing, when all individual modules have been packaged and interface errors have been uncovered and corrected. In the validation testing phase, testing focuses on user-visible actions and output that the user can recognize. The criterion for software entering the validation phase is that it functions in a manner that can reasonably be expected by the customer.
In the software requirements specification, there is a section called validation test criteria. A test plan lists the tests to be conducted, and a test procedure defines the test cases. The plan and procedure are designed to ensure that all functional requirements are satisfied, behavioral characteristics are achieved, performance requirements are attained, usability is met and documentation is done.
Configuration review ensures that all elements of software configuration are properly developed, cataloged and every necessary detail is given. It is also known as audit.
Alpha testing is done at the developer's site, not at the user's usual workplace. Real users are simulated by carrying out the tasks and operations that a typical user might perform.
Beta testing is done at end user sites. The developer is not present. It is the live application of software in an environment that is not controlled by the developer. The end user records all the problems that he faces and reports to the developer.
Posted by Sunflower at 7/27/2011 01:38:00 PM | 0 comments
Labels: Alpha testing, Beta testing, Configuration, Criteria, Customer, Developers, End users, Errors, Focus areas, Integration testing, Requirements, Software testing, Validation, Validation testing
In software engineering practice, construction includes coding and testing tasks and principles. Coding involves the direct creation of source code, the automatic generation of source code, and the automatic generation of executable code using fourth-generation programming languages.
Posted by Sunflower at 7/14/2011 07:52:00 PM | 0 comments
Labels: Architecture, Code, Coding, Concepts, Construction Practices, Practices, Preparation, Principles, Software, Software Engineering Practice, Understandability, Validation
The steps in deriving the test cases using use cases are:
- Using the RTM (requirements traceability matrix), the use cases are prioritized. Importance is gauged by the frequency with which each function of the system is used.
- Use case scenarios are developed for each use case. A detailed description of each use case scenario can be very helpful at a later stage.
- For each scenario, take at least one test case and identify the conditions that will make it execute.
- Data values are determined for each test case.
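A minimal sketch of how one use case scenario might be turned into concrete test cases with data values, written as parametrized pytest tests; the "withdraw cash" scenario, the function and the values are assumptions made for illustration.

```python
import pytest

def withdraw(balance: float, amount: float) -> float:
    """Hypothetical function behind a 'withdraw cash' use case scenario."""
    if amount <= 0 or amount > balance:
        raise ValueError("invalid withdrawal")
    return balance - amount

# Each row is one test case derived from the scenario: the conditions that make
# it execute plus the concrete data values determined for it.
@pytest.mark.parametrize("balance, amount, expected", [
    (100.0, 20.0, 80.0),    # main success scenario
    (100.0, 100.0, 0.0),    # boundary: withdraw the full balance
])
def test_withdraw_success(balance, amount, expected):
    assert withdraw(balance, amount) == expected

def test_withdraw_more_than_balance_is_rejected():
    with pytest.raises(ValueError):
        withdraw(50.0, 60.0)  # alternate scenario: insufficient funds
```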
After system testing has concluded, validation testing is performed; it consists of a series of black box tests. It focuses on user-visible actions and user-recognizable output.
Alpha and beta testing are a series of acceptance tests. Alpha testing is performed in a controlled environment, normally at the developer's site; the developers record all errors and usage problems while end users use the system. Beta testing is done at the customer's site and the developers are not present; the end users record all errors and usage problems.
The amount of testing effort needed to test object oriented software can be indicated by the metrics used for object-oriented design quality. These metrics are:
- Lack of Cohesion in Methods (LCOM)
- Percent Public and Protected (PAP)
- Public Access To Data Members (PAD)
- Number of Root Classes (NOR)
- Number of Children (NOC) and Depth of the Inheritance Tree (DIT)
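Two of these metrics, Number of Children (NOC) and Depth of the Inheritance Tree (DIT), can be computed directly from a class hierarchy. The sketch below does so for a small hypothetical Python hierarchy using the interpreter's own introspection.

```python
class Shape: pass            # hypothetical hierarchy used only for illustration
class Polygon(Shape): pass
class Circle(Shape): pass
class Triangle(Polygon): pass

def noc(cls) -> int:
    """Number of Children: count of direct subclasses."""
    return len(cls.__subclasses__())

def dit(cls) -> int:
    """Depth of the Inheritance Tree: longest path from the class up to the root."""
    return max((dit(base) + 1 for base in cls.__bases__ if base is not object), default=0)

print(noc(Shape))     # 2  (Polygon and Circle)
print(dit(Triangle))  # 2  (Triangle -> Polygon -> Shape)
```

A higher NOC or a deeper inheritance tree generally means more test cases are needed to exercise the classes adequately, which is why these metrics indicate testing effort.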
Posted by Sunflower at 4/26/2011 12:15:00 PM | 0 comments
Labels: Alpha testing, Beta testing, Design, End users, Metrics, Object Oriented, Quality, Steps, System Testing, Test cases, Test Metrics, Tests, Use cases, Validation
Elaboration expands and refines the information obtained from the team during inception and elicitation. It focuses on defining, redefining and refining models, and tries to model the "WHAT" rather than the "HOW".
- The requirements model is created using methods that capitalize on user scenarios. It describes how the end users and actors interact with the system.
- The analysis model is derived from the requirements model where each scenario is analyzed to get the analysis classes.
- The requirements model and the analysis model are the main work products of this task.
In negotiation, customers, stakeholders and the software development team reconcile conflicts. The purpose of negotiation is to develop a project plan that meets the requirements of the user while reflecting real-world constraints such as time, people and budget. Negotiation includes:
- always remember that negotiation is not a competition.
- always have a strategy.
- always listen effectively.
- always focus on other party's interest.
- never make it personal.
- always be creative.
- be ready to commit.
Specification is the final artifact or work product produced by the software engineer during requirements engineering. It serves as the foundation for design and construction of software.
In validation, the work products produced as a consequence of requirements engineering are assessed for quality. It checks whether inconsistencies, omissions, and errors have been detected and corrected. The review team that validates the requirements looks for errors in content or interpretation, areas where clarification is required, missing information, and inconsistent, conflicting or unrealistic requirements.
Management is a set of activities that help the project team identify, control, and track requirements and their changes at any time as the project progresses.
Posted by Sunflower at 4/05/2011 07:53:00 PM | 0 comments
Labels: Concepts, Defects, Elaboration, Errors, Information, Management, Negotiation, Requirements, Requirements Engineering, Specification, Validation, Work Products
The test strategy identifies the multiple test levels that are going to be performed for the project. Activities at each level must be planned well in advance and formally documented. The individual test levels are carried out based only on their individual plans.
The plans are to be prepared by experienced people only. In all test plans, the ETVX (Entry-Task-Validation-Exit) criteria are to be mentioned. Entry means the entry point to that phase. Task is the activity that is performed. Validation is the way in which progress, correctness and compliance are verified for that phase. Exit states the completion criteria of that phase, once the validation is done.
ETVX is a modeling technique for developing both high-level and atomic-level models. It is a task-based model where the details of each task are explicitly defined in a specification table against each element, i.e. Entry, Exit, Task, Feedback In, Feedback Out, and measures.
There are two types of cells: unit cells and implementation cells. The implementation cells are basically unit cells containing further tasks. A purpose is also stated, and the viewer of the model may also be defined, e.g. management or the customer.
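As an illustration only, here is a minimal sketch of how one phase's ETVX criteria might be captured in a specification table; the phase and its criteria are hypothetical and not taken from any particular plan.

```python
from dataclasses import dataclass

@dataclass
class EtvxPhase:
    name: str
    entry: list       # entry criteria: what must be true before the phase starts
    task: list        # activities performed during the phase
    validation: list  # how progress, correctness and compliance are verified
    exit: list        # completion criteria for the phase

system_test = EtvxPhase(
    name="System test",
    entry=["Integration testing complete", "Test environment available"],
    task=["Execute system test cases", "Log and track defects"],
    validation=["Peer review of test results", "Defect metrics within agreed limits"],
    exit=["All planned test cases executed", "No open critical defects"],
)
print(system_test.name, "entry criteria:", system_test.entry)
```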
Posted by Sunflower at 12/04/2010 12:48:00 PM | 0 comments
Labels: ETVX, Functionality, Levels, Plan, Planning, Sections, Software testing, Strategy, Test Planning, Test ware development, Unit test plan, Units, Validation
Beta testing is conducted at one or more customer sites by the end users of the software. The beta test is a live application of the software in an environment that cannot be controlled by the developer. The software reaches the beta stage when most of its functionality is operating. The software is tested in the customer's environment, giving users an opportunity to exercise the software and find errors so that they can be fixed before the product release. Beta testing is detailed testing and needs to cover all the functionality of the product as well as dependent functionality; it also involves user interface testing and documentation testing. Hence, it is essential that it is planned well and carried out accordingly. The test plan document has to be prepared before the testing phase starts; it clearly lays down the objectives, the scope of the test, the tasks to be performed and the test matrix that depicts the schedule of testing.
The objectives of beta testing are to:
- evaluate software technical content.
- evaluate software ease of use.
- evaluate user documentation draft.
- identify errors.
- report errors/findings.
The role of the test lead is to provide a test instruction sheet that describes items such as the testing objectives, steps to follow, data to enter and functions to invoke, and to provide feedback forms for comments.
The role of a tester is to understand the software requirements and the testing objectives and carry out the test cases and report defects.
Posted by Sunflower at 11/01/2010 08:01:00 PM | 0 comments
Labels: Beta, Beta testing, Data, Defects, Errors, Objectives, Phases, Plan, Software, Software testing, Test Planning, Validation, Validation Phase
User Acceptance Testing occurs just before the software is released to the customer. The end-users along with the developers perform the User Acceptance Testing with a certain set of test cases and typical scenarios.
Installation testing is often the most under-tested area in testing. This type of testing is performed to ensure that all installed features and options function properly. It is also performed to verify that all necessary components of the application are, indeed, installed. Installation testing should take care of the following points:
- To check whether, while installing, the product checks for the dependent software/patches.
- The product should check for an existing version of the same product on the target machine; for example, a previous version should not be installed over a newer version.
- Installer should give a default installation path.
- Installation should allow user to install at location other than the default installation path.
- Check if the product can be installed "Over the Network".
- Installation should start automatically when the CD is inserted.
- Installer should give the Remove/Repair options.
- When uninstalling, check that all registry keys, files, DLLs, shortcuts and ActiveX components are removed from the system.
- Try to install the software without administrative privileges.
- Try installing on different operating systems.
- Try installing on a system with a non-compliant configuration, such as insufficient memory/RAM/HDD.
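A small sketch of how a couple of these checks could be automated; the install path, the version file and the version comparison rule are assumptions made for illustration.

```python
from pathlib import Path

DEFAULT_INSTALL_PATH = Path("C:/Program Files/ExampleApp")   # hypothetical default path
VERSION_FILE = DEFAULT_INSTALL_PATH / "VERSION"              # hypothetical version marker

def parse_version(text: str) -> tuple:
    """Turn a version string like '2.1.0' into (2, 1, 0) so versions compare numerically."""
    return tuple(int(part) for part in text.strip().split("."))

def may_install(new_version: str) -> bool:
    """Allow installation only if no newer version is already on the machine."""
    if not VERSION_FILE.exists():
        return True  # nothing installed yet
    installed = parse_version(VERSION_FILE.read_text())
    return parse_version(new_version) >= installed

# Installation checks: the default path must exist after installation, and an
# older build must not be installed over a newer one.
print("default path present:", DEFAULT_INSTALL_PATH.exists())
print("ok to install 2.1.0:", may_install("2.1.0"))
```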
Posted by Sunflower at 10/30/2010 03:19:00 PM | 0 comments
Labels: Customer, Developers, Features, Installation, Installation testing, Phases, Product, Quality, Software testing, Test cases, User Acceptance testing, Users, Validation, Validation Phase