- Statistics
- Cognitive psychology
- Artificial intelligence
- Computational neuroscience
- Theoretical neuroscience
- Supervised learning
- Unsupervised learning
- Reinforcement learning
Articles, comments, and queries about the processes of software product development, software testing tutorials, and software processes.
Posted by Sunflower at 3/22/2013 08:39:00 PM
Posted by Sunflower at 3/20/2013 08:17:00 PM
Posted by Sunflower at 7/03/2012 02:01:00 PM
Posted by Sunflower at 6/28/2012 11:10:00 PM
To make any process successful, it has to be controlled in a way that achieves its predefined goals; in other words, it should be guided along the path to success. This holds for any type of process, and so it holds for the processes of software engineering.
In software engineering, two approaches have been identified for keeping control over development and related processes, namely:
1. The empirical process control method, and
2. The defined process control method.
This article compares the two approaches so that you get a better understanding of both. So let us see how these two approaches control processes.
Posted by Sunflower at 4/05/2012 11:15:00 AM
White-box testing, also known as clear box testing, transparent box testing, glass box testing, or structural testing, can be defined as a method for testing software applications or programs.
White-box testing comprises techniques that exercise the program's internal structure and algorithms, in contrast to techniques that test the software against its external functionality, as black-box tests do. White-box testing involves designing test cases from an internal perspective of the software system.
Expert programming skills are needed to design such test cases and to understand the internal structure of the program; in short, to perform white-box testing. The tester feeds specified input data to the code and checks whether the output is as expected. There are only certain levels at which white-box testing can be applied.
The levels are listed below:
- Unit level
- Integration level
- System level
- Acceptance level
- Regression level
- Beta level
Even though there is no problem in applying white-box testing at all six levels, it is usually performed at the unit level, the most basic level of software testing.
White-box testing is used to test paths through the source code, between systems and subsystems, and between different units during the integration of a software application.
White-box testing can effectively reveal hidden errors and grave problems. However, it is incapable of detecting missing requirements and unimplemented parts of the given specification. White-box testing includes four basic and important kinds of testing, listed below:
- Data flow testing
- Control flow testing
- Path testing
- Branch testing
In the field of penetration testing, white-box testing can be defined as a methodology in which the hacker has total knowledge of the system being attacked. So we can say that white-box testing is based on the question "how does the system work?" It analyzes the flow of data, the flow of information, the flow of control, the coding practices, and the handling of errors and exceptions in the software system.
White-box testing is done to ensure that the system is working as intended; it also validates the implemented source code for its control flow and design, checks the security functionality, and looks for the vulnerable parts of the program.
White-box testing cannot be performed without access to the source code of the software system. It is recommended that white-box testing be performed at the unit-testing phase.
White-box testing requires knowledge of the insecurities, vulnerabilities, and strengths of a program.
- The first step of white-box testing is analyzing and comprehending the software documentation, the software artifacts, and the source code.
- The second step requires the tester to think like an attacker, i.e., to consider the ways in which he or she could exploit and damage the software system.
- In the third step, the white-box testing techniques are implemented.
These three steps need to be carried out in harmony with each other; otherwise, the white-box testing will not be successful.
White-box testing is used to verify the source code. Carrying it out requires full knowledge of the logic and structure of the system's code. Using white-box testing, one can develop test cases that exercise logical decisions and paths through a unit, operate loops as specified, and ensure the validity of the internal structure of the software system.
Posted by Sunflower at 11/22/2011 01:59:00 PM
Multiple-character constants can be dealt with in two ways in C: if enclosed in single quotes, they are treated as character constants, and if enclosed in double quotes, they are treated as string literals. A string literal is a sequence of characters surrounded by double quotes. Every string literal is automatically given a terminating character '\0'. Thus, the string "abc" is actually represented in memory as "abc\0", and its size is not 3 but 4 characters (inclusive of the terminating character).
An array is a named list of a finite number n of similar data elements. Each element can be referenced by a set of consecutive indices, usually 0, 1, 2, 3, ..., n-1. If the name of an array of 10 elements is ARR, then its elements are referred to as shown below:
ARR[0], ARR[1], ARR[2], ARR[3], ..., ARR[9]
Arrays can be one-dimensional, two-dimensional, or multi-dimensional. The functions gets() and puts() are string functions. The gets() function accepts a string of characters entered at the keyboard and places them in the string variable mentioned with it. For example:
char name[21];
The above code declares a string named name that can store 20 valid characters (the width of 21 allows for the extra '\0' character with which a string is always terminated). The function gets() reads a string of at most 20 characters and stores it in the memory pointed to by name. As soon as the carriage return is pressed, a null terminator '\0' is automatically placed at the end of the string. (Note that gets() cannot limit the length of the input and has been removed from the C standard; fgets() is the safe alternative.) The function puts() writes a string to the screen and advances the cursor to a new line, so any subsequent output appears on the line after the output of puts().
Arrays are a way to group a number of items into a larger unit. Arrays can hold data items of simple types like int or float, or of user-defined types like structures. An array can also be an array of strings; such an array is a two-dimensional character array whose elements are each themselves an array of characters.
A string is nothing but an array of characters. C does not actually have a string data type; rather, it implements strings as one-dimensional character arrays terminated by the null character '\0'. For this reason, character arrays (strings) are declared one character larger than the largest string they can hold.
Individual strings of a string array can be accessed easily using the first index. The end of a string is determined by checking for the null character. The size of the first index (rows) determines the number of strings, and the size of the second index (columns) determines the maximum length of each string. By specifying just the first index, an individual string can be accessed. You can declare and handle an array of strings just like a two-dimensional array. See the example below:
char name[10][20];
Here the first dimension declares how many strings the array holds, and the second dimension declares the maximum length of each string. C provides library functions for concatenating strings, checking string length, and comparing two strings — namely strlen, strcmp, strcat, strrev, etc. — declared in the header file string.h (strrev is non-standard). Strings are used for holding long inputs.
Posted by Sunflower at 10/08/2011 08:05:00 PM
Documentation is very necessary in quality assurance. Everything should be documented: user manuals, test plans, bug reports, business reports, code changes, specifications, designs, and all other reports. Any change in the process should be documented.
A properly documented requirement specification is essential. Requirements are the details of what is to be done; they should be clear, complete, detailed, and testable. These details should be determined and organized in an efficient way, which can be difficult to handle. Some form of documentation with detailed requirements is very important for properly planning and executing tests.
There are some steps that are needed to develop and run software tests:
- The requirements, design specifications are necessary.
- Budget and cost should be known.
- The responsible people, their responsibilities, and the applicable standards and processes should be listed.
- Risk aspects should be determined.
- Test approaches should be defined.
- Test environment should be defined.
- Tasks should be identified.
- Inputs should be determined.
- Test plan document should be prepared.
- Test cases should be written.
- Test environment and test ware should be prepared.
- Tests are performed and results are evaluated.
- Problems are tracked, re-testing is done and test plans are maintained and updated.
Posted by Sunflower at 5/15/2011 12:43:00 PM
For complete testing of a software product both black and white box testing are necessary.
Black-box testing
This testing looks at the available inputs for an application and the expected outputs that should result from each input. It has no relation to the inner workings of the application, the process undertaken, or any other internal aspect of the application. A search engine is a very good example of a black-box system: we enter the text that we want to search for, press "Search", and get the results. We are not aware of the actual process that has been implemented to get the results; we simply provide the input and receive the output.
White-box testing
This testing looks into the complex inner workings of the application; it tests the processes undertaken and other internal aspects of the application. While black-box testing is mainly concerned with the inputs and outputs of the application, white-box testing helps us see beyond them, i.e., inside the application. White-box testing requires a degree of sophistication that black-box testing does not, as the tester is required to interact with the objects that are used to develop the application rather than simply having access to the user interface. In-circuit testing is a good example of white-box system testing, where the tester looks at the interconnections between different components of the application and verifies the proper functioning of each internal connection. We can also consider the example of an auto mechanic who examines the inner workings of a vehicle to ensure that all the components are working correctly, and hence that the vehicle functions properly.
The basic difference between black-box and white-box testing is the area of focus each chooses. We can simply say that black-box testing is focused on results: if an action is performed and the desired result is obtained, then the process that has actually been used is irrelevant. White-box testing, on the other hand, focuses on the internal workings of an application, and it is considered complete only when all the components are tested for proper functioning.
Advantages of Black-box testing
- Since the tester does not have to focus on the inner workings of the application, creating test cases is easier.
- Test case development is faster, as the tester need not spend time identifying the inner processes; the only focus is on the various paths a user may take through the GUI.
- It is simple to use, as it focuses only on valid and invalid inputs and ensures that correct outputs are obtained.
Drawbacks of Black-box testing
A constantly changing GUI makes script maintenance difficult, as the input may also be changing. Interacting with the GUI may make the test script fragile, so that it does not execute consistently.
Advantages of White-box testing
- Since the focus is on the inner workings, the tester can identify objects programmatically. This can be useful when the GUI is frequently changing.
- It can improve the stability and reusability of test cases, provided the objects of the application remain the same.
- By testing each path completely it is possible for a tester to achieve thoroughness.
Drawbacks of White-box testing
Developing test cases for white-box testing involves a high degree of complexity, and it therefore requires highly skilled people. Although fragility is overcome to a great extent in white-box testing, a change in an object's name may still break the test script.
Posted by Sunflower at 4/29/2011 11:53:00 AM
Model View Controller design pattern is used to support multiple types of users with multiple types of interfaces. The Model-View-Controller (MVC) pattern separates the modeling of the domain, the presentation, and the actions based on user input into three separate classes:
- Model: The model holds all the data of the application. It does not care how you interpret or process that data. It divides functionality among the objects involved in maintaining and presenting data so as to minimize the degree of coupling between the objects.
- View: The view manages the display of information. It is responsible for maintaining the consistency in its presentation when the underlying model changes.
- Controller: The controller interprets the mouse and keyboard inputs from the user, informing the model and/or the view to change as appropriate. The actions performed on the model can be activating a device, running a business process, or changing the state of the model.
The MVC pattern allows any number of controllers to modify the same model. The strategies by which MVC can be implemented are as follows:
- For Web-based clients such as browsers, use Java Server Pages (JSP) to render the view, Servlet as the controller, and Enterprise JavaBeans (EJB) components as the model.
- For a centralized controller, a main servlet is used to make control more manageable.
Posted by Sunflower at 4/16/2011 03:14:00 PM
A system is:
- an integrated set of inter-operable elements;
- a group of entities or components;
- interacting together to form specific inter-relationships;
- organized by means of structure;
- working together to achieve a common goal.
In defining the system boundaries, a software engineer discovers the following:
- entities or group of entities that are related and organized in some way within the system, either they provide input, do activities or receive output;
- activities or actions that must be performed by the entities or group of entities in order to achieve the purpose of the system;
- a list of inputs;
- a list of outputs.
For example, in a club membership system, the entities involved are the applicant, the club staff, and the coach.
General Principles of Systems
- The more specialized a system is, the less able it is to adapt to different circumstances.
- The larger a system is, the more resources must be devoted to its everyday maintenance.
- Systems are always part of larger systems, and they can always be partitioned into smaller systems.
There are two types of systems, namely man-made systems and automated systems. Man-made systems will always have areas for correction and improvement. These areas can be addressed by automated systems. Automated systems consist of computer hardware, computer software, people, procedures, data and information, and the connectivity that allows one computer system to be connected with another.
Posted by Sunflower at 3/31/2011 12:44:00 PM
The purpose of any load test should be clearly understood and documented. A load test usually fits into one of the following categories:
- Quantification of risks :
Determine, through formal testing, the likelihood that system performance will meet the formally stated performance expectations of stakeholders, such as response-time requirements under given levels of load. This is a traditional quality assurance (QA) type of test. Load testing does not mitigate risk directly, but through the identification and quantification of risk it presents tuning opportunities and an impetus for remediation that will mitigate risk.
- Determination of minimum configuration : Determine, through formal testing, the minimum configuration that will allow the system to meet the formally stated performance expectations, so that extraneous hardware, software, and the associated cost of ownership can be minimized. This is a Business Technology Optimization (BTO) type of test.
Posted by Sunflower at 12/13/2010 01:46:00 PM
Load tests are end-to-end performance tests under anticipated production load. The objective of such tests is to determine the response times for various time-critical transactions and business processes and to ensure that they are within documented expectations. Load tests also measure the capability of an application to function correctly under load, by measuring transaction pass/fail/error rates. An important variation of the load test is the network sensitivity test, which incorporates WAN segments into a load test, as most applications are deployed beyond a single LAN.
Load tests are major tests, requiring substantial input from the business, so that anticipated activity can be accurately simulated in a test environment. If the project has a pilot in production then logs from the pilot can be used to generate 'usage profiles' that can be used as part of the testing process, and can even be used to drive large portions of load test.
Load testing must be executed on today's production size database, and optionally with a projected database. If some database tables will be much larger in some months time, then load testing should also be performed against a projected database. It is important that such tests are repeatable, and give the same results for identical runs. They may need to be executed several times in the first year of wide scale deployment, to ensure that new releases and changes in database size do not push response times beyond prescribed service level agreements.
Posted by Sunflower at 12/13/2010 01:22:00 PM
UNIT TEST CASES(UTC)
Unit test cases are very specific to a particular unit. The basic functionality of the unit is to be understood based on the requirements and the design documents. Generally, the design document provides a lot of information about the functionality of a unit. The design document has to be referred to before a unit test case is written, because it specifies how the system must behave for given inputs.
INTEGRATION TEST CASES
Before designing integration test cases, the testers should go through the integration test plan, which gives a complete idea of how to write them. The main aim of integration test cases is to test multiple modules together; by executing these test cases, the user can find errors in the interfaces between the modules.
The tester has to execute unit and integration test cases after coding.
SYSTEM TEST CASES
System test cases are meant to test the system as per the requirements, end to end. This is basically to make sure that the application works as per the software requirement specification. In system test cases, the testers are supposed to act as end users, so system test cases normally concentrate on the functionality of the system: inputs are fed through the system, and every check is performed using the system itself. Verifications done by checking database tables directly or by running programs manually are not encouraged in the system test.
The system test must focus on functional groups rather than on individual program units. When it comes to system testing, it is assumed that the interfaces between the modules are working fine.
Ideally, the system test cases are nothing but a union of the functionalities tested in unit testing and integration testing, except that everything is tested through the system itself instead of through the database or external programs. In system testing, the tester mimics an end user and hence checks the application through its output.
Sometimes some of the integration and unit test cases are repeated in system testing, especially when units were earlier tested with test stubs rather than with the real modules; during system testing those cases are performed again with the real modules.
Posted by Sunflower at 12/10/2010 02:07:00 PM
The test cases will have a generic format as below:
- Test Case ID : The test case id must be unique across the application.
- Test case description : The test case description should be very brief.
- Test Prerequisite : The test pre-requisite clearly describes what should be present in the system, before the test executes.
- Test Inputs : The test input is nothing but the test data that is prepared to be fed to the system.
- Test Steps : The test steps are the step-by-step instructions on how to carry out the test.
- Expected Results : The expected results are the ones that say what the system must give as output or how the system must react based on the test steps.
- Actual results : The actual results are the ones that say outputs of the action for the given inputs or how the system reacts for the given inputs.
- Pass/Fail : If the expected and actual results are the same, then the test is a Pass; otherwise it is a Fail.
Test cases are classified into positive and negative test cases. Positive test cases are designed to prove that the system accepts valid inputs and processes them correctly; a suitable technique for designing positive test cases is specification-derived tests. Negative test cases are designed to prove that the system rejects invalid inputs and does not process them; suitable techniques for designing negative test cases are error guessing, boundary value analysis, internal boundary value testing, and state transition testing. The test case details must be specified very clearly, so that a new person can go through the test cases step by step and execute them.
In an online shopping application, at the user-interface level, the client requests the web server to display the product details by giving an email id and username. The web server processes the request and gives the response. For this application, we design the unit, integration, and system test cases.
Posted by Sunflower at 12/10/2010 01:08:00 PM
Step 4: Call Sequencing
When the combinations of possible arguments to each individual call are unmanageable, the number of possible call sequences is infinite. Parameter selection and combination issues further complicate the call-sequencing problem. Faults caused by improper call sequences tend to give rise to some of the most dangerous problems in software; most security vulnerabilities are caused by the execution of such seemingly improbable sequences.
Step 5: Observe the output
The outcome of an execution of an API depends on the behavior of that API, the test condition, and the environment. The outcome can take different forms: some APIs return data or a status code, while others might not return at all, might wait for a period of time, trigger another event, modify a resource, and so on.
The tester should be aware of the output to expect from the API under test. The outputs returned for the various classes of input values — valid, invalid, boundary values, etc. — need to be observed and analyzed to validate that they match the specified functionality. All error codes and exceptions returned for all input combinations should be evaluated.
Posted by Sunflower at 11/29/2010 03:33:00 PM
Some of the formal approaches used for exploratory testing are:
- Identify the break points
Break points are the situations where the system starts behaving abnormally: it does not give the output it is supposed to give. Testing can be done by identifying such situations. Use boundary values or invariants to find the break points of the application. In most cases the system will work for normal inputs and outputs, so try to give inputs that represent the extreme or worst-case situations. Trying to identify the extreme conditions or break points helps the tester uncover hidden bugs. Such cases might not be covered in normal scripted testing; hence, this helps in finding bugs that normal testing might miss.
- Check the UI against Windows interface standards and the like
Exploratory testing can also be performed by checking against user-interface standards. There are set standards laid down for the user interfaces to be developed. These standards cover the look and feel of the interfaces the user interacts with; the user should be comfortable with any screen he or she is working on. These aspects help the end user accept the system faster. By identifying the applicable user-interface standards, define an approach to test against them, because the application should be user-friendly.
- Identify expected results
The tester should know what he is testing for and the expected output for the given input. Unless the aim of the testing is known, the testing is of no use, because the tester may not succeed in distinguishing a real error from the normal work-flow. The tester needs to analyze what the expected output is for the scenario being tested.
- Identify the interfaces with other interfaces/external applications
In the age of component development and maximum re-usability, developers try to pick up the already developed components and integrate them. In some cases, it would help the tester explore the areas where the components are coupled. The output of one component should be correctly sent to other component. Hence, such scenarios or work-flows need to be identified and explored more. There may be external interfaces, like the application is integrated with another application for the data. In such cases, focus should be more on the interface between the two applications.
Posted by Sunflower at 11/09/2010 11:51:00 AM
Alpha testing is done at the software prototype stage, when the software is first available to run. The software has its core functionality in place, but complete functionality is not the aim; it is able to accept inputs and give outputs. Usually the most-used functionalities are the most fully developed. This test is conducted at the developer's site only. In the software development life cycle, the number of alpha phases required, depending on the functionalities, is laid down in the project plan itself.
During this stage the testing is not a thorough one, since only a prototype of the software is available. The basic installation and un-installation tests and the completed core functionalities are tested.
The aim of alpha testing is :
- to identify any serious errors.
- to judge if the intended functionalities are implemented.
- to provide to the customer the feel of the software.
A thorough understanding of the product is gained at this stage. During this phase, the test plan and test cases for the beta phase, which is the next stage, are created. The errors reported are documented internally for the testers' and developers' reference; issues are usually not reported or recorded in any defect-management or bug-tracking system.
The role of the test lead is to understand the system requirements completely and to initiate the preparation of the test plan for the beta phase. The role of the tester is to provide input while there is still time to make significant changes as the design evolves, and to report errors to the developers.
Posted by Sunflower at 10/29/2010 10:42:00 AM
Labels: Aim, Alpha, Alpha testing, Applications, Defects, Errors, Functionality, Goals, Inputs, Outputs, Phases, Quality, Software, Software testing, Validation, Validation Phase
Preparing a unit test case document, commonly referred to as a UTC, is an important task in the unit testing activity. A complete UTC covering every possible test case leads to complete unit testing, and thus gives assurance of a defect-free unit at the end of the unit testing stage.
While preparing unit test cases, the following aspects should be kept in mind:
Expected functionality
Write test cases against each functionality that the unit being developed is expected to provide. It is important that user requirements be traceable to functional specifications, functional specifications to program specifications, and program specifications to unit test cases. Maintaining such traceability ensures that the application fulfills the user requirements.
Input Values
- Write test cases for each of the inputs accepted by the unit. Every input has a certain validation rule associated with it; write test cases to validate this rule.
- There can be cross-field validations, in which one field is enabled depending on the input of another field. Test cases for these should not be missed.
- Write test cases for the minimum and maximum values of each input.
- Variables that hold data have limits on their values. In the case of computed fields, it is very important to write test cases that arrive at the upper limit value of the variables.
- Write test cases to check arithmetic expressions with all possible combinations of values.
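As an illustration of the boundary-value cases above, consider a hypothetical unit that validates an age field with an assumed valid range of 0 to 120. The test cases cover the minimum, the maximum, and the values just outside each boundary:

```python
def validate_age(age):
    """Hypothetical unit under test: ages 0-120 inclusive are valid."""
    if not isinstance(age, int):
        raise TypeError("age must be an integer")
    if age < 0 or age > 120:
        raise ValueError("age out of range 0-120")
    return age

# Boundary-value test cases: both boundaries accepted...
assert validate_age(0) == 0        # lower boundary
assert validate_age(120) == 120    # upper boundary

# ...and values just outside the range rejected.
for bad in (-1, 121):
    try:
        validate_age(bad)
        raise AssertionError(f"{bad} should have been rejected")
    except ValueError:
        pass  # the validation rule was enforced

print("boundary tests passed")
```

Defects cluster at the edges of valid ranges, which is why each boundary gets its own explicit test case rather than relying on a single mid-range value.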
Output Values
- Write test cases to generate scenarios that produce all types of output values expected from the unit.
Screen Layout
Screen/report layouts must be tested against the requirements, ensuring that pages and screens are consistent.
Path Coverage
A unit may have conditional processing that results in various paths the control can traverse through. Test cases must be written for each of these paths.
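For example, a unit with a single conditional has two paths, and each path needs its own test case. The shipping-cost function below is a made-up example used only to show one-test-per-path coverage:

```python
def shipping_cost(order_total):
    """Hypothetical unit: free shipping at or above 100, flat fee otherwise."""
    if order_total >= 100:
        return 0.0    # path 1: threshold met
    else:
        return 9.99   # path 2: threshold not met

# One test case per path through the conditional:
assert shipping_cost(150) == 0.0    # exercises path 1
assert shipping_cost(50) == 9.99    # exercises path 2
assert shipping_cost(100) == 0.0    # boundary: path 1 again
print("both paths covered")
```

With nested or chained conditionals the number of paths multiplies, so enumerating them explicitly while writing the UTC keeps coverage from silently slipping.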
Assumptions and Transactions
A unit may assume certain things in order to function. Test cases must be written to check that the unit reports an error if such assumptions are not met.
In the case of database applications, test cases should be written to ensure that transactions are properly designed and that inconsistent data can never be saved to the database.
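A sketch of such a transaction test, using Python's built-in sqlite3 module (the accounts schema and transfer routine are hypothetical): a failure midway through a transfer must leave no partial data behind.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 0)])
conn.commit()

def transfer(conn, src, dst, amount, fail_midway=False):
    """Debit src and credit dst inside a single transaction."""
    try:
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                     (amount, src))
        if fail_midway:
            raise RuntimeError("simulated crash between debit and credit")
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                     (amount, dst))
        conn.commit()
    except RuntimeError:
        conn.rollback()  # undo the partial debit

def balances(conn):
    return dict(conn.execute("SELECT name, balance FROM accounts"))

# Test case 1: a failed transfer must leave the data untouched.
transfer(conn, "alice", "bob", 40, fail_midway=True)
assert balances(conn) == {"alice": 100, "bob": 0}

# Test case 2: a successful transfer commits both updates together.
transfer(conn, "alice", "bob", 40)
assert balances(conn) == {"alice": 60, "bob": 40}
print("transaction tests passed")
```

The first assertion is the important one: if the rollback were missing, the debit would survive the simulated crash and the database would hold inconsistent data.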
Abnormal terminations and Error messages
Test cases should be written to test the behavior of the unit in case of abnormal termination.
Error messages should be short, precise and self-explanatory. They should be properly phrased and free of grammatical mistakes.
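For instance, a test case can assert both that the unit fails cleanly on bad input and that the message it raises is precise. The divide function here is hypothetical, invented for the sketch:

```python
def safe_divide(numerator, denominator):
    """Hypothetical unit: divide, failing with a clear message on zero."""
    if denominator == 0:
        raise ZeroDivisionError("Denominator must not be zero.")
    return numerator / denominator

# Abnormal-termination test: the unit must fail, and fail clearly.
try:
    safe_divide(10, 0)
    raise AssertionError("expected an error for zero denominator")
except ZeroDivisionError as exc:
    # The error message should be short, precise and self-explanatory.
    assert str(exc) == "Denominator must not be zero."

assert safe_divide(10, 4) == 2.5  # normal path still works
print("error-handling test passed")
```

Pinning the exact message text in the assertion also guards against accidental edits that would make the message vague or ungrammatical later on.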
Posted by Sunflower at 10/14/2010 01:50:00 PM
Labels: Conditions, Coverage, Functionality, Inputs, Layout, Outputs, Paths, Phase, Phases, Report, Screen, Test cases, Unit, Unit testing, Validation, Values
A walkthrough is a static analysis technique in which a designer or programmer leads members of the development team and other interested parties through a segment of documentation or code, and the participants ask questions and make comments about possible errors, violations of development standards, and other problems.
The objectives of a walkthrough can be summarized as follows:
- Detect the errors early.
- Train participants and exchange technical information among the project teams taking part in the walkthrough.
- Increase the quality of the project, thereby improving morale of the team members.
The participants in a walk-through assume the roles of walk-through leader, recorder, author and team member.
For a review to be considered a systematic walk-through, a team of at least two members shall be assembled. Roles may be shared among the team members: the walk-through leader or the author may serve as the recorder, and the walk-through leader may be the author.
Individuals holding management positions over any member of the walk-through team shall not participate in the walk-through.
Input to the walk-through includes:
- A statement of objectives for the walk-through.
- The software product being examined.
- Standards that are in effect for the acquisition, supply, development, operation and/or maintenance of the software product.
- Any regulations, standards, guidelines, plans, and procedures against which the software product is to be inspected.
- Anomaly categories.
The walk-through shall be considered complete when:
- The entire software product has been examined.
- Recommendations and required actions have been recorded.
- The walk-through output has been completed.
Posted by Sunflower at 10/03/2010 11:58:00 AM
Labels: Analysis, Code, Inputs, Programmer, Software, Software testing, Strategies, Strategy, Technical Reviews, Verification, Walkthroughs