
Sunday, June 23, 2013

Explain the various File Operations

A number of operations can be carried out on a file, but there are six basic file operations. A file is an abstract data type (ADT), so it is defined in terms of the operations that can be performed on it, and the operating system provides system calls for these operations.

Following are the six basic operations:

1. Creation of a file: 
- This operation involves two steps. 
- First, sufficient space has to be found for the file in the file system. 
- Second, an entry for the new file has to be made in the directory.

2. Writing to a file: 
- To write data to a file, a system call is made with the file name and the data to be written as its arguments. 
- The system keeps a write pointer at the location in the file where the next write operation is to take place. 
- This pointer is updated whenever a write operation occurs.

3. Reading from a file: 
- As with writing, to read from a file a system call is made with the name of the file and the location of the content to be read as its arguments. 
- Here, instead of a write pointer, a read pointer indicates the location where the next read operation is to take place. 
- Because a process is either reading from or writing to the file, the current location is kept as a per-process current-file-position pointer. 
- The same pointer can be shared by the read and write operations, which reduces system complexity and saves space.

4. Re-positioning within a file: 
- The system searches the directory for the appropriate entry. 
- When it is found, the current-file-position pointer is set to the given position. 
- This operation does not involve any actual I/O. 
- It is also known as a file seek.

5. Deletion of a file: 
- To delete a file, the system searches the directory for the appropriate entry. 
- When it is found, the space held by the file is released so that it can be reused by other files, and the directory entry is removed.

6. Truncating a file: 
- Sometimes you may want to delete only the contents of a file while keeping its attributes. 
- Deleting the file and recreating it is not an efficient solution. 
- This operation lets you erase the contents of the file but keep its attributes. 
- The length attribute, however, is reset to zero, and the file space is released.


The six basic file operations above constitute the minimal file operation set. They are primitives which, when combined, can perform secondary operations such as copying. The operating system maintains an open-file table that stores information about all files that are currently open; when a file is closed, its entry is deleted from this table. A file has to be opened explicitly with the open() call before it is used: the file name is passed as an argument, the system looks it up in the directory, and an entry is made in the open-file table. Each file also has access rights, and a process can perform only those operations that are permitted by the access rights of the file.
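
To make the mapping to system calls concrete, here is a minimal Python sketch (the file name and data are arbitrary examples); Python's os module is only a thin wrapper over the underlying create, write, seek, read, truncate and delete calls:

    import os

    path = "example.dat"                        # arbitrary example file name

    fd = os.open(path, os.O_RDWR | os.O_CREAT)  # create: allocate space and make a directory entry
    os.write(fd, b"hello file systems")         # write: data goes at the current write position
    os.lseek(fd, 0, os.SEEK_SET)                # reposition (seek): move the current-file-position pointer
    data = os.read(fd, 5)                       # read: returns b"hello" and advances the pointer
    os.ftruncate(fd, 0)                         # truncate: contents erased, attributes kept, length set to zero
    os.close(fd)                                # close: the entry leaves the open-file table
    os.remove(path)                             # delete: release the space and remove the directory entry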


Wednesday, May 15, 2013

What is the Process Control Block? What are its fields?


Task control block, switch frame and task struct are different names for one and the same thing, commonly called the PCB or process control block. 
This data structure belongs to the kernel of the operating system and holds the information required for managing a specific process. 
- The process control block is how a process is represented inside the operating system. 
- Because managing the computer's resources on behalf of processes is part of its purpose, the operating system needs to be kept regularly informed about the status of every resource and process. 
- The common approach is to create and update status tables for each process, resource and related object, such as files and I/O devices:
1.  Memory tables are one such example: they hold information about how main memory and virtual (secondary) memory have been allocated to each process. They may also record the authorization each process has for accessing shared memory areas.
2.   I/O tables are another example. Their entries record whether a device required by a process is available or has already been assigned to it. The status of I/O operations in progress is also recorded here, along with the addresses of the memory buffers they are using.
3.   Then we have the file tables, which contain information about the status of files and their locations in memory.
4. Lastly, we have the process tables, which store the data the operating system requires for the management of processes. At least part of the process control block is kept in main memory, although its layout and location vary with the operating system and the techniques it uses for memory management.
- The physical manifestation of a process consists of its instructions, its static and dynamic program data areas, task-management information and so on, and this is what the process control block describes. 
- The PCB plays a central role in process management. 
- Operating-system utilities such as memory, performance-monitoring, resource-access and scheduling utilities access and modify it. 
- The current state of the operating system is defined by its set of process control blocks. 
- Data structuring inside the kernel is carried out in terms of PCBs. 
- In today's sophisticated multi-tasking operating systems, many different types of data items are stored in the process control block. 
- These are the data items necessary for proper and efficient process management. 
- Even though the details of the PCB depend on the system, the common parts can be identified and classified into the following three classes:
1.  Process identification data: This includes the unique identifier of the process, which is usually a number. In multi-tasking systems it may also include the parent process identifier, the user identifier, a user group identifier and so on. These IDs are important because they let the OS cross-reference its tables.
2.   Process state data: This information captures the status of the process when it is not executing, which makes it easy for the operating system to resume the process from the appropriate point later. It includes the CPU status word, the CPU general-purpose registers, the stack pointer, frame pointers and so on.
3.   Process control data: This includes the process scheduling state, its priority value and the amount of time elapsed since its suspension.
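
As a hedged illustration of these three classes, here is a small Python sketch of a simplified PCB; the field names are illustrative only and are not taken from any particular kernel:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ProcessControlBlock:
        # 1. Process identification data
        pid: int
        parent_pid: int
        user_id: int

        # 2. Process state data, saved while the process is not executing
        program_counter: int = 0
        registers: List[int] = field(default_factory=lambda: [0] * 16)
        stack_pointer: int = 0

        # 3. Process control data
        state: str = "ready"        # e.g. new, ready, running, waiting, terminated
        priority: int = 0
        time_suspended: float = 0.0

    # The kernel's process table is essentially a collection of such blocks.
    process_table = {1: ProcessControlBlock(pid=1, parent_pid=0, user_id=0)}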


Tuesday, February 5, 2013

How are test cases generated using Testing Anywhere?


The Testing Anywhere software product helps you create test cases using any of five methods, namely:
- Web recording
- Object recording
- Image recognition
- Smart recording and
- Editor

- While the editor can be used by advanced users to edit test cases, the wizard is for those who do not have any programming skills. 
- Another facility is the creation of executable files. 
- These files are created in such a way that they can be executed on a machine located in a remote location. 
- IT and business processes are created and managed with the workflow designer.
In this article we explain how test cases can be generated using the Testing Anywhere product.

How are test cases generated using Testing Anywhere?

- The Testing Anywhere product comes with a SMART recorder that you can use to create new tests. 
- Clicking the 'record' button drops down a list of options from which you select the type of application you are testing, i.e., a web or Windows application. 
- Then you perform the activities on the system that you want recorded. 
- You can insert a checkpoint wherever you want by clicking the checkpoint button. 
- When you are done recording the tests, click the stop button. 
- You can then save these activities as a test by clicking the save button. 
- You can play back these test scripts whenever you want, any number of times, by clicking the run button. 
- With Testing Anywhere it does not matter if the size or location of the application window changes between recording and replay. 
- Its SMART automation technology adjusts to those changes automatically.
- Another advantage of Testing Anywhere is that you can work on any number of applications.
- You do not have to finish working on one application before moving on to the next; you can switch between applications freely. 
- Testing Anywhere even works while your computer is locked. 
- It does so because of an auto-login capability that is unique to this software. 
- With it, you can schedule tests and they will be executed even when your system is locked. 
- If this capability is enabled and the system is locked, Testing Anywhere unlocks it, executes the test and then locks the computer again. 
- The auto-login capability also lets you run tests in stealth mode while the system is locked. To do this, select:
Properties → Security → Run this test in stealth mode

- The execution is then hidden. 
- Further, if you want to lock the keyboard and mouse, you can use the 'disable mouse and keyboard for this test' option. 
- If you want to stop the execution of a test, just hold down the Escape key for a few seconds. 
- Using the Pause key on the keyboard you can pause the test while it is executing, and clicking the resume button continues the execution. 
- Another thing that makes Testing Anywhere unique is that it can run tests in the background. 
- A number of advanced technologies let you run tests in the background, such as:
1.    The web recorder
2.    The object recorder
3.    A number of other powerful actions
- However, some tests, such as those recorded with the standard recorder, require mouse and keyboard control and therefore cannot be executed in the background. 
- There are a few other exceptions as well, such as screenshot and image-comparison commands, which cannot run in the background.


Thursday, November 1, 2012

What is Keyword Driven Testing? What are the base requirements for keyword-driven testing?


Table-driven testing, action-word testing and keyword-driven testing are different names for one and the same thing: a software testing methodology that has been especially designed for automated testing.

However, the approach followed by this testing methodology is quite different and involves dividing the test creation process into two distinct phases, namely:
  1. Planning phase and the
  2. Implementation phase

What is Keyword Driven Testing?

- It has been specially designed for automated testing.
- That does not mean it cannot be employed for manual testing; it can be used for manual testing just as well. 
- The biggest advantage of automated tests is re-usability, which eases the maintenance of tests developed at a high level of abstraction. 
- Put simply, one or more atomic test steps together form a keyword.
- The first phase, the planning phase, involves the preparation of the testing tools and test resources. 
- The second phase, the implementation phase, depends on the framework or tool and therefore differs accordingly. 
- Often the automation engineers implement a framework that provides keywords such as 'enter' and 'check'. 
- This makes it easy for test designers who have no programming knowledge to design test cases based on keywords that the engineers have already defined and implemented in the planning phase. 
- The designed test cases are executed via a driver. 
- The purpose of this driver is to read each keyword and execute the corresponding code (see the sketch after this list).
- Other testing methodologies put everything directly into the implementation phase instead of performing test design and test engineering separately.
- In such a case, test automation is confined to the test design.
- Some keywords are created using tools that already have the necessary code written for them, such as 'edit' and 'check'. 
- This helps cut down the number of engineers needed for test automation, since the implementation of the keyword becomes part of the tool.
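
As referenced in the driver bullet above, here is a minimal Python sketch of the idea; the keywords 'enter' and 'check', the driver and the test table are hypothetical examples rather than part of any specific framework:

    # Implementation phase: automation engineers code each keyword once.
    state = {}

    def enter(field, value):
        state[field] = value                 # simulate typing a value into a named field

    def check(field, expected):
        assert state.get(field) == expected, f"{field}: expected {expected!r}, got {state.get(field)!r}"

    KEYWORDS = {"enter": enter, "check": check}

    # Planning phase: test designers write keyword tables, no programming required.
    test_case = [
        ("enter", "username", "alice"),
        ("check", "username", "alice"),
    ]

    # The driver reads each keyword and executes the corresponding code.
    for keyword, *args in test_case:
        KEYWORDS[keyword](*args)
    print("test case passed")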

Advantages of Keyword-Driven Testing

There are some advantages of keyword-driven testing, as stated below:
  1. Concise test cases.
  2. Test cases are readable by the stakeholders.
  3. Test cases are easy to modify.
  4. New test cases can easily reuse existing keywords.
  5. Keywords can be reused across multiple test cases.
  6. Independence from specific programming languages and tools.
  7. Division of labor.
  8. Less need for tool and programming skills.
  9. Lower domain skills are needed for keyword implementation.
  10. Abstraction in layers.

Disadvantages of Keyword-Driven Testing

1.   Longer time to market compared to manual testing.
2.   Initially a high learning curve.

Base Requirements of Keyword-Driven Testing

  1. Full separation of the test development and test automation processes: The separation of these two processes is essential for test automation, since the two have very different skill requirements. The fundamental idea is that testers do not have to be programmers; they should be able to define test cases that can be implemented without having to bother about the underlying technology.
  2. The scope of the test cases must be clear and differentiated: Test cases must not deviate from their scope.
  3. The right level of abstraction must be used for writing the tests: Tests may be written at a lower user-interface level, at a higher business level, and so on.


Saturday, October 13, 2012

What are the file types used in Silk Test?


Silk Test works with many file types, which are discussed in this article. Various types of files are used by Silk Test in the process of automation testing. 
Each file type serves a specific purpose, or we can say function; for example, certain files are required by Silk Test for the creation and execution of tests. 
Now we shall discuss the different file types one by one:

  1. Test plan (.pln): This file type in Silk Test facilitates the creation of test suites, but a test suite can be created only if this file type is combined with test scripts. For example:
Testone
          Script: test.t
          Test case: one
Testtwo
          Script: test.t
          Test case: second
In the example above, the main test script file is test.t, while 'one' and 'second' are the names of the test cases in that script file. Whenever this plan file is run, the test cases one and second are automatically picked up from the main script file and executed.
  2. Test script (.t): This file type is used for writing the actual test scripts. When such a file is run, the test cases one and second are executed in a predefined order and at the end the Notepad application (the application under test in this example) is closed.
  3. Frame file (.inc): An abstraction layer defines the windows and controls present in the AUT, or application under test. These windows and controls are referenced later in the .t files.
  4. Result file (.res): This file type holds all the test-run results, with the names of all passed and failed tests along with suitable descriptions. It may or may not contain log messages.
Leaving out the result file (.res) type, almost all the other file types are text based; those files can therefore be edited either in the Silk Test IDE or with a plain text editor.
Since the Silk Test version released in 2006, the files can be saved in either of two formats, namely:
  1. ANSI format or
  2. UTF-8 format
  5. Project (.vtp): This 'Verify Test Project' file stores the names and locations of the files currently in use by a project. An associated file with the .ini extension serves as its initialization file.
  6. Data-driven script (.g.t): This file type stores the data-driven test cases, which pull their data from databases.
  7. Suite (.s): With the help of this file type, several test scripts can be executed sequentially.
  8. Text file (.txt): This is an ASCII file type and can be used for the following purposes:
a)   Storing data that can be used to drive a test.
b)   Printing a file into another document.
c)   Accompanying the test automation files as a read-me file.
d)   Transforming a tab-delimited plan into a Silk Test plan.

All of the source files discussed above are compiled and stored as 'pseudo-code object files' at either of two times:
  1. When the files are loaded, or
  2. When a change has occurred in them.


Wednesday, September 12, 2012

What are Virtual Objects in Quick Test Professional?


- The concept of virtual objects in QuickTest Professional comes into play when object identification fails or an error such as "object not found" is generated. 
- This happens when, even though the actions were recorded, QuickTest Professional has difficulty recognizing the object during playback, which causes the whole script to fail. 
- To resolve such object-recognition issues, QuickTest Professional provides a special kind of object termed a "virtual object". 
- In some cases QuickTest Professional is unable to recognize the area of the object, and a wizard called the "virtual object wizard" is used to map that area. 
- All the virtual objects created in this way are stored in the virtual object manager. 
- Once QuickTest Professional has learned about the virtual object, it can record on the actual object very well. 
- A virtual object can be created easily by going to the Tools menu, selecting the virtual object list and then clicking the new virtual object option. 
- Even though virtual objects are very helpful, there are some points to note about them:
  1. It is not possible to use the object spy on a virtual object.
  2. Only recording operations can be performed on virtual objects.
  3. You cannot treat labels and scroll bars as virtual objects.
- To disable virtual object mode, go to the Tools menu, then Options, then General, and check the option that says "disable recognition of virtual objects while recording".
- Using virtual objects is just one way of handling object-recognition issues. The other two are:
  1. Analog recording and
  2. Low-level recording.
- Basically, virtual objects help in recognizing objects that behave like standard objects but still cannot be identified by QuickTest Professional. 
- Such objects are mapped to standard classes with the help of the virtual object mapping wizard. 
- During a run session, QuickTest Professional emulates the user actions on the virtual objects. 
- The virtual object is portrayed as a standard class object in the test results. 
- The virtual object wizard lets you select the standard object class to which the object is to be mapped, after which the boundaries of the virtual object can be marked with the crosshair pointer. 
- After that, a test object can be selected to be the parent of the virtual object. 
- A group of virtual objects stored under one descriptive name is termed a virtual object collection. 
- While using virtual objects during a run session, always make sure that the size and location of the application window are exactly the same as they were in recording mode. 
- If this is not taken care of, the coordinates of the virtual object may vary, affecting the success of the whole run session. 
- Another point to note is that it is not possible to insert checkpoints on a virtual object. 
- To perform an active-screen operation on a virtual object, you must first record it and save its description properties in the object repository.


Sunday, August 19, 2012

How do you program tests with Test Script Language (TSL)?


The tests created by WinRunner are composed of statements coded in a special language from Mercury Interactive called Test Script Language, or TSL for short. 
- The created tests can be further enhanced using the TSL language.
- Remember that this feature, which allows modifying the test scripts via TSL, is not available in the WinRunner runtime edition.
- TSL statements are generated whenever a test is recorded by WinRunner. 
- Each line of the test generated by WinRunner is a TSL statement and, as in many other programming languages, is terminated by a semicolon.
- Each TSL statement represents some user input, such as a mouse click or keyboard input, to be fed to the software system or application under test. 
- The length of a TSL statement varies from line to line of the test script. 
- TSL can be thought of as a C-like programming language designed for the creation of test scripts. 
It combines general programming elements such as those listed below with specific testing functions:
  1. Variables
  2. Control-flow statements
  3. User-defined functions
  4. Arrays and so on.
- A recorded script can be enhanced further just by typing programming elements into the WinRunner test window.
- However, TSL proves more convenient than C in that there is no need to compile the written code; the test is simply written or recorded and immediately executed. 
Four types of functions constitute TSL:
  1. Analog functions
  2. Context-sensitive functions
  3. Customization functions and
  4. Standard functions.

How can TSL aid in the creation of tests?

 
- Tests developed using TSL are basically descriptive in nature, and this style of programming is therefore called descriptive programming. 
- A logical name is assigned to an object the moment it is added to the GUI map. 
- Once a GUI object has its place in the GUI map, it becomes quite easy to add TSL statements that act on it. 
- The logical name given to the object is chosen so that it describes the GUI object. 
- During test execution, objects are referred to by their logical names, while the other property values stored in the map are used for their identification. 
- The logical names of the GUI objects can be modified further so that it becomes easier for you to follow the tests. 
- While programming with TSL, statements that perform functions on objects can also be added without making a reference to the GUI map. 
- Doing so requires entering some additional information describing the object so that it can be uniquely identified during the test run. 
- This is the essence of descriptive programming using TSL. 
- If, while programming, you forget which properties and values identify an object, you can use the GUI spy to view the object's current properties and values. 
- Stress conditions can also be created while programming with TSL; these are designed to determine the limits of the software system or application. 
- Stress conditions are created by writing a loop that executes a block of TSL statements the number of times specified by the user.
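
Real stress loops would be written in TSL inside WinRunner; as a language-neutral illustration only, here is the same looping idea sketched in Python, with the operation and the iteration count invented for the example:

    def operation_under_test(i):
        # stand-in for a block of test statements driving the application
        return i * 2

    iterations = 1000                          # repeat count specified by the user
    for i in range(iterations):
        result = operation_under_test(i)
        assert result == i * 2, f"failure detected on iteration {i}"
    print(f"survived {iterations} iterations")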


Friday, August 10, 2012

What are the benefits of creating multiple actions within any virtual user script?


One of the most frustrating things is when your software system or application crashes, once the whole software development life cycle (SDLC) is over, the moment the user tries to install it. 
You can imagine what impression it leaves of your work and efforts on the user!
- Time wasted!
- Reputation spoiled!
- No benefits!

Nowadays, to avoid such situations and inconvenience to users, a standard QA (quality assurance) procedure has been defined for testing software systems and applications. 
In previous years the task of software testing was thought to be an ultimate drudgery, but it has now been eased considerably by automated testing tools, one of them being the VU, or virtual user. 
In this article we discuss the virtual user as well as the benefits of creating multiple actions within a virtual user script.

What is a Virtual User?


- Emulating a real-world human user, which would otherwise take up a lot of time, becomes an easy task with the help of a virtual user.
- A virtual user can perform most of the actions performed by a human user, such as:
  1. Clicking the mouse.
  2. Hovering or pointing the mouse over some object on the screen.
  3. Typing words and commands.
  4. Selecting an object.
- The system on which the virtual user software package is installed acts as the host computer, and another computer is paired with it, called the target system. 
- The responsibility of the host system is to control the other computers in the network, while the target computer is programmed to receive instructions from the host. 
- In the virtual user environment, the software compiles and runs scripts called virtual user scripts.

Benefits of creating multiple actions within a virtual user script


Now coming to the benefits of creating multiple actions within a virtual user script:
  1. Creating multiple actions within a VU script becomes quite effective for tests that are highly repetitive and have many variations.
  2. With the multiple-action feature, more tasks can be performed which would otherwise be tedious for human testers; a system does not tire, which makes it an ideal tester for such work.
  3. Another benefit of multiple actions in one virtual user script is that tests can be created which can run against almost every internal build of the software system or application, in stages such as:
a)   Development stage
b)   Alpha stage
c)   Beta stage
d)   Final release stage

  4. With a virtual user script containing multiple actions, it can easily be verified that no functionality was broken by code changes made to an earlier version of the software system or application.
  5. With virtual user scripts containing multiple actions, an automatic bug-verification mechanism can be set up by imitating the steps through which a bug is reproduced.
  6. The automatic bug-verification test thus developed can be used against every release of the software system or application, ensuring that the bug stays fixed and has no chance of sneaking back into the program code.
  7. These multiple-action scripts can be split into single-action scripts whenever required.


Friday, April 13, 2012

What is a mock object? What are the reasons for use?

Perhaps most of us have heard about mock objects but do not know what they actually are! This article is dedicated to mock objects and the reasons for which they are used.

First we will clear up the concept of a mock object and then move on to the reasons for its use. Mock objects are a concept from object-oriented programming.

"Mock objects are simulated objects that have the ability to mimic the behavior of real objects in a defined and controlled way. Mock objects are used by a programmer or developer to check another object by testing it against the mock object."

Reasons for use of mock objects



- Mock objects, as discussed above, are able to simulate the behaviour of a real (non-mock) object, be it very simple or very complex.

- This property makes them an essential and convenient tool for effective unit testing, which involves testing software modules in isolation.

- It should not be assumed that mock objects are used in place of the original non-mock objects everywhere.

- They are used in their place only during the testing phase, and only when it is impossible or impractical to incorporate the real object into the unit tests.

- In some cases the presence of one object is required for testing another one; in such cases mock objects play a very important role.

Characteristics of Mock Object


Below are some characteristics which, if present in a non-mock object, make it important to use a mock object in its place:

1. The object supplies non-deterministic results, such as the current temperature or the current time.

2. The object has states that are difficult to reproduce or recreate, such as a network error.

3. The object is too slow, e.g., a complete database that requires initialization before the test starts.

4. The object does not exist in a practical form.

5. The object tends to change its behavior.

6. The object does not yet exist.

7. The object would need to include methods and information meant only for testing rather than for its actual tasks.

8. The object is difficult to set up.

9. The object is a user interface.

10. The object cannot be queried.

We will give an example so that the mock object concept becomes clearer!

- Consider a mobile alarm clock that causes an alarm bell to ring at the time you specify.
- Suppose it gets the current time from somewhere outside; how would you test this alarm?
- Without a mock, you would have to wait until the specified time for the alarm bell to go off in order to check whether or not the alarm is working correctly.
- Mock objects are sometimes used as mediators.
- In such cases a mock domain comes into play rather than the actual domain.
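
To make the alarm-clock example concrete, here is a minimal Python sketch using unittest.mock; the AlarmClock class and its time-source interface are invented for illustration:

    from unittest.mock import Mock

    class AlarmClock:
        def __init__(self, time_source):
            self.time_source = time_source     # the real object would read the phone's clock

        def should_ring(self, alarm_time):
            return self.time_source.current_time() >= alarm_time

    # The mock stands in for the non-deterministic time source, so the test
    # does not have to wait for the real clock to reach the alarm time.
    fake_clock = Mock()
    fake_clock.current_time.return_value = "07:00"

    alarm = AlarmClock(fake_clock)
    assert alarm.should_ring("07:00") is True
    assert alarm.should_ring("08:00") is False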

Features of Mock Object


Below are some of the features of a mock object:
- Easy to create
- Easy to set up
- It is quick
- Deterministic
- Easy to cause behavior
- Does not have any direct user interface
- Can be queried directly

Mock objects did not come into existence in a single day; it took a great deal of experimentation, discussion and collaboration to put forward the idea of using a mock object.


Wednesday, March 21, 2012

Data flow testing is a white box testing technique - Explain?

A program is in an active state whenever data is flowing through it. Without data flowing around the program, a software system or application could not do anything.

Hence, data flow is an extremely important aspect of any program, since it is what keeps a program going. This data flow needs to be tested like any other aspect of the software system or application, and this article is therefore dedicated to data flow testing.

What is Data Flow Testing?

- Data flow testing is categorized as a white box testing technique because the tester needs in-depth knowledge of the whole software system or application.

- Data flow testing cannot be carried out without a control flow graph; without that graph, data flow testing cannot explore the unreasonable or unexpected things, i.e., anomalies, that can influence the data of the software system or application.

- Taking these anomalies into consideration helps in defining strategies for selecting test paths, which play a great role in filling the gap between branch or statement testing and complete path testing.

- Data flow testing applies the testing strategies chosen in this way to explore the events regarding the use of data that occur in a sequential way.

- It is a way of determining whether every data object has been initialized before it is used, and whether all data objects are used at least once during the execution of the program.

Classification of Data Objects
The data objects are classified into various types based on their use:

- Defined, created or initialized data objects, denoted by d.
- Killed, undefined or released data objects, denoted by k.
- Used data objects (in predicates, calculations, etc.), denoted by u.
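
As a small worked example (Python, invented for illustration), the comments mark where each variable is defined (d), used (u) and effectively killed (k); a data-flow test would, for instance, cover the def-use pair between the definition of total before the loop and its use in the return, including the empty-list path where count becomes zero:

    def average(values):
        total = 0                  # d: total defined
        for v in values:           # d: v defined by the loop, u: values used
            total = total + v      # u then d: total used and redefined
        count = len(values)        # d: count defined, u: values used
        return total / count       # u: total and count used; locals killed (k) on return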

Critical Elements for Data Flow Testing

- The critical elements for data flow testing are arrays and pointers.

- These elements should not be underestimated, since underestimation may miss some DU pairs; they should not be overestimated either, since that would introduce infeasible test obligations.

- Underestimation is preferable to overestimation, since overestimation causes more expense to the organization.

- Data flow testing also aims at distinguishing between the important and the less important paths.

- During data flow testing, pragmatic compromises often need to be made, since there are many unpredictable properties and an exponential blow-up in the number of paths.

Anomaly Detection under Data Flow Testing

There are various types of anomaly detection carried out under data flow testing:

1. Static anomaly detection
This analysis is carried out on the source code of the software program without actually executing it.

2. Dynamic anomaly detection
This is the opposite of static detection, i.e., it is carried out on a running program.

3. Anomaly detection via compilers
Such detection is possible thanks to static analysis. Certain compilers, such as optimizing compilers, can even detect dead variables. Static analysis alone, however, cannot detect all dead variables, since detecting unreachable code is unsolvable in the general case.

Other factors:
Several other factors play a great role in data flow testing:
1. Data flow modelling based on the control flow graph
2. Simple path segments
3. Loop-free path segments
4. DU path segments
5. Def-use associations
6. Definition-clear paths
7. Data flow testing strategies


Wednesday, December 8, 2010

What are Test Case Documents and how to design good test cases?

Designing good test cases is a complex art. The complexity comes from three sources:
- Test cases help us discover information. Different types of tests are more effective for different classes of information.
- Test cases can be good in a variety of ways. No test case will be good in all of them.
- People tend to create test cases according to certain testing styles, such as domain testing or risk-based testing. Good domain tests are different from good risk-based tests.
A test case specifies the pretest state of the IUT (implementation under test) and its environment, the test inputs or conditions, and the expected result. The expected result specifies what the IUT should produce from the test inputs. The specification includes messages generated by the IUT, exceptions, returned values, and the resultant state of the IUT and its environment. Test cases may also specify initial and resulting conditions for other objects that constitute the IUT and its environment.
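
As a hedged illustration, a test case for a hypothetical withdraw() function might record the pretest state, the test input and the expected result like this (Python unittest; the function and values are invented):

    import unittest

    def withdraw(balance, amount):
        if amount > balance:
            raise ValueError("insufficient funds")
        return balance - amount

    class WithdrawTestCase(unittest.TestCase):
        def test_withdraw_within_balance(self):
            balance = 100                        # pretest state of the IUT
            result = withdraw(balance, 30)       # test input
            self.assertEqual(result, 70)         # expected result / resultant state

        def test_withdraw_over_balance_raises(self):
            with self.assertRaises(ValueError):  # the expected exception is part of the specification
                withdraw(100, 150)

    if __name__ == "__main__":
        unittest.main()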

A scenario is a hypothetical story, used to help a person think through a complex problem or system.

Characteristics of Good Scenarios


A scenario test has five key characteristics: it is (1) a story that is (2) motivating, (3) credible, (4) complex and (5) easy to evaluate.

The primary objective of test case design is to derive a set of tests that have the highest likelihood of discovering defects in the software. Test cases are designed based on the analysis of requirements, use cases and technical specifications, and they should be developed in parallel with the software development effort.

A test case describes a set of actions to be performed and the results that are expected. A test case should target specific functionality or aim to exercise a valid path through a use case; this should include invalid user actions and illegal inputs that are not necessarily listed in the use case. How a test case is described depends on several factors, e.g. the number of test cases, the frequency with which they change, the level of automation employed, the skill of the testers, the selected testing methodology, staff turnover, and risk.

