
Thursday, July 18, 2013

What is a routing algorithm in the network layer?

About Routing
- Routing is the process of selecting the paths in a network along which data and network traffic are sent. 
- Routing is a common process carried out in a number of networks such as transportation networks, telephone networks (circuit switching) and electronic data networks (for example, the Internet). 
- The main purpose of routing is to direct packet forwarding from the source to the destination via intermediate nodes. 
- These nodes are hardware devices such as gateways, bridges, switches, firewalls, routers and so on. 
- A general-purpose system that does not have any of these specialized routing components can also participate in routing, but only to a limited extent.

But how does a router know where packets have to be routed? 
- The information about destination addresses is found in a table called the routing table, which is stored in the memory of the routers. 
- These tables store records of routes to a number of destinations on the network. 
- Therefore, construction of the routing tables is an important part of an efficient routing process. 
- Routing algorithms are used to construct this table and to select the optimal path or route to a particular destination. 

- A majority of routing algorithms are based on single-path routing techniques, while a few others use multi-path routing techniques. 
- Multi-path routing allows alternative paths to be used if one path is not available. 
- In some cases, the algorithm may discover equal or overlapping routes. 
- In such cases the following three criteria are considered for deciding which route is to be used (see the sketch after this list):
  1. Administrative distance: This criterion applies when different routing protocols are being used. The route with the lower administrative distance is preferred.
  2. Metric: This criterion applies when only one routing protocol is being used throughout the network. The lower-cost route is preferred.
  3. Prefix length: This criterion does not depend on whether one protocol or many different protocols are involved. The route with the longer subnet mask (the more specific prefix) is preferred.
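The sketch below is a rough illustration of how these three criteria combine, not any particular router's implementation; the routes, administrative distances and metrics are all made up. The longest matching prefix wins first, then the lowest administrative distance, then the lowest metric.

```python
# Illustrative route selection: longest prefix first, then lowest
# administrative distance, then lowest metric. All values are hypothetical.
import ipaddress

routes = [
    {"prefix": "10.0.0.0/8",  "admin_distance": 120, "metric": 4},   # e.g. learned via RIP
    {"prefix": "10.1.0.0/16", "admin_distance": 110, "metric": 20},  # e.g. learned via OSPF
    {"prefix": "10.1.2.0/24", "admin_distance": 1,   "metric": 0},   # e.g. a static route
]

def best_route(destination, routes):
    dest = ipaddress.ip_address(destination)
    # Keep only the routes whose prefix actually contains the destination.
    matching = [r for r in routes if dest in ipaddress.ip_network(r["prefix"])]
    # Longer prefix wins; ties broken by lower administrative distance, then lower metric.
    return max(matching, key=lambda r: (ipaddress.ip_network(r["prefix"]).prefixlen,
                                        -r["admin_distance"],
                                        -r["metric"]))

print(best_route("10.1.2.7", routes))   # the /24 static route is preferred
```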
Types of Routing Algorithms

Distance Vector Algorithms: 
- In these algorithms, the basic algorithm used is the Bellman–Ford algorithm. 
- In this approach, a cost number is assigned to each of the links that exist between the nodes of the network.
- Information is sent from point A to point B over the route that results in the lowest total cost.
- The total cost is the sum of the costs of all the individual links in the route. 
- The manner of operation of this algorithm is quite simple.
- Each node learns from its immediate neighbours which destinations they can reach and at what cost, keeps the lowest-cost option for each destination, and advertises its own routes in turn, as in the sketch below.
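A minimal Bellman–Ford sketch, assuming a small made-up topology; it computes the lowest-cost distance from one node to every other node, whereas a real distance-vector protocol distributes this computation across the routers themselves.

```python
# A minimal Bellman-Ford sketch over an assumed topology (illustrative only).
# Links are given as (node_a, node_b, cost) and are treated as bidirectional.
links = [("A", "B", 1), ("B", "C", 2), ("A", "C", 5), ("C", "D", 1)]
nodes = {"A", "B", "C", "D"}

def bellman_ford(source):
    dist = {n: float("inf") for n in nodes}
    dist[source] = 0
    # At most |nodes| - 1 rounds of relaxation are needed.
    for _ in range(len(nodes) - 1):
        for a, b, cost in links:
            if dist[a] + cost < dist[b]:
                dist[b] = dist[a] + cost
            if dist[b] + cost < dist[a]:
                dist[a] = dist[b] + cost
    return dist

print(bellman_ford("A"))   # e.g. A: 0, B: 1, C: 3, D: 4
```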

Link-state Algorithms: 
- This algorithm works on a graph map of the network, which is supplied to it as input. 
- To produce this map, each node assembles information about which other nodes it can connect to in the network. 
- The router can then itself determine which path has the lowest cost and proceed accordingly. 
- The path is selected using a standard shortest-path algorithm such as Dijkstra's algorithm (sketched below). 
- This algorithm results in a tree whose root is the current node. 
- This tree is then used for the construction of the routing tables.
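A minimal Dijkstra sketch over an assumed link-state map, i.e. the adjacency information a router would build from flooded link-state advertisements; the topology here is made up.

```python
# A minimal Dijkstra sketch: shortest-path cost from the current node
# to every other node, using an assumed link-state map.
import heapq

# Hypothetical link-state map: node -> {neighbour: link cost}
graph = {
    "A": {"B": 1, "C": 5},
    "B": {"A": 1, "C": 2},
    "C": {"A": 5, "B": 2, "D": 1},
    "D": {"C": 1},
}

def dijkstra(source):
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > dist.get(node, float("inf")):
            continue  # stale heap entry
        for neighbour, link_cost in graph[node].items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_cost
                heapq.heappush(heap, (new_cost, neighbour))
    return dist

print(dijkstra("A"))   # e.g. A: 0, B: 1, C: 3, D: 4
```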

Optimized link state Routing Algorithm: 
- This is the algorithm that has been optimized to be used in the mobile ad-hoc networks. 
- This algorithm is often abbreviated to OLSR (optimized link state routing). 
- This algorithm is proactive and makes use of topology control messages to discover and disseminate link-state information across the mobile ad-hoc network. 


Sunday, June 2, 2013

Explain the various Disk Allocation methods? – Part 2

In this article we discuss the non-contiguous disk allocation methods, i.e., linked allocation and indexed allocation. 

What is a Linked Allocation?

- In linked allocation, a single file may be stored all over the disk, and these scattered parts are linked to each other just like a linked list. 
- A few bytes in each disk block are used for storing the address of the next linked block.
- This type of allocation has two major advantages, mentioned below:
  1. Simplicity and
  2. No requirement for disk compaction
- Since the allocation method is non-contiguous, it does not lead to any external fragmentation of the disk space.
- And since all the blocks are linked to each other, a block available anywhere on the disk can be used to satisfy the requests made by the processes. 
- Declaring the file size at creation time is not required with linked allocation. 
- There is no problem even if the file continues to grow, as long as free blocks are available, since new blocks can always be linked on. 
- As a consequence of all this, the need for disk compaction is eliminated (a toy sketch of the scheme follows). 
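A toy, in-memory sketch of the idea, purely for illustration; a real file system stores the next-block pointer inside the on-disk block itself and manages free blocks properly.

```python
# Toy linked allocation: each "disk block" stores some data plus the index
# of the next block, so a file's blocks can be scattered anywhere.
DISK_BLOCKS = 16
disk = [None] * DISK_BLOCKS            # None means the block is free
directory = {}                         # file name -> index of the first block

def allocate_linked(name, chunks):
    """Store each chunk in any free block and link the blocks together."""
    prev = None
    for chunk in chunks:
        block = disk.index(None)                   # any free block will do
        disk[block] = {"data": chunk, "next": None}
        if prev is None:
            directory[name] = block                # first block goes in the directory
        else:
            disk[prev]["next"] = block             # link from the previous block
        prev = block

def read_linked(name):
    """Follow the chain of pointers from the first block to the last."""
    block, data = directory[name], []
    while block is not None:
        data.append(disk[block]["data"])
        block = disk[block]["next"]
    return data

allocate_linked("report.txt", ["part-1", "part-2", "part-3"])
print(read_linked("report.txt"))   # ['part-1', 'part-2', 'part-3']
```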

Disadvantages of Linked Allocation

But there are disadvantages of using linked allocation. They are:

Direct access to the disk blocks becomes slow: 
To find a particular block of the file, the search has to begin at the starting point of the file and the successive pointers have to be followed until the destination block is reached.

Space required by the pointers: 
- Suppose that out of the 512 words of a disk block, 2 are required for storing the pointer; then about 0.39 percent (2/512) of the total disk is being used for pointers rather than for data. 
- This adds to the space requirement of the file blocks.

Reliability: 
- Because all the blocks are linked via pointers, if even one pointer gets damaged or is wrong, the subsequent blocks can become inaccessible. 
- This problem is quite common, and most operating systems therefore reduce the risk by keeping redundant copies of these pointers in a special file. 
- The basic idea here is to keep the list of pointers in the physical memory of the system. 
- This also allows faster access to the disk blocks.

What is Indexed allocation?

- The linked allocation method does not provide support for direct access, and this problem is solved by the indexed allocation method. 
- In this method, all the pointers are brought together in one place: the index. 
- These pointers together form the index block. 
- The address of this index block is then stored in the directory. 
- The nth pointer in the index block points to the nth block of the associated file. 
- The purpose of the index blocks is somewhat similar to that of page map tables; however, the two are implemented in different ways.
- With multilevel indexing, the first-level index is used for locating the second-level index, the second for locating the third, and the process may continue up to a fourth level. 
- But in most cases the first two levels of indexing are sufficient. 
- Assuming 512-byte blocks and 128 pointers per index block, a two-level index (128 x 128 blocks) can easily be addressed and files of up to 8 MB can be supported. 
- With a third level of indexing under the same assumptions, files of up to 1 GB can be addressed (see the calculation sketch below).
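A quick back-of-the-envelope check of these figures, assuming 512-byte data blocks and 128 pointers per index block as above:

```python
# Maximum file size addressable through two- and three-level indexes.
BLOCK_SIZE = 512            # bytes per data block (assumed)
POINTERS_PER_INDEX = 128    # pointers per index block (assumed)

two_level = POINTERS_PER_INDEX ** 2 * BLOCK_SIZE     # 128 x 128 data blocks
three_level = POINTERS_PER_INDEX ** 3 * BLOCK_SIZE   # 128 x 128 x 128 data blocks

print(two_level // 2**20, "MB")    # 8 MB with a two-level index
print(three_level // 2**30, "GB")  # 1 GB with a three-level index
```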

Advantages and Disadvantage of Indexed Allocation

- The major advantage of this allocation technique is that it does not give rise to external fragmentation and offers a high level of efficiency in random access. 
- Also, using this technique, mapping can be done around the disk blocks that are known to be bad. 
- A bitmap can be used for indexing the free space. 

The biggest disadvantage of this allocation technique is the large number of disk accesses required to retrieve the address of the destination block. 


Saturday, June 1, 2013

Explain the various Disk Allocation methods? – Part 1

Management of the space on secondary storage devices is one of the most necessary functions of the file system. This includes tracking which blocks of the disk are allocated to files and which blocks are free to be allocated. 
There are two main problems faced by the system during allocation of space to different files. Firstly, access to the files has to be fast, and secondly, the disk space has to be utilized effectively. Both of these combine to form the larger problem of disk management. The same two problems also arise with the physical memory of the system. 

However, the secondary storage of the system also introduces two additional problems: long disk access times and larger blocks to deal with. In spite of all these problems, there are considerations common to both kinds of storage, such as contiguous and non-contiguous space allocation. 

Types of Disk Allocation Methods


The following are the 3 widely used allocation techniques:
  1. Contiguous
  2. Linked
  3. Indexed
- The linked and the indexed allocation techniques fall under the category of the non-contiguous space allocation. 
- All the methods have their own pros and cons.
- All the input and output operations on the disk are carried out in terms of blocks. 
- Software is responsible for converting logical records into physical storage blocks.

Contiguous Allocation: 
- This allocation technique assigns only contiguous blocks to a file. 
- The size of the space required to hold the file is specified by the user in advance. 
- The file is then created only if that much contiguous space is available, otherwise not. 
- It is the advantage of the contiguous allocation technique that all the successive file records are saved physically adjacent to each other. 
- This keeps the disk access time of the files low. 
- This can be concluded from the fact that if the files were scattered all over the disk, it would take a lot more time to access them. 
- Accessing files when they have been organized in a proper order is quite easy.
- To access a file sequentially, the system remembers the address of the block last accessed and, if required, moves to the next one. 
- Both direct and sequential access are supported by the contiguous allocation technique. 
- The problem with this allocation method is that a new contiguous space cannot always be found for a new file, either because a majority of the disk is already in use or because the file is larger than any available hole (see the first-fit sketch below).  
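A toy first-fit sketch of the idea, illustrative only; real file systems track free extents rather than scanning every block like this.

```python
# Toy contiguous allocation with a first-fit search for a free run of blocks.
DISK_BLOCKS = 32
free = [True] * DISK_BLOCKS      # True means the block is free
directory = {}                   # file name -> (start block, length)

def allocate_contiguous(name, length):
    run_start = run_len = 0
    for i, is_free in enumerate(free):
        if is_free:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len == length:               # found a large enough hole
                for b in range(run_start, run_start + length):
                    free[b] = False
                directory[name] = (run_start, length)
                return True
        else:
            run_len = 0
    return False                                # no contiguous hole is large enough

print(allocate_contiguous("a.dat", 5))   # True
print(allocate_contiguous("b.dat", 40))  # False - larger than the whole disk
```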


Non–Contiguous Memory Allocation: 
- It happens a lot of the time that files grow or shrink in size with usage and time. 
- Therefore, it becomes difficult to determine the exact space required for a file, since users do not have any advance knowledge of how much their files will grow. 
- This is why systems using the contiguous allocation technique are being replaced by ones with non-contiguous storage allocation, which is more practical and dynamic in nature. 
- The linked allocation technique, mentioned above in the article, is a disk-based implementation of the linked list. 
- Each file is considered to be a list of disk blocks linked to each other. 
- It does not matter whether the blocks are scattered about the disk. 
- In each block, a few bytes are reserved for storing the address of the next block in the chain.
- Pointers to the first and the last block of the file are stored in the directory. 


Thursday, November 29, 2012

How to update and mail defects in Test Director?


Tracking defects from scratch requires a lot of effort, but tracking the repair of defects in a project only requires periodic updates of the defect records.
This can be done directly using either of the two:
  1. Defects grid and
  2. Defect details dialog box
However, the ability of both methods to update certain fields of a defect depends largely on the permission settings granted to the user. 

In this article we discuss how you can update defect information by assigning defects to different members of the development team, adding a comment, and changing the severity of a defect.

Steps for Updating Defects in Test Director

  1. Make sure that the defects module is on display; if it is not, click on the Defects tab.
  2. Now, to update defects directly using the defects grid, go to the grid and select the defect that you added earlier using the 'Add New Defects' dialog box. To assign the defect to a member, click on the 'Assigned To' box in the defect record and select the name of the concerned member from the list.
  3. Next, click on the Defect Details tab; this opens up the Defect Details dialog box for you.
  4. In this Defect Details dialog box, do the following tasks:
a) Select the required severity from the Severity box to change the severity level of the defect.
b) If you wish, add a comment to explain the change in severity level by clicking on the Add Comment button in the Description area.
  5. To view all the attachments, click on the Attachments button in the left menu; the list of URL attachments is displayed.
  6. To view the history of the changes made to the defect, click on the History button in the left menu. For every change made to the defect, Test Director displays the date of the change, the new value and the name of the person who made the change.
  7. When you are done with everything, click OK to exit this dialog box and save the changes.

Steps to mail defects in Test Director

- The details of a defect can be shared with another user via e-mail. 
- In this way, a routine can be established of informing development and quality assurance personnel about defect repair activity.
- A 'Go To Defect' link is included by Test Director, using which the recipient can go directly to the concerned defect. 
- Follow the steps below to mail a defect to the concerned person:
  1. First of all display the defects module by clicking on the defects tab.
  2. Next, select the defect you want to mail and click on the Mail Defects button. This will open up another dialog box called the 'Send Mail' dialog box.
  3. In this box you need to enter a valid e-mail address in the 'To' field.
  4. To include any attachments or the history of the defect, select the Attachments and History options from the Include box.
  5. You can add your own comments under Additional Comments.
  6. When you are done composing the e-mail, click on the Send button. A message box appears; click OK.
  7. The person to whom you have sent the mail can view it from his or her mailbox.
A test in the test plan can even be associated with a specific defect in the defects grid. Whenever an association is created, it can be determined whether the test should be executed based on the status of the defect. 


Tuesday, October 16, 2012

How does Silk Test record user actions?


Software systems or applications, as we all know, are composed of GUI (graphical user interface) objects such as those mentioned below:
  1. Windows
  2. Menus
  3. Buttons and so on.
These GUI objects can be manipulated by the user via a mouse or a keyboard to initiate operations on an application.
Silk Test interprets these GUI objects and classifies them into different categories based on the following aspects that uniquely identify them:
  1. Class
  2. Properties and
  3. Methods
While testing is in progress, Silk Test interacts with the GUI objects so that operations can be submitted to the application under test (AUT) automatically, without much effort, because the actions of a user can be simulated. 
This is also done to verify the results of each and every operation. Silk Test is said to be the simulated user, which is then said to drive the application under test or AUT. 

Silk Test comes with two distinct components, namely:
  1. The silk test host software
  2. The silk test agent software
- The first component, the Silk Test host software, is used for developing, editing, compiling, running and debugging test plans as well as test scripts. 
- The machine on which this component is installed is called the host machine. 
- The second component, the Silk Test agent software, is the one that interacts with the graphical user interface of the application under test. 
- It translates the commands present in the 4Test scripts into GUI-specific commands which drive and monitor the application under test.
- The agent can run locally on the same machine on which the host is already running, or it can run in a networked environment. 
- In the latter case, the machine is known as the remote machine. 

How are user actions recorded by Silk Test?

 
- A repository is created for storing information regarding the application under test (AUT) before the test scripts are created and executed. 
- This repository consists of descriptions of the GUI objects that comprise your application under test. 
- Silk Test associates with these objects by means of two things, namely:
  1. Object properties and
  2. Object methods
- Using these two aspects, the actions performed on the objects can be easily recognized by Silk Test and intelligently recorded into the test script using the 4Test language. 

Below are some examples of what Silk Test records for various user actions:
- Selecting a radio button from a group: Select
- Setting the main window as active: Set Active
- Closing a dialog box: Close
- Selecting an item from a list box: Select
- Scrolling the scroll bar to the maximum position possible: Scroll to max
- Writing text in a text field: Set text
- Checking a check box: Check
- Picking a menu item: Pick
- Unchecking a check box: Uncheck

- A property is a characteristic of an object that can be accessed directly and may be common to several classes. 
- Methods, on the other hand, correspond to the user actions that can be performed on an object. 
- The methods available for an object are defined by its class and are inherited from the object's parent class. 
- In this sense, the set of methods is tied to the class of the object. 


Wednesday, September 12, 2012

What are Virtual Objects in Quick Test Professional?


- The concept of virtual objects in quick test professional comes into play only when object identification fails or an error such as "object not found" is generated. 
- This happens because, even though the actions were recorded, at playback time quick test professional has difficulty recognizing the object, and this causes the whole script to fail. 
- To resolve such object recognition issues, a certain kind of object has been provided in quick test professional, termed the "virtual object". 
- In some cases quick test professional is unable to recognize the area of the object, and therefore a separate wizard called the "virtual object wizard" is used for mapping the area of the object. 
- All the virtual objects that are created during the course of object recognition are stored in the virtual object manager. 
- After quick test professional has learned about the virtual object, it can record on the actual object very well. 
- A virtual object can be created easily by going to the tools menu, then selecting the virtual object list and then finally clicking on the new virtual object option. 
- Even though the virtual objects are very helpful there are some points about virtual objects to be noted:
  1. It is not possible to use the object spy on a virtual object.
  2. Only recording operation can be performed on virtual objects.
  3. You cannot treat labels and scroll bars as virtual objects.
- For disabling the virtual objects mode, simply go to the tools drop down list, and then go to options, then general, then check the option which says “disable recognition of virtual objects while recording”. 
- Using virtual objects is just one way of handling the issues of object recognition. The other two are:
  1. Analog recording and
  2. Low level recording.
- Basically, the virtual objects help in the recognition of the objects that do behave like standard objects but still cannot be identified by the quick test professional. 
- Such objects are mapped to standard classes with the help of virtual object mapping wizard. 
- The user actions on the virtual objects are emulated by the quick test professional during the run session. 
- The virtual object is portrayed as a standard class object in the test results. 
- The virtual object wizard allows you to select the standard object class to which the object has to be mapped, and then the boundaries of the virtual object can be marked with the help of the crosshair pointer. 
- After being done with this, a test object can be selected to be the parent of the virtual object. 
- The group of all virtual objects stored under one descriptive name is termed as the virtual objects collection. 
- While using virtual objects during a run session, always make sure that the size and location of the application window are exactly the same as they were in the recording mode. 
- If this is not taken care of, the coordinates of the virtual object may vary, affecting the success of the whole run session. 
- Another point to be noted is that it is not possible to insert any checkpoints on a virtual object. 
- To perform an operation on a virtual object in the active screen, it is required that you first record it and save its properties in its description in the object repository. 


Sunday, August 26, 2012

Can you test a database using WinRunner? What are the different databases that WinRunner can support?


- The testing of databases has been made possible in winrunner with the help of database record checkpoints. 
- These runtime database record checkpoints can be added to the test scripts; during test execution, the data displayed in the application software can be compared with the corresponding records present in the database that you want to test. 
- With the help of these checkpoints, it is possible to compare the contents of the databases of different versions of the application software.
- Whenever a database checkpoint is created, a query is defined on the database of the application, and the values contained in the result set are tested by the database checkpoint. 
- The query can be defined in any one of the following ways:
  1. By using Microsoft Query (it can be installed from Microsoft Office's custom installation).
  2. Manually by defining an ODBC query i.e., by creating an equivalent SQL statement.
  3. By using data junction for creating a conversion file.
- The database checkpoint is said to fail when no match is found according to the comparison and success criteria that have been specified for that particular checkpoint. 
- A successful runtime database record checkpoint can be defined as one where one or more matches were found. 
- One major characteristic of these checkpoints is that they can be used in a loop as well. 
- Results for all the iterations of the loop are recorded as separate entities. 
- Runtime database record checkpoints can be added to a test in order to compare the current values in the database with the information being displayed in the application software while the test is executing.
- The run time record checkpoint wizard can be used for the following purposes:
  1. Defining the query.
  2. Identification of the application controls containing relevant information.
  3. Defining the success criteria of the individual check points.
- While testing your database you may run into situations where you need to compare data files of different formats. 
- Follow the steps below to create a runtime database record checkpoint manually:
  1. Record the application up till where you want the data to be verified on the screen.
  2. Calculate the expected values of the corresponding records in the data base.
  3. Add the expected values to an edit field.
  4. Using GUI map editor teach the winrunner about the controls of the application and the edit fields of the calculated values.
  5. Add TSL statements to the test script in order to calculate the expected database values, extract the values, and write the extracted values to the corresponding edit fields.
- Before you run your tests do make sure that all the applications with the edit fields containing the calculated values are open. 
- Earlier in the article we mentioned the term "ODBC"; it is a technology which enables the computer to connect to a database so that information can be retrieved by winrunner easily. 
- Those who developed winrunner could not know what kind of database the AUT would use, so it is impossible for winrunner on its own to know how to connect to the database of your application. 
- To overcome this problem, ODBC was developed, which can be taught how to connect to the database. 
- Thus, it is because of ODBC that almost all databases are supported by winrunner (a rough illustration of the idea follows).
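The sketch below illustrates the idea of a runtime database record checkpoint using Python with pyodbc rather than WinRunner's TSL; the DSN, credentials, table, column and displayed value are all hypothetical.

```python
# Illustrative only - not WinRunner TSL. All names below are made up.
import pyodbc

# ODBC hides the database-specific details behind a driver and a DSN.
conn = pyodbc.connect("DSN=flight_app;UID=tester;PWD=secret")
cursor = conn.cursor()

# Value captured from the application's UI during the test run
# (in WinRunner this would come from an edit field via the GUI map).
displayed_price = "128.50"

# The equivalent of a runtime database record checkpoint: fetch the
# expected value from the database and compare it with what the UI shows.
cursor.execute("SELECT price FROM tickets WHERE flight_no = ?", "IA-101")
row = cursor.fetchone()

if row is not None and f"{row.price:.2f}" == displayed_price:
    print("checkpoint passed")
else:
    print("checkpoint failed")

conn.close()
```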


Wednesday, March 14, 2012

What are the major activities in database testing?

Before going to the main topic, i.e., the major activities that are carried out in database testing, we will first get an insight into what database testing actually is.

WHAT IS DATABASE TESTING?

- Database testing, as the name itself suggests, is the testing of the data or values retrieved from the database of the software system or application under test.

- The retrieved data should match exactly with the data mentioned in the records of the data base.

- Data base testing is not an easy thing to carry out.

- It calls for a great deal of expertise in reading database record tables and writing procedures and queries for the database.

- Database testing works well with all sorts of application software, whether the application is built on an SQL or Oracle database.

- But normally, database testing finds its way into the testing of applications that work with all sorts of sensitive data, such as finance, banking or health insurance.

- Such applications require extensive database testing, since any error in the retrieved data can cause the users a lot of suffering.

MAJOR ACTIVITIES OF DATABASE TESTING
Now we are going to discuss the working of database testing, i.e., the major activities that take place in database testing.

- A lot of understanding and knowledge of the software system or application under test is required, i.e., the tester needs to know all about the type of database being used by the software system or application.

- All the existing data tables in the application data base are figured out.

- All the possible queries are written for the identified tables and then executed.

- All the tables are tested individually for the verification and validation of the data contained in them.

- For complex data bases the queries are obtained from the developer and the functionalities are tested.

- The database of a software system or application is indeed its backbone and needs to be tested thoroughly.

- In a data base testing not only the data base undergoes testing, but also the features and functionality of the software system or application.

- As if this is not enough, all the actions taking place like deletion or addition are also tested.

- The added values or data are checked against the records of the database, i.e., whether or not they are exactly the same.

- The deleted data is checked for whether or not it has really been deleted from the database (a small verification sketch follows).
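A minimal, self-contained sketch of these two checks; it uses sqlite3 purely for illustration, and the table and column names are made up.

```python
# Verify that an "add" really inserted a record and a "delete" really removed it.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, holder TEXT)")

def record_exists(account_id):
    row = conn.execute("SELECT 1 FROM accounts WHERE id = ?", (account_id,)).fetchone()
    return row is not None

# Simulate the application adding a record, then verify it against the database.
conn.execute("INSERT INTO accounts (id, holder) VALUES (?, ?)", (42, "Asha"))
assert record_exists(42), "added record not found in the database"

# Simulate the application deleting the record, then verify it is really gone.
conn.execute("DELETE FROM accounts WHERE id = ?", (42,))
assert not record_exists(42), "deleted record still present in the database"

print("add and delete verified against the database")
```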

- Every action being performed is tested for its efficiency, as it will affect the overall well-being of the database.

- These days with the introduction of the business logic, the data bases have become more complex.

- Though the business logic makes the whole data base complex in nature, it cannot be neglected since it plays a very crucial role in the implementation of the applications.

- After the implementation of the business rules or logic, the data base values are again checked for their correctness.

- The coupling of the data bases to the libraries also poses a problem for the data base testing besides the following:

1. Data base schemas
2. Data base tables
3. Verification of the data base after every execution of test cases.
4. Cleaning up of the data base for every new test case execution.
5. Carrying out the whole data base testing manually is absolutely impossible or perhaps a nightmare.
6. Writing short test codes that are easy to understand.

One needs to carry out data base testing very carefully and with understanding since any faltering can disrupt the whole testing process.


Thursday, January 6, 2011

Volume tests - Volume Testing of Batch Processing Systems

Capacity drivers in batch processing systems are also critical as certain record types may require significant CPU processing, while other record types may invoke substantial database and disk activity. Some batch processes also contain substantial aggregation processing, and the mix of transactions can significantly impact the processing requirements of the aggregation phase.

In addition to the contents of any batch file, the total amount of processing effort may also depend on the size and makeup of the database that the batch process interacts with. Also, some details in the database may be used to validate batch records, so the test database must match test batch files.


Before conducting a meaningful test on a batch system, the following must be known:
- The capacity drivers for the batch records.
- The mix of batch records to be processed, grouped by capacity driver.
- Peak expected batch sizes (check end of month, quarter and year batch sizes).
- Similarity of production database and test database.
- Performance Requirements.


Batch runs can be analyzed and the capacity drivers can be identified, so that large batches can be generated for validation of processing within batch windows. Volume tests are also executed to ensure that the anticipated numbers of transactions are able to be processed and that they satisfy the stated performance requirements.
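As a rough illustration of generating such a batch, the small script below builds a large test file with a controlled mix of record types grouped by capacity driver; the record types, shares and the peak volume are all made-up assumptions.

```python
# Generate a synthetic batch file with a controlled mix of record types.
import csv, random

# Hypothetical mix: record type -> (share of the batch, capacity driver it exercises)
mix = {
    "PAYMENT":    (0.70, "cpu"),
    "AGGREGATE":  (0.20, "database"),
    "ADJUSTMENT": (0.10, "disk"),
}

def generate_batch(path, total_records):
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["record_type", "capacity_driver", "amount"])
        for record_type, (share, driver) in mix.items():
            for _ in range(int(total_records * share)):
                writer.writerow([record_type, driver, round(random.uniform(1, 999), 2)])

# Assumed peak month-end volume of 100,000 records.
generate_batch("month_end_batch.csv", 100_000)
```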


Friday, December 3, 2010

What comprises Test Ware Development: Test Strategy Continued...

Test ware development is the key role of the testing team. Test ware comprises:

Test Strategy


Before starting any testing activities, the team lead will have to think the approach through and arrive at a strategy. The following areas are addressed in the test strategy document:
- Test Groups: From the list of requirements, we can identify related areas, whose functionality is similar. These areas are the test groups. We need to identify the test groups based on the functionality aspect.

- Test Priorities: Among test cases, we need to establish priorities. While testing software projects, certain test cases will be treated as the most important ones, and if they fail, the product cannot be released. Some other test cases may be treated as cosmetic, and if they fail, we can release the product without much compromise on functionality. These priority levels must be clearly stated.

- Test Status Collection and Reporting:
When test cases are executed, the test leader and the project manager must know where exactly we stand in terms of testing activities. To know where we stand, the inputs from the individual testers must come to the test leader. This will include which test cases were executed, how long they took, how many passed and how many failed, etc. How often the status is collected must also be clearly mentioned.

- Test Records Maintenance: When the test cases are executed, we need to keep track of the execution details, such as when a test case was executed, who did it, how long it took and what the result was. This data must be available to the test leader and the project manager, along with all the team members, in a central location. It may be stored in a specific directory on a central server, and the document must clearly state the locations and the directories.

- Requirements Traceability Matrix: Ideally, each piece of software developed must satisfy its set of requirements completely. So, right from design, each requirement must be addressed in every single document in the software process. The documents include the HLD, LLD, source code, unit test cases, integration test cases and the system test cases. In this matrix, the rows list the requirements and there is a separate column for each document. Each cell states which section of that document addresses the requirement, so every cell must be filled in with a valid section id or name (a tiny sketch follows).
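A tiny sketch of such a matrix represented as data and written out as a table; every requirement id, document name and section id below is made up.

```python
# One row per requirement, one column per document, each cell holding the
# section of that document which addresses the requirement.
import csv

rtm = {
    "REQ-001": {"HLD": "3.1", "LLD": "4.2", "Unit tests": "UT-07", "System tests": "ST-12"},
    "REQ-002": {"HLD": "3.4", "LLD": "4.6", "Unit tests": "UT-11", "System tests": "ST-15"},
}

columns = ["Requirement", "HLD", "LLD", "Unit tests", "System tests"]
with open("rtm.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=columns)
    writer.writeheader()
    for req, cells in rtm.items():
        writer.writerow({"Requirement": req, **cells})
```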

- Test Summary: The senior management may like to have a test summary on a weekly or monthly basis. If the project is very critical, they may need it on a daily basis as well. This section states what kind of test summary reports will be produced for senior management, along with their frequency.

