


Wednesday, August 28, 2013

What are different policies to prevent congestion at different layers?

- It often happens that the demand for a resource is more than what the network can offer, i.e., its capacity. 
- Excessive queuing then occurs in the network, leading to heavy packet loss. 
- When the network is in a state of congestive collapse, its throughput drops towards zero while the path delay increases sharply. 
- The network can recover from this state by following a congestion control scheme.
- A congestion avoidance scheme enables the network to operate in an environment where the throughput is high and the delay is low. 
- In other words, these schemes keep a computer network from falling into the congestion problem in the first place. 
- The recovery mechanism is implemented through congestion control, and the prevention mechanism is implemented through congestion avoidance. 
- Both the network and the user policies are modeled for the purpose of congestion avoidance. 
- These act like a feedback control system. 

The following are defined as the key components of a general congestion avoidance scheme:
Ø  Congestion detection
Ø  Congestion feedback
Ø  Feedback selector
Ø  Signal filter
Ø  Decision function
Ø  Increase and decrease algorithms

- The problem of congestion control gets more complex when the network is using a connection-less protocol. 
- Avoiding congestion rather than simply controlling it is the main focus. 
- A congestion avoidance scheme is chosen after comparing it with a number of alternative schemes. 
- During the comparison, the algorithm with the right parameter values is selected. 
- To do so, a few goals have been set, each with an associated test for verifying whether the scheme meets it:
Ø  Efficiency: If the network is operating at the “knee” point (where throughput is high and delay is still low), it is said to be working efficiently.
Ø  Responsiveness: The configuration and the traffic of the network vary continuously, so the point of optimal operation also varies continuously and the scheme must be able to track it.
Ø Minimum oscillation: Only those schemes are preferred that have smaller oscillation amplitude.
Ø Convergence: The scheme should bring the network to a stable operating point whenever the workload and the network configuration are held stable. The schemes that satisfy this goal are called convergent schemes; divergent schemes are rejected.
Ø Fairness: This goal aims at providing a fair share of resources to each independent user.
Ø  Robustness: This goal defines the capability of the scheme to work in any random environment. Therefore the schemes that are capable of working only for the deterministic service times are rejected.
Ø  Simplicity: Schemes are accepted in their simplest workable version.
Ø Low parameter sensitivity: The sensitivity of a scheme is measured with respect to its various parameter values. A scheme that turns out to be too sensitive to a particular parameter is rejected.
Ø Information entropy: This goal is about how the feedback information is used. The aim is to get the maximum information out of the minimum possible feedback.
Ø Dimensionless parameters: A parameter that has dimensions such as mass, time or length is effectively a function of the network configuration or speed. A parameter that has no dimensions has wider applicability.
Ø Configuration independence: The scheme is accepted only if it has been tested for various different configurations.

A congestion avoidance scheme has two main components:
Ø  Network policies: These consist of the following algorithms: the feedback filter, the feedback selector and congestion detection.
Ø  User policies: These consist of the following algorithms: the increase/decrease algorithm, the decision function and the signal filter (a minimal sketch of a typical increase/decrease rule follows below).
- The network feedback used by these algorithms may be implemented either via a packet header field or as source quench messages.
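The increase/decrease algorithm mentioned above typically follows an additive-increase / multiplicative-decrease pattern driven by the feedback bit the user receives. Below is a minimal sketch; the function name, constants and the binary feedback signal are illustrative assumptions, not part of any specific protocol.

def update_window(window, congestion_signalled,
                  additive_step=1.0, decrease_factor=0.875):
    """One step of an additive-increase / multiplicative-decrease rule.

    window               -- the user's current load (e.g. a window size in packets)
    congestion_signalled -- feedback bit: True means the network reported congestion
    """
    if congestion_signalled:
        # Decrease algorithm: back off multiplicatively when congestion is reported.
        return max(1.0, window * decrease_factor)
    # Increase algorithm: probe for spare capacity additively otherwise.
    return window + additive_step

# Example run: the load grows linearly until the feedback bit flips, then shrinks.
w = 1.0
for congested in [False, False, False, True, False]:
    w = update_window(w, congested)
    print(round(w, 2))        # 2.0, 3.0, 4.0, 3.5, 4.5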




Wednesday, August 7, 2013

Difference between adaptive and non-adaptive algorithms?

- Routing is the process of sending information from one point of a network to another. 
- The originating point is called the source and the last point is called the destination. 
- Along the way, a number of intermediate nodes may or may not be encountered. 
- Routing is sometimes compared with bridging. 
- To the casual observer, both accomplish the same purpose, but this is not so. 
- The basic difference between the two is that the routing is done at the layer 3 i.e., the network layer of the OSI model and the bridging takes place at the layer 2 i.e., data link layer of the OSI model. 
- Because of this distinction, the input supplied to the two processes is different and thus the task of path selection occurs in different ways. 
- The routing algorithm is included as a part of the network layer software. 
- The primary responsibility of this software is to decide on which line the incoming traffic must be forwarded i.e., what will be the next node. 
- Certain metrics are used by the routing protocols for the evaluation of the path that is most appropriate for the transmission of a packet. 
- These metrics include reliability, path bandwidth, current load, delay and so on. 
- These metrics help in determining the optimal path towards a destination. 
- Routing tables are created and maintained by the routing algorithms in order to aid the path determination process.
- What routing information the tables contain depends entirely on the routing algorithm that is being used. 
- The routing algorithms fill the routing tables with a variety of information. 
- If the internal subnet is a datagram subnet, then a new decision has to be taken for every datagram that arrives, since the routes can change after every transmission.
- On the other hand in virtual circuit subnet, all the decisions are taken with the setting up of the virtual circuit. 
- Once the connection or the links are established, the same path is followed by all the packets. 

The routing algorithms can be classified into two major categories, namely:
  1. Non-adaptive algorithms and
  2. Adaptive algorithms
- Another name for non-adaptive algorithms is static algorithms. 
- Here the computation regarding the various routes is done in advance and the same routes are followed by all the packets. 
- The adaptive algorithms are better known as dynamic algorithms. 
- In this type of algorithm, the routes are not computed in advance; rather, the route is decided upon the arrival of a particular packet, depending on the current traffic and topology of the network. 

We have three different types of algorithms under the category of non-adaptive algorithms, as mentioned below:
  1. Shortest path routing: This algorithm makes use of Dijkstra’s algorithm for computing the shortest path, where the nodes and communication links are represented by the vertices and edges of a graph respectively (a minimal sketch of this algorithm follows the list).
  2. Flooding: Here, the arriving data packet is transmitted on all the outgoing lines except the one on which it arrived. Its selective flooding variation is commonly used.
  3. Flow based routing: This algorithm takes into consideration the present flow of the network before deciding on which line the packet must be transmitted.
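As a rough illustration of the first item above, the sketch below runs Dijkstra’s algorithm over a graph represented as an adjacency dictionary; the representation and names are illustrative, not taken from any particular router implementation.

import heapq

def dijkstra(graph, source):
    """Shortest-path costs from source over a weighted graph.

    graph -- dict mapping each node to a list of (neighbour, link_cost) pairs
    Returns a dict: node -> cost of the cheapest path from source.
    """
    dist = {source: 0}
    heap = [(0, source)]                      # (cost so far, node)
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > dist.get(node, float("inf")):
            continue                          # stale queue entry, already improved
        for neighbour, link_cost in graph.get(node, []):
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_cost
                heapq.heappush(heap, (new_cost, neighbour))
    return dist

# Nodes are routers, edge weights are link costs.
network = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("D", 5)],
    "C": [("D", 1)],
    "D": [],
}
print(dijkstra(network, "A"))   # {'A': 0, 'B': 1, 'C': 3, 'D': 4}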
And following are some of the adaptive algorithms:

  1. Distance vector routing: Each router needs to know only the distance information advertised by its direct neighbors; the method is associated with the count-to-infinity problem.
  2. Link state routing: Each router floods information about its neighborhood so that every router learns the topology of the whole network.
  3. Hierarchical routing: It is used for very large networks.
  4. Optimized link state routing: It is used for mobile ad-hoc networks. 


Thursday, July 18, 2013

What is a routing algorithm in network layer?

About Routing
- Routing is the process of selecting the paths in a network along which the data and the network traffic are to be sent. 
- Routing is a common process carried out in a number of networks such as transportation networks, telephone networks (circuit switching) and electronic data networks (for example, the Internet). 
- The main purpose of routing is to direct the packet forwarding from source to its destination via the intermediate nodes. 
- These nodes are nothing but hardware devices namely gateways, bridges, switches, firewalls, routers and so on. 
- A general purpose system which does not have any of these specialized routing components can also participate in routing but only to a limited extent.

But how does a router know where the packets have to be routed? 
- This information about the source and the destination address is found in a table called the routing table, which is stored in the memory of the routers. 
- These tables store records of the routes to a number of destinations over the network. 
- Therefore, construction of the routing tables is also an important part of efficient routing process. 
- Routing algorithms are used to construct this table and for selecting the optimal path or route to a particular destination. 

- A majority of the routing algorithms are based on single path routing techniques while few others use multi-path routing techniques. 
- This allows for the use of other alternative paths if one is not available. 
- In some, the algorithm may discover equal or overlapping routes. 
- In such cases the following three criteria are considered for deciding which route is to be used (a small sketch of the combined rule follows this list):
  1. Administrative distance: This criterion applies when different routing protocols are being used. A lower distance is preferred.
  2. Metric: This criterion applies when only one routing protocol is being used throughout the network. A lower-cost route is preferred.
  3. Prefix-length: This criterion does not depend on whether the same protocol or several different protocols are involved. Longer subnet masks (more specific prefixes) are preferred.
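A small sketch of how these three criteria combine is given below, assuming the usual ordering (longest prefix first, then lowest administrative distance, then lowest metric); the dictionary layout and the example administrative-distance values are illustrative assumptions.

def best_route(candidates):
    """Pick the preferred route among overlapping candidates.

    Each candidate is a dict with 'prefix_len', 'admin_distance' and 'metric'.
    Longer prefixes win first; ties go to the lower administrative distance,
    and remaining ties to the lower metric.
    """
    return min(candidates,
               key=lambda r: (-r["prefix_len"], r["admin_distance"], r["metric"]))

routes = [
    {"prefix_len": 24, "admin_distance": 110, "metric": 20},    # e.g. learned via OSPF
    {"prefix_len": 24, "admin_distance": 90,  "metric": 3072},  # e.g. learned via EIGRP
    {"prefix_len": 16, "admin_distance": 1,   "metric": 0},     # e.g. a static summary route
]
print(best_route(routes))   # the /24 route with the lower administrative distance wins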
Types of Routing Algorithms

Distance Vector Algorithms: 
- In these algorithms, the basic algorithm used is the “Bellman–Ford algorithm”. 
- In this approach, a cost number is assigned to all the links that exist between the nodes of a network.
- Information is sent from point A to point B along the route that results in the lowest total cost.
- The total cost is the sum of the costs of all the individual links in the route. 
- The manner of operation of this algorithm is quite simple.
- Each node checks which destinations can be reached through its immediate neighbors at the minimum cost and updates its table accordingly.
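A minimal sketch of one such Bellman–Ford style update at a single router is shown below, assuming the router periodically receives its neighbours' distance tables; the data structures and names are illustrative assumptions.

def update_distance_table(own_table, neighbour_tables, link_costs):
    """One Bellman-Ford style update at a single router.

    own_table        -- dict: destination -> currently known path cost
    neighbour_tables -- dict: neighbour -> {destination: neighbour's advertised cost}
    link_costs       -- dict: neighbour -> cost of the direct link to that neighbour
    Returns the router's new distance table.
    """
    new_table = dict(own_table)
    for neighbour, advertised in neighbour_tables.items():
        for destination, cost in advertised.items():
            candidate = link_costs[neighbour] + cost
            # Adopt the route via this neighbour only if it is cheaper.
            if candidate < new_table.get(destination, float("inf")):
                new_table[destination] = candidate
    return new_table

# A router directly connected to B (cost 1) and C (cost 4) learns about D.
print(update_distance_table({"B": 1, "C": 4},
                            {"B": {"C": 2, "D": 5}, "C": {"D": 1}},
                            {"B": 1, "C": 4}))   # {'B': 1, 'C': 3, 'D': 5}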

Link-state Algorithms: 
- This algorithm works based on a graph map of the network, which is supplied to it as input. 
- For producing this map, each node assembles information about which other nodes it can connect to in the network. 
- Then the router can itself determine which path has the lowest cost and proceed accordingly. 
- The path is selected using standard path selection algorithms such as the Dijkstra’s algorithm. 
- This algorithm results in a tree graph whose root is the current node. 
- This tree is then used for the construction of the routing tables.
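As a rough sketch of that last step, assuming Dijkstra’s algorithm has been extended to record each node's predecessor on its shortest path, the next hops for the routing table can be derived as follows (the names are illustrative assumptions):

def routing_table_from_tree(predecessor, source):
    """Derive next hops from a shortest-path tree.

    predecessor -- dict: node -> previous node on its shortest path from source
    Returns dict: destination -> next hop to use from the source router.
    """
    table = {}
    for destination in predecessor:
        hop = destination
        # Walk back towards the source; the node just after the source is the next hop.
        while predecessor[hop] != source:
            hop = predecessor[hop]
        table[destination] = hop
    return table

# For a tree rooted at "A" with predecessors {"B": "A", "C": "B", "D": "C"},
# every destination is reached through next hop "B".
print(routing_table_from_tree({"B": "A", "C": "B", "D": "C"}, "A"))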

Optimized link state Routing Algorithm: 
- This is the algorithm that has been optimized to be used in the mobile ad-hoc networks. 
- This algorithm is often abbreviated to OLSR (optimized link state routing). 
- This algorithm is proactive and makes use of topology control messages for discovering and disseminating link-state information throughout the mobile ad-hoc network. 


Saturday, June 23, 2012

What are limitations of smoke testing?


Smoke testing, though quite a helpful software testing methodology, has some limitations, which are discussed in this article. 
Smoke testing is an important software testing methodology when it comes to the development of very large software projects. The formal definition states that smoke testing is "a quick and indeed dirty software testing methodology that is deployed mainly for testing the major features and functionalities of a software system or application". 

About Smoke Testing


- It is essential to perform smoke testing whenever any changes are implemented in the software system. 
- Originally, the smoke testing for software systems was adopted from the hardware industry. 
- Smoke testing is quite time and cost effective since it lays its primary focus on the components that have been changed recently in order to ensure continued compatibility.
- In addition to all this, a smoke test provides an effective means to confirm that the changed code works as desired and does not hamper the functioning of the whole build. 
- Following this approach, the bugs entering the software system can be immediately fixed. 
- Smoke testing has many advantages that outweigh its limitations, but the limitations still cannot be ignored.

Limitations of Smoke Testing


  1. The biggest limitation of smoke testing is that its field of application and usefulness is quite narrow.
  2. Smoke testing can only be used in the short time frame during which a new functionality or feature is introduced into the software system or application.
  3. Though wide in coverage, a smoke test is very shallow.
  4. Smoke testing does not take into consideration the fine details of the software system or application.
  5. Smoke tests cannot be substituted for actual functional tests.
  6. Smoke testing is a kind of black box testing and does not consider the internals of the software system or application.
  7. In smoke testing the tester cannot access the source code.
  8. In smoke testing the tester has to interact with the system via an interface by giving a variety of inputs and then examining the obtained outputs.
  9. Another limitation is that the path coverage provided by the smoke tests is very limited, since only the chosen inputs are exercised.
  10. Smoke tests cannot be targeted at the paths and code segments that might be more error prone than other segments.
  11. Smoke tests are a bit difficult to design, since they should be designed in such a way that they touch every part of the application software.
  12. Usually smoke testing can be applied only when some new components are incorporated into the existing software system or application.
More about Smoke Tests...
- Smoke tests can be thought of as preliminary tests that facilitate the further testing of the software system or application. 
- Though the smoke tests reveal only simple failures, these failures can be enough to cause the rejection of a prospective and deserving software project. 
- In conclusion, there is no single absolutely best software testing methodology; each has its own weaknesses as well as strengths. 
- You cannot expect one testing methodology to fit the test requirements of every application and every set of conditions. 
- This alone is enough to conclude that no single testing methodology is the most important. 


Friday, June 22, 2012

How is optimization of smoke testing done?


Smoke testing, being one of the quick-and-dirty software testing methodologies, needs to be optimized. This article focuses on the need for optimizing smoke tests and how to optimize them. 

Let us first see the scenario behind the need for optimization of the smoke tests that are carried out on the software system or application. 
- In most software development scenarios, the code can be executed either directly from the command prompt or via a sub-routine of the larger software system or application. The former is called the command prompt approach. 
- The code should be designed in such a way that it is both self-aware and autonomous. 
- By the code being self-aware, we mean that if anything goes wrong during execution, the code should explain it (a minimal sketch of such a self-checking smoke test follows the list below). 
- Commonly, two types of problems are encountered during testing, as mentioned below:
  1. The code was compiled with too much optimization, and
  2. The path does not point properly to the data directory.
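A minimal, self-checking smoke test of the command-prompt kind described above might look like the sketch below; it only verifies that the build can start, find its data directory and exercise one core code path. All names, paths and environment variables here are hypothetical assumptions.

#!/usr/bin/env python
"""A quick smoke test intended to be run from the command prompt after each build."""
import os
import sys

DATA_DIR = os.environ.get("APP_DATA_DIR", "./data")    # hypothetical data directory

def main():
    # 1. Is the data directory reachable? (catches a path that is not set properly)
    if not os.path.isdir(DATA_DIR):
        sys.exit("SMOKE FAIL: data directory not found at %r" % DATA_DIR)

    # 2. Does one core code path still run? (catches bad builds / compiler trouble)
    try:
        result = 2 + 2          # stand-in for a call into the application's core module
    except Exception as exc:    # the test explains what went wrong instead of just crashing
        sys.exit("SMOKE FAIL: core routine raised %r" % exc)

    if result != 4:
        sys.exit("SMOKE FAIL: core routine returned %r, expected 4" % result)

    print("SMOKE PASS: build looks sane, deeper tests can proceed")

if __name__ == "__main__":
    main()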

Steps for Optimization of Smoke tests


- In order to confirm that the code was compiled correctly one should run the test suite properly at least once. 
- The first step towards optimization of the smoke test is to run it and then examine the output. 
- There are two possibilities: either the code will pass the test or it won’t. 
- If it is the second case, then there are two possible places where your smoke test went wrong:
  1. Compiler bugs and errors or
  2. The path has not been set properly.

Compiler Bugs and Errors


Let us take up the first possibility, i.e., the compiler bugs and errors. 
- It is probable that the compiler did not produce correct code. 
- In the case of serious compiler bugs there may even be a hardware exception; such errors are caused mainly by the compiler bugs. 
- In this case the optimization level of the code should be minimized. 
- After this the code should be recompiled and observed again. 
- Optimization is good for code, but if it is too aggressive it can cause problems.

If path is not set properly


- If the code is not able to locate its data files, it will show an error; this happens because the path has not been set properly. 
- In such cases you need to check which path it was, fix it, recompile the whole code and execute it once again.

When does a system or an application crash?


Don’t think that the software system or application crashes only when the code has been aggressively optimized! 
- Crashes also happen in programs whose code has not been optimized at all. 
- In such cases, only compiler errors can be blamed, since this happens only if the compiler has not been set up properly on your system. 
- If no program executes at all, it means that your compiler is broken and you need to talk to your system administrator about it.

How to optimize smoke tests?


- A lot of help comes from MPGO (managed profile guided optimization). 
- The best way to optimize any kind of testing is to maintain a balance between the automated and manual testing.
- You need to run the MPGO tool with the necessary parameters for the test and then run the test; the test will now be optimized. 
- It is actually the internal binaries that are optimized either fully or partially. 
- Partially optimized binaries are deployed only in automated smoke testing. 

