
Thursday, September 5, 2013

How is admission control used to control congestion in virtual circuit subnets?

- A virtual circuit can be thought of as a virtual channel in telecommunication networks as well as computer networks. 
- Virtual circuit subnets provide a communication service that is connection oriented. 
- This service is delivered over packet-mode communication.
- A stream of data bytes can be exchanged between two nodes only after a virtual circuit has been established between them. 
- Because the circuit delivers this stream in order, higher-level protocols are spared from having to deal with the division and reassembly of the data themselves. 
- Virtual circuits resemble circuit switching in that both are connection oriented. 
- The packets transmitted through a virtual circuit subnet carry a short circuit number rather than the full destination address. 
- The per-packet header overhead is therefore smaller when virtual circuits are used, which also makes virtual circuit subnets comparatively economical. 

In this article we discuss a technique for congestion control in virtual circuit subnets. 
- One of the most popular techniques is admission control. 
- Many congestion control methods are open loop, i.e., congestion is prevented rather than managed after it has occurred. 
Admission control, by contrast, is a dynamic method for controlling congestion in virtual circuit subnets. 
- It is widely used to keep a congestion problem from getting worse over time. 
- The technique is based on a very simple idea: once congestion has been detected, no new virtual circuit is set up until the problem has gone away. 
- Therefore, any attempt to establish a new transport-layer connection fails during this period (a minimal sketch of this admission decision is given at the end of this post). 
- Admitting more and more users while the network is congested would only make things worse. 
Simplicity is one of the technique's main strengths. 
- It can be implemented in a straightforward manner. 
- The telephone system uses the same idea to combat congestion: the technique is applied whenever a switch in the network gets overloaded. 
- At that time no dial tone is given, so no new calls are admitted. 
- An alternative is to allow new virtual circuits to be established, but to route them carefully around the congested area so that they do not add to the problem. 
Another method for dealing with congestion is to negotiate an agreement between the host and the virtual circuit subnet when a new virtual circuit is set up. 
- This agreement specifies the volume and shape of the traffic, the quality of service (QoS) required, and other parameters. 
- To keep its part of the agreement, the subnet reserves resources along the route on which the virtual circuit is established. 
- The resources might include the following:
Ø  Buffer space in the routers
Ø  Table entries
Ø  Bandwidth on the lines, and so on
- The newly established virtual circuits are unlikely to experience congestion problems. 
- This is because the availability of the resources has been guaranteed to them.
- Resources can be reserved in this way all the time as standard operating procedure, or only when the subnet is already experiencing congestion. 
- One disadvantage of this approach is that it tends to waste resources. 
- Bandwidth that has been reserved but is not used is left idle and cannot be used by anyone else. 
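
The admission decision itself can be summed up in a few lines. The sketch below is only illustrative and assumes hypothetical names: the Router class, its congested flag and the admit_virtual_circuit() function are invented for this example and are not part of any real router API.

# A minimal sketch (Python) of the admission-control decision for a
# virtual-circuit subnet; all names here are hypothetical.
class Router:
    def __init__(self, name, congested=False):
        self.name = name
        self.congested = congested      # set when this router reports congestion

def admit_virtual_circuit(route):
    """Refuse to set up a new virtual circuit if any router on the
    candidate route is currently reporting congestion."""
    for router in route:
        if router.congested:
            return False                # setup attempt fails until congestion clears
    return True                         # resources may now be reserved along the route

# Example: the middle router is overloaded, so the setup attempt is rejected.
route = [Router("A"), Router("B", congested=True), Router("C")]
print(admit_virtual_circuit(route))     # False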


Wednesday, September 4, 2013

What is a choke packet?

- Networks often experience problems with congestion and with keeping the flow of traffic under control. 
- While implementing flow control a special type of packet is used throughout the network. 
- This packet is known as the choke packet. 
- A router detects congestion in the network by measuring the percentage of its buffers that are actually in use. 
- It also measures the utilization of its lines and the average length of its queues. 
When the congestion is detected, the router transmits choke packets throughout the network. 
- These choke packets are meant for the data sources that are spread across the network and which have an association with the problem of congestion. 
These data sources in turn respond by cutting down on the amount of the data that they are transmitting. 
Choke packets have been found to be very useful in network maintenance tasks. 
- They also help in maintaining quality of service to some extent. 
- In both of these tasks, a choke packet is used to inform a specific transmitter or node that the traffic it is sending is causing congestion in the network. 
Thus, the transmitter or node is forced to decrease the rate at which it generates traffic. 
- The main purpose of the choke packets is controlling the congestion and maintaining flow control throughout the network. 
- The router addresses the choke packet directly to the source node, asking it to cut down its data transmission rate. 
- The source node acknowledges this by reducing its transmission rate by some percentage. 
- A classic example of a choke packet used by many routers is the source quench message of ICMP (Internet Control Message Protocol).  
- The technique of using the choke packets for congestion control and recovery of the network involves the use of the routers. 
- The routers continuously monitor the whole network for any abnormal activity.
- Factors such as the space in the buffers, queue lengths and the line utilization are checked by the routers. 
- In case the congestion occurs in the network, the choke packets are sent by the routers to the corresponding parts of the network instructing them to reduce the throughput. 
- The node that is the source of the congestion has to reduce its throughput rate by a certain percentage that depends on the size of the buffer, bandwidth that is available and the extent of the congestion. 
- Sending the choke packets is the way of routers telling the nodes to slow down so that the traffic can be fairly distributed over the nodes. 
- The advantage of using this technique is that it is dynamic in nature. 
The source node may send as much data as it needs, and the network informs it only when it is sending too much traffic.
- The disadvantage is that it is difficult to know by what factor the node should reduce its throughput.
- This factor depends on how much of the congestion the node is causing and on the capacity of the region in which the congestion has occurred. 
- In practice, this information is not instantly available. 
- Another disadvantage is that after the node has received a choke packet, it must be able to ignore further choke packets for some time. 
- This is because many additional choke packets may have been generated while the node's earlier packets were still in transit, and reacting to each of them would cut the rate far more than intended. 

The question is: for how long should the node ignore these packets? 
- This depends on dynamic factors such as the delay time. 
- Not all congestion problems are the same; they vary across the network depending on its topology and the number of nodes it has. 
- A minimal sketch of the choke-packet behaviour described above is given below. 
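
The sketch below illustrates this behaviour under stated assumptions: the 80% buffer threshold, the 50% rate cut and the two-second ignore interval are illustrative values chosen only for this example, and the Source class and its method names are hypothetical.

# A minimal sketch (Python) of choke-packet behaviour; thresholds and
# intervals are illustrative, not values from any standard.
import time

BUFFER_THRESHOLD = 0.8      # router emits choke packets above 80% buffer use
RATE_CUT = 0.5              # source halves its rate on receiving a choke packet
IGNORE_INTERVAL = 2.0       # seconds during which further choke packets are ignored

def router_should_choke(buffer_utilization):
    """Return True if the router should send a choke packet to the source."""
    return buffer_utilization > BUFFER_THRESHOLD

class Source:
    def __init__(self, rate_bps):
        self.rate_bps = rate_bps
        self.ignore_until = 0.0

    def on_choke_packet(self, now=None):
        now = time.monotonic() if now is None else now
        if now < self.ignore_until:
            return                      # more choke packets may still be in flight
        self.rate_bps *= RATE_CUT       # cut the sending rate by a fixed percentage
        self.ignore_until = now + IGNORE_INTERVAL

src = Source(rate_bps=1_000_000)
if router_should_choke(buffer_utilization=0.9):
    src.on_choke_packet()
print(src.rate_bps)                     # 500000.0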


Monday, September 2, 2013

Application areas of leaky bucket algorithm and token bucket algorithm

In this article we discuss the applications of the leaky bucket algorithm and the token bucket algorithm.  

Applications of Leaky Bucket Algorithm
- The leaky bucket algorithm is implemented in different versions. 
- For example, the generic cell rate algorithm is a version of this algorithm that is often implemented in networks using ATM (asynchronous transfer mode).  
- The algorithm is applied at the user-network interface as part of usage/network parameter control, in order to protect the network from excess traffic and congestive collapse. 
- An algorithm equivalent to the generic cell rate algorithm might be used in shaping the transmissions made by the network interface card to a network using ATM. 
There are two major applications of the leaky bucket algorithm. 
- The first is using it purely as a counter for checking whether the traffic or the events conform to the defined limits.
- Whenever a packet arrives at the check point, the counter is incremented. 
This is the same as adding water to the bucket in an intermittent way. 
- In the same way, the counter is decremented at a constant rate, just as the water leaks out of the bucket. 
- Because of this, the value of the counter at the moment a packet arrives indicates whether the packet conforms to the burstiness and bandwidth limits. 
- Likewise, when an event occurs, the counter indicates whether it conforms to the peak and average rate limits. 
- So, when a packet arrives or an event occurs, water is added to the bucket and then leaks out; this version of the leaky bucket algorithm is called a meter (a small sketch is given below).
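
A minimal sketch of the meter form is given below; the bucket capacity and leak rate are illustrative parameters, not values taken from any particular network, and the class name is invented for this example.

# A sketch (Python) of the leaky bucket used as a meter.
class LeakyBucketMeter:
    def __init__(self, capacity_bytes, leak_rate_bytes_per_s):
        self.capacity = capacity_bytes
        self.leak_rate = leak_rate_bytes_per_s
        self.level = 0.0
        self.last_time = 0.0

    def conforms(self, packet_bytes, now):
        # The counter leaks out at a constant rate between arrivals ...
        elapsed = now - self.last_time
        self.level = max(0.0, self.level - elapsed * self.leak_rate)
        self.last_time = now
        # ... and each arriving packet adds "water" to the bucket.
        if self.level + packet_bytes <= self.capacity:
            self.level += packet_bytes
            return True                 # packet conforms to the bandwidth/burst limits
        return False                    # packet is non-conformant

meter = LeakyBucketMeter(capacity_bytes=3000, leak_rate_bytes_per_s=1000)
print(meter.conforms(1500, now=0.0))    # True
print(meter.conforms(1500, now=0.5))    # True  (500 bytes have leaked out)
print(meter.conforms(1500, now=0.6))    # False (the bucket would overflow)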
- The other application of the leaky bucket algorithm is its use as a queue for controlling the flow of traffic. 
- This queue maintains a direct control over the flow. 
- When packets arrive, they are put into the queue. 
- This is the same as adding water to the bucket. 
- The packets are then removed, in the order in which they arrived, at a constant rate. 
This is the same as water leaking out. 
- As a result, there is no jitter or burstiness in the outgoing traffic flow (a sketch of this queue form follows).
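
The queue form can be sketched as below; the departure interval is an illustrative parameter and the class and method names are invented for this example.

# A sketch (Python) of the leaky bucket used as a shaping queue: arrivals are
# enqueued in a burst, departures happen at most once per fixed interval.
from collections import deque

class LeakyBucketQueue:
    def __init__(self, interval_s):
        self.interval = interval_s      # constant gap between departing packets
        self.queue = deque()
        self.next_departure = 0.0

    def enqueue(self, packet):
        self.queue.append(packet)       # like pouring water into the bucket

    def tick(self, now):
        """Called once per clock tick: at most one packet leaves per interval."""
        if self.queue and now >= self.next_departure:
            self.next_departure = now + self.interval
            return self.queue.popleft()
        return None

shaper = LeakyBucketQueue(interval_s=0.1)
for p in ("p1", "p2", "p3"):
    shaper.enqueue(p)                   # a burst arrives all at once ...
print(shaper.tick(now=0.00))            # p1   ... but leaves at a constant rate
print(shaper.tick(now=0.05))            # None
print(shaper.tick(now=0.10))            # p2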

Applications of Token Bucket Algorithm
- The token bucket algorithm finds its application in telecommunications and in packet-switched computer networks.
- This algorithm is implemented for checking whether data transmissions conform to predefined bandwidth and burstiness limits. 
- The token bucket algorithm is used in traffic policing and in traffic shaping. 
- In traffic policing, non-conformant packets are discarded or assigned low priority. 
- This is done for the management of the downstream traffic. 
- In traffic shaping, on the other hand, non-conformant packets are delayed until they conform. 
- Both of these are used to protect the network against bursty traffic. 
- Bursty traffic gives rise to congestion problems. 
- These algorithms help in managing the bandwidth as well as the congestion of the network. 
- Network interfaces commonly use traffic shaping to prevent their transmissions from being discarded by the network's traffic management functions. 
- The algorithm is based on the analogy of a bucket with a fixed capacity. 
Tokens are added to this bucket at a fixed rate, and each token represents a fixed unit of data, such as one byte. 
- When a packet has to be checked for conformance to the predefined limits, the bucket is first checked for sufficient tokens. 
- If there are enough tokens, a number of tokens equal to the number of bytes in the packet is removed and the packet is transmitted. 
- If there are not enough tokens, the packet is declared non-conformant and the number of tokens in the bucket remains unchanged (a minimal sketch of this conformance test follows).
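
The conformance test just described can be sketched as follows; the fill rate and bucket depth are illustrative values, tokens are counted in bytes, and the class name is invented for this example.

# A sketch (Python) of the token bucket conformance test.
class TokenBucket:
    def __init__(self, rate_bytes_per_s, depth_bytes):
        self.rate = rate_bytes_per_s
        self.depth = depth_bytes
        self.tokens = depth_bytes       # the bucket starts full
        self.last_time = 0.0

    def conforms(self, packet_bytes, now):
        # Tokens are added at a fixed rate, up to the bucket depth.
        self.tokens = min(self.depth,
                          self.tokens + (now - self.last_time) * self.rate)
        self.last_time = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes # remove one token per byte and transmit
            return True
        return False                    # non-conformant; token count unchanged

tb = TokenBucket(rate_bytes_per_s=1000, depth_bytes=2000)
print(tb.conforms(1500, now=0.0))       # True  (the bucket started full)
print(tb.conforms(1500, now=0.1))       # False (only 600 tokens available)
print(tb.conforms(1500, now=1.0))       # True  (the tokens have refilled)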




Thursday, August 29, 2013

How can traffic shaping help in congestion management?

- Traffic shaping is an important part of the congestion avoidance mechanism, which in turn comes under congestion management. 
- If the traffic can be controlled, we can obviously keep the network congestion under control. 
A congestion avoidance scheme can be divided into the following two parts:
  1. Feedback mechanism and
  2. The control mechanism
- The feedback mechanism is also known as the network policies and the control mechanism is known as the user policies.
- Of course there are other components as well, but these two are the most important. 
- While analyzing one component, it is simply assumed that the other components are operating at optimum levels. 
- At the end, it has to be verified whether or not the combined system works as expected under various types of conditions.

Network policy has got the following three algorithms:

1. Congestion Detection: 
- Before feedback information can be sent to the users, the load level, or state, of the network must be determined. 
- In general, the network can be in any one of n possible states. 
- At a given time the network is in one of these states. 
- The congestion detection algorithm maps these states onto the possible load levels. 
- In the simplest case there are two possible load levels, namely under-load and over-load. 
- Under-load means operating below the knee point and over-load means operating above it. 
- A k-ary version of this function would produce k load levels. 
- The congestion detection function works on three criteria: link utilization, queue lengths and processor utilization (a minimal sketch of a binary detector based on queue length is given below). 
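
The binary form of the detector can be sketched as below. The threshold of an average queue length of one is a common rule of thumb in the congestion avoidance literature and is used here purely for illustration; the function name is invented for this example.

# A sketch (Python) of a binary congestion-detection function based on
# the average queue length; the threshold is illustrative.
UNDER_LOAD, OVER_LOAD = 0, 1

def detect_load_level(queue_samples, threshold=1.0):
    """Map a router's recent queue-length samples to one of two load levels."""
    average_queue_length = sum(queue_samples) / len(queue_samples)
    return OVER_LOAD if average_queue_length >= threshold else UNDER_LOAD

print(detect_load_level([0, 0, 1, 0]))  # 0 -> below the knee, under-loaded
print(detect_load_level([1, 2, 3, 2]))  # 1 -> above the knee, over-loaded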

2. Feedback Filter: 
- After the load level has been determined, it has to be verified that the state lasts for a sufficiently long duration before it is signaled to the users. 
- Only in that case is the feedback about the state actually useful. 
- The duration is then long enough for the state to be acted upon. 
- A state that changes rapidly, on the other hand, creates confusion: 
it has already passed by the time the users get to know of it. 
- Such states give misleading feedback. 
- A low-pass filter function serves the purpose of passing only the states that persist and filtering out the transient ones (a small sketch follows). 
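
One simple way to realize such a filter is an exponentially weighted moving average over the successive load-level readings; the weight and threshold below are illustrative choices, not prescribed values.

# A sketch (Python) of the feedback (low-pass) filter: a state is signaled
# only after it has persisted; a single transient sample is filtered out.
def low_pass_filter(samples, weight=0.25, threshold=0.5):
    """samples holds 0 (under-load) or 1 (over-load) readings, oldest first.
    Return True (signal congestion) only if the smoothed level stays high."""
    smoothed = 0.0
    for s in samples:
        smoothed = (1 - weight) * smoothed + weight * s
    return smoothed >= threshold

print(low_pass_filter([0, 0, 1, 0, 0]))       # False -- a brief spike is ignored
print(low_pass_filter([1, 1, 1, 1, 1, 1]))    # True  -- a persistent over-load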

3. Feedback Selector: 
- After the state has been determined, this information has to be passed to the users so that they can contribute to cutting down the traffic. 
- The purpose of the feedback selector function is to identify the users to whom the information has to be sent.

User policy has got the following three algorithms: 

1. Signal Filter: 
- The users to whom the network sends feedback signals interpret them only after accumulating a number of signals. 
- The network is probabilistic in nature, and therefore the signals might not all agree. 
- According to some signals the network might be under-loaded, and according to some others it might be over-loaded. 
- These signals have to be combined to decide on the final action. 
- An appropriate weighting function may be applied, based on the percentage of signals indicating each state. 

2. Decision Function: 
- Once the load level of the network is known to the user, it has to be decided whether or not to increase the load.
- This function has two parts: the first determines the direction of the change, and the second decides its amount. 
- The first part is the decision function proper, and the second is the increase/decrease algorithm. 

3. Increase/Decrease Algorithm: 
- This algorithm forms the core of the control scheme.
- The control measure to be taken is based on the feedback obtained. 
- It helps in achieving both fairness and efficiency (a sketch of one common increase/decrease rule is given below). 
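
One commonly used increase/decrease rule is additive increase, multiplicative decrease (AIMD); the sketch below uses illustrative constants and a hypothetical function name, and is not tied to any particular protocol.

# A sketch (Python) of an additive-increase / multiplicative-decrease rule.
def update_window(window, congestion_signalled,
                  additive_step=1.0, decrease_factor=0.875):
    """Raise the load slowly while the network is under-loaded and
    cut it quickly when over-load is signalled."""
    if congestion_signalled:
        return window * decrease_factor # multiplicative decrease
    return window + additive_step       # additive increase

w = 8.0
for signal in [False, False, True, False]:
    w = update_window(w, signal)
print(w)                                # 9.75  (8 -> 9 -> 10 -> 8.75 -> 9.75)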


Wednesday, August 28, 2013

What are different policies to prevent congestion at different layers?

- It often happens that the demand for resources is more than what the network can offer, i.e., more than its capacity. 
- Too much queuing then occurs in the network, leading to a great loss of packets. 
When the network is in a state of congestive collapse, its throughput drops down to zero whereas the path delay increases by a great margin. 
- The network can recover from this state by following a congestion control scheme.
- A congestion avoidance scheme enables the network to operate in an environment where the throughput is high and the delay is low. 
- In other words, these schemes prevent a computer network from falling prey to the vicious clutches of the network congestion problem. 
- The recovery mechanism is provided by congestion control, and the prevention mechanism is provided by congestion avoidance. 
The network and the user policies are modeled for the purpose of congestion avoidance. 
- These act like a feedback control system. 

The following are defined as the key components of a general congestion avoidance scheme:
Ø  Congestion detection
Ø  Congestion feedback
Ø  Feedback selector
Ø  Signal filter
Ø  Decision function
Ø  Increase and decrease algorithms

- The problem of congestion control gets more complex when the network is using a connection-less protocol. 
- Avoiding congestion rather than simply controlling it is the main focus. 
- A congestion avoidance scheme is designed by comparing a number of alternative schemes. 
- During the comparison, the algorithm with the right parameter values is selected. 
For doing so, a few goals have been set, each with an associated test for verifying whether or not the scheme meets it:
Ø  Efficiency: If the network is operating at the "knee" point, it is said to be working efficiently.
Ø  Responsiveness: The configuration and the traffic of the network vary continuously, so the point of optimal operation also varies continuously; the scheme must respond to these variations.
Ø Minimum oscillation: Only schemes with a small oscillation amplitude are preferred.
Ø Convergence: The scheme should bring the network to a stable operating point when the workload and the network configuration are held stable. Schemes that satisfy this goal are called convergent; divergent schemes are rejected.
Ø Fairness: This goal aims at providing a fair share of resources to each independent user.
Ø  Robustness: This goal defines the capability of the scheme to work in any random environment. Therefore the schemes that are capable of working only for the deterministic service times are rejected.
Ø  Simplicity: Schemes are accepted in their most simple version.
Ø Low parameter sensitivity: The sensitivity of a scheme is measured with respect to its various parameter values. A scheme that turns out to be too sensitive to a particular parameter is rejected.
Ø Information entropy: This goal is about how the feedback information is used. The aim is to get the maximum information with the minimum possible amount of feedback.
Ø Dimensionless parameters: A parameter with dimensions such as mass, time or length is effectively a function of the network configuration or speed. A parameter that has no dimensions has wider applicability.
Ø Configuration independence: The scheme is accepted only if it has been tested for various different configurations.

A congestion avoidance scheme has two main components:
Ø  Network policies: It consists of the following algorithms: feedback filter, feedback selector and congestion detection.
Ø  User policies: It consists of the following algorithms: increase/ decrease algorithm, decision function and signal filter.
These policies also determine whether the network feedback is implemented via a packet header field or as source quench messages (a hypothetical sketch of the header-field style of feedback follows).
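
A hypothetical sketch of the header-field style of feedback is given below: a router sets a congestion bit on the packets it forwards while it is over-loaded, and the source reacts once a sizeable fraction of the packets returned to it carry the bit. The function names, the field name and the one-half fraction are all invented for this illustration.

# A hypothetical sketch (Python) of network feedback carried in a packet
# header field rather than in separate source quench messages.
def router_forward(packet, over_loaded):
    if over_loaded:
        packet["congestion_bit"] = 1    # the feedback rides in the header field
    return packet

def source_should_reduce(acked_packets, fraction_threshold=0.5):
    """The source reduces its load when enough returned packets are marked."""
    marked = sum(p.get("congestion_bit", 0) for p in acked_packets)
    return marked / len(acked_packets) >= fraction_threshold

# Example: half of the forwarded packets pass through an over-loaded router.
acks = [router_forward({"seq": i}, over_loaded=(i % 2 == 0)) for i in range(8)]
print(source_should_reduce(acks))       # True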




Tuesday, August 27, 2013

What are general principles of congestion control?

- Problems such as the loss of data packets occur when the routers' buffers overflow.
- This overflow is caused by congestive collapse, which is a consequence of network congestion. 
- If packets have to be re-transmitted more than once, it is an indication that the network is facing congestion. 
- Re-transmission treats only this symptom; it is not a cure for the underlying problem of network congestion. 
- In congestive collapse, a number of sources attempt to send data at the same time, and at quite a high rate. 
- Preventing network congestion requires mechanisms that can throttle the sending nodes when congestion occurs. 
- Network congestion is a genuinely bad thing, because it shows up in the performance that the upper-layer applications receive from the network. 
- There are various approaches available for preventing and avoiding the problem of network congestion and thus implementing proper congestion control. 
- When the capacity of the network is exceeded by the demands for the resources and too much queuing occurs in the network causing loss of packets, congestion of packets is said to occur. 
- During this problem of network congestion, the throughput of the network might drop down to zero and there might be a high rise in the path delay. 
Network can recover from the state of congestive collapse using a congestion control scheme. 
- A network can operate in a region where there is high throughput but low delay with the help of the congestion avoidance scheme.
- These schemes keep the network away from falling in to a state of congestive collapse. 
- There is a big confusion over congestion control and congestion avoidance. Most of us think it is the same thing but it is not. 
- Congestion control provides a recovery mechanism whereas the congestion avoidance provides a prevention mechanism. 
- Technological advances in the field of networking have led to a steady rise in the bandwidth of network links. 
- Around 1970, the ARPAnet came into existence, built using leased telephone lines with a bandwidth of 50 kbit/s. 
- LANs (local area networks) based on token rings and Ethernet appeared around 1980 and offered a bandwidth of 10 Mbit/s. 
- During the same period, many efforts were made to standardize LANs using optical fibers, providing bandwidths of 100 Mbit/s and higher. 
- Attention to congestion control has increased because of the growing mismatch between the speeds of the various links composing a network. 
- Routers, IMPs, gateways, intermediate nodes, links and so on are the hot spots for congestion problems. 
- It is at these spots that the bandwidth available on the receiving side falls short of accommodating all the incoming traffic. 
- In networks using connection-less protocols, it is even more difficult to cope with network congestion. 
- It is comparatively easier in networks using connection-oriented protocols.
- This is because in such networks the resources are reserved in advance while the connection is being set up.
- One way of controlling congestion is then to prevent new connections from being set up when congestion is detected anywhere in the network, but this also leaves reservable resources unused, which is a disadvantage. 


Monday, August 26, 2013

What is the difference between congestion control and flow control?

Flow control and congestion control are similar-sounding concepts and are often confused with each other. In this article we shall discuss the differences between the two. 

- Computer networks use the flow control mechanism to manage the data flow between two nodes so that a receiver that is slower than the sender is not outrun by it. 
- The flow control mechanism also gives the receiver a way to control the speed at which the sender transmits information.
- Congestion control, on the other side, provides mechanisms for controlling the data flow when congestion actually threatens to collapse the network. 
- It keeps control over the entry of data into the network so that the traffic can be handled by the network effectively.  
- The flow control mechanism prevents the receiving node from being overwhelmed by the traffic sent by another node. 

There are several reasons why the flow of data can get out of control and affect the network negatively. 
- The first reason is that the receiving node might not be capable of processing the incoming data as fast as the sender node is sending it. 
Based on these reasons, there are various types of flow control mechanisms available. 
- However, the most common categorization is based on whether or not feedback is sent to the sender. 
- In the open loop flow control mechanism, no feedback is sent to the sender by the receiver; this is perhaps the most widely used flow control mechanism. 
- Its opposite is the closed loop flow control mechanism. 
- In this mechanism, the receiver sends congestion information back to the sender. 
- Other commonly used flow control mechanisms include:
Ø  Network congestion avoidance
Ø  Windowing flow control
Ø  Data buffering, and so on

- Congestion control offers methods for regulating the traffic entering the network down to a level that the network itself can manage.
- With congestion control, the network is prevented from falling into a state of congestive collapse. 
- In such a state either little or no communication happens.
- The little communication that does happen is of no help. 
- Switching networks usually require congestion control measures more than other types of networks do. 
- Congestion control is driven by the goal of keeping the number of data packets below the level at which the performance of the network would drop dramatically.
- Congestion control mechanisms can be seen in transport layer protocols such as TCP (Transmission Control Protocol); others, such as UDP (User Datagram Protocol), provide no congestion control of their own. 
- TCP makes use of the exponential back-off and slow start algorithms (a small sketch of slow start is given below). 
- Congestion control algorithms can be classified based on the feedback given by the network, the performance aspect to be improved, the modifications that have to be made to the present network, the fairness criterion used, and so on. 
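
As an illustration of slow start, the sketch below doubles the congestion window each round trip until a threshold is reached and then grows it linearly; the window is counted in segments and the threshold value is illustrative, so this is a simplification of what real TCP implementations do.

# A simplified sketch (Python) of TCP slow start followed by congestion avoidance.
def next_cwnd(cwnd, ssthresh):
    """Double the congestion window each round trip while below the
    slow-start threshold, then grow it by one segment per round trip."""
    if cwnd < ssthresh:
        return cwnd * 2                 # slow start: exponential growth
    return cwnd + 1                     # congestion avoidance: linear growth

cwnd, ssthresh = 1, 8
growth = [cwnd]
for _ in range(5):
    cwnd = next_cwnd(cwnd, ssthresh)
    growth.append(cwnd)
print(growth)                           # [1, 2, 4, 8, 9, 10]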

- Congestion and flow control are two very important mechanisms used for keeping the traffic flow in order. 
- Flow control is a mechanism that stretches from one end to the other, i.e., between a sender and a receiver whose speed is much lower than that of the sender. 
- Congestion control is implemented for preventing the packet loss and delay that arise as side effects of network congestion. 
- Congestion control is meant for controlling the traffic of the entire network, whereas flow control is limited to the transmission between two nodes.


Sunday, August 25, 2013

What is the concept of flow control?

- Flow control is an important concept in the field of data communications. 
- This process involves management of the data transmission rate between two communicating nodes. 
- Flow control is important to prevent a slow receiver from being outrun by a fast sender. 
- Flow control gives the receiver a mechanism with which it can control the sender's speed of transmission.
- This prevents the receiving node from getting overwhelmed by traffic from the node that is transmitting.
- Do not confuse congestion control with flow control; they are different concepts. 
- Congestion control comes into play for controlling the data flow when there actually is a network congestion problem. 

The mechanism of flow control, on the other hand, can be classified in the following two ways:
  1. The feedback is sent to the sending node by the receiving node.
  2. The feedback is not sent to the sending node by the receiving node.
- The sending computer might tend to send data at a faster rate than the other computer can receive and process it. 
- This is why we require flow control. 
- This situation arises when the traffic load placed on the receiving computer is too high compared with what it can handle. 
- It can also arise when the processing power of the receiving computer is lower than that of the computer sending the data.

Stop and Wait Flow Control Technique 
- This is the simplest type of the flow control technique. 
- Here, when the receiver is ready to start receiving data from the sender, the message is broken down into a number of frames. 
- The sending system then waits for a specific time to get an acknowledgement, or ACK, from the receiver after sending each frame. 
- The purpose of the acknowledgement signal is to make sure that the frame has been received properly. 
- If a packet or frame gets lost during the transmission, it has to be re-transmitted. 
- This process is called automatic repeat request, or ARQ. 
- The problem with this technique is that only one frame can be in transit at a time. 
- This makes the transmission channel very inefficient. 
- Until the sender gets an acknowledgement, it will not proceed to transmit another frame. 
- Both the transmission channel and the sender are left un-utilized during this waiting period. 
- Simplicity of this method is its biggest advantage. 
- Disadvantage is the inefficiency resulting because of this simplicity. 
- Waiting state of the sender creates inefficiency. 
- This happens usually when the transmission delay is shorter than the propagation delay. 
- Sending long transmissions is another cause of inefficiency, 
and it also increases the chance of errors creeping in under this protocol. 
- With short messages, it is quite easy to detect errors early. 
- However, breaking one big message down into many separate smaller frames makes the stop-and-wait inefficiency worse, 
because the pieces altogether take a long time to be transmitted, with a wait after each one (a minimal sketch of stop-and-wait ARQ is given below).
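
The sketch below simulates stop-and-wait ARQ: one frame is sent, the sender waits for its ACK, and a missing ACK triggers a retransmission of the same frame. The lossy channel is a toy function written for this example, and the retry limit is an illustrative choice.

# A minimal sketch (Python) of stop-and-wait ARQ over a simulated channel.
def send_stop_and_wait(frames, channel, max_retries=3):
    """Send one frame, wait for its ACK, and only then move to the next
    frame; a lost ACK triggers a retransmission of the same frame."""
    for seq, frame in enumerate(frames):
        for attempt in range(max_retries + 1):
            if channel(seq, frame):     # True means the ACK came back in time
                break                   # frame delivered, send the next one
            print(f"timeout for frame {seq}, retransmitting")
        else:
            raise RuntimeError(f"frame {seq} failed after {max_retries} retries")

# A toy channel that loses the first transmission of frame 1.
lost_once = {1}
def toy_channel(seq, frame):
    if seq in lost_once:
        lost_once.discard(seq)
        return False
    return True

send_stop_and_wait(["f0", "f1", "f2"], toy_channel)
# prints: timeout for frame 1, retransmitting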


Sliding window Flow Control Technique 
- This is another method of flow control, in which the receiver gives the sender permission to transmit data continuously until a window is filled up. 
- Once the window is full, the sender stops transmitting until a further window is advertised. 
- This method works best when the size of the receiver's buffer is limited. 
- During the transmission, buffer space for, say, n frames is allocated. 
This means n frames can be accepted by the receiver without it having to wait for an ACK. 
- After n frames, an ACK is sent containing the sequence number of the next frame expected (a small sketch of this windowing behaviour is given below). 
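
The windowing behaviour can be sketched as follows: the sender keeps at most n frames outstanding, and each ACK for the oldest frame slides the window forward by one. The window size and the frame names are illustrative, and ACKs are simulated in order for simplicity.

# A minimal sketch (Python) of sliding-window flow control with window size n.
def sliding_window_send(frames, window_size):
    """Keep at most window_size frames outstanding; an ACK for the oldest
    frame slides the window forward and frees a slot for the next frame."""
    outstanding = []
    for seq, frame in enumerate(frames):
        if len(outstanding) == window_size:
            acked = outstanding.pop(0)  # wait for the oldest ACK ...
            print(f"ACK for frame {acked} received, window slides")
        outstanding.append(seq)         # ... then transmit the next frame
        print(f"sent frame {seq} ({frame}), outstanding = {outstanding}")

sliding_window_send(["f0", "f1", "f2", "f3", "f4"], window_size=3)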

