

Showing posts with label Network Congestion. Show all posts

Tuesday, August 27, 2013

What are general principles of congestion control?

- Problems such as the loss of data packets occur when the buffers of the routers overflow. 
- This overflow is a symptom of congestive collapse, which is itself a consequence of network congestion. 
- If packets have to be re-transmitted more than once, it is an indication that the network is congested. 
- Re-transmission, however, treats only this symptom, not the underlying problem of network congestion. 
- In congestive collapse, a number of sources attempt to send data at the same time, and at quite a high rate. 
- Preventing network congestion therefore requires mechanisms capable of throttling the sending node when congestion occurs. 
- Network congestion is a serious problem because it degrades the network performance seen by the upper-layer applications. 
- Various approaches are available for preventing and avoiding network congestion and thus implementing proper congestion control. 
- Congestion is said to occur when the demand for resources exceeds the capacity of the network and so much queuing builds up that packets are lost. 
- During congestion, the throughput of the network may drop to zero and the path delay may rise sharply. 
- A network can recover from a state of congestive collapse using a congestion control scheme. 
- A congestion avoidance scheme lets a network operate in the region of high throughput and low delay. 
- These schemes keep the network from falling into a state of congestive collapse. 
- Congestion control and congestion avoidance are often confused. Most of us think they are the same thing, but they are not. 
- Congestion control provides a recovery mechanism, whereas congestion avoidance provides a prevention mechanism. 
- Technological advances in the field of networking have led to a steady rise in the bandwidth of network links. 
- ARPAnet came into existence around 1970, built on leased telephone lines that had a bandwidth of 50 kbit/s. 
- LANs (local area networks) were first developed around 1980 using token rings and Ethernet, offering a bandwidth of 10 Mbit/s. 
- During the same period, many efforts were made to standardize LANs over optical fiber, providing bandwidths of 100 Mbit/s and higher. 
- Congestion control has received increasing attention because of the growing mismatch between the various links composing a network. 
- Routers, IMPs, gateways, intermediate nodes, links, etc. are the hot spots for congestion problems. 
- It is at these spots that the bandwidth of the receiving side falls short of accommodating all the incoming traffic. 
- In networks using connection-less protocols, it is even more difficult to cope with network congestion. 
- It is comparatively easy in networks using connection-oriented protocols.
- This is because in such networks, the network resources are reserved in advance while the connection is being set up.
- One way of controlling congestion is to refuse the setting up of new connections once congestion is detected anywhere in the network, but this also prevents the use of the reserved resources, which is a disadvantage. 
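The throttling of a sending node mentioned above is commonly realized as an additive-increase/multiplicative-decrease (AIMD) rule, the policy behind TCP congestion control. The sketch below is illustrative only; the function name and constants are hypothetical, not taken from any real network stack.

```python
# Sketch of additive-increase/multiplicative-decrease (AIMD), a classic
# policy for throttling a sender once congestion is detected.
# All names and constants here are illustrative.

def aimd_step(cwnd, congestion_detected, incr=1.0, decr=0.5):
    """Return the next congestion window given one round trip's feedback."""
    if congestion_detected:           # e.g. a packet loss was observed
        return max(1.0, cwnd * decr)  # multiplicative decrease, floor at 1
    return cwnd + incr                # additive increase

# Toy trace: the window grows until a loss at round 5, then backs off
# sharply and starts growing again.
cwnd = 1.0
trace = []
for rnd in range(10):
    cwnd = aimd_step(cwnd, congestion_detected=(rnd == 5))
    trace.append(cwnd)
# trace -> [2.0, 3.0, 4.0, 5.0, 6.0, 3.0, 4.0, 5.0, 6.0, 7.0]
```

The asymmetry (gentle probing upward, aggressive cut on loss) is what keeps the network out of the congested region while still using available capacity.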


Saturday, August 24, 2013

How can the problem of congestion be controlled?

Networks often get trapped in the situation we call network congestion. To avoid such collapses, networks nowadays commonly use congestion avoidance and congestion control techniques. 

In this article, we discuss how the problem of network congestion can be controlled using these techniques. A few very common techniques are:
  1. Exponential backoff (used in CSMA/CA protocols and Ethernet)
  2. Window reduction (used in TCP)
  3. Fair queuing (used in devices such as routers)
  4. The implementation of priority schemes is another way of avoiding the negative effects of this very common problem. Priority schemes let the network transmit higher-priority packets ahead of the others. In this way, the effects of network congestion can be alleviated only for some important transmissions; priority schemes alone cannot solve the problem.
  5. Another method is the explicit allocation of network resources to certain flows. This is used, for example, in CFTXOPs (contention-free transmission opportunities) in standards that provide very high-speed LAN (local area network) service over existing coaxial cables and phone lines.
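Technique 1 above, exponential backoff, can be sketched in a few lines. The version below is binary exponential backoff in the spirit of Ethernet's collision handling; the function name and the cap on the exponent are illustrative choices, not part of any standard's API.

```python
import random

# Illustrative binary exponential backoff: after the n-th collision the
# station waits a random number of slot times in [0, 2**n - 1], with the
# exponent capped so the window cannot grow without bound.

def backoff_slots(attempt, max_exponent=10):
    """Pick a random wait (in slot times) after `attempt` collisions."""
    exponent = min(attempt, max_exponent)
    return random.randint(0, 2 ** exponent - 1)

# The contention window doubles with each successive collision:
# attempt 1 -> 0..1 slots, attempt 2 -> 0..3, attempt 3 -> 0..7, ...
```

Doubling the waiting window on every failure spreads the retries of competing senders apart in time, which is exactly the throttling behavior congestion control needs.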
- The main cause of network congestion is the limited capacity of the network. 
- That is to say, the network has limited resources. 
- These resources include link throughput and router processing time. 
- Congestion control is concerned with curbing the entry of traffic into the telecommunications network so that congestive collapse can be avoided. 
- Over-subscription of link capacities is avoided, and steps are taken to reduce the load on resources. 
- One such step is reducing the packet transmission rate. 
- Even though this sounds similar to flow control, it is not the same thing. 
- Frank Kelly is known as a pioneer of the theory of congestion control. 
- He used convex optimization theory and microeconomics to describe how individuals controlling their own rates can optimize the network-wide rate allocation. 

Some optimal rate allocation methods are:
- Max-min fair allocation
- Kelly's proportional fair allocation
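The first of these, max-min fair allocation, can be sketched for the single-link case with the classic progressive-filling idea: satisfy the smallest demand first, then split whatever remains equally among the rest. The function name and example numbers below are illustrative; real networks apply this per link along each flow's path.

```python
# Sketch of max-min fair allocation on one shared link: repeatedly give
# the smallest unsatisfied demand either all it asks for or an equal
# share of the remaining capacity, whichever is smaller.

def max_min_fair(capacity, demands):
    """Allocate `capacity` across `demands` in max-min fair fashion."""
    alloc = [0.0] * len(demands)
    # process flows from smallest demand to largest
    order = sorted(range(len(demands)), key=lambda i: demands[i])
    left = float(capacity)
    for pos, i in enumerate(order):
        share = left / (len(order) - pos)     # equal split of what is left
        alloc[i] = min(demands[i], share)     # small demands fully satisfied
        left -= alloc[i]
    return alloc

# Capacity 10 shared by demands [2, 8, 8]: the small flow gets its full 2,
# the two big flows split the remaining 8 equally.
# max_min_fair(10, [2, 8, 8]) -> [2.0, 4.0, 4.0]
```

No flow can raise its allocation without lowering that of a flow with an equal or smaller allocation, which is the defining property of max-min fairness.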

Ways to Classify Congestion Control Algorithms

There are 4 major ways of classifying congestion control algorithms:
  1. Amount and type of feedback: This classification judges the algorithm on the basis of multi-bit or single-bit explicit signals, delay, loss, and so on.
  2. The performance aspect targeted for improvement: Includes variable-rate links, short-flow advantage, fairness, lossy links, etc.
  3. Incremental deployability: Modification is needed only by the sender; by both the sender and the receiver; only by the router; or by all three, i.e., the sender, the receiver, and the router.
  4. Fairness criterion being used: Includes minimum potential delay, max-min, proportional, and so on.
Two major components are required for preventing network congestive collapse:
  1. End-to-end flow control mechanism: This mechanism is designed so that it responds well to congestion and adjusts its behavior accordingly.
  2. Mechanism in routers: This mechanism is used for dropping or reordering packets under conditions of overload.
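The router-side mechanism in point 2 is, in its simplest form, a bounded queue that discards arrivals once it is full (tail drop). The class below is a minimal illustrative sketch; the names are hypothetical and real routers use far more sophisticated policies such as RED.

```python
from collections import deque

# Minimal sketch of the router-side drop mechanism: a bounded FIFO queue
# that discards arriving packets once it is full ("tail drop"). The drop
# is the congestion signal that well-behaved senders react to.

class BoundedQueue:
    def __init__(self, limit):
        self.q = deque()
        self.limit = limit
        self.dropped = 0

    def enqueue(self, packet):
        if len(self.q) >= self.limit:   # overload: discard the packet
            self.dropped += 1
            return False
        self.q.append(packet)
        return True

    def dequeue(self):
        return self.q.popleft() if self.q else None

# Offer 12 packets to a queue that can hold 8: the last 4 are dropped.
q = BoundedQueue(limit=8)
accepted = sum(q.enqueue(n) for n in range(12))
# accepted -> 8, q.dropped -> 4
```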

- Correct behavior of the end point is required for resending the dropped information. 
- This indeed slows down the information transmission rate. 
- If all the end points exhibit this kind of behavior, congestion is lifted from the network. 
- Also, all the end points are then able to share the available bandwidth fairly. 
- Slow start is another strategy, ensuring that a new connection does not overwhelm the router before congestion can be detected. 
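The slow start strategy just mentioned can be sketched as follows: the window doubles every round trip until it reaches a threshold, after which growth becomes linear. The function name and constants are illustrative, not from any real TCP implementation.

```python
# Sketch of TCP slow start: the congestion window doubles each round trip
# (one extra segment per ACK) until it reaches the slow-start threshold,
# after which growth becomes linear (congestion avoidance).
# Names and constants here are illustrative.

def slow_start_windows(ssthresh, rounds, initial=1):
    """Return the congestion window observed at each round trip."""
    cwnd = initial
    history = []
    for _ in range(rounds):
        history.append(cwnd)
        if cwnd < ssthresh:
            cwnd *= 2        # exponential phase: slow start
        else:
            cwnd += 1        # linear phase: congestion avoidance
    return history

# With ssthresh = 16 the window grows 1, 2, 4, 8, 16, then 17, 18, ...
```

Despite the name, the growth is exponential; it is "slow" only relative to the old behavior of sending a full window immediately on a new connection.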


Tuesday, August 20, 2013

When is a situation called congestion?

- Network congestion is quite a common problem in queuing theory and data networking. 
- Sometimes the data carried by a node or a link is so much that its QoS (quality of service) starts deteriorating. 
- This situation or problem is known as network congestion, or simply congestion. 
This problem has the following typical effects:
- Queuing delay
- Packet loss
- Blocking of new connections


- The last two effects lead to two further problems. 
- As the offered load is incrementally increased, either the throughput of the network actually drops, or it increases only by very small amounts. 
- Network protocols use aggressive re-transmissions to compensate for packet loss. 
- These protocols thus tend to keep the system in a state of network congestion even after the initial load has fallen to a level too low to cause congestion by itself. 
- Thus, networks using these protocols exhibit two stable states under the same load level. 
- The stable state in which the throughput is low is called congestive collapse. 
- Congestive collapse is also called congestion collapse.
- It is the condition a packet-switched computer network reaches when congestion leaves little or no useful communication happening.
- In such a situation, even the little communication that does happen is of no use. 
- There are certain points in the network, called choke points, where congestion usually occurs.
- At these points, the outgoing bandwidth is less than the incoming traffic. 
- Choke points are usually the points connecting a wide area network and a local area network. 
- When a network falls into such a condition, it is said to be in a stable state. 
- In this state, traffic demand is high but useful throughput is quite low.
- Also, the levels of packet delay are quite high. 
- Quality of service becomes extremely bad, and the routers cause packet loss since their output queues are full and they discard packets. 
- The problem of network congestion was identified in the year 1984. 
- It first came to wide attention when the throughput of the NSFnet phase-I backbone dropped by orders of magnitude below its actual capacity. 
- This problem continued to occur until Van Jacobson's congestion control methods were implemented at the end nodes.

Let us now see what causes this problem. 
- When the number of packets being sent to a router exceeds its packet-handling capacity, many packets are discarded by the intermediate routers. 
- These routers expect the discarded information to be re-transmitted. 
- Earlier, the re-transmission behavior of TCP implementations was very bad. 
- Whenever a packet was lost, the end points sent in extra packets repeating the lost information. 
- But this doubled the data rate. 
- This is just the opposite of the routine that should be carried out during congestion. 
- The entire network is thus pushed into a state of congestive collapse, resulting in a huge loss of packets and a reduced network throughput. 
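The retransmission spiral described above can be put in a one-line model: if a fraction p of packets is lost and every loss is immediately resent, the total injected traffic inflates to load / (1 - p). The function name below is illustrative.

```python
# Toy model of why naive retransmission worsens congestion: if routers
# drop a fraction `loss_rate` of packets and every loss is resent
# immediately, the total offered traffic inflates by 1 / (1 - loss_rate).

def effective_load(offered, loss_rate):
    """Total traffic including retransmissions of every lost packet."""
    return offered / (1.0 - loss_rate)

# At 50% loss the senders inject twice the original traffic (the doubled
# data rate described above), pushing the network deeper into collapse.
# effective_load(100, 0.5) -> 200.0
```

The feedback loop is clear: more loss causes more retransmission, which causes more load, which causes more loss, which is exactly what congestion control at the end points is designed to break.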
- Congestion control as well as congestion avoidance techniques are used by modern networks to avoid the congestive collapse problem. 
- Various congestion control algorithms are available that can be implemented to avoid network congestion. 
- These congestion control algorithms are classified according to various criteria such as the amount of feedback, deployability, and so on. 

