Saturday, August 31, 2013

What is the difference between leaky bucket algorithm and token bucket algorithm?

- Telecommunication networks and packet-switched computer networks make use of the leaky bucket algorithm for checking data transmissions, which are carried out in the form of packets. 

About Leaky Bucket Algorithm
- This algorithm is used for determining whether the data transmissions conform to the limits that have been defined for burstiness and bandwidth. 
- Leaky bucket counters also use the leaky bucket algorithm for detecting whether the peak or average rate of stochastic or random events and processes exceeds the predefined limits. 
We shall take the analogy of a bucket to explain this algorithm.
Consider a bucket with a hole in its bottom through which the water it holds leaks away. 
- The rate of leakage is constant as long as the bucket is not empty. 
- Water can be added intermittently, that is, in short bursts. 
- But if a large amount of water is added in one go, it will exceed the bucket's capacity and an overflow will occur. 
- Hence, the leaky bucket algorithm determines whether adding more water keeps to the average rate or exceeds it. 
- The leak rate sets the average rate at which water may be added, and the depth of the bucket decides how much water can be added in a burst. 
- Asynchronous transfer mode (ATM) networks use the generic cell rate algorithm, which is one version of the leaky bucket algorithm. 
- At the user-network interfaces, this algorithm is used for usage/network parameter control. 
- The algorithm is also used at network-network interfaces and inter-network interfaces for protecting a network from being overwhelmed by the traffic levels on its connections. 
- A network interface card on an ATM network can be used for shaping the transmissions. 
- Such a network interface card might use the generic cell rate algorithm itself or an equivalent of it.
The leaky bucket algorithm can be implemented in two different ways, both of which appear in the literature. 
- It therefore appears as if there are two distinct algorithms that are together known as the leaky bucket algorithm.
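
As a rough illustration of the leaky bucket used as a meter, here is a minimal Python sketch. The class name, parameters and byte-based accounting are assumptions made for this example, not a specification from the text.

```python
import time

class LeakyBucketMeter:
    """Leaky bucket used as a meter: the bucket 'leaks' at a constant rate,
    and each arriving packet adds its size to the bucket. A packet is
    conforming only if the bucket content stays within the bucket depth."""

    def __init__(self, leak_rate_bps, depth_bytes):
        self.leak_rate = leak_rate_bps   # average rate allowed (bytes/second)
        self.depth = depth_bytes         # burst tolerance (bucket depth)
        self.level = 0.0                 # current bucket content
        self.last = time.monotonic()

    def conforms(self, packet_bytes):
        now = time.monotonic()
        # Drain the bucket at the constant leak rate since the last check.
        self.level = max(0.0, self.level - (now - self.last) * self.leak_rate)
        self.last = now
        if self.level + packet_bytes <= self.depth:
            self.level += packet_bytes   # "add water" for a conforming packet
            return True
        return False                     # overflow: packet is non-conforming
```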

About Token Bucket Algorithm

- At an interval of every 1/r seconds, the token bucket algorithm adds a token to the bucket. 
- The maximum number of tokens that the bucket can hold is b. 
- Any token arriving above this limit is discarded by the bucket. 
- When a packet of n bytes arrives from the network layer, n tokens are removed from the bucket and the packet is transmitted into the network. 
- If the number of available tokens is less than n, the packet is treated as non-conformant. 
- In the leaky bucket view, a bucket with a fixed capacity is associated with some virtual user, and the rate at which it leaks is fixed. 
- No leakage occurs if there is nothing in the bucket. 
- Some water has to be added to the bucket in order for a packet to be conformant. 
- No water is added to the bucket if adding that amount would cause the bucket to exceed its capacity. 
- Therefore, one algorithm constantly adds something to the bucket and removes something for each conforming packet. 
- The other algorithm constantly removes something and adds something for each conforming packet. 
- The two algorithms are equally effective, which is why both see the same packet as conforming or non-conforming. 
- The leaky bucket algorithm is often used as a meter. 
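
A corresponding sketch of the token bucket, again with illustrative names and parameters rather than anything prescribed by the text, could look like this:

```python
import time

class TokenBucket:
    """Token bucket: tokens accrue at a fixed rate r (one every 1/r seconds)
    up to a maximum of b tokens; a packet of n bytes conforms only if at
    least n tokens are available, in which case n tokens are removed."""

    def __init__(self, rate_tokens_per_sec, burst_b):
        self.rate = rate_tokens_per_sec
        self.burst = burst_b
        self.tokens = burst_b            # start with a full bucket
        self.last = time.monotonic()

    def conforms(self, n):
        now = time.monotonic()
        # Add tokens accrued since the last call; tokens above b are discarded.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n             # remove n tokens for an n-byte packet
            return True
        return False                     # not enough tokens: non-conformant
```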


Friday, August 30, 2013

What is meant by flow specification?

- There are many problems concerning flow specification. 
- The provider has limited options for mitigating DDoS attacks that originate internally. 
- These can be categorized into three different categories:
Ø  BGP (border gateway protocol) destination black holes
Ø  BGP source-based filtering / uRPF
Ø  ACLs

- The basic idea is to make use of BGP for distributing the flow specification filters. 
- This enables dynamic filtering in the routers. 
- The flow specification rules are encoded as a BGP NLRI address family. 
- The flow-spec NLRI is treated by BGP as an opaque key and used as an entry key into its database. 
- Extended communities are used for specifying the actions, such as accept, discard, rate-limit, sample, redirect and so on. 
- The source/destination prefix and the source/destination port are matched in combination with the packet size, ICMP type/code, fragment encoding, DSCP, TCP flags and so on. 
- For example, TCP ports 80..90 can be matched together with the prefix 192.168.0/24; a sketch of such a rule follows after this list. 
- The flow specification trust model unicasts the routing advertisements for controlling the traffic. 
- A filter acts as a hole for the traffic that is being transmitted to some destination. 
- A filter is accepted when it is advertised for the destination by the next hop. 
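
The matching example above can be pictured with a small, purely illustrative sketch. The rule structure, field names and the matching function below are assumptions made for this illustration; they are not the actual BGP flow-spec NLRI encoding.

```python
import ipaddress

# Hypothetical representation of the flow-spec rule from the example above:
# match TCP traffic to 192.168.0.0/24 with destination ports 80..90, then
# apply an action drawn from the extended-community set (discard, rate-limit, ...).
RULE = {
    "dst_prefix": ipaddress.ip_network("192.168.0.0/24"),
    "protocol": 6,                      # TCP
    "dst_ports": range(80, 91),         # ports 80..90 inclusive
    "action": "rate-limit",
}

def matches(rule, packet):
    """packet is a dict with 'dst_ip', 'protocol' and 'dst_port' keys."""
    return (ipaddress.ip_address(packet["dst_ip"]) in rule["dst_prefix"]
            and packet["protocol"] == rule["protocol"]
            and packet["dst_port"] in rule["dst_ports"])

print(matches(RULE, {"dst_ip": "192.168.0.42", "protocol": 6, "dst_port": 80}))  # True
print(matches(RULE, {"dst_ip": "10.0.0.1", "protocol": 6, "dst_port": 80}))      # False
```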
Filters with various flow specifications are available today.
- The major benefit of flow specification is fine-grained filters, which are easy to deploy and manage through BGP. 
- The trust and distribution problems are solved by BGP. 
- The ASIC-based filtering already present in routers is leveraged. 
- This is another major benefit of flow specifications. 
Apart from the benefits, flow specification has various limitations, as mentioned below:
Ø  There is no update-level security in BGP.
Ø  The statistics and the application-level acknowledgement are not well defined.
Ø  Flow specifications work only for those nodes on which BGP has been enabled.
Ø  The BGP payload has to be overloaded for purposes beyond routing.
Ø  There are various operational issues between the security operations and the network operations teams.
Ø  The threat information cannot be gathered in one place.

- The integration of flow specifications has been announced by various security vendors. 
- DDoS attacks are experienced by a large number of customers. 
- DDoS attacks are now massive and put the network infrastructure at risk, not just the end customer. 
- Congestion problems occur at both the exchange and the backbone. 
- Attacks of long duration add to the cost of bursting and to circuit congestion problems. 
- Depending on the size of the attack, a POP may have to be isolated.
- VoIP is also affected. 
- These attacks have negative economic effects, as the cost of operations increases. 
- This leads to a degradation of the business. 
- Measures such as firewall filtering and destination BGP black-holing have proved insufficient in preventing the attacks. 
- These methods are slow since they require logging in to the devices and configuring them. 
- The configuration has to be updated constantly. 
- The traffic to the targeted destination is dropped entirely. 
- This affects availability. 
- The black hole routes are removed by constantly changing the configuration. 
- Earlier versions of the flow specifications had many bugs. 
- There were some limitations on performance. 
- However, they provided Arbor support for the flow specification actions. 
- Multi-vendor support is not provided. 
- To some extent, flow specification provides mitigation of the attack close to its source. 
- Collateral damage is eliminated for the carriers, and changes in the matching criteria are supported. 


Thursday, August 29, 2013

How can traffic shaping help in congestion management?

- Traffic shaping is an important part of the congestion avoidance mechanism, which in turn comes under congestion management. 
- If the traffic can be controlled, we can obviously maintain control over network congestion. 
A congestion avoidance scheme can be divided into the following two parts:
  1. The feedback mechanism and
  2. The control mechanism
- The feedback mechanism is also known as the network policy, and the control mechanism is known as the user policy.
- Of course there are other components as well, but these two are the most important. 
- While analyzing one component, it is simply assumed that the other components are operating at optimum levels. 
- At the end, it has to be verified whether or not the combined system works as expected under various types of conditions.

The network policy consists of the following three algorithms:

1. Congestion Detection: 
- Before feedback can be sent to the users, the network's load level or state must be determined. 
- In general, there are n possible states of the network. 
- At a given time the network is in one of these states. 
- Using the congestion detection algorithm, these states are mapped onto the possible load levels. 
- There are two possible load levels, namely under-load and over-load. 
- Under-load means operating below the knee point and over-load means operating above the knee point. 
- A k-ary version of this function would produce k load levels. 
- There are three criteria upon which the congestion detection function can be based: link utilization, queue lengths and processor utilization. 
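
A minimal sketch of such a binary congestion detection function is given below; the knee thresholds (80% link utilization, an average queue length of one) are assumed example values for the illustration, not prescriptions from the text.

```python
def detect_load_level(link_utilization, queue_length,
                      knee_utilization=0.8, knee_queue=1.0):
    """Binary congestion-detection function: map the measured network state
    to one of two load levels. A k-ary version would map the state to one
    of k levels instead of two."""
    if link_utilization > knee_utilization or queue_length > knee_queue:
        return "over-load"     # operating above the knee
    return "under-load"        # operating below the knee
```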

2. Feedback Filter: 
- After the load level has been determined, it has to be verified whether the state lasts for a sufficiently long duration before it is signaled to the users. 
- Only then is the feedback about the state actually useful. 
- The duration must be long enough for the state to be acted upon. 
- A rapidly changing state, on the other hand, might create confusion. 
- Such a state may already have passed by the time the users get to know of it. 
- Such states give misleading feedback. 
- A low-pass filter function serves the purpose of filtering out these states and passing on only the desirable ones. 
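
One common way to realize such a low-pass filter is an exponentially weighted moving average; the sketch below, with assumed weight and threshold values, signals congestion only when the over-load state persists.

```python
class FeedbackFilter:
    """Low-pass filter over the detected load level: congestion is signalled
    to the users only if the over-load state persists, so that short-lived
    spikes do not generate misleading feedback."""

    def __init__(self, weight=0.25, threshold=0.5):
        self.weight = weight          # smaller weight = heavier smoothing
        self.threshold = threshold
        self.smoothed = 0.0

    def update(self, overloaded):
        # Exponentially weighted moving average of the 0/1 over-load signal.
        sample = 1.0 if overloaded else 0.0
        self.smoothed += self.weight * (sample - self.smoothed)
        return self.smoothed > self.threshold   # True => signal congestion
```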

3. Feedback Selector: 
- After the state has been determined, this information has to be passed to the users so that they may contribute to cutting down the traffic.
- The purpose of the feedback selector function is to identify the users to whom the information has to be sent.

The user policy consists of the following three algorithms: 

1. Signal Filter: 
- The users to whom the feedback signals are sent by the network interpret them after accumulating a number of signals. 
- The nature of the network is probabilistic, and therefore the signals might not all agree. 
- According to some signals the network might be under-loaded, while according to others it might be over-loaded. 
- These signals have to be combined to decide the final action. 
- Based upon the percentage of each kind of signal, an appropriate weighting function may be applied. 

2. Decision Function: 
- Once the load level of the network is known to the user, it has to be decided whether or not to increase the load.
- This function has two parts: the direction of the change is determined by the first one and the amount is decided by the second. 
- The first part is the decision function proper and the second one is the increase/decrease algorithm. 

3. Increase/Decrease Algorithm: 
- This algorithm forms the major part of the control scheme.
- The control measure to be taken is based upon the feedback obtained. 
- A well-chosen increase/decrease rule helps in achieving both fairness and efficiency. 
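
A classic concrete choice for this step is additive increase / multiplicative decrease, which Chiu and Jain showed converges to both efficiency and fairness; the sketch below uses illustrative parameter values.

```python
def aimd_update(current_load, congested,
                additive_step=1.0, multiplicative_factor=0.875):
    """Additive-increase / multiplicative-decrease rule.
    Parameter values here are illustrative, not prescribed."""
    if congested:
        # Over-load feedback: back off multiplicatively.
        return current_load * multiplicative_factor
    # Under-load feedback: probe for spare capacity additively.
    return current_load + additive_step
```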


Wednesday, August 28, 2013

What are different policies to prevent congestion at different layers?

- Many times the demand for resources is more than what the network can offer, i.e., its capacity. 
- Too much queuing then occurs in the network, leading to a great loss of packets. 
When the network is in a state of congestive collapse, its throughput drops to zero whereas the path delay increases by a great margin. 
- The network can recover from this state by following a congestion control scheme.
- A congestion avoidance scheme enables the network to operate in a region where the throughput is high and the delay is low. 
- In other words, these schemes prevent a computer network from falling prey to the network congestion problem. 
- The recovery mechanism is implemented through congestion control, and the prevention mechanism is implemented through congestion avoidance. 
The network and the user policies are modeled for the purpose of congestion avoidance. 
- They act like a feedback control system. 

The following are defined as the key components of a general congestion avoidance scheme:
Ø  Congestion detection
Ø  Congestion feedback
Ø  Feedback selector
Ø  Signal filter
Ø  Decision function
Ø  Increase and decrease algorithms

- The problem of congestion control gets more complex when the network uses a connection-less protocol. 
- The main focus is avoiding congestion rather than simply controlling it. 
- A congestion avoidance scheme is designed after comparing it with a number of alternative schemes. 
- During the comparison, the algorithm with the right parameter values is selected. 
For doing so, a few goals have been set, each with an associated test for verifying whether or not the scheme meets it:
Ø  Efficiency: If the network is operating at the "knee" point, it is said to be working efficiently.
Ø  Responsiveness: The configuration and the traffic of the network vary continuously. Therefore the point of optimal operation also varies continuously.
Ø  Minimum oscillation: Only those schemes are preferred that have a small oscillation amplitude.
Ø  Convergence: The scheme should bring the network to a point of stable operation whenever the workload and the network configuration are kept stable. Schemes that satisfy this goal are called convergent schemes; divergent schemes are rejected.
Ø  Fairness: This goal aims at providing a fair share of the resources to each independent user.
Ø  Robustness: This goal defines the capability of the scheme to work in any random environment. Schemes that work only for deterministic service times are therefore rejected.
Ø  Simplicity: Schemes are accepted in their simplest version.
Ø  Low parameter sensitivity: The sensitivity of a scheme is measured with respect to its various parameter values. A scheme that is found to be too sensitive to a particular parameter is rejected.
Ø  Information entropy: This goal is about how the feedback information is used. The aim is to get maximum information with the minimum possible feedback.
Ø  Dimensionless parameters: A parameter having dimensions such as mass, time or length is a function of the network configuration or speed. A parameter that has no dimensions has greater applicability.
Ø  Configuration independence: The scheme is accepted only if it has been tested against various different configurations.

Congestion avoidance scheme has two main components:
Ø  Network policies: It consists of the following algorithms: feedback filter, feedback selector and congestion detection.
Ø  User policies: It consists of the following algorithms: increase/ decrease algorithm, decision function and signal filter.
These algorithms decide whether the network feedback has to be implemented via packet header field or as source quench messages.




Tuesday, August 27, 2013

What are general principles of congestion control?

- Problems such as the loss of data packets occur if the buffers of the routers overflow.
- This overflow is caused by congestive collapse, which is a consequence of network congestion. 
- If packets have to be re-transmitted more than once, it is an indication that the network is facing congestion. 
- Re-transmitting the packets treats only this symptom, not the underlying problem of network congestion. 
- In congestive collapse, a number of sources attempt to send data, and at quite a high rate. 
- Preventing network congestion requires mechanisms that are capable of throttling the sending node when congestion occurs. 
- Network congestion is a real problem because it manifests itself in the performance that the upper-layer applications receive. 
- There are various approaches available for preventing and avoiding network congestion and thus implementing proper congestion control. 
- Congestion of packets is said to occur when the demand for resources exceeds the capacity of the network and too much queuing occurs, causing loss of packets. 
- During network congestion, the throughput of the network might drop to zero and there might be a sharp rise in the path delay. 
A network can recover from the state of congestive collapse using a congestion control scheme. 
- A network can operate in a region of high throughput and low delay with the help of a congestion avoidance scheme.
- These schemes keep the network from falling into a state of congestive collapse. 
- There is a big confusion between congestion control and congestion avoidance. Most of us think they are the same thing, but they are not. 
- Congestion control provides a recovery mechanism whereas congestion avoidance provides a prevention mechanism. 
- Today's technological advances in the field of networking have led to a rise in the bandwidth of network links. 
- In 1970, the ARPAnet came into existence, built using leased telephone lines that had a bandwidth of 50 kbits/second. 
- LANs (local area networks) were first developed around 1980 using token rings and Ethernet, offering a bandwidth of 10 mbits/second. 
- During the same time, many efforts were made to standardize LANs using optical fibers, providing bandwidths of 100 mbits/second and higher. 
- Attention to congestion control has increased because of the growing mismatch between the various links composing a network. 
- Routers, IMPs, gateways, intermediate nodes, links etc. are the hot-spots for congestion problems. 
- It is at these spots that the bandwidth of the receiver falls short of accommodating all the incoming traffic. 
- In networks using connection-less protocols, it is even more difficult to cope with the problems of network congestion. 
- It is comparatively easy in networks using connection-oriented protocols.
- This is because in such networks, the network resources are reserved in advance while setting up the connection.
- One way of controlling congestion is to prevent new connections from being set up if congestion is detected anywhere in the network, but this also prevents usage of the reserved resources, which is a disadvantage. 


Keeping track of commitments made by team members for the future - tracking of issues

One of the minor issues that threaten to become major issues during tight projects is the tracking of issues, or in the case of this post, tracking the work that other people were expected to do. Consider the case where there was a meeting on a critical issue (and during a tight software development cycle, there can be many such meetings to resolve many such critical issues; during a tough cycle, even a small delay in a feature can become critical since the buffer to absorb such delays is no longer present in the schedule). So, you had a meeting a couple of weeks back, where there were some action items on various team members that would have helped in resolving the issue. You are the Program Manager or the Project Manager of the team, and handling such tracking typically falls on your head.
Since there were a number of such issues, somehow the expected progress on the critical issues did not happen, and as it turned out, the people who were expected to provide some updates did not do so; to take a specific example, the workflow designer was to provide a new, updated specification for a feature or a part of a feature. You were so busy that the communication with the designer to remind him or her about this delivery did not happen (which was a mistake, but can happen unless you are blessed with a highly organized mind). Suddenly somebody remembers, and then it falls on your head: why did the designer not provide an update, and why did you not send the reminder? It is almost as if it is not the problem of the designer anymore, but your problem.
Such situations are very uncomfortable to be in, so you need to ensure that you avoid being in them as far as possible. It is not very difficult to organize your work in such a way that you stay out of these situations. Just a few tips are enough:
- Keep minutes of the meetings, including the action items, and send them out to the people present (this should be happening regularly)
- Add these action items in some sort of tool that will send you a reminder (having a tool that sends an automated reminder to another team member will probably not work when the entire schedule is tight)
- Every day, either at the beginning of the day, or at the end of the day, review all these items and ensure that you are updating these items. In some cases, the need for the action item would have vanished because of some other changes, and you should remove these items; or the need for the item would have got delayed
- Be sure that your team members already know that you will be doing this process and they will get reminders; in many cases, team members don't like to get such reminders, and they would already have noted the action item directly and will send an update by the desired time. However, if you were to explicitly ask them to do the same, they might not (the wonders of the human nature are incredible).


Monday, August 26, 2013

What is the difference between congestion control and flow control?

Flow control and congestion control are similar-sounding concepts and are often confused. In this article we shall discuss the differences between the two. 

- Computer networks use the flow control mechanism to control the data flow between two nodes in such a way that a receiver that is slower than the sender is not outrun by it. 
- The mechanism of flow control thus provides a way for the receiver to control the speed at which the sender transmits information.
- Congestion control, on the other hand, provides a mechanism for controlling the data flow when actual congestion occurs. 
- This mechanism controls the entry of data into the network so that the traffic can be handled by the network effectively. 
- The mechanism of flow control does not let the receiving node get overwhelmed by the traffic that is being sent by another node. 

There are several reasons why this flow of data gets out of control and affects the network negatively. 
- The first reason is that the receiving node might not be capable of processing the incoming data as fast as it is being sent by the sender node. 
Based on these reasons, various types of flow control mechanisms are available. 
- However, the most common categorization is based on whether or not feedback is sent to the sender. 
- One category is the open loop flow control mechanism. 
- In this mechanism no feedback is sent to the sender by the receiver, and this is perhaps the most widely used form of flow control. 
- The opposite of the open loop mechanism is closed loop flow control. 
- In this mechanism, the receiver sends congestion information back to the sender. 
- Other commonly used flow control mechanisms are:
Ø  Network congestion
Ø  Windowing flow control
Ø  Data buffer etc.

- Congestion control offers methods for regulating the traffic entering the network to an extent that the network itself can manage.
- Congestion control prevents the network from falling into a state of congestive collapse. 
- In such a state, either little or no communication happens.
- And the little communication that does happen is of no help. 
- Switching networks usually require congestion control measures more than any other type of network. 
- Congestion control is driven by the goal of keeping the number of data packets at a level at which the performance of the network is not reduced dramatically.
- Congestion control mechanisms can be seen in transport layer protocols such as TCP (transmission control protocol); UDP (user datagram protocol) itself provides none and leaves it to the application. 
- TCP makes use of the exponential back-off and slow start algorithms; a simplified sketch follows after this list. 
- We classify congestion control algorithms based upon the feedback given by the network, the performance aspect to be improved, the modifications required to the present network, the fairness criterion being used and so on. 
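
The sketch below is a highly simplified view of how slow start, congestion avoidance and exponential back-off interact; it is not the exact behavior of any particular TCP implementation, and the function names and units are assumptions for the illustration.

```python
def next_cwnd(cwnd, ssthresh, ack_received, loss_detected, mss=1):
    """Return the updated (cwnd, ssthresh), both in units of MSS."""
    if loss_detected:
        # On loss, halve the threshold and restart from a small window.
        return mss, max(cwnd // 2, 2 * mss)
    if ack_received:
        if cwnd < ssthresh:
            cwnd += mss                  # slow start: roughly doubles per RTT
        else:
            cwnd += mss * mss / cwnd     # congestion avoidance: ~1 MSS per RTT
    return cwnd, ssthresh

def backoff_rto(rto, max_rto=60.0):
    """Exponential back-off of the retransmission timeout after a timeout."""
    return min(rto * 2, max_rto)
```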

- Congestion and flow control are two very important mechanisms used for keeping the traffic flow in order. 
- Flow control is a mechanism that operates from one end to the other, i.e., between the sender and the receiver, where the speed of the sender may be much higher than that of the receiving node. 
- Congestion control is implemented for preventing the packet loss and delay that are caused as side effects of network congestion. 
- Congestion control is meant for controlling the traffic of the entire network, whereas flow control is limited to the transmission between two nodes.


Sunday, August 25, 2013

What is the concept of flow control?

- Flow control is an important concept in the field of data communications. 
- This process involves management of the data transmission rate between two communicating nodes. 
- Flow control is important to prevent a slow receiver from being outrun by a fast sender. 
- Using flow control, a mechanism is provided to the receiver through which it can control the speed at which data is transmitted to it.
- This prevents the receiving node from getting overwhelmed with traffic from the node that is transmitting.
- Do not confuse congestion control with flow control. They are different concepts. 
- Congestion control comes into play for controlling the data flow when there actually is a problem of network congestion. 

The mechanism of flow control, on the other hand, can be classified in the following two ways:
  1. The feedback is sent to the sending node by the receiving node.
  2. The feedback is not sent to the sending node by the receiving node.
- The sending computer might tend to send the data at a faster rate than what can be received and processed by the other computer. 
- This is why we require flow control. 
- This situation arises when too much traffic load is placed upon the receiving computer by the computer that is sending the data. 
- It can also arise when the processing power of the receiving computer is lower than that of the one sending the data.

Stop and Wait Flow Control Technique 
- This is the simplest type of the flow control technique. 
- Here, when the receiver is ready to start receiving data from the sender, the message is broken down into a number of frames. 
- The sending system then waits for a specific time to get an acknowledgement or ACK from the receiver after sending each frame. 
- The purpose of the acknowledgement signal is to make sure that the frame has been received properly. 
- If during the transmission a packet or frame gets lost, then it has to be re-transmitted. 
- This process is called the automatic repeat request or ARQ. 
- The problem with this technique is that it can transmit only one frame at a time. 
- This makes the transmission channel very inefficient. 
- Until and unless the sender gets an acknowledgement, it will not proceed to transmit another frame. 
- Both the transmission channel and the sender are left un-utilized during this waiting period. 
- The simplicity of this method is its biggest advantage. 
- The disadvantage is the inefficiency that results from this simplicity. 
- The waiting state of the sender creates the inefficiency. 
- This happens especially when the transmission delay is shorter than the propagation delay. 
- Sending longer transmissions is another cause of inefficiency. 
- It also increases the chance for errors to creep into this protocol. 
- With short messages, it is quite easy to detect errors early. 
- Breaking one big message down into many separate smaller frames also adds to the inefficiency. 
- This is because, taken together, these pieces take a long time to be transmitted.
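
A minimal stop-and-wait ARQ sketch over UDP might look like the following. The one-byte alternating sequence number and the ACK format are assumptions for the illustration, and a cooperating receiver that echoes the sequence number back is assumed.

```python
import socket

def stop_and_wait_send(sock, frames, dest, timeout=1.0):
    """Send each frame, then block for its acknowledgement; on timeout,
    retransmit the same frame (automatic repeat request)."""
    sock.settimeout(timeout)
    seq = 0
    for frame in frames:
        while True:
            sock.sendto(bytes([seq]) + frame, dest)
            try:
                ack, _ = sock.recvfrom(2)
                if ack and ack[0] == seq:        # correct ACK received
                    break                        # move on to the next frame
            except socket.timeout:
                pass                             # lost frame or ACK: retransmit
        seq ^= 1                                 # alternate the sequence bit
```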


Sliding window Flow Control Technique 
- This is another method of flow control in which the receiver gives the sender permission to transmit data continuously until a window is filled up. 
- Once the window is full, the sender stops transmission until a larger window is advertised. 
- This method can be utilized better if the size of the buffer is kept limited. 
- During the transmission, space for, say, n frames is allocated to the buffer. 
- This means n frames can be accepted by the receiver without having to wait for an ACK. 
- After n frames, an ACK is sent containing the sequence number of the next frame that is expected. 
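
A minimal sketch of the sender side of such a sliding-window scheme is shown below; send_frame and recv_ack are placeholder callbacks standing in for the actual link operations, and retransmission on timeout is omitted for brevity.

```python
def sliding_window_send(frames, window_size, send_frame, recv_ack):
    """Keep up to window_size frames outstanding without an acknowledgement.
    recv_ack() returns the sequence number of the next frame the receiver
    expects (a cumulative ACK), which slides the window forward."""
    base = 0                      # oldest unacknowledged frame
    next_seq = 0                  # next frame to send
    while base < len(frames):
        # Fill the window: transmit while there is unused window space.
        while next_seq < len(frames) and next_seq < base + window_size:
            send_frame(next_seq, frames[next_seq])
            next_seq += 1
        # Window is full (or all frames sent): wait for an ACK to slide it.
        base = recv_ack()
```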

