

Showing posts with label Layers. Show all posts

Tuesday, October 1, 2013

How can firewalls secure a network?

Firewalls in computer systems are either software based or hardware based, but both serve the same purpose: keeping control over incoming as well as outgoing traffic.
In this article we discuss how firewalls secure a network.
This control is maintained by analyzing the data packets.
- After analysis, the firewall determines whether to allow each packet to pass or not.
- This decision is based on a set of rules.
- With these rules, the firewall establishes a barrier between the external network, which is not considered secure and trusted, and the internal network, which is.
- Most personal-computer operating systems come with a built-in software firewall to provide protection against threats from external networks.
- Firewall components may also be installed in the intermediate routers of a network.
- Some firewalls are designed to perform routing as well.

There are different types of firewalls, and they function differently. Firewalls are classified by where they inspect the communication, i.e., whether at the network layer or at the application layer.

Network layer firewalls (packet filters):
- Firewalls operating at the network layer are often termed packet filters.
- They work at a low level of the TCP/IP protocol stack and do not allow packets to pass unless the packets satisfy all the rules, which are typically defined by the firewall administrator.
- These firewalls fall into two categories, namely stateless firewalls and stateful firewalls.
- The former use less memory and operate faster on simple filters, so filtering takes less time.
- They are used for filtering stateless network protocols, i.e., protocols that have no concept of a session.
- They are not capable of making complex decisions based on the state of a communication.
- The latter kind maintains the context of active sessions,
- and this state information is used to speed up packet processing.
- A connection is described by properties such as its TCP or UDP ports, IP addresses, and so on.
- If a packet matches an existing connection, it is allowed to pass.
- Today's firewalls can filter packets on attributes such as the source and destination IP addresses, the protocol, the originator's netblock, TTL values, and so on.
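The rule-based decision described above can be sketched in a few lines of Python. The rule fields, the wildcard notation, and the default-deny policy are illustrative assumptions for this sketch, not the format of any particular firewall product:

```python
# Minimal sketch of a stateless packet filter: each rule matches on
# source/destination address and destination port, first match wins.
RULES = [
    # (src, dst, dport, action) -- "*" is a wildcard
    ("*",        "10.0.0.5", 22,  "deny"),   # block SSH to this host
    ("10.0.0.0", "*",        "*", "allow"),  # allow traffic from a trusted host
]

def filter_packet(src, dst, dport):
    """Return 'allow' or 'deny' for a packet; default-deny if no rule matches."""
    for r_src, r_dst, r_dport, action in RULES:
        if (r_src in ("*", src) and r_dst in ("*", dst)
                and r_dport in ("*", dport)):
            return action
    return "deny"  # no rule matched: drop by default

print(filter_packet("10.0.0.1", "10.0.0.5", 22))  # deny
print(filter_packet("10.0.0.0", "8.8.8.8", 80))   # allow
```

Real packet filters match on many more attributes (protocol, TTL, netblock), but the first-match-wins, default-deny structure is the same.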

Application layer firewalls:
- Firewalls of this type work at the application level of the TCP/IP stack.
- All packets traveling into and out of an application are intercepted by such a firewall,
- and other packets are blocked.
- First, every packet is inspected for malicious content in order to prevent the spread of Trojans and worms.
- Additional inspection criteria may be applied, at the cost of some extra latency in packet forwarding.
- This firewall determines whether a given connection should be accepted by a process.
- Firewalls that establish this function by hooking into the socket calls to filter connections
- are termed socket filters.
- Their way of working is similar to packet filters, except that the rules are applied to each process rather than to connections.
- For processes that have not yet been given a connection, the rules are defined through user prompts.
- These firewalls are usually implemented in combination with packet filters.




Saturday, September 21, 2013

What are the services provided to upper layers by transport layer?

In the field of computer networking, the purpose of the 4th layer, the transport layer, is to provide end-to-end communication services to the running applications. These services are provided within a layered architectural framework of protocols and components. The transport layer offers convenient services such as the following:
Ø  Connection-oriented data stream support
Ø  Reliability
Ø  Flow control
Ø  Multiplexing and so on.

- Both the OSI (Open Systems Interconnection) model and the TCP/IP model include a transport layer.
- The foundation of the internet is the TCP/IP model, whereas the OSI model is used for general networking.
- However, the transport layer is defined differently in the two models. Here we discuss the transport layer of the TCP/IP model, since it keeps the API (application programming interface) to internet hosts convenient.
- This is in contrast with the definition of the transport layer in the OSI model.
- TCP (Transmission Control Protocol) is the most widely used transport protocol, and the internet protocol suite has been named after it, i.e., TCP/IP.
- It is a connection-oriented transmission protocol, and therefore quite complex,
partly because it incorporates reliable data stream and transmission services into its stateful design.
- Besides TCP, there are other protocols in the same category, such as SCTP (Stream Control Transmission Protocol) and DCCP (Datagram Congestion Control Protocol).

Now let us see what services the transport layer provides to its upper layers:
Ø  Connection-oriented communication: It is much easier for an application to interpret a connection as a data stream than to cope with the connectionless models that underlie it, for example the Internet Protocol (IP) or UDP's datagrams.
Ø  Byte orientation: Processing a byte stream is easier than processing messages in the communication system's own format. This simplification lets applications work on top of the underlying message formats.
Ø  Same-order delivery: The network below does not guarantee that data packets will be received in the order in which they were sent, but in-order delivery is one of the desired features of the transport layer. It is implemented using segment numbering, so that the data packets are passed to the receiver in order. Head-of-line blocking is a consequence of implementing this.
Ø  Reliability: During transport, some data packets may be lost because of errors or problems such as network congestion. Using an error-detection mechanism such as a CRC (cyclic redundancy check), the transport protocol can check the data for corruption and verify correct reception by having the receiver send an ACK or NACK signal to the sending host. Schemes such as ARQ (automatic repeat request) are used to retransmit corrupted or lost data.
Ø  Flow control: The rate at which data is transmitted between two nodes is managed to prevent a fast sender from transmitting more data than the receiver's buffer can hold at a time, which would otherwise cause a buffer overrun.

Ø  Congestion avoidance: Congestion control limits the traffic entering the network so as to avoid congestive collapse; unchecked automatic repeat requests, for example, can keep a network in a state of congestive collapse.
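The reliability service described above, checksums plus ACK/NACK-driven retransmission, can be illustrated with a toy stop-and-wait sender in Python. The segment layout and the simulated channel are assumptions made for the sketch; real transport protocols use sliding windows and timers rather than this lockstep loop:

```python
import zlib

def make_segment(seq, payload: bytes) -> bytes:
    """One-byte sequence number + payload + 4-byte CRC-32 trailer."""
    body = bytes([seq]) + payload
    return body + zlib.crc32(body).to_bytes(4, "big")

def check_segment(seg: bytes):
    """Return (seq, payload) if the CRC checks out, else None (-> NACK)."""
    body, crc = seg[:-4], seg[-4:]
    if zlib.crc32(body).to_bytes(4, "big") != crc:
        return None
    return body[0], body[1:]

def stop_and_wait(payloads, channel):
    """Send each payload, retransmitting until the (simulated) channel
    delivers it uncorrupted and the receiver would therefore ACK it."""
    received = []
    for seq, p in enumerate(payloads):
        while True:
            result = check_segment(channel(make_segment(seq % 256, p)))
            if result is not None:          # ACK: move to the next segment
                received.append(result[1])
                break                       # else NACK: retransmit

    return received

# A channel that corrupts the first transmission, then behaves.
drops = [True]
def flaky(seg):
    if drops and drops.pop():
        return seg[:-1] + bytes([seg[-1] ^ 0xFF])  # flip bits in the CRC
    return seg

print(stop_and_wait([b"hello", b"world"], flaky))  # [b'hello', b'world']
```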


Thursday, September 19, 2013

What is fragmentation?

- The fragmentation technique is implemented in IP (Internet Protocol) to break datagrams down into smaller pieces.
- This is done so that data packets can pass through a link whose MTU (maximum transmission unit) is smaller than the original datagram size.
- The procedure for IP fragmentation, along with the procedures for transmitting and reassembling the datagrams, is given in RFC 791.
- IPv6 hosts are required to determine the optimal MTU of a path before packets are sent.
- If the PDU (protocol data unit) received by a router is larger than the MTU of the next hop, two options are available if IPv4 transport is being used:
Ø  Drop the PDU and send an ICMP (Internet Control Message Protocol) message indicating the condition "packet too big".
Ø  Fragment the IP packet and transmit it over the link with the smaller MTU. (Any IPv6 packet of 1280 bytes or less can be delivered without resorting to IPv6 fragmentation.)

- If a fragmented IP packet is received by the recipient host, its job is to reassemble the datagram and pass it up to the higher-layer protocols.
- Reassembly is expected to take place at the recipient host, but for practical reasons it may be done by an intermediate router.
- For example, fragments may be reassembled by NAT (network address translation) in order to translate the data streams.
- IP fragmentation can result in excessive retransmission whenever the fragments encounter packet loss,
because reliable protocols (for example, TCP) must retransmit all the fragments in order to recover from the loss of a single fragment.
Thus, senders typically use one of two approaches for deciding what size of datagram to transmit over the network:
  1. First approach: the sender transmits IP datagrams whose size equals the MTU of the first hop.
  2. Second approach: the sender runs the path MTU discovery algorithm.
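The fragmentation step itself can be sketched in Python. The 20-byte header size and the dictionary representation of a fragment are simplifying assumptions of the sketch; the 8-byte offset granularity follows the IPv4 rule that fragment offsets are expressed in 8-byte units:

```python
def fragment(payload: bytes, mtu: int, header: int = 20):
    """Split a datagram payload into fragments fitting within mtu.
    Every fragment's data length except the last must be a multiple
    of 8 bytes, since IPv4 offsets count 8-byte blocks."""
    max_data = (mtu - header) // 8 * 8
    frags, offset = [], 0
    while offset < len(payload):
        chunk = payload[offset:offset + max_data]
        more = (offset + len(chunk)) < len(payload)   # MF flag
        frags.append({"offset": offset // 8, "mf": more, "data": chunk})
        offset += len(chunk)
    return frags

frags = fragment(b"x" * 4000, mtu=1500)
print([(f["offset"], f["mf"], len(f["data"])) for f in frags])
# [(0, True, 1480), (185, True, 1480), (370, False, 1040)]
```

Reassembly is the inverse: the receiver places each fragment's data at offset * 8 and waits until the fragment with the MF flag clear has arrived and no gaps remain.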

- Fragmentation also has an impact on network forwarding.
- When an internet router has multiple parallel paths, technologies such as CEF and LAG split the traffic across the links using a hash algorithm.
- The major goal of this algorithm is to ensure that all packets of the same flow are sent out on the same path, minimizing unwanted packet reordering.
- If the hash algorithm uses the TCP or UDP port numbers, fragmented packets may be forwarded along different paths,
- because the layer-4 information is contained only in the first fragment of the packet.
- As a result, the initial fragment may arrive after the non-initial fragments.
- Most security devices in hosts treat this condition as an error
- and therefore drop these packets.
- The fragmentation mechanism differs between IPv4 and IPv6.
- In the former, fragmentation is performed by the routers;
- in IPv6, packets larger than the MTU are dropped by the routers.
- The header format also varies between the two cases.
- Since fragmentation in both versions is carried out using analogous fields, the same algorithm can be reused for fragmentation and reassembly.
- IPv4 hosts must make a best effort to reassemble the datagram fragments.


Wednesday, September 18, 2013

What are the advantages and disadvantages of datagram approach?

- Today's packet-switched networks make use of a basic transfer unit commonly known as the datagram.
- In such packet-switched networks, the arrival, the time of arrival, and the order of delivery of data packets come with no guarantee.
- The first packet-switching network to use datagrams was CYCLADES.
- Datagrams are known by different names at different layers of the OSI model:
- for example, at layer 1 we call it a chip, at layer 2 a frame or cell, a data packet at layer 3, and a data segment at layer 4.
- The major characteristic of a datagram is that it is independent, i.e., it does not rely on anything else for the information required for its exchange.
- There is no connection of fixed duration between two points, as there is in a telephone conversation.
- Virtual circuits are just the opposite of datagrams.
- Thus a datagram can be called a self-contained entity:
- it carries information sufficient to route it from the source to the destination without depending on earlier exchanges.
- A comparison is often drawn between a mail delivery service and a datagram service.
- The user's job is just to provide the address of the destination,
- but he or she is guaranteed neither the delivery of the datagram nor, if it is delivered successfully, any confirmation.
- Datagrams are routed to their destination without the help of a predetermined path.
- The order in which the data is sent or received is given no consideration.
- It is because of this that datagrams belonging to a single group may travel over different routes before reaching their common destination.
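The self-contained, no-setup nature of datagrams is easy to see with UDP sockets in Python: every sendto() names the full destination address, and no connection is established first. (Loopback happens to deliver both datagrams intact and in order here, but UDP itself makes no such promise.)

```python
import socket

# Receiver: bind a UDP socket to an ephemeral port on loopback.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))
addr = recv_sock.getsockname()

# Sender: no setup phase -- each sendto() carries the full destination
# address, just as each datagram carries it on the wire.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"datagram 1", addr)
send_sock.sendto(b"datagram 2", addr)

first = recv_sock.recvfrom(1024)[0]
second = recv_sock.recvfrom(1024)[0]
print(first, second)
send_sock.close(); recv_sock.close()
```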

Advantages of Datagram Approach
  1. Datagrams contain the full destination address rather than a connection number.
  2. There is no setup phase for datagram circuits, which means no resources are reserved in advance.
  3. If a router goes down during a transmission, only the datagrams queued up in that specific router suffer; the other datagrams are unaffected.
  4. If a fault or loss occurs on a communication line, datagram circuits are capable of compensating for it.
  5. Datagrams play an important role in balancing the traffic in the subnet, because the route can be changed halfway.
Disadvantages of Datagram Approach

  1. Since datagrams carry the full destination address, they generate more overhead and thus waste bandwidth, which makes the datagram approach comparatively costly.
  2. A complicated procedure has to be followed in datagram circuits to determine a packet's destination.
  3. In a subnet using the datagram approach, it is very difficult to keep congestion problems at bay.
  4. Any-to-any communication is one of the key disadvantages of datagram subnets: if a system can communicate with any device, any device can also communicate with that system, which can lead to various security issues.
  5. Datagram subnets are prone to losing or re-sequencing data packets in transit. This puts a great burden on the end systems to monitor, recover, and reorder the packets into their original sequence.
  6. Datagram subnets are less capable of congestion control and flow control, because the direction of the incoming traffic is not fixed. In virtual-circuit subnets, packets flow only along their virtual circuits, making the flow comparatively easy to control.
  7. The unpredictable nature of the traffic flow makes datagram networks difficult to design.


Wednesday, September 11, 2013

What are transport and application gateways?

- Hosts and routers are separate elements in the TCP/IP architecture.
- Private networks require additional protection in order to maintain access control over them.
- The firewall is one component of this TCP/IP architecture:
- it separates the intranet from the internet.
- This means all incoming traffic must pass through the firewall,
- and only authorized traffic is allowed to pass through.
- The firewall cannot simply be penetrated.
A firewall has two components, namely:
Ø  a filtering router, and
Ø  two types of gateways, namely application and transport gateways.
- All packets are checked by the router and filtered on attributes such as protocol type, port numbers, TCP header fields, and so on.
- Designing the rules for filtering the packets is quite a complex task.
- Packet filtering alone offers only a little protection, since with the filtering rules on one side it is difficult to cater to the services of the users on the other side.

About Application Gateways
- Application layer gateways are layer-7 intermediate systems designed mainly for access control.
- However, these gateways are not commonly used in the TCP/IP architecture.
- They may sometimes be used for solving certain internetworking issues.
- Application gateways follow the proxy principle to support authentication, access-control restrictions, encryption, and so on.
- Consider two users, A and B.
- A generates an HTTP request, which is first sent to the application layer gateway rather than to its destination.
- The gateway checks whether the request is authorized and performs encryption.
- Once the request has been authorized, the gateway sends it on to user B just as it would have been sent by A.
- B responds with a MIME header and data, which the gateway may decrypt or reject.
- If the gateway accepts the response, it is sent to A as if from B.
- Such gateways must be designed for every application-level protocol.
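As a toy illustration of the proxy principle, the following Python sketch stands in for an application gateway: it inspects the request at the application level, authorizes it against a host list, and only then forwards it on the client's behalf. The field names, the status codes, and the `forward` callback are hypothetical stand-ins, not any real proxy API:

```python
def application_gateway(request, allowed_hosts, forward):
    """Application-gateway sketch: inspect the layer-7 request,
    authorize it, and only then forward it as if from the client.
    `forward` stands in for actual delivery to the destination."""
    if request.get("Host") not in allowed_hosts:
        return {"status": 403, "reason": "blocked by gateway"}
    return forward(request)   # re-sent to B just as A would have sent it

resp = application_gateway(
    {"Host": "intranet.example", "method": "GET", "path": "/"},
    allowed_hosts={"intranet.example"},
    forward=lambda req: {"status": 200},
)
print(resp)  # {'status': 200}
```

A real application gateway would do this for each application protocol it supports, which is why one gateway per protocol is needed.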


About Transport Gateways
- A transport gateway works like an application gateway, but at the TCP connection level.
- These gateways do not depend on the application code, but they do need client software that is aware of the gateway.
Transport gateways are intermediate systems at layer 4.
- An example is the SOCKS gateway,
- which the IETF has defined as a standard transport gateway.
- Again, consider two clients, A and B.
- A opens a TCP connection to the gateway;
- the destination port is the SOCKS server port.
- A sends a request to this port asking to open a connection to B, indicating the destination port number.
- After checking the request, the gateway either accepts or rejects A's connection request.
- If it is accepted, a new connection is opened to B,
- and the server informs A that the connection has been established successfully.
- The data relay between the clients is kept transparent,
- but in reality there are two TCP connections, each with its own sequence numbers and acknowledgements.
- Transport gateways are simpler than application layer gateways
- because they are not concerned with the data units of the application layer:
- once the connection has been established, the gateway simply acts on the packets.
This is also why a transport gateway gives higher performance than an application layer gateway.
- However, the client must be aware of the gateway's presence, since there is no transparency here.
- If the only border between two networks is the application gateway, it alone can act as the firewall.
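The relay behavior described above can be sketched in Python: the gateway accepts the client's TCP connection, opens a second, independent TCP connection to the destination, and copies bytes in both directions. The echo server stands in for client B, and the helper names are assumptions of the sketch; this is the relay phase only, not the SOCKS request/reply handshake:

```python
import socket, threading

def pipe(src, dst):
    """Copy bytes one way until EOF; the relay runs two of these."""
    while (data := src.recv(4096)):
        dst.sendall(data)
    dst.shutdown(socket.SHUT_WR)

def run_relay(srv, target):
    """Accept one client, open a second independent TCP connection
    to the target, and relay transparently in both directions."""
    client, _ = srv.accept()
    upstream = socket.create_connection(target)
    threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()
    pipe(client, upstream)

# Demo on loopback: client -> relay -> echo server (stands in for B).
echo_srv = socket.create_server(("127.0.0.1", 0))
threading.Thread(target=lambda: pipe(*2 * [echo_srv.accept()[0]]),
                 daemon=True).start()

relay_srv = socket.create_server(("127.0.0.1", 0))
threading.Thread(target=run_relay,
                 args=(relay_srv, echo_srv.getsockname()), daemon=True).start()

c = socket.create_connection(relay_srv.getsockname())
c.sendall(b"ping"); c.shutdown(socket.SHUT_WR)
reply = c.recv(1024)
print(reply)  # b'ping'
```

Note that the bytes cross two separate TCP connections, each with its own sequence numbers and acknowledgements, exactly as described above.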


Tuesday, September 10, 2013

What are the differences between bridges and repeaters?

Bridges and repeaters are both important devices in the field of telecommunications and computer networking. In this article we discuss the two and the differences between them.
- Repeaters are deployed at the physical layer, whereas bridges are found at the MAC layer.
- Thus, a repeater is called a physical layer device
- and a bridge a MAC layer device.
- A bridge is responsible for storing as well as forwarding the data packets in an Ethernet.
- It first examines the header of each data frame, selects some of them, and forwards them toward the destination address mentioned in the frame.
- A bridge uses CSMA/CD to access a segment whenever a data frame has to be forwarded onto it.
- Another characteristic of a bridge is that its operation is transparent:
- the hosts in the network do not know that the bridge is present.
- Bridges learn by themselves; they do not have to be configured again and again
and can simply be plugged into the network.
- Installing a bridge breaks a LAN into LAN segments.
Packets are filtered by bridges:
- frames that belong to one LAN segment are not sent to the other segments,
- which means separate collision domains are formed.
The bridge maintains a bridge table whose entries consist of:
  1. the LAN address of the node
  2. the bridge interface
  3. a time stamp
Stale table entries are aged out after a timeout.

- Bridges learn for themselves which interface can be used to reach which host:
- after receiving a frame, a bridge looks at the location of the sending node and records it.
- By keeping the collision domains isolated from one another, it gives the maximum throughput.
- It is capable of connecting a large number of nodes and offers practically limitless geographical coverage.
- Even different types of Ethernet can be connected through it.
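The self-learning and filtering behavior can be sketched in Python. The interface names and table layout are illustrative assumptions; real bridges also age out stale entries using the stored timestamps, which this sketch records but does not act on:

```python
import time

class LearningBridge:
    """Self-learning bridge sketch: record (MAC -> interface, timestamp)
    from each frame's source address, then forward on the learned
    interface, filter if source and destination share a segment, or
    flood to all other interfaces when the destination is unknown."""
    def __init__(self, interfaces):
        self.interfaces = interfaces
        self.table = {}                  # MAC -> (interface, timestamp)

    def receive(self, src, dst, in_iface):
        self.table[src] = (in_iface, time.time())        # learn
        entry = self.table.get(dst)
        if entry is None:
            return [i for i in self.interfaces if i != in_iface]  # flood
        if entry[0] == in_iface:
            return []                    # filter: same segment, don't forward
        return [entry[0]]                # forward on the learned interface

br = LearningBridge(["eth0", "eth1", "eth2"])
print(br.receive("aa", "bb", "eth0"))  # ['eth1', 'eth2'] (bb unknown: flood)
print(br.receive("bb", "aa", "eth1"))  # ['eth0'] (aa was learned)
```

The filter case is what creates the separate collision domains: traffic local to one segment never crosses the bridge.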
- Repeaters too are plug-and-play devices, but they do not provide any traffic isolation.
- Repeaters are used to regenerate incoming signals, which get attenuated with time and distance.
- Over physical media such as Wi-Fi, Ethernet cable, etc., signals can travel only a limited distance before their quality starts degrading.
The job of a repeater is to extend the distance over which the signals can travel to reach their destination.
- Repeaters also strengthen the signals so that their integrity can be maintained.
- Active hubs are an example of repeaters and are often known as multi-port repeaters.
- Passive hubs do not serve as repeaters.
- Another example of a repeater is an access point in a Wi-Fi network,
- though it functions as a repeater only when in repeater mode.
- Regenerating signals with repeaters is a way of overcoming the attenuation that occurs because of cable loss or electromagnetic field divergence.
For long distances, a series of repeaters is often used.
- Repeaters also remove the unwanted noise that gets added to the signal.
- Repeaters can perceive and restore only digital signals;
- this is not possible with analog signals.
- A signal can be strengthened with amplifiers, but they have the disadvantage that the noise is amplified as well.
- Digital signals dissipate more readily than analog signals, since they depend entirely on the presence of discrete voltage levels.
- This is why they have to be repeated again and again using repeaters.


Saturday, September 7, 2013

Explain the concept of inter-networking?

- The practice in which one computer network is connected with other networks is called internetworking.
- The networks are connected with the help of gateways,
- which offer a common method for routing the data packets across the networks.
- The resulting system of interconnected networks is called an internetwork or, more commonly, an internet.
- The terms "inter" and "networking" combine to form the term "internetworking".
- The Internet is the best and most popular example of internetworking:
it was formed from many networks connected with the help of numerous technologies.
- Many types of hardware technology underlie the internet.
- The internet protocol suite (IP suite) is the internetworking protocol standard responsible for unifying these diverse networks.
- This protocol suite is more commonly known as TCP/IP.
- Two computer local area networks (LANs) connected to one another by a router form the smallest internet, but not an internetwork.
An internetwork is not formed by simply connecting two LANs together via a hub or a switch;
- that is merely an expansion of the original local area network.
Internetworking started as a means of connecting disparate networking technologies,
- and it eventually gained widespread popularity because of the growing need to connect many local area networks together through some kind of WAN (wide area network).
- "Catenet" was the original term used for an internetwork.
An internetwork can include many other types of network, such as the PAN or personal area network.
- Gateways were the network elements originally used to connect the various networks in the internet's predecessor, the ARPANET.
Today, these connecting devices are more commonly known as internet routers.
- Networks can also be interconnected at the link layer of the networking model.
- This layer is the hardware-centric layer, and it lies below the level of the TCP/IP logical interfaces.

Two devices are mainly used in establishing this kind of interconnection:
Ø  network switches and
Ø  network bridges.
- Even this cannot be called internetworking; rather, the system is just a single, larger sub-network.
- Further, no internetworking protocol is required for traversing these devices.
However, it is possible to convert a single network into an internetwork.
- This can be done by dividing the network into segments and logically separating the segment traffic using routers.
- The internet protocol suite has been designed to provide a packet service,
- and this packet service is unreliable.
- The architecture avoids intermediate network elements that maintain network state;
- its focus is on the end points of the active communication session.
- For a reliable transfer of data, a proper transport layer protocol must be used by the applications.
- One such protocol is TCP (Transmission Control Protocol), which is capable of providing a reliable stream for communication.
- Sometimes a simpler protocol such as UDP (User Datagram Protocol) is used by applications
- for tasks for which reliable data delivery is not required or for which real-time behavior is required.

Examples of such tasks include voice chat or watching a video online. Internetworking uses two architectural models, namely:

  1. OSI, or the Open Systems Interconnection model: this model has a 7-layer architecture that covers the hardware and software interface.
  2. TCP/IP model: the architecture of this model is somewhat loosely defined when compared with the OSI model.


Saturday, August 31, 2013

What is the difference between leaky bucket algorithm and token bucket algorithm?

- Telecommunications networks and packet-switched computer networks make use of the leaky bucket algorithm to check data transmissions,
which are carried out in the form of packets.

About Leaky Bucket Algorithm
- This algorithm is used to determine whether data transmissions conform to the limits that have been defined for burstiness and bandwidth.
Leaky bucket counters also use the leaky bucket algorithm to detect whether the average or peak rate of random or stochastic events or processes exceeds predefined limits.
We shall take the analogy of a bucket to explain this algorithm.
Consider a bucket with a hole in its bottom through which the water it holds leaks away.
- The rate of leakage is constant as long as the bucket is not empty.
- We can add water to it intermittently, that is, in short bursts.
- But if a large amount of water is added in one go, it will exceed the bucket's capacity and overflow will occur.
- Hence, the leaky bucket algorithm determines whether adding water keeps to the average rate or exceeds it.
- The leak rate sets the average rate of adding water, and the depth of the bucket decides how much water can be added in a burst.
- Asynchronous transfer mode (ATM) networks use the generic cell rate algorithm, which is one version of the leaky bucket algorithm.
- At user-network interfaces, these algorithms are used for usage/network parameter control.
- The algorithm is also used at network-network interfaces and inter-network interfaces to protect a network from overwhelming traffic levels on its connections.
- A network interface card on an ATM network can be used to shape the transmissions;
- such a card may use the generic cell rate algorithm or an equivalent of it.
The leaky bucket algorithm can be implemented in two different ways, both of which are described in the literature,
- so it can appear as if two distinct algorithms are together known as the leaky bucket algorithm.
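The bucket analogy translates directly into code. Here is a minimal sketch in Python of the leaky bucket used as a meter; the units (abstract "size" per second) and parameter names are assumptions of the sketch:

```python
class LeakyBucketMeter:
    """Leaky-bucket meter sketch: the bucket level leaks at `rate`
    units per second, each packet adds its size, and a packet that
    would overflow the bucket's depth is non-conformant."""
    def __init__(self, rate, depth):
        self.rate, self.depth = rate, depth
        self.level, self.last = 0.0, 0.0

    def conforms(self, size, now):
        # Leak since the last packet, but never below empty.
        self.level = max(0.0, self.level - (now - self.last) * self.rate)
        self.last = now
        if self.level + size > self.depth:
            return False                  # would overflow: non-conformant
        self.level += size
        return True

m = LeakyBucketMeter(rate=100, depth=300)
print(m.conforms(200, now=0.0))  # True  (level rises to 200)
print(m.conforms(200, now=0.5))  # False (only 50 units leaked away)
print(m.conforms(200, now=2.0))  # True  (bucket has fully drained)
```

The depth is what permits short bursts; the leak rate is what bounds the long-term average.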

About Token Bucket Algorithm

- The token bucket algorithm adds a token to the bucket every 1/r seconds.
- The maximum number of tokens the bucket can hold is b;
- any token arriving when the bucket is full is discarded.
- When a packet of n bytes arrives from the network layer, n tokens are removed from the bucket and the packet is transmitted into the network.
- If fewer than n tokens are available, the packet is treated as non-conformant.
- In the leaky bucket version used as a meter, a bucket of fixed capacity is associated with a virtual user, and the bucket leaks at a fixed rate.
- No leakage occurs if there is nothing in the bucket.
- For a packet to be conformant, a corresponding amount of water must be added to the bucket,
- and no water is added (and the packet is non-conformant) if doing so would cause the bucket to exceed its capacity.
- We can therefore see that one algorithm constantly adds something to the bucket and removes something for conforming packets,
- while the other constantly removes something and adds something for conforming packets.
- The two algorithms are mirror images of equal effectiveness, which is why both see exactly the same packets as conforming or non-conforming.
- The leaky bucket algorithm is often used as a meter.
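For comparison, the token bucket version can be sketched as follows: tokens accrue at rate r up to the burst size b, and a conforming packet of n bytes removes n tokens. As above, the units and parameter names are illustrative assumptions:

```python
class TokenBucket:
    """Token-bucket sketch: tokens arrive at `r` per second up to a
    burst of `b`; a packet of n bytes conforms if n tokens are
    available, in which case they are consumed."""
    def __init__(self, r, b):
        self.r, self.b = r, b
        self.tokens, self.last = b, 0.0   # start with a full bucket

    def conforms(self, n, now):
        # Accrue tokens since the last packet, capped at the burst size.
        self.tokens = min(self.b, self.tokens + (now - self.last) * self.r)
        self.last = now
        if n > self.tokens:
            return False                  # not enough tokens: non-conformant
        self.tokens -= n
        return True

tb = TokenBucket(r=100, b=300)
print(tb.conforms(300, now=0.0))  # True  (full burst allowed at once)
print(tb.conforms(100, now=0.5))  # False (only 50 tokens have accrued)
print(tb.conforms(100, now=1.5))  # True
```

Comparing the two sketches shows the mirror-image relationship: the leaky bucket leaks constantly and fills per packet, while the token bucket fills constantly and drains per packet.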


Wednesday, August 28, 2013

What are different policies to prevent congestion at different layers?

- It often happens that the demand for a resource is greater than what the network can offer, i.e., its capacity.
- Too much queuing then occurs in the network, leading to a great loss of packets.
When a network is in a state of congestive collapse, its throughput drops to zero while the path delay increases by a great margin.
- The network can recover from this state by following a congestion control scheme.
- A congestion avoidance scheme, by contrast, enables the network to keep operating where throughput is high and delay is low.
- In other words, these schemes prevent a computer network from falling prey to the vicious clutches of the network congestion problem in the first place.
- The recovery mechanism is implemented through congestion control, and the prevention mechanism through congestion avoidance.
The network and user policies are modeled for the purpose of congestion avoidance;
- together they act like a feedback control system.

The following are defined as the key components of a general congestion avoidance scheme:
Ø  Congestion detection
Ø  Congestion feedback
Ø  Feedback selector
Ø  Signal filter
Ø  Decision function
Ø  Increase and decrease algorithms

- The problem of congestion control becomes more complex when the network uses a connectionless protocol.
- The main focus is on avoiding congestion rather than simply controlling it.
- A congestion avoidance scheme is designed by comparing it with a number of alternative schemes,
- and during the comparison the algorithm with the right parameter values is selected.
For doing so, a few goals have been set, each with an associated test for verifying whether the scheme meets it:
Ø  Efficiency: the network is said to be working efficiently if it is operating at the "knee" point of the throughput-delay curve.
Ø  Responsiveness: there is continuous variation in the configuration and the traffic of the network, so the point of optimal operation also varies continuously, and the scheme must track it.
Ø  Minimum oscillation: only schemes with a small oscillation amplitude are preferred.
Ø  Convergence: the scheme should bring the network to a point of stable operation whenever the workload and the network configuration are stable. Schemes able to satisfy this goal are called convergent schemes; divergent schemes are rejected.
Ø  Fairness: this goal aims at providing a fair share of the resources to each independent user.
Ø  Robustness: this goal defines the capability of the scheme to work in any random environment. Schemes that are capable of working only for deterministic service times are rejected.
Ø  Simplicity: schemes are accepted in their simplest version.
Ø  Low parameter sensitivity: the sensitivity of a scheme is measured with respect to its various parameter values. A scheme that is found to be too sensitive to a particular parameter is rejected.
Ø  Information entropy: this goal is about how the feedback information is used; the aim is to get maximum information from the minimum possible feedback.
Ø  Dimensionless parameters: a parameter with dimensions such as mass, time, or length is effectively a function of the network configuration or speed. A parameter with no dimensions has wider applicability.
Ø  Configuration independence: a scheme is accepted only if it has been tested on various different configurations.

A congestion avoidance scheme has two main components:
Ø  Network policies: these consist of the congestion detection, feedback filter, and feedback selector algorithms.
Ø  User policies: these consist of the signal filter, decision function, and increase/decrease algorithms.
These algorithms also decide whether the network feedback is carried in a packet header field or as source quench messages.
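A typical increase/decrease algorithm from the user policy side is additive-increase/multiplicative-decrease (AIMD), sketched here in Python. The window unit and the specific constants are illustrative assumptions:

```python
def aimd(feedback, window=1.0, incr=1.0, decr=0.5):
    """AIMD sketch: on 'no congestion' feedback add `incr` to the
    sending window; on congestion feedback multiply it by `decr`.
    Returns the window after each feedback signal."""
    trace = []
    for congested in feedback:
        window = window * decr if congested else window + incr
        trace.append(window)
    return trace

print(aimd([False, False, False, True, False]))
# [2.0, 3.0, 4.0, 2.0, 3.0]
```

The gentle additive probe upward and sharp multiplicative backoff are what let independent users converge toward a fair, stable share, the convergence and fairness goals listed above.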



