Thursday, September 26, 2013

Differentiate between upward and downward multiplexing?

Multiplexing is carried out at the transport layer: several conversations are multiplexed into one connection, physical link, or virtual circuit. For example, suppose a host has only one network address available. That address then has to be used by all the transport connections originating at that host. Two main multiplexing strategies are followed:
Ø  Upward multiplexing and
Ø  Downward multiplexing

Upward Multiplexing 
- In upward multiplexing, several transport connections are multiplexed into one network connection.
- The transport layer groups these transport connections according to their destinations.
- It then maps each group onto the minimum possible number of network connections.
- Upward multiplexing is most useful where network connections are expensive.

Downward Multiplexing 
- Downward multiplexing is used when high-bandwidth connections are required.
- In downward multiplexing, the transport layer opens multiple network connections and distributes the traffic among them (a small sketch follows).
- For downward multiplexing to work, however, the subnet's data links must be able to handle the extra capacity.
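
As a rough illustration of multiplexing several logical connections over one network connection, here is a minimal Python sketch. The message format (a connection id plus a length prefix) and all names are illustrative assumptions, not part of any standard:

    import struct

    def recv_exact(sock, n):
        # Helper: keep reading until exactly n bytes have arrived.
        data = b""
        while len(data) < n:
            chunk = sock.recv(n - len(data))
            if not chunk:
                raise ConnectionError("connection closed")
            data += chunk
        return data

    def mux_send(sock, conn_id, payload):
        # Tag every payload with its logical connection id and its length,
        # then push it onto the single shared network connection.
        sock.sendall(struct.pack("!HI", conn_id, len(payload)) + payload)

    def demux_recv(sock):
        # Read one multiplexed message and return (conn_id, payload) so the
        # receiving side can hand the data to the right transport connection.
        conn_id, length = struct.unpack("!HI", recv_exact(sock, 6))
        return conn_id, recv_exact(sock, length)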

Another Technique 
- In either case there is no guarantee that the segments will be delivered in order.
- Therefore, another technique is adopted: the segments are numbered sequentially.
- TCP numbers every octet sequentially.
- Each segment is then numbered with the sequence number of the first octet it carries.
- Segments may be damaged in transit, and some may fail to arrive at the destination at all.
- Such failures are not acknowledged; only the successful receipt of a segment is acknowledged by the receiver.
- Sometimes cumulative acknowledgements are used, where one ACK covers everything received so far.
- If the expected ACK has not arrived by the time the retransmission timer expires, the segment is retransmitted (a small sketch of this appears after this list).
- Retransmission also happens when an ACK itself is lost, so the receiver must be able to recognize the resulting duplicate segments.
- When such a duplicate arrives before the connection has been closed, the receiver simply assumes that its ACK was lost.
- A duplicate that arrives after the connection has been closed is handled differently.
- In that case the sender and receiver first have to learn of each other's existence again: they negotiate the parameters, and the transport entity resources are allocated by mutual agreement.
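
Here is a minimal, simulation-style Python sketch of sequence numbering, cumulative acknowledgements, and timeout-driven retransmission. The class names, the timeout value, and the network.deliver() call are illustrative assumptions, not a real TCP implementation:

    import time

    class Sender:
        def __init__(self, timeout=0.5):
            self.next_seq = 0     # sequence number of the next octet to send
            self.unacked = {}     # first octet of segment -> (data, time sent)
            self.timeout = timeout

        def send(self, network, data):
            seq = self.next_seq   # each segment is numbered by its first octet
            self.unacked[seq] = (data, time.time())
            self.next_seq += len(data)
            network.deliver(seq, data)   # hypothetical unreliable-network hook

        def on_ack(self, ack):
            # A cumulative ACK acknowledges every octet numbered below `ack`.
            for seq, (data, _) in list(self.unacked.items()):
                if seq + len(data) <= ack:
                    del self.unacked[seq]

        def check_timeouts(self, network):
            # Retransmit any segment whose ACK has not arrived in time;
            # this also covers the case where the ACK itself was lost.
            now = time.time()
            for seq, (data, sent) in list(self.unacked.items()):
                if now - sent > self.timeout:
                    self.unacked[seq] = (data, now)
                    network.deliver(seq, data)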
Connection release is of two types:

Ø Asymmetric release: 
This is the model used in telephone systems. However, it does not work well for networks that use packet switching, because data still in transit when one side disconnects abruptly can be lost.

Ø  Symmetric release: 
- This works better than asymmetric release.
- Here the two directions of the connection are released independently of each other.
- A host can continue receiving data even after it has sent its disconnection TPDU.
- Symmetric release, however, has another problem, related to levels of indirection and lost or forged messages.
- There is no proper solution to this problem over an unreliable communication medium.
- Note that this has nothing to do with the particular protocol in use.
- A reliable protocol running on top of an unreliable medium can still guarantee that a message is eventually delivered (by retransmitting it).
- What no protocol can guarantee, however, is a time limit within which the message will be delivered.
- Error conditions can prolong the delivery period.
- Restarting a connection can lose all of the state information, leaving the connection half-open.
- Since no protocol has been designed that solves this problem completely, one has to accept the risks associated with releasing connections.
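
The "each direction released independently" idea corresponds to the familiar half-close in TCP sockets. A minimal Python sketch (the host, port, and request are placeholders):

    import socket

    # Open a connection and send our data.
    sock = socket.create_connection(("example.com", 80))
    sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")

    # Release only our sending direction; the other direction stays open.
    sock.shutdown(socket.SHUT_WR)

    # We can still receive everything the peer has left to send.
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break      # the peer has now released its direction as well
    sock.close()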


Friday, September 20, 2013

Differentiate between transparent and nontransparent fragmentation?

Data packets cause a number of problems because of their size. The data link layer has no mechanism of its own for handling these problems, so bridges do not help here either.
Ethernet and other networks also run into problems because of the following:
Ø  The different ways in which the maximum packet size is defined
Ø  The maximum packet size that can be handled by a router
Ø  The maximum slot length that can be used for transmission
Ø  Errors that grow with packet length
Ø  Standards

Data packets can be fragmented in two ways, namely:
  1. Transparent and
  2. Non-transparent
Either way can be applied on a network-by-network basis. In other words, there is no end-to-end agreement on the basis of which it can be decided which of the two processes is to be used.

Transparent Fragmentation: 
- In transparent fragmentation, a router splits an oversized packet into smaller fragments.
- These fragments are sent on to the exit router of that network, which does just the opposite: it reassembles the fragments into the original packet before forwarding it.
- The next network therefore never learns that any fragmentation has taken place.
- Transparency is thus maintained between the small-packet network and the subsequent networks.
- For example, ATM networks perform transparent fragmentation by means of special hardware.
- There are some issues with this type of fragmentation.
- It burdens the performance of the network, since all the fragments of a packet have to leave through the same exit gateway so that they can be reassembled there.
- The fragmentation and reassembly may also have to be repeated for every small-packet network crossed in series, so a single packet can be fragmented many times before the destination is finally reached.
- This of course costs a lot of time, and it can also end up corrupting the packet's integrity.

Non-Transparent Fragmentation: 
- In this type, one router splits the packet into fragments.
- The difference is that these fragments are not reassembled until they reach their destination; they remain split until then.
- Since the fragments are reassembled only at the destination host, they can be routed independently of each other (a small sketch of fragment numbering and reassembly follows this list).
- This type of fragmentation also has its problems: every fragment has to carry a header all the way to its destination,
and all the fragments have to be numbered so that the original data stream can be reconstructed without trouble.
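
A minimal Python sketch of this numbering-and-reassembly idea, assuming a simple (packet_id, index, total, data) label on each fragment; the format is illustrative, not any real protocol's:

    def fragment(packet_id, data, mtu):
        # Split the payload into MTU-sized pieces and number every fragment.
        pieces = [data[i:i + mtu] for i in range(0, len(data), mtu)]
        return [(packet_id, idx, len(pieces), piece)
                for idx, piece in enumerate(pieces)]

    def reassemble(fragments):
        # Fragments may arrive in any order; sort by index and concatenate.
        fragments = sorted(fragments, key=lambda f: f[1])
        assert len(fragments) == fragments[0][2], "some fragments are missing"
        return b"".join(piece for _, _, _, piece in fragments)

    frags = fragment(packet_id=1, data=b"x" * 3000, mtu=1400)
    assert reassemble(frags) == b"x" * 3000   # 3 fragments, rebuilt at the destination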


Whichever type of fragmentation is used, one thing has to be made sure of: it must later be possible to rebuild the original packets from the fragments. This calls for some kind of labelling of the fragments.

Segmentation is another name for fragmentation. The IP layer injects packets into the data link layer, but it is not responsible for the reliable transmission of those packets. Each layer imposes some maximum value on the size of the packets, for its own reasons. When a large packet has to travel through a network whose MTU is small, fragmentation is very much needed.


Thursday, September 19, 2013

What is fragmentation?

- The fragmentation technique is implemented in IP (the Internet Protocol) for breaking datagrams down into smaller pieces.
- This is done so that the data packets can pass over a link whose MTU (maximum transmission unit) is smaller than the original datagram.
- The procedures for IP fragmentation, and for reassembling and transmitting the resulting datagrams, are given in RFC 791.
- IPv6 hosts are required to determine the optimal MTU of the path before sending their packets.
- If the PDU (protocol data unit) received by a router is larger than the MTU of the next hop, then two options are available when IPv4 transport is being used (sketched below):
Ø  Dropping the PDU and sending back an ICMP (Internet Control Message Protocol) message indicating that the packet is too big.
Ø  Fragmenting the IP packet and then transmitting it over the link whose MTU is smaller. Any IPv6 packet of 1280 bytes or less can be delivered without IPv6 fragmentation ever being needed.
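
A minimal Python sketch of that forwarding decision. The packet is modelled as a plain dictionary; the field names and the simple size-based split are illustrative assumptions, not real IPv4 header handling:

    def forward(packet, next_hop_mtu):
        # Case 1: the PDU fits, so it is forwarded unchanged.
        if len(packet["payload"]) <= next_hop_mtu:
            return [packet]
        # Case 2: the "don't fragment" bit is set, so drop the PDU and
        # report "packet too big" back to the source via ICMP.
        if packet["dont_fragment"]:
            return [{"icmp": "fragmentation needed", "to": packet["src"]}]
        # Case 3: fragment the payload over the smaller-MTU link.
        chunks = [packet["payload"][i:i + next_hop_mtu]
                  for i in range(0, len(packet["payload"]), next_hop_mtu)]
        return [{**packet, "payload": c} for c in chunks]

    pkt = {"src": "10.0.0.1", "dont_fragment": False, "payload": b"x" * 4000}
    print(len(forward(pkt, 1500)))   # 3 fragments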

- If a fragmented IP packet is received by the recipient host, its job is to reassemble the datagram and then pass it up to the higher-layer protocols.
- Reassembly is expected to take place at the recipient host, but for practical reasons it may also be done by some intermediate device.
- For example, fragments may be reassembled by NAT (network address translation) in order to translate the data streams passing through it.
- IP fragmentation can result in excessive retransmission whenever the fragments run into packet loss,
because a reliable protocol (TCP, for example) has to retransmit all the fragments of the datagram in order to recover from the loss of a single fragment.
Thus, senders typically use one of two approaches for deciding what size of datagram to transmit over the network (a worked sketch of fragment sizing follows the list):
  1. First approach: the sender transmits IP datagrams no larger than the MTU of the first hop.
  2. Second approach: the sender runs the path MTU discovery algorithm.
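
As a worked example of how an IPv4 datagram is split for a smaller MTU: the data length of every fragment except the last must be a multiple of 8, and the fragment offset field counts 8-byte units. The Python sketch below assumes a 20-byte header with no IP options:

    IP_HEADER = 20   # assumed header size, no options

    def plan_fragments(payload_len, mtu):
        max_data = (mtu - IP_HEADER) // 8 * 8   # largest multiple of 8 that fits
        plans, offset = [], 0
        while offset < payload_len:
            data_len = min(max_data, payload_len - offset)
            more = (offset + data_len) < payload_len
            plans.append({"offset_units": offset // 8,   # value of the offset field
                          "data_len": data_len,
                          "MF": more})                   # "more fragments" flag
            offset += data_len
        return plans

    # A 4000-byte payload over a 1500-byte MTU becomes fragments of 1480, 1480 and 1040 bytes.
    for frag in plan_fragments(4000, 1500):
        print(frag)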

- Fragmentation also leaves an impact on network forwarding.
- When an Internet router has multiple parallel paths, technologies such as CEF and LAG split the traffic across the links using hash algorithms.
- The major goal of such an algorithm is to make sure that all packets belonging to the same flow go out on the same path, so as to minimize unwanted packet reordering.
- If the TCP or UDP port numbers are used by the hash algorithm, the fragments of one packet may be forwarded along different paths.
- This is because the layer 4 information is contained only in the first fragment of the packet.
- As a result, the initial fragment often arrives after the non-initial fragments.
- This condition is treated as an error by many security devices on the hosts,
- which therefore drop these packets.
- The fragmentation mechanism differs between IPv4 and IPv6.
- In IPv4, fragmentation may be performed by the routers.
- In IPv6, on the other hand, packets larger than the MTU are dropped by the routers; only the sending host fragments.
- The header format also varies between the two cases.
- Since fragmentation is carried out using analogous fields in both, essentially the same algorithm can be reused for fragmentation and reassembly.
- IPv4 hosts must make a best effort to reassemble fragmented datagrams.


Wednesday, September 18, 2013

What are the advantages and disadvantages of datagram approach?

- Today's packet-switching networks make use of a basic transfer unit commonly known as the datagram.
- In such packet-switched networks, the order in which data packets arrive, their time of arrival, and their delivery come with no guarantee.
- The first packet-switching network to use datagrams was CYCLADES.
Datagrams are known by different names at different layers of the OSI model.
- For example, at layer 1 it is called a chip, at layer 2 a frame or cell, at layer 3 a data packet, and at layer 4 a data segment.
- The major characteristic of a datagram is that it is independent, i.e., it does not rely on anything else for the information required for the exchange.
- The duration of a connection between any two points is not fixed, as it is in a telephone conversation.
- Virtual circuits are just the opposite of datagrams in this respect.
- Thus, a datagram can be called a self-contained entity.
- It carries enough information to be routed from the source to the destination without depending on earlier exchanges.
- A comparison is often drawn between a mail delivery service and a datagram service.
- The user's job is simply to provide the destination address,
- but the user is not guaranteed delivery of the datagram, and if the datagram is delivered successfully, no confirmation is sent back.
- The datagrams are routed to the destination without the help of a predetermined path.
- The order in which the data is sent or received is given no consideration.
- It is because of this that datagrams belonging to a single group may travel over different routes before they reach their common destination (a small sketch follows).
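
A minimal Python sketch of the datagram style of communication using UDP: every datagram names its full destination address, there is no set-up phase, and nothing guarantees order or delivery. The address and port are placeholders:

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    # Each send carries the complete destination address; successive datagrams
    # are routed independently and may arrive out of order, or not at all.
    sock.sendto(b"datagram 1", ("198.51.100.7", 5000))
    sock.sendto(b"datagram 2", ("198.51.100.7", 5000))
    sock.close()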

Advantages of Datagram Approach
  1. Each datagram can carry the full destination address rather than just a virtual-circuit number.
  2. No set-up phase is required for datagram circuits, which means no resources are consumed in advance.
  3. If a router goes down during a transmission, the only datagrams that suffer are those that were queued up in that specific router; the other datagrams are not affected.
  4. If a fault or loss occurs on a communication line, datagram circuits are capable of compensating for it.
  5. Datagrams play an important role in balancing the traffic in the subnet, because the route can be changed halfway through.
Disadvantages of Datagram Approach

  1. Since the datagrams carry the full destination address, they generate more overhead and thus waste bandwidth. This in turn makes the datagram approach quite costly.
  2. A more complicated procedure has to be followed for each packet in order to determine where it should be forwarded.
  3. In a subnet using the datagram approach, it is very difficult to keep congestion problems at bay.
  4. Any-to-any communication is one of the key disadvantages of datagram subnets: if a system can communicate with any device, then any device can also communicate with that system. This can lead to various security issues.
  5. Datagram subnets are prone to losing or re-sequencing data packets in transit. This puts a great burden on the end systems, which must monitor, recover, and reorder the packets into their original sequence.
  6. Datagram subnets are less capable of congestion control and flow control, because the direction of the incoming traffic is not fixed. In virtual-circuit subnets, packets flow only along their virtual circuits, which makes the traffic comparatively easy to control.
  7. The unpredictable nature of the traffic flow makes datagram networks difficult to design.


Wednesday, September 11, 2013

What are transport and application gateways?

- Hosts and routers are separated in the TCP/IP architecture.
- Private networks require more protection in order to maintain access control over them.
- The firewall is one of the components of this TCP/IP architecture.
- The Internet is separated from the Intranet by this firewall.
- This means all incoming traffic must pass through the firewall.
- Only traffic that is authorized is allowed to pass through.
- It is not possible to simply penetrate the firewall.
A firewall has two kinds of components, namely:
Ø  a filtering router, and
Ø  two types of gateways, namely application gateways and transport gateways.
- All packets are checked by the router and filtered based on attributes such as the protocol type, the port numbers, the TCP header fields, and so on (a small sketch of such a rule check follows below).
Designing the rules for packet filtering is quite a complex task.
- Packet filtering offers only limited protection, since with the filtering rules on one side it is difficult to cater to the services needed by the users on the other side.
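
A minimal Python sketch of the kind of rule a filtering router applies. The rule format (protocol, destination port, action) is an illustrative assumption; real filter languages are considerably richer:

    RULES = [
        {"proto": "tcp", "dst_port": 80,   "action": "allow"},
        {"proto": "tcp", "dst_port": 25,   "action": "allow"},
        {"proto": "any", "dst_port": None, "action": "deny"},   # default: deny
    ]

    def filter_packet(proto, dst_port):
        # Return the action of the first rule that matches the packet.
        for rule in RULES:
            if rule["proto"] in (proto, "any") and rule["dst_port"] in (dst_port, None):
                return rule["action"]
        return "deny"

    print(filter_packet("tcp", 80))   # allow
    print(filter_packet("udp", 53))   # deny (falls through to the default rule)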

About Application Gateways
- Application layer gateways are layer 7 intermediate systems designed mainly for access control.
- These gateways are not very common in the TCP/IP architecture.
- They may, however, be used to solve some inter-networking issues.
- Application gateways follow the proxy principle to support authentication, access-control restrictions, encryption, and so on.
- Consider two users, A and B.
- A generates an HTTP request, which is first sent to the application layer gateway rather than straight to its destination.
- The gateway checks whether this request is authorized and performs any encryption.
- Once the request has been authorized, the gateway sends it on to user B just as if A had sent it directly.
- B responds with a MIME header and data, which the gateway may decrypt or reject.
- If the gateway accepts the response, it is sent on to A as if it came from B.
- Such gateways have to be designed separately for every application-level protocol.


About Transport Gateways
- A transport gateway works in a similar way to an application gateway, but at the level of TCP connections.
- These gateways do not depend on the application code, but they do need client software that is aware of the gateway.
Transport gateways are layer 4 intermediate systems.
- An example is the SOCKS gateway,
- which the IETF has defined as a standard transport gateway (a minimal relay sketch follows this list).
- Again, consider two clients, A and B.
- A opens a TCP connection to the gateway.
- The destination port is simply the SOCKS server port.
- A sends a request to this port asking the gateway to open a connection to B, indicating the destination port number.
- After checking the request, the gateway either accepts or rejects the connection request from A.
- If it is accepted, a new connection is opened to B,
- and the server informs A that the connection has been established successfully.
- The data relay between the clients is kept transparent.
- In reality, however, there are two TCP connections, each with its own sequence numbers and acknowledgements.
- Transport gateways are simpler than application layer gateways,
- because they are not concerned with the data units at the application layer.
- Once the connection has been established, the gateway simply acts on the packets.
This is also the reason why it gives higher performance than an application layer gateway.
- It is important, though, that the client is aware of the gateway's presence, since there is no transparency here.
- If the only border existing between two networks is the application gateway, it alone can act as the firewall.
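
A minimal Python sketch of a transport-level gateway: the client opens a TCP connection to the gateway, names the real destination, and the gateway opens a second connection and relays bytes between the two. This only illustrates the SOCKS idea; the one-line "host port" request format and the port number are illustrative assumptions, not the real SOCKS protocol:

    import socket, threading

    def relay(src, dst):
        # Copy bytes in one direction until that direction is closed.
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)

    def handle_client(client):
        # The first message from the client names the destination, e.g. "example.com 80".
        host, port = client.recv(1024).decode().split()
        remote = socket.create_connection((host, int(port)))
        # From here on there are two TCP connections, each with its own sequence
        # numbers and ACKs; the gateway just relays the application bytes.
        threading.Thread(target=relay, args=(client, remote), daemon=True).start()
        relay(remote, client)

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("0.0.0.0", 1080))
    server.listen(5)
    while True:
        conn, _ = server.accept()
        threading.Thread(target=handle_client, args=(conn,), daemon=True).start()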


What are multi-protocol routers?

- Some routers have the capability to route a number of protocols at the same time.
- These routers are popularly known as multi-protocol routers.
- There are situations in networking where a combination of protocols such as AppleTalk, IP, and IPX is used.
- In such situations a normal router cannot help; this is where multi-protocol routers are used.
- Using multi-protocol routers, information can be shared between the networks.
- A multi-protocol router maintains an individual routing table for each of the protocols (see the sketch after this list).
- Multi-protocol routers have to be used carefully, since they increase the number of routing tables present on the network.
- Each protocol is advertised individually by the router.
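
A minimal Python sketch of the one-table-per-protocol idea; the table contents and interface names are purely illustrative, and real routers of course use proper longest-prefix matching:

    ROUTING_TABLES = {
        "ip":        {"10.1.0.0/16": "eth0", "default": "eth1"},
        "ipx":       {"ABCD0001": "eth2"},
        "appletalk": {"zone-office": "eth3"},
    }

    def route(protocol, destination):
        table = ROUTING_TABLES[protocol]          # one routing table per protocol
        return table.get(destination, table.get("default"))

    print(route("ip", "10.1.0.0/16"))   # eth0
    print(route("ipx", "ABCD0001"))     # eth2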

A multi-protocol router typically includes the following:
Ø  Routing Information Protocol (RIP)
Ø  BOOTP (boot protocol) relay agent
Ø  RIP for IPX
- The multi-protocol router uses the Routing Information Protocol to exchange routing information dynamically.
- Routers using RIP can dynamically exchange information with other routers that use the same protocol.
- The BOOTP relay agent is included so that DHCP requests can be forwarded to the appropriate servers residing on other subnets.
- Because of this, a single DHCP server can serve a number of IP subnets.
- Multi-protocol routers do not need to be configured manually.
- The networking world these days relies almost entirely on the Internet Protocol, but there are situations where certain tasks can be performed more efficiently by other protocols.
- Most network protocols share many similarities rather than differences.
- Therefore, if a router can route one protocol efficiently, it is likely that it can route the others efficiently as well.
- If non-IP protocols are routed in a network, the same staff that takes care of the IP monitoring is administering the non-IP routing as well.
This reduces the need for additional equipment and effort.
- There are a number of non-IP protocols with which a LAN can work more effectively.
- Using a number of non-IP protocols, a network can be made more flexible and better able to meet the demands of its users.
- All these points speak in favour of multi-protocol routing in an abstract way.
- But the non-IP protocols to be routed must be selected with care.

Below we mention reasons why routing non-IP protocols is often best avoided:

  1. It requires additional knowledge, because no one can master everything. Each protocol needs an expert who, in case of a failure, can diagnose and fix it.
  2. It puts extra load on the routers. For every protocol the router has to maintain a separate routing table, which in turn calls for a dynamic routing protocol for that protocol. All of this requires more memory and higher processing power.
  3. It increases the complexity. A multi-protocol router may seem simple, but it is quite a complicated thing in terms of both hardware and software, and any problem in the implementation of one protocol can have a negative impact on the stability of all the protocols.
  4. Difficulty in designing: each protocol has its own separate rules for routing, address assignment, and so on, and there is a possibility of conflicts between these rules, which makes the design very difficult.
  5. It decreases stability. The scaling capacity of certain protocols is not as good as that of others, and some protocols are not suited to working in a WAN environment.


Wednesday, September 4, 2013

What is a choke packet?

- Networks often experience problems with congestion and the flow of traffic.
- When flow control is implemented, a special type of packet is used throughout the network.
- This packet is known as the choke packet.
- Congestion in the network is detected by a router when it measures the percentage of its buffers that are actually being used.
- It also measures the utilization of the lines and the average lengths of the queues.
When congestion is detected, the router transmits choke packets through the network.
- These choke packets are meant for the data sources, spread across the network, that are associated with the congestion problem.
These data sources in turn respond by cutting down the amount of data that they are transmitting.
Choke packets have been found to be very useful in network maintenance tasks.
- They also help in maintaining quality of service to some extent.
- In both of these tasks they are used to inform specific transmitters or nodes that the traffic they are sending is causing congestion in the network.
Thus the transmitters or nodes are forced to decrease the rate at which they generate traffic.
- The main purpose of choke packets is to control congestion and maintain flow control throughout the network.
- The router addresses the source node directly, causing it to cut down its data transmission rate.
- This is acknowledged by the source node by reducing its transmission rate by some percentage.
- An example of a choke packet commonly used by most routers is the ICMP (Internet Control Message Protocol) source quench packet.
- The technique of using choke packets for congestion control and recovery relies on the routers.
- The whole network is continuously monitored by the routers for any abnormal activity.
- Factors such as the space left in the buffers, the queue lengths, and the line utilization are checked by the routers.
- If congestion occurs in the network, choke packets are sent by the routers to the corresponding parts of the network, instructing them to reduce their throughput.
- The node that is the source of the congestion has to reduce its throughput by a certain percentage that depends on the size of the buffers, the bandwidth that is available, and the extent of the congestion.
- Sending choke packets is the routers' way of telling the nodes to slow down so that the traffic can be fairly distributed over the network.
- The advantage of using this technique is that it is dynamic in nature.
A source node may send as much data as required, while the network informs it when it is sending too much traffic.
- The disadvantage is that it is difficult to know by what factor the node should reduce its throughput.
- The amount of congestion being caused by this node and the capacity of the region in which the congestion has occurred are what decide this,
- and in practice this information is not instantly available.
- Another disadvantage is that after a node has received a choke packet, it should be able to ignore further choke packets for some time,
- because many additional choke packets may be generated while its earlier packets are still in transit (a small sketch of this behaviour follows below).

The question is: for how long is the node supposed to ignore these packets?
- This depends on dynamic factors such as the delay time.
- Not all congestion problems are the same; they vary over the network depending on its topology and the number of nodes it has.
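
A minimal Python sketch of how a source might react to choke packets: cut its sending rate by a fixed percentage, then ignore further choke packets for a hold-down interval so that packets already in flight do not trigger repeated reductions. All constants and names are illustrative assumptions:

    import time

    class Source:
        def __init__(self, rate=1000.0, cut=0.5, hold_down=2.0):
            self.rate = rate              # packets per second currently being sent
            self.cut = cut                # fraction of the rate kept after a choke packet
            self.hold_down = hold_down    # seconds during which further choke packets are ignored
            self.ignore_until = 0.0

        def on_choke_packet(self):
            now = time.time()
            if now < self.ignore_until:
                return                    # choke packets during the hold-down are ignored
            self.rate *= self.cut         # reduce the transmission rate
            self.ignore_until = now + self.hold_down

    src = Source()
    src.on_choke_packet()
    print(src.rate)    # 500.0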


Tuesday, September 3, 2013

What is meant by load shedding?

Networks are monitored by network monitoring systems. These systems need to be robust and must be capable of coping with situations in which overload occurs. The network gets overloaded because of nodes generating large volumes of data at high rates. Overload might also occur because of the burstiness of the traffic in its normal course of operation. To reduce this load, load shedding techniques are applied.

- Load shedding techniques have to be applied when the system is under a lot of stress.
- This has to be done while monitoring the network, in order to avoid packet loss that might otherwise be uncontrollable.
- Load shedding involves sampling the incoming traffic.
- CoMo (continuous monitoring) was developed to serve this purpose.
- It uses a load shedding scheme that can infer a query's cost from the relation between a set of traffic features and the actual resource usage, without having any knowledge of the plug-ins.
- Here, a traffic feature can be defined as a counter describing a particular property of the incoming traffic.
The property might be any of the following:
Ø  Number of packets
Ø  Number of bytes
Ø  Number of flows
Ø  Number of unique destination IP addresses, and so on.


- CoMo contains a prediction and load shedding subsystem that intercepts the packets before they are passed from the filter to the plug-in.
- A traffic query is implemented by this plug-in.
- The system completes the process in four phases.
- In the first phase, it forms a batch of packets for every 100 ms of traffic.
- It then processes each of these batches to extract a predefined, fairly large set of traffic features.
- From these, the most relevant features are selected by the feature selection subsystem, based on the present statistics of the CPU usage of the query.
- The selected subset is then supplied as input to the multiple linear regression subsystem.
- This is done to predict the number of CPU cycles that the query requires to process the whole batch.
- If the prediction is greater than the capacity of the system, the batch is pre-processed by the load shedding subsystem, which discards a portion of the packets.
The excess part of the batch is discarded through packet or flow sampling (a small sketch of this idea follows below).
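
A minimal Python sketch of the predict-then-shed idea described above: extract simple features from a batch, predict the CPU cost with a linear model, and sample the batch down if the prediction exceeds the budget. The feature set, the weights, and the budget are illustrative assumptions, not CoMo's actual model:

    import random

    def extract_features(batch):
        # Simple per-batch traffic features (counters), as described above.
        return {"packets": len(batch),
                "bytes": sum(len(p) for p in batch),
                "unique_dsts": len({p[:4] for p in batch})}   # first 4 bytes stand in for an address

    def predict_cycles(features, weights):
        # Multiple linear regression reduced to a dot product with learned weights.
        return sum(weights[name] * value for name, value in features.items())

    def shed_load(batch, weights, budget_cycles):
        predicted = predict_cycles(extract_features(batch), weights)
        if predicted <= budget_cycles:
            return batch                              # no shedding needed
        keep = budget_cycles / predicted              # packet sampling rate
        return [p for p in batch if random.random() < keep]

    weights = {"packets": 120.0, "bytes": 0.8, "unique_dsts": 300.0}
    batch = [bytes([random.randrange(256)]) * 60 for _ in range(10000)]
    print(len(shed_load(batch, weights, budget_cycles=400000)))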

Load shedding is now seen as an effective method for curbing overload situations even in real-time systems.
- It involves shedding the excess load in such a way that the stability of the system is not disturbed and the system buffers do not experience any overflows.
- The idea of applying the load shedding technique in the field of networking has been adopted from the concept of electric power management,
- where the electric current is intentionally disconnected on particular lines when the demand for power is higher than what can be supplied.
- CoMo is an open-source system; it can be implemented quickly and used for further deploying other network monitoring applications.
- The system is written in the C language and uses a feature-rich API.
The system works by predicting the CPU usage of the system and thus anticipating bursts in resource requirements that might occur in the future.
- The load shedding scheme used by CoMo has the capability of automatically identifying the features with which the resource usage of each monitoring application can best be modelled.
This identification is made according to the previous resource usage measurements.
- These measurements are then used to determine the system's overall load and by what percentage the load must be shed.

