
Friday, October 4, 2013

What is a substitution cipher method?

There are two classic methods in cryptography, namely the transposition cipher method and the substitution cipher method. In this article we shall discuss the latter, i.e., the substitution cipher method. 
- This method of encoding involves replacing the units or letters of the plaintext with other units or letters. 
- The encoded text is then called the ciphertext. 
- The replacement of the units is made according to some regular system. 
- These units might be individual letters, pairs or triplets of letters, and so on. 
- On the receiver's side, an inverse substitution is performed to decipher the text. 
- We can make a comparison between transposition ciphers and substitution ciphers. 
- In the former, the plaintext units are rearranged but left unchanged, unlike in a substitution cipher, where the units are replaced.
- The order of rearrangement in a transposition cipher is usually more complex than anything a substitution cipher does to the ordering, but the units themselves are not altered.
- In a substitution cipher, on the other hand, the sequence of the units remains the same, but the units themselves are altered. 

There are various types of substitution ciphers, as described below:

Ø  Simple substitution ciphers: 
- This involves substitution of single letters, and is thus termed simple substitution. 
- The alphabet can be written out in some order to represent the substitution.
- This alphabet is referred to as the substitution alphabet. 
- The substitution alphabet might be reversed, shifted, or scrambled in some more complex manner. 
- In the latter cases, it is termed a deranged alphabet or mixed alphabet. 
- A mixed alphabet is created by writing out a keyword, removing its repeated letters, and then appending the remaining letters of the alphabet in their usual order. 
- To help avoid transmission errors, the ciphertext is written out in fixed-size blocks, omitting spaces and punctuation. 
- This also helps disguise the boundaries between words (a short sketch of the whole scheme follows).
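
Below is a minimal Python sketch of a keyword-based simple substitution cipher. The keyword "ZEBRAS", the five-letter blocks, and the helper names are illustrative assumptions, not anything prescribed by the scheme itself.

```python
import string

def make_mixed_alphabet(keyword: str) -> str:
    """Write out the keyword, drop repeated letters, then append the rest of the alphabet."""
    seen = []
    for ch in (keyword.upper() + string.ascii_uppercase):
        if ch.isalpha() and ch not in seen:
            seen.append(ch)
    return "".join(seen)

def encipher(plaintext: str, keyword: str) -> str:
    mixed = make_mixed_alphabet(keyword)
    table = str.maketrans(string.ascii_uppercase, mixed)
    # Strip spaces and punctuation, then group into blocks of five, as described above.
    letters = "".join(c for c in plaintext.upper() if c.isalpha())
    cipher = letters.translate(table)
    return " ".join(cipher[i:i + 5] for i in range(0, len(cipher), 5))

def decipher(ciphertext: str, keyword: str) -> str:
    mixed = make_mixed_alphabet(keyword)
    table = str.maketrans(mixed, string.ascii_uppercase)
    return ciphertext.replace(" ", "").translate(table)

print(encipher("flee at once", "ZEBRAS"))                       # SIAAZ QLKBA
print(decipher(encipher("flee at once", "ZEBRAS"), "ZEBRAS"))   # FLEEATONCE
```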

Ø Homophonic substitution: 
- This method is used to make frequency analysis attacks more difficult. 
- The frequencies of the plaintext letters are disguised by homophony. 
- Here each plaintext letter may be mapped to several different ciphertext symbols. 
- Normally the plaintext symbols with the highest frequencies are given more equivalents than their low-frequency counterparts. 
- This flattens the frequency distribution, which in turn makes frequency analysis harder. 
- A number of solutions are employed to obtain the larger ciphertext alphabet this requires. 
- The simplest of these is to use a numeric substitution alphabet (illustrated in the sketch below). 
- Another method uses variations of the existing alphabet, e.g., writing letters upside down or mixing upper and lower case. 
- The nomenclator is also a variant of homophonic substitution. 
- Two other types of homophonic ciphers are the straddling checkerboard and the book cipher.
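
A minimal sketch of the idea, using a tiny numeric homophone table. The table itself is an illustrative assumption and covers only a handful of letters; common letters get several equivalents, rare ones only one.

```python
import random

HOMOPHONES = {
    "E": ["17", "42", "58", "73"],   # common letter: many equivalents
    "T": ["09", "26", "61"],
    "A": ["11", "34"],
    "N": ["50", "88"],
    "X": ["99"],                     # rare letter: a single equivalent
}
# Reverse lookup for deciphering: every code maps back to exactly one letter.
REVERSE = {code: letter for letter, codes in HOMOPHONES.items() for code in codes}

def encipher(plaintext: str) -> str:
    # Pick a random homophone each time, so repeated letters look different.
    return " ".join(random.choice(HOMOPHONES[ch])
                    for ch in plaintext.upper() if ch in HOMOPHONES)

def decipher(ciphertext: str) -> str:
    return "".join(REVERSE[code] for code in ciphertext.split())

c = encipher("ANTENNA")
print(c)             # e.g. "34 88 61 17 50 88 11" (varies from run to run)
print(decipher(c))   # ANTENNA
```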

Ø Polyalphabetic substitution: 
- It involves the use of multiple cipher alphabets. 
- To facilitate encryption, these alphabets are usually written out in a large table referred to as the tableau. 
- A particular polyalphabetic cipher is defined by how the tableau is filled in and by how the alphabet to use at each step is chosen. 
- Some types of polyalphabetic ciphers are listed below (a short Vigenère sketch follows the list):
             1. Beaufort cipher
             2. Gronsfeld cipher
             3. Running key cipher
             4. Autokey cipher
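
The Vigenère cipher, probably the best-known polyalphabetic cipher, makes the idea concrete: each key letter selects one Caesar-shifted row of the tableau to apply to the corresponding plaintext letter. A minimal sketch, with an assumed key:

```python
from itertools import cycle

A = ord("A")

def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    out = []
    keystream = cycle(key.upper())
    for ch in text.upper():
        if not ch.isalpha():
            continue                      # skip spaces and punctuation
        shift = ord(next(keystream)) - A  # the key letter picks the alphabet
        if decrypt:
            shift = -shift
        out.append(chr((ord(ch) - A + shift) % 26 + A))
    return "".join(out)

c = vigenere("ATTACKATDAWN", "LEMON")
print(c)                              # LXFOPVEFRNHR
print(vigenere(c, "LEMON", True))     # ATTACKATDAWN
```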

Ø  Polygraphic substitution: 
Here the plaintext letters are substituted in larger groups rather than one letter at a time (the Playfair and Hill ciphers, for example, work on pairs or blocks of letters).

Ø Mechanical substitution ciphers: 
Some examples of this type of substitution cipher are rotor cipher machines such as the Enigma.

Ø The one-time pad: 
This is a special substitution cipher that has been mathematically proven to be unbreakable, provided the key is truly random, at least as long as the message, and never reused.
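
A minimal byte-level sketch of the one-time pad, using XOR. The use of os.urandom here is only for illustration; the scheme's security rests on the key being truly random, as long as the message, kept secret, and used exactly once.

```python
import os

def otp(message: bytes, key: bytes) -> bytes:
    """XOR each message byte with the corresponding key byte."""
    assert len(key) == len(message)
    return bytes(m ^ k for m, k in zip(message, key))

msg = b"ATTACK AT DAWN"
key = os.urandom(len(msg))     # one fresh random key per message
ct = otp(msg, key)
print(ct.hex())                # unintelligible without the key
print(otp(ct, key))            # XOR with the same key decrypts: b'ATTACK AT DAWN'
```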



Thursday, September 26, 2013

Differentiate between upward and downward multiplexing?

The process of multiplexing is carried out at the transport layer. Several conversations are multiplexed onto one connection, physical link, or virtual circuit. For example, suppose a host has only one network address available for use; it then has to be shared by all the transport connections originating at that host. Two main multiplexing strategies are followed:
Ø  Upward multiplexing and
Ø  Downward multiplexing

Upward Multiplexing 
- In upward multiplexing, several different transport connections are multiplexed onto one network connection. 
- These transport connections are grouped by the transport layer according to their destinations. 
- It then maps each group onto the minimum possible number of network connections.
- Upward multiplexing is quite useful where network connections are expensive.

Downward Multiplexing 
- It is used when connections with high bandwidth are required. 
- In downward multiplexing, the transport layer opens multiple network connections and distributes the traffic among them. 
- For downward multiplexing to work, however, the subnet's data links must be able to handle the extra capacity.

Another Technique 
- In either case it is not guaranteed that the segments will be delivered in order. 
- Therefore, another technique is adopted. 
- The segments are numbered sequentially. 
- TCP numbers each octet sequentially. 
- Each segment is then numbered with the sequence number of the first octet it carries (see the sketch after this list). 
- Segments might get damaged in transit, or some may even fail to arrive at the destination. 
- Such a failure produces no acknowledgement. 
- However, the successful receipt of a segment is acknowledged by the receiver. 
- Sometimes, cumulative acknowledgements may be used. 
- If no ACK arrives before the retransmission timer expires, the segment is retransmitted. 
- Retransmission also happens when an ACK itself is lost. 
- The receiver must therefore be able to recognize duplicate segments. 
- When such a duplicate arrives, the receiver assumes that the ACK it sent was lost. 
- This works when the duplicate is received before the connection is closed. 
- If the duplicate arrives after the connection has been closed, the situation has to be handled differently. 
- At connection setup, the sender and receiver first learn about each other's existence. 
- They negotiate the parameters, and transport entity resources are allocated based on this mutual agreement. 
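
Here is a minimal sketch (not real TCP code) of numbering segments by the sequence number of their first octet and of computing a cumulative acknowledgement. The segment size and starting sequence number are illustrative assumptions.

```python
def split_into_segments(data: bytes, mss: int, isn: int = 0):
    """Return (sequence_number, payload) pairs; seq is the number of the first octet."""
    return [(isn + off, data[off:off + mss]) for off in range(0, len(data), mss)]

def cumulative_ack(received_seqs, segments, isn: int = 0):
    """A cumulative ACK carries the number of the next octet expected in order."""
    by_seq = dict(segments)
    expected = isn
    while expected in received_seqs:
        expected += len(by_seq[expected])
    return expected

segments = split_into_segments(b"A" * 2500, mss=1000)
print([seq for seq, _ in segments])                 # [0, 1000, 2000]
# Suppose the middle segment was lost in transit:
print(cumulative_ack({0, 2000}, segments))          # 1000 -> sender retransmits from octet 1000
```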
The connection release is of two types:

Ø Asymmetric release: 
This is the model used in the telephone system. However, it does not work well for networks that use packet switching, because an abrupt, one-sided release can lose data still in transit.

Ø  Symmetric release: 
- This is certainly better than the previous one.
- Here, each direction of the connection is released independently of the other. 
- A host can continue receiving data even after it has sent a disconnect TPDU. 
- But symmetric release has a problem of its own: the two sides must somehow agree that the conversation is really over. 
- There is no foolproof solution to this problem over an unreliable communication medium. 
- Note that this limitation has nothing to do with any particular protocol. 
- Placing a reliable protocol on top of an unreliable medium can still guarantee that a message is eventually delivered. 
- However, no protocol can guarantee a time limit within which the message will be delivered. 
- Error conditions might prolong the delivery period. 
- A restart can wipe out all the connection state information, leaving the connection half-open. 
- Since no protocol fully solves this problem, one has to accept the risks associated with releasing connections. 


Thursday, September 19, 2013

What is fragmentation?

- The fragmentation technique is implemented in IP (Internet Protocol) for breaking datagrams down into smaller pieces. 
- This is done so that the packets can pass through a link whose MTU (maximum transmission unit) is smaller than the original datagram size. 
- The procedure for IP fragmentation, along with the procedures for transmitting and reassembling the datagrams, is given in RFC 791. 
- IPv6 hosts are expected to determine the optimal path MTU themselves before sending packets. 
- If the PDU (protocol data unit) received by a router is larger than the MTU of the next hop, two options are available when IPv4 transport is being used:
Ø Drop the PDU and send an ICMP (Internet Control Message Protocol) message indicating that the packet is too big.
Ø  Fragment the IP packet and then transmit it over the link with the smaller MTU. In contrast, any IPv6 packet with a size of 1280 bytes or less can be delivered without needing IPv6 fragmentation.

- When a fragmented IP packet is received by the recipient host, its job is to reassemble the datagram and then hand it over to the higher-layer protocols. 
- Reassembly is expected to take place at the recipient host, but for practical reasons it might be done by some intermediate device. 
- For example, the fragments might be reassembled by a NAT (network address translation) device in order to translate the data streams. 
- IP fragmentation can result in excessive retransmission whenever the fragments encounter packet loss. 
- Reliable protocols (for example, TCP) must retransmit all the fragments of a datagram in order to recover from the loss of even a single fragment. 
Thus, senders typically use one of two approaches to decide what size of datagram to transmit over the network (a small fragmentation sketch follows):
  1. First approach: transmit IP datagrams no larger than the first hop's MTU.
  2. Second approach: run the path MTU discovery algorithm.
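
A minimal sketch of how an IPv4 payload could be split into fragments for a smaller next-hop MTU. The header size and MTU values are illustrative assumptions; as in RFC 791, fragment offsets are expressed in 8-byte units and every fragment except the last carries a multiple of 8 data bytes.

```python
IPV4_HEADER = 20  # bytes, assuming a header with no options

def fragment(payload: bytes, mtu: int):
    max_data = (mtu - IPV4_HEADER) // 8 * 8   # per-fragment data, multiple of 8
    frags = []
    for off in range(0, len(payload), max_data):
        chunk = payload[off:off + max_data]
        more = off + max_data < len(payload)
        frags.append({
            "offset": off // 8,        # fragment offset field, in 8-byte units
            "MF": int(more),           # "more fragments" flag
            "length": IPV4_HEADER + len(chunk),
        })
    return frags

# A 4000-byte payload crossing a link with a 1500-byte MTU:
for f in fragment(b"\x00" * 4000, 1500):
    print(f)
# {'offset': 0,   'MF': 1, 'length': 1500}
# {'offset': 185, 'MF': 1, 'length': 1500}
# {'offset': 370, 'MF': 0, 'length': 1060}
```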

- Fragmentation also has an impact on network forwarding. 
- When an Internet router has multiple parallel paths, technologies such as CEF and LAG split the traffic across the links using hash algorithms. 
- The major goal of such an algorithm is to make sure that all packets of the same flow are sent out on the same path, to minimize unwanted packet reordering. 
- If the hash algorithm uses the TCP or UDP port numbers, fragments of the same packet might be forwarded along different paths. 
- This is because the layer 4 information is contained only in the first fragment of the packet. 
- As a result, the initial fragment may arrive after the non-initial fragments. 
- Many security devices on hosts treat this condition as an error.  
- Therefore, they drop such packets.
- The fragmentation mechanism differs between IPv4 and IPv6. 
- In the former, fragmentation can be performed by routers along the path. 
- In IPv6, on the other hand, routers drop packets that are larger than the next-hop MTU; only the sending host may fragment.
- Also, the header format differs between the two cases. 
- Since fragmentation in both versions is carried out using analogous fields, essentially the same algorithm can be reused for fragmentation and reassembly. 
- IPv4 hosts should make a best effort to reassemble fragmented datagrams. 


Monday, August 26, 2013

What is the difference between congestion control and flow control?

Flow control and congestion control are similar-sounding concepts and are often confused. In this article we shall discuss the differences between the two. 

- Computer networks use the flow control mechanism to manage the data flow between two nodes so that a receiver that is slower than the sender is not outrun by it. 
- Flow control also gives the receiver a way to control the rate at which the sender transmits information.
- Congestion control, on the other hand, provides mechanisms for controlling the data flow when the network itself is congested or at risk of congestive collapse. 
- It controls the entry of data into the network so that the network can handle the traffic effectively.  
- Flow control, by contrast, simply keeps the receiving node from being overwhelmed by the traffic sent by another node. 

There are several reasons why this flow of data can get out of control and affect the network negatively. 
- The first is that the receiving node might not be able to process the incoming data as fast as the sender transmits it. 
- For these reasons, various types of flow control mechanisms are available. 
- The most common categorization is based on whether or not feedback is sent back to the sender. 
- In open loop flow control, no feedback is sent to the sender by the receiver; this is perhaps the most widely used kind of flow control. 
- The opposite of open loop flow control is closed loop flow control. 
- In this mechanism, the receiver sends congestion information back to the sender. 
- Other commonly used flow control mechanisms are:
Ø  Network congestion
Ø  Windowing flow control
Ø  Data buffer etc.

- Congestion control offers methods for regulating the traffic entering the network, keeping it at a level the network itself can manage.
- Congestion control prevents the network from falling into a state of congestive collapse. 
- In such a state, little or no communication happens.
- What little communication does happen is of no help. 
- Packet-switched networks usually need congestion control measures more than other types of networks. 
- Congestion control is driven by the goal of keeping the number of data packets in the network below the level at which performance drops dramatically.
- Congestion control mechanisms can be seen in transport layer protocols, most notably TCP (Transmission Control Protocol); UDP (User Datagram Protocol) itself provides none and leaves the problem to the application. 
- TCP makes use of the exponential backoff and slow start algorithms (a simplified sketch of slow start appears below). 
- Congestion control algorithms can be classified by the kind of feedback given by the network, the performance aspect to be improved, the modifications required to the existing network, the fairness criterion used, and so on. 
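
A simplified sketch (not real TCP code) of how slow start and congestion avoidance adjust the congestion window, measured here in MSS-sized units. The threshold values and the loss pattern are illustrative assumptions, and the reset-to-one behaviour on loss is Tahoe-style.

```python
def simulate_cwnd(rounds, ssthresh=16, loss_rounds=()):
    cwnd = 1
    history = []
    for rtt in range(rounds):
        history.append(cwnd)
        if rtt in loss_rounds:        # loss detected: back off
            ssthresh = max(cwnd // 2, 2)
            cwnd = 1                  # restart from slow start (Tahoe-style)
        elif cwnd < ssthresh:         # slow start: exponential growth
            cwnd *= 2
        else:                         # congestion avoidance: linear growth
            cwnd += 1
    return history

print(simulate_cwnd(10, ssthresh=8, loss_rounds={6}))
# [1, 2, 4, 8, 9, 10, 11, 1, 2, 4]
```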

- Congestion and flow control are two very important mechanisms used for keeping the traffic flow in order. 
- Flow control is an end-to-end mechanism between the sender and the receiver, used when the sender's speed is much higher than the receiver's. 
- Congestion control is implemented to prevent the packet loss and delay caused by network congestion. 
- Congestion control manages the traffic of the entire network, whereas flow control is limited to the transmission between two nodes.


Sunday, August 25, 2013

What is the concept of flow control?

- Flow control is an important concept in the field of data communications. 
- This process involves management of the data transmission rate between two communicating nodes. 
- Flow control is important to keep a slow receiver from being outrun by a fast sender. 
- Flow control gives the receiver a mechanism with which it can control the speed of transmission.
- This prevents the receiving node from being overwhelmed by traffic from the transmitting node.
- Do not confuse congestion control with flow control; they are different concepts. 
- Congestion control comes into play when there is an actual problem of network congestion to be managed. 

Flow control mechanisms, on the other hand, can be classified in the following two ways:
  1. Feedback is sent to the sending node by the receiving node.
  2. No feedback is sent to the sending node by the receiving node.
- The sending computer might send data at a faster rate than the other computer can receive and process it. 
- This is why we require flow control. 
- This situation arises when the traffic load on the receiving computer is too high compared to that on the sending computer. 
- It can also arise when the receiving computer has less processing power than the one sending the data.

Stop and Wait Flow Control Technique 
- This is the simplest type of the flow control technique. 
- Here, when the receiver is ready to start receiving data from the sender, the message is broken down into a number of frames. 
- After sending each frame, the sending system waits a specific time for an acknowledgement (ACK) from the receiver. 
- The purpose of the acknowledgement is to confirm that the frame has been received properly. 
- If a packet or frame gets lost during transmission, it has to be retransmitted. 
- We call this process automatic repeat request, or ARQ (a small sketch follows after this list). 
- The problem with this technique is that only one frame can be in transit at a time. 
- This makes the transmission channel very inefficient. 
- Until the sender gets an acknowledgement, it will not proceed to transmit the next frame. 
- Both the transmission channel and the sender sit idle during this period. 
- Simplicity is this method's biggest advantage. 
- Its disadvantage is the inefficiency that results from that simplicity. 
- The sender's waiting state is what creates the inefficiency. 
- This matters most when the transmission delay is shorter than the propagation delay. 
- Sending longer transmissions is another cause of inefficiency. 
- Longer transmissions also increase the chance of errors creeping into this protocol. 
- With short messages, errors are detected early and easily. 
- Breaking one big message down into many separate smaller frames increases the inefficiency further. 
- This is because the pieces altogether take a long time to be transmitted, each frame incurring its own wait for an ACK.
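
A minimal sketch of stop-and-wait ARQ over an unreliable channel. The simulated lossy channel, the frame size, and the retry limit are illustrative assumptions; a real implementation would use timers and sequence bits.

```python
import random

def unreliable_send(frame: bytes) -> bool:
    """Pretend channel: the frame gets through and an ACK comes back 70% of the time."""
    return random.random() < 0.7

def stop_and_wait(message: bytes, frame_size: int = 4, max_retries: int = 10):
    frames = [message[i:i + frame_size] for i in range(0, len(message), frame_size)]
    for seq, frame in enumerate(frames):
        for attempt in range(max_retries):
            if unreliable_send(frame):                       # ACK received in time
                print(f"frame {seq} acknowledged after {attempt + 1} attempt(s)")
                break
            print(f"frame {seq}: timeout, retransmitting")   # no ACK: resend the same frame
        else:
            raise RuntimeError(f"frame {seq} failed after {max_retries} attempts")

stop_and_wait(b"HELLO WORLD!")
```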


Sliding window Flow Control Technique 
- This is another method of flow control, in which the receiver gives the sender permission to keep transmitting data until a window is filled up. 
- Once the window is full, the sender stops transmitting until more window space is advertised. 
- This lets the channel be used much more efficiently while still respecting the receiver's limited buffer. 
- During transmission, space for, say, n frames is allocated in the receiver's buffer. 
- This means up to n frames can be sent and accepted without waiting for an ACK. 
- An ACK is then sent containing the sequence number of the next frame expected (a small sketch follows). 
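
A minimal sketch of sliding-window sending with cumulative ACKs. The window size, the frame contents, and the simplification that an ACK arrives for everything outstanding are illustrative assumptions.

```python
def sliding_window_send(frames, window: int = 3):
    base = 0          # oldest unacknowledged frame
    next_seq = 0      # next frame to send
    while base < len(frames):
        # Fill the window: send every frame that fits.
        while next_seq < len(frames) and next_seq < base + window:
            print(f"send frame {next_seq}: {frames[next_seq]!r}")
            next_seq += 1
        # Simulate a cumulative ACK naming the next frame the receiver expects.
        ack = next_seq
        print(f"ACK {ack} received, window slides forward")
        base = ack

sliding_window_send([b"F0", b"F1", b"F2", b"F3", b"F4"], window=3)
```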


Saturday, August 24, 2013

Explain multicast routing?

- Multicast routing is also known as IP multicast. 
- Multicast routing is used for sending IP (Internet Protocol) datagrams to a group of receivers that are interested in receiving them.
- The datagrams are sent to all the receivers in just one transmission. 
- Multicast routing is especially useful in applications that require media streaming on private networks as well as the Internet. 
- IP multicast is the IP-specific version of a more general concept, multicast networking.
- Special multicast address blocks are reserved in both IPv4 and IPv6. 
- In IPv6, multicast addressing replaces the broadcast addressing that was used in IPv4. 
- IP multicast is described in RFC 1112 and was first standardized in the late 1980s. 

This technique is used for the following types of real-time communication over the network's IP infrastructure:
Ø  Many – to – many
Ø  One – to – many

- It scales to a large receiver population and does not require prior knowledge of who or how many the receivers are. 
- Multicast uses the network infrastructure efficiently by requiring the source to send a packet only once, even if it has to be delivered to a large number of receivers. 
- The responsibility for replicating the packet lies with the network nodes, that is, the routers and network switches.
- The packet is replicated as needed until it reaches all of the receivers. 
- It is also important that the message is sent over any given link only once.   
- UDP, the User Datagram Protocol, is the transport protocol most commonly used with multicast. 
- This protocol does not guarantee reliability, i.e., packets may be delivered or may be lost. 
- Reliable multicast protocols are also available, such as PGM, the Pragmatic General Multicast. 

PGM was developed to add the following two things on top of IP multicast:
Ø  Retransmission and
Ø  Loss detection
The following three things are the key elements of IP multicast:
  1. Receiver driven tree creation
  2. Multicast distribution tree
  3. IP multicast group address
- Sources and receivers use the last of these, the IP multicast group address, for sending as well as receiving multicast messages. 
- Sources use the group address as the destination address of their data packets, whereas receivers use it to inform the network that they want to receive those packets.
- Receivers need a protocol for joining a group. 
- The most commonly used protocol for this purpose is IGMP, i.e., the Internet Group Management Protocol (a small socket-level sketch of joining a group follows below). 
- The group membership information it gathers is then used in setting up the multicast distribution trees. 
- Once a receiver has joined a group, the PIM (Protocol Independent Multicast) protocol is used to construct a multicast distribution tree for that group. 
- The distribution trees set up with the help of this protocol are used for sending the multicast packets to the members of the multicast group. 
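
Here is a minimal sketch of a receiver joining an IP multicast group from user space; the operating system sends the IGMP membership report on the application's behalf. The group address 239.1.1.1 and port 5004 are illustrative assumptions (239.0.0.0/8 is the administratively scoped range).

```python
import socket
import struct

GROUP, PORT = "239.1.1.1", 5004

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# ip_mreq structure: multicast group address + local interface (0.0.0.0 = any).
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

print(f"joined {GROUP}, waiting for multicast datagrams on port {PORT}...")
data, sender = sock.recvfrom(1500)     # blocks until a datagram arrives
print(f"received {len(data)} bytes from {sender}")
```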

PIM can be implemented in any of the following variations:
  1. SM or sparse mode
  2. DM or dense mode
  3. SSM or source-specific mode
  4. SDM (sparse-dense mode) or bidirectional mode (bidir)

- As of 2006, sparse mode is the most commonly used mode. 
- The last two variations are simpler, more scalable variants of PIM and are also popular. 
- IP multicast operation does not require an active source to know about the receivers of the group. 
- The construction of the IP multicast tree is driven by the receivers. 
- The network nodes that lie closest to the receivers initiate this construction.
- This is what allows multicast to scale to a large receiver population. 
- A multicast router does not need to know how to reach all the other multicast trees in the network. 
- Rather, it only requires knowledge of its downstream receivers. 
- This is how multicast-addressed services are able to scale up. 


Saturday, August 17, 2013

What is reverse path forwarding?

- RPF, or reverse path forwarding, is a common technique used in modern routers to ensure that multicast packets are forwarded without loops in multicast routing. 
- The technique is also used to help prevent IP address spoofing in unicast routing.
- Multicast RPF, or just RPF, is not used alone. 
- Rather, it is used along with some multicast routing protocol. 
- There are various multicast routing protocols, such as PIM-SM, PIM-DM, MSDP and so on. 
- RPF's role is to ensure that no loops are formed while forwarding the multicast packets. 
- In multicast routing, the source address is used to decide whether the traffic should be forwarded. 
- In unicast routing, on the other hand, the decision depends on the destination address rather than the source address. 
- RPF achieves its check by using either the router's unicast routing table or a multicast routing table dedicated to the purpose. 
- When a packet arrives on a router interface, the router checks the list of networks that are reachable through that interface. 
- This is the reverse path check for the multicast packet.  
- If a matching routing entry is found for the multicast packet's source IP address, i.e., the packet arrived on the interface the router would use to reach that source, the packet is said to pass the RPF check. 
- The packet is then sent out of all the interfaces participating in that particular multicast group.  
- If the packet fails the RPF check, it is simply dropped. 
- Because of this, the forwarding decision is based on the packet's reverse path back toward the source rather than the usual forward path toward the destination (a small sketch of the check appears below). 
- RPF routers forward only those packets that pass this RPF check. 
- Forwarding only those packets breaks any loop that might otherwise exist. 
- This is of critical importance in redundant multicast topologies. 
- This is because the same packet can otherwise reach the same router again and again through multiple interfaces. 
- The RPF check is an integral part of the decision concerning forwarding of the packets. 
- Consider a router that forwards a packet received on its first interface out of its second interface, while a neighbouring router does the reverse. 
- The same packet is then received by both routers over and over again, creating a classic routing loop. 
- This loop keeps forwarding the packets until their TTLs expire. 
- Even though TTL expiry eventually ends the loop, it is best to avoid routing loops altogether, because they are a major cause of temporary network degradation.
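
A minimal sketch of the multicast RPF check: accept a packet only if it arrived on the interface the router itself would use to reach the packet's source. The routing table, interface names, and addresses below are illustrative assumptions.

```python
import ipaddress

# Unicast routing table: prefix -> outgoing interface toward that prefix.
ROUTES = {
    ipaddress.ip_network("10.1.0.0/16"): "eth0",
    ipaddress.ip_network("10.2.0.0/16"): "eth1",
    ipaddress.ip_network("0.0.0.0/0"):   "eth2",   # default route
}

def rpf_check(source_ip: str, arrival_interface: str) -> bool:
    src = ipaddress.ip_address(source_ip)
    # Longest-prefix match on the *source* address: the "reverse" lookup.
    matches = [net for net in ROUTES if src in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return ROUTES[best] == arrival_interface

print(rpf_check("10.1.5.9", "eth0"))   # True  -> forward out the group's interfaces
print(rpf_check("10.1.5.9", "eth1"))   # False -> drop (possible loop or spoofed source)
```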

RPF check has the following underlying assumptions:
  1. The unicast routing table is converged and correct.
  2. There is symmetry between the path that goes from sender to router and the path that comes back from the router to the sender.
- The RPF check falls back on the unicast routing table. 
- Therefore, if the first assumption is not satisfied, the check will fail. 
- If the second assumption is false, the RPF check rejects all multicast traffic except what arrives along the shortest path between the sender and the router. 
- This results in a non-optimal multicast tree.
- Reverse path forwarding will not work if there are unidirectional links present in the network.


Unicast RPF: 
- This type of reverse path forwarding is based on the idea that a router should not accept a packet on an interface unless the packet's source address is reachable back through that same interface. 
- It is also good practice for organizations not to allow private addresses to propagate on their networks unless those addresses are actually in use. 

