Tuesday, September 10, 2013

What are the differences between bridges and repeaters?

Bridges and repeaters are both important devices in telecommunications and computer networking. In this article we discuss both devices and the differences between them.
- Repeaters operate at the physical layer, whereas bridges operate at the MAC layer. 
- For this reason, a repeater is called a physical layer device. 
- Similarly, a bridge is known as a MAC layer device. 
- A bridge is responsible for storing as well as forwarding frames in an Ethernet.
- It examines the header of each incoming frame, filters the frames, and forwards the selected ones towards the destination address mentioned in the frame. 
- The bridge uses CSMA/CD to access a segment whenever a frame has to be forwarded onto it.
- Another characteristic of a bridge is that its operation is transparent. 
- This means that the hosts in the network do not know that a bridge is present in the network. 
- Bridges are self-learning; they do not have to be configured again and again. 
- They can simply be plugged into the network. 
- Installing a bridge breaks a LAN into LAN segments. 
- Packets are filtered with the help of bridges. 
- Frames that belong to one LAN segment are not forwarded to the other segments. 
- This implies that separate collision domains are formed. 
The bridge maintains a bridge table consisting of the following entries:
  1. LAN (MAC) address of the node
  2. Bridge interface on which that address was seen
  3. Time stamp
Stale table entries are aged out using the time stamp.

- Bridges learn by themselves which interface can be used for reaching which host. 
- After receiving a frame, the bridge looks at the address of the sending node and records the interface on which the frame arrived.
- It keeps the collision domains isolated from one another, thus giving maximum throughput. 
- It is capable of connecting a large number of nodes and greatly extends the geographical coverage of the network. 
- Even different types of Ethernet can be connected through it; a minimal sketch of this self-learning and filtering behaviour follows below. 
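
As an illustration of this self-learning and filtering behaviour, here is a minimal Python sketch of a learning bridge. The class name, the handle_frame helper and the 300-second ageing interval are illustrative assumptions, not part of any standard.

import time

AGEING_SECONDS = 300  # assumed ageing time for stale table entries

class LearningBridge:
    def __init__(self):
        # bridge table: MAC address -> (interface, time stamp)
        self.table = {}

    def handle_frame(self, src_mac, dst_mac, in_interface):
        now = time.time()
        # learning: record the interface on which the source address was seen
        self.table[src_mac] = (in_interface, now)
        # age out stale entries using the time stamp
        self.table = {mac: (iface, ts) for mac, (iface, ts) in self.table.items()
                      if now - ts <= AGEING_SECONDS}
        entry = self.table.get(dst_mac)
        if entry is None:
            return "flood on all interfaces except %d" % in_interface
        iface, _ = entry
        if iface == in_interface:
            return "filter (destination is on the same segment)"
        return "forward on interface %d" % iface

Frames destined for an unknown address are flooded, frames whose destination lies on the arriving segment are filtered, and everything else is forwarded, which is what keeps the collision domains separate.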
- Repeaters are also plug-and-play devices, but they do not provide any traffic isolation. 
- Repeaters are used to regenerate incoming signals, which get attenuated with distance. 
- On physical media such as Ethernet cable or a wifi link, signals can travel only a limited distance before their quality starts degrading. 
- The job of a repeater is to extend the distance over which the signals can travel before they reach their destination. 
- Repeaters also strengthen the signals so that their integrity can be maintained. 
- Active hubs are an example of repeaters and are often known as multi-port repeaters. 
- Passive hubs do not serve as repeaters. 
- Another example of a repeater is an access point in a wifi network. 
- However, it functions as a repeater only when it is configured in repeater mode. 
- Regenerating signals using repeaters is a way of overcoming the attenuation that occurs because of cable loss or electromagnetic field divergence. 
- For long distances, a series of repeaters is often used. 
- Repeaters also remove the unwanted noise that gets added to the signal. 
- Repeaters can perceive and restore only digital signals.
- This is not possible with analog signals. 
- A signal can be amplified with the help of amplifiers, but they have the disadvantage that the noise is amplified along with the signal. 
- Digital signals are more prone to degradation than analog signals since they depend entirely on discrete voltage levels. 
- This is why they have to be regenerated again and again using repeaters. 


Sunday, August 25, 2013

What is the concept of flow control?

- Flow control is an important concept in the field of data communications. 
- This process involves management of the data transmission rate between two communicating nodes. 
- Flow control is important to prevent a fast sender from outrunning a slow receiver. 
- Flow control provides a mechanism by which the receiver can control the sender's transmission speed.
- This prevents the receiving node from being overwhelmed with traffic from the transmitting node.
- Do not confuse congestion control with flow control; they are different concepts. 
- Congestion control comes into play for controlling the data flow when there actually is a problem of network congestion. 

On the other hand, the mechanism of flow control can be classified in the following two ways:
  1. The receiving node sends feedback to the sending node.
  2. The receiving node does not send feedback to the sending node.
- The sending computer might tend to send data at a faster rate than the other computer can receive and process. 
- This is why we require flow control. 
- This situation arises when the traffic load on the receiving computer is too high compared with the computer that is sending the data. 
- It can also arise when the processing power of the receiving computer is lower than that of the sending computer.

Stop and Wait Flow Control Technique 
- This is the simplest type of the flow control technique. 
- Here, the message is broken down into a number of frames, and the sender transmits them one at a time once the receiver is ready to receive data. 
- After sending each frame, the sending system waits a specific time for an acknowledgement or ACK from the receiver. 
- The purpose of the acknowledgement signal is to make sure that the frame has been received properly. 
- If a packet or frame gets lost during transmission, it has to be re-transmitted. 
- This process is called automatic repeat request or ARQ. 
- The problem with this technique is that it can transmit only one frame at a time. 
- This makes the transmission channel very inefficient. 
- Until the sender gets an acknowledgement, it will not proceed to transmit another frame. 
- Both the transmission channel and the sender remain un-utilized during this period. 
- The simplicity of this method is its biggest advantage. 
- Its disadvantage is the inefficiency that results from this simplicity. 
- The waiting state of the sender creates the inefficiency. 
- This happens especially when the transmission delay is shorter than the propagation delay. 
- Longer transmissions are another cause of inefficiency. 
- They also increase the chance of errors creeping into this protocol. 
- With short messages, it is quite easy to detect errors early. 
- However, breaking one big message into many separate smaller frames also increases the inefficiency, 
- because these pieces altogether take a long time to be transmitted, with the sender stopping to wait after each one. A minimal sender sketch of this stop-and-wait behaviour follows below.
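
A minimal stop-and-wait sender might look like the following Python sketch. The send_frame and wait_for_ack callables are hypothetical stand-ins for the real link interface; the timeout and retry limit are arbitrary assumptions.

def stop_and_wait_send(frames, send_frame, wait_for_ack, timeout=1.0, max_retries=5):
    # send one frame at a time; do not move on until it is acknowledged
    for seq, frame in enumerate(frames):
        for _attempt in range(max_retries):
            send_frame(seq % 2, frame)       # 1-bit alternating sequence number
            ack = wait_for_ack(timeout)      # hypothetical helper: returns ACK number or None on timeout
            if ack == seq % 2:
                break                        # frame acknowledged, proceed to the next one (ARQ succeeded)
        else:
            raise RuntimeError("frame %d was never acknowledged" % seq)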


Sliding window Flow Control Technique 
- This is another method of flow control in which the receiver gives the sender permission to transmit data continuously until a window is filled up. 
- Once the window is full, the sender stops transmitting until a larger window is advertised. 
- This method makes better use of the channel even when the receiver's buffer space is limited. 
- During the transmission, buffer space for, say, n frames is allocated at the receiver. 
- This means n frames can be accepted by the receiver without it having to wait for an ACK. 
- After n frames, an ACK is sent containing the sequence number of the next frame that is expected; a small sender sketch follows below. 
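
A sketch of such a sender, assuming a cumulative ACK that carries the sequence number of the next expected frame (send_frame and recv_ack are hypothetical helpers standing in for the link interface):

def sliding_window_send(frames, window_size, send_frame, recv_ack):
    base = 0          # oldest unacknowledged frame
    next_seq = 0      # next frame to transmit
    while base < len(frames):
        # keep transmitting until the window of size n is filled
        while next_seq < len(frames) and next_seq - base < window_size:
            send_frame(next_seq, frames[next_seq])
            next_seq += 1
        ack = recv_ack()   # cumulative ACK: sequence number of the next frame expected
        if ack > base:
            base = ack     # slide the window forward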


Tuesday, July 16, 2013

What are the characteristics of network layer?

- The network layer is the third layer in the OSI model of networking. 
- The duty of this layer is to forward and route packets via intermediate routers. 
- It provides the functional as well as procedural means for transferring variable-length data sequences from a source host to a destination host across one or more networks. 
- During the transfer it is also responsible for maintaining the quality of the service functions. 

There are many other functions of this layer such as:

- Connection-less communication: In IP, a datagram can be transmitted from one host to another without the receiving host having to send an acknowledgement. Connection-oriented protocols are used at higher levels of the OSI model.

- Host addressing: Every host in the network is assigned a unique address that determines its location. A hierarchical system assigns this address. These addresses are known as IP (internet protocol) addresses.

- Message forwarding: Networks are often divided into a number of sub-networks which are then connected to other networks to facilitate wide-area communication. Specialized hosts called routers or gateways are used for forwarding the packets from one network to another.

Characteristics of Network Layer

Encapsulation:
- One of the characteristics of the network layer is encapsulation. 
- The network layer ought to provide encapsulation facilities. 
- It is necessary that devices be identified with addresses. 
- Not only the devices, but the network layer PDUs must also carry such addresses. 
- The layer 4 PDU is handed to layer 3 during the process of encapsulation. 
- To create the layer 3 PDU, a layer 3 label or header is added to it. 
- At the network layer, the PDU thus created is referred to as a packet. 
- When the packet is created, the address of the receiving host is included in the header. 
- This address is commonly known as the destination address. 
- Apart from this, the address of the source or sender host is also stored in the header. 
- This address is termed the source address. 
- Once the encapsulation process is complete, layer 3 sends this packet to the data link layer, which prepares it for transmission over the communication media. A small illustrative sketch follows below.
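
The exchange can be pictured with a small Python sketch. The field names below are purely illustrative assumptions and do not reproduce an actual IP header layout.

def encapsulate(l4_pdu: bytes, source_address: str, destination_address: str) -> dict:
    # a layer 3 header is added around the layer 4 PDU to form the packet
    return {
        "header": {
            "source": source_address,            # address of the sending host
            "destination": destination_address,  # address of the receiving host
        },
        "payload": l4_pdu,                       # the layer 4 PDU carried unchanged
    }

packet = encapsulate(b"layer-4-segment", "192.0.2.10", "198.51.100.7")
# the packet is then handed to the data link layer for transmission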

Routing: 
- This characteristic is defined by the services the network layer provides for directing packets to their destination addresses. 
- It is not necessary that the destination and the source hosts always be connected to the same network.
- In fact, the packet might have to traverse a number of networks before reaching the destination. 
- During this journey the packet has to be guided to the proper address. 
- This is where routers come into action. 
- They help in selecting the paths that guide the packets to the destination. 
- This is called routing. 
- During the course of routing, the packet may need to traverse a number of devices.
- The route taken by the packet to reach one intermediate device is called a "hop". 
- The contents of the packet remain intact until the destination host has been reached.


De-capsulation: 
- On arrival of the packet at the destination, it is sent for processing at the third layer. 
- The host system examines the destination address to verify whether the packet is meant for it or not. 
- If the address is found to be correct, the decapsulation process is carried out at the network layer. 
- The network layer then passes the layer 4 PDU up to the transport layer for appropriate servicing. 


Monday, July 15, 2013

What is a virtual circuit? What are the advantages of virtual circuit?

The term VC or virtual circuit is synonymous with the terms virtual channel and virtual connection in the field of computer networks as well as telecommunications. 
- A virtual circuit is a connection-oriented communication service.  
- This service is delivered by means of packet-mode communication. 
- After a virtual circuit or connection has been established between two application processes or nodes, a byte stream or bit stream can be delivered between the two. 
- Virtual circuits allow higher level protocols to avoid dealing unnecessarily with the division of data.
- This task may involve dividing data into frames, packets, or segments. 
- Virtual circuits resemble circuit switching in that both are connection oriented.
- This means that both of them deliver data in the correct order. 
- Also, both require signaling overhead during the connection establishment phase.
- The difference between the two lies in the latency and the bit rate. 
- These are constant in circuit switching but may vary in virtual circuits. 

This happens because of the following 3 major causes:
1. Varying length of the packet queues in the nodes.
2. Varying bit rate as generated by the application.
3. Varying load generated by the users who share the same resources on the network through statistical multiplexing. 

- A number of virtual circuit protocols provide reliable communication services, but not all of them do. 
- These services are provided by means of data re-transmission triggered by ARQ (automatic repeat request) and error detection. 
- The datagram represents an alternative configuration to the virtual circuit. 

There are two types of virtual circuits namely:

Layer 4 virtual circuits: Transport protocols such as TCP, which are connection oriented and include segment numbering and thus reordering on the receiver's side, use this kind of virtual circuit. Thus out-of-order delivery is prevented. 

Layer 2/3 virtual circuits: The virtual circuit protocols of the data link layer and the network layer are based upon connection-oriented packet switching. This implies that the delivery path of the data is always the same. 

Advantages of Virtual Circuits
There are several advantages of this kind of virtual connections:
1. They support bandwidth reservation while the connection is being established. This in turn increases the possibilities of QoS (quality of service). 
2. They produce less overhead. This is because there is no individual packet routing and the complete addressing information is excluded from the packet header. Each packet carries only a small VCI or virtual channel identifier. The remaining routing information is provided to the network nodes during connection establishment. 
3. Theoretically speaking, the nodes here have high capacity and are faster because their only task is switching. In contrast, the network nodes in a connection-less network carry out routing for every packet individually. A switch only has to look up the VCI in a table instead of analyzing a full address; a small sketch of such a lookup follows below. Implementation of switches is quite easy in ASIC hardware, whereas routing is more complex and demands a software implementation. However, there is a huge market for IP routers, and layer 3 switching is supported by advanced IP routers. 
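
The VCI lookup mentioned in point 3 can be pictured with a tiny Python sketch; the interface numbers and VCI values are made-up example entries.

# per-node virtual circuit table installed during connection establishment:
# (incoming interface, incoming VCI) -> (outgoing interface, outgoing VCI)
vc_table = {
    (1, 12): (3, 22),
    (2, 63): (1, 18),
}

def switch_packet(in_interface, in_vci):
    # forwarding is a single table lookup and VCI swap, no full-address analysis
    out_interface, out_vci = vc_table[(in_interface, in_vci)]
    return out_interface, out_vci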

Below mentioned are some protocols that provide VC facility:
- TCP or transmission control protocol
- SCTP or stream control transmission protocol
- X.25
- Frame relay
- ATM or asynchronous transfer mode
- MPLS or multi-protocol label switching


Wednesday, July 10, 2013

Explain the concept of piggybacking?

- Piggybacking is a well known technique used in data transmission at the data link layer of the OSI model. 
- It is employed in two-way transmission, where most of the frames travelling from the receiver back to the emitter also carry data. 
- The receiver attaches to its own outgoing data frame the confirmation that the previously received data frame was successfully delivered. 
- This confirmation is called the ACK or acknowledgement signal. 
- Practically, this ACK signal is piggybacked onto a data frame rather than being sent individually by some other means. 

Principle behind Piggybacking
- The piggybacking technique should not be confused with the sliding window protocols that are also employed in the OSI model. 
- In piggybacking, an additional field for the ACK or acknowledgement signal is incorporated into the data frame itself. 
- The frame differs from that of a plain sliding window protocol only by the few bits of this acknowledgement field.
- Whenever some data has to be sent from one party to another, the data is sent along with the ACK field. 

The piggybacking data transfer is governed by the following three rules (sketched in code after this list):
- If both data and an acknowledgement have to be sent by party A, it includes both fields in the same frame.
- If only an acknowledgement has to be sent by party A, it has to use a separate frame, i.e., an ACK frame, for that.
- If only data has to be sent by party A, the ACK field is still included within the data frame and thus transmitted along with it; it simply repeats the last acknowledgement, and this duplicate ACK is ignored by the receiving party B.
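
The three rules can be summarized in a short Python sketch; the frame layout and field names are illustrative assumptions only.

def build_frame(data=None, ack_seq=None, last_acked=None):
    if data is not None and ack_seq is not None:
        return {"data": data, "ack": ack_seq}     # rule 1: ACK piggybacked on the data frame
    if data is None and ack_seq is not None:
        return {"data": None, "ack": ack_seq}     # rule 2: a separate ACK frame
    if data is not None:
        return {"data": data, "ack": last_acked}  # rule 3: repeat the last ACK; receiver ignores the duplicate
    return None                                   # nothing to send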

- The main advantage of using this technique is that it helps in improving efficiency. 
- The disadvantage is that the acknowledgement can be blocked or delayed by the receiving party if it has no data to transmit. 
- Enabling a receiver timeout, by starting a counter the moment the party receives a data frame, can solve this problem to a great extent. 
- If the timeout occurs and there is still no data to transfer, the receiver sends a separate ACK control frame. 
- The sender also sets up a counter called the emitter timeout; if it expires without any confirmation from the receiver, the sender assumes that the data packet got lost on the way and re-transmits it.

- Piggybacking is also used in accessing the internet.
- In that context it refers to establishing a wireless internet connection by using a subscriber's wireless internet access service without the subscriber's explicit permission. 
- However, under various jurisdictions around the world, this practice is the subject of ethical and legal controversy. 
- In some places it is completely regulated or outlawed, while at other places it is allowed.  
- Use of the hotspot services that a business such as a cafe or hotel provides is generally not thought of as piggybacking, even by non-customers. 
- A number of such locations provide these services for a fee. 


Tuesday, July 9, 2013

Explain CSMA with collision detection?

- CSMA with collision detection is abbreviated as CSMA/CD. 
- CSMA in itself makes use of the LBT technique, i.e., listen (or sense) before talk. 
- When the ability of collision detection is incorporated, it gets much better. 
- If the channel is sensed to be idle, the data packets or frames are transmitted immediately; if not, the transmitter has to wait for some time before it can transmit. 
- Sensing the channel prior to transmission is absolutely necessary if collisions are to be avoided. 
- Sensing the channel is the most effective way of avoiding collisions. 
- There are two types of CSMA protocols, namely persistent and non-persistent CSMA.
- In the CSMA/CD protocol all hosts are free to transmit and receive data frames on one and the same channel. 
- Also, the size of the packets is variable.

CSMA/CD comprises two processes:
Carrier Sense: In this process the transmitter or host checks that the channel or line is not occupied before starting the transmission.
Collision Detection: CSMA/CD tries to detect collisions in the shortest possible time. If it detects a collision, it stops the transmission then and there and waits for a random amount of time determined by binary exponential back-off. It then senses the channel again.

- To ensure that no collision goes unnoticed during the transmission of a packet, a host must be capable of detecting a collision before its transmission is complete. 
- What happens is that host A, sensing the line to be idle, starts transmitting a frame. 
- Just before the first bit of this frame reaches host B, host B also senses the line to be idle and starts its own transmission. 
- Now host B receives data while its transmission is still in progress, and so it detects that a collision has occurred. 
- The collision occurs close to host B. Host A also receives data in the midst of its transmission and therefore detects the collision as well. 
- To make the hosts detect a collision before transmission completes, a minimum length has to be defined for the packets transmitted over CSMA/CD networks. 

There are 3 states for a CSMA/CD channel, namely:
  1. Contention
  2. Transmission
  3. Idle
- Ethernet is the most popular example of the CSMA/CD networks. 
- A minimum slot time is required for collision detection between the stations.
- This slot time must equal twice the maximum value of the propagation delay. 
- The host acquires the channel on the basis of 1-persistence. 
- Also, a jam signal is transmitted whenever a collision is detected. 
- CSMA/CD makes use of the binary exponential back-off algorithm. 
- It is obvious that the idle time of the channel will be small if the load is heavy. 
- All packet lengths are normalized with respect to the packet transmission time.
- CSMA/CD represents a very effective method for media access control. 
- There are different methods available for detecting collisions. 
- Which method is followed depends largely on the transmission medium between the two stations. 
- For example, if the two stations are connected via an electrical bus, a collision can be detected by comparing the transmitted and the received data. 
- Another way involves recognizing a signal of higher amplitude than normal. 
- The jam signal used in CSMA/CD networks consists of a 32-bit binary pattern; a small back-off sketch follows below.
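
The random waiting time mentioned above comes from binary exponential back-off, which can be sketched in Python as follows. The 51.2 microsecond slot time is the classic 10 Mb/s Ethernet value and is given here only as an assumed example.

import random

SLOT_TIME_US = 51.2  # assumed slot time (classic 10 Mb/s Ethernet), in microseconds

def backoff_delay(collision_count):
    # after the nth collision, wait a random number of slot times in [0, 2**min(n, 10) - 1]
    k = min(collision_count, 10)
    slots = random.randint(0, 2 ** k - 1)
    return slots * SLOT_TIME_US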



Sunday, July 7, 2013

Differentiate between persistent and non-persistent CSMA?

- CSMA or Carrier Sense Multiple Access makes use of LBT, or listen before talk, before making any transmission. 
- It senses the channel for its status; if the channel is found free or idle, the data frames are transmitted, otherwise the transmission is deferred till the channel becomes idle again. 
- In simple words, CSMA is analogous to the human behaviour of not interrupting others when they are busy. 
- There are a number of CSMA protocols, of which persistent and non-persistent CSMA are the major ones. 
- CSMA is based on the idea that if the state of the channel can be listened to or sensed prior to transmitting a packet, better throughput can be achieved.
- Also, using this methodology a number of collisions can be avoided. 
- However, it is necessary to make the following assumptions in CSMA technology:
  1. The length of the packets is constant.
  2. Errors are caused only by collisions; there are no other errors.
  3. Capture effect is absent.
  4. The transmissions made by all the other hosts can be sensed by each of the hosts.
  5. The transmission time is always greater than the propagation delay.
About Persistent CSMA
- This protocol first senses the transmission channel and acts accordingly. 
- If the channel is found to be occupied by another transmission, it keeps listening or sensing the channel and, as soon as the channel becomes free or idle, starts its transmission. 
- On the other hand, if the channel is found empty, then it does not wait and starts transmitting immediately. 
- There are possibilities of collisions. 
- If a collision occurs, the transmitter must wait for a random time duration and start the transmission again. 
- One variant, called the 1-persistent protocol, transmits with probability 1 whenever the channel is idle. 
- In persistent CSMA, collisions may occur even if the propagation delay is 0. 
- However, collisions can only be avoided if the stations do not act so greedily. 
- We can say that this CSMA protocol is aggressive and selfish. 
- There is another variant of this protocol called p-persistent CSMA. 
- This is the optimal strategy. 
- Here the channel is assumed to be slotted, where one slot equals the contention period, i.e., 1 RTT delay. 
- The protocol is so named because it transmits the packet with probability p if the channel is idle; otherwise it waits for one slot and then tries again. A small sketch of this decision rule follows below.
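
A Python sketch of this p-persistent decision rule, assuming hypothetical channel_idle, transmit and wait_one_slot helpers for the underlying channel:

import random

def p_persistent_attempt(p, channel_idle, transmit, wait_one_slot):
    while True:
        if not channel_idle():
            wait_one_slot()        # channel busy: defer one slot and sense again
            continue
        if random.random() < p:
            transmit()             # channel idle: transmit with probability p
            return
        wait_one_slot()            # with probability 1 - p, wait one slot and repeat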

About Non–Persistent CSMA
- It is deferential and less aggressive when compared to its persistent counterpart. 
- It senses the channel, and if the channel is busy it simply waits and senses the channel again after some time, unlike persistent CSMA, which keeps on sensing the channel continuously. 
- As and when the channel is found free, the data packet is transmitted immediately. 
- If a collision occurs, it waits and starts again.
- In this protocol, even if two stations become greedy while another station's transmission is in progress, they will probably not collide, whereas in persistent CSMA they will collide.
- Also, if only one of the stations becomes greedy in the midst of another transmission in progress, it has no choice but to wait. 
- In persistent CSMA, such a greedy station takes over the channel upon completion of the current transmission.
- Using non-persistent CSMA reduces the number of collisions, whereas persistent CSMA increases the risk. 
- However, non-persistent CSMA is less efficient in channel utilization than persistent CSMA, since the channel may sit idle even when a station has data to send.
- Efficiency also depends on the ability of these protocols to avoid collisions by sensing the channel before starting the transmission. 


Wednesday, July 3, 2013

What are five key assumptions in dynamic channel allocation?

Putting the available bandwidth of a cellular telephone system to efficient use is an important problem to consider in providing good service to the largest number of customers possible. The problem has gained critical status owing to the rapid growth in the number of cellular telephone users. 

- A communication channel is nothing but a band of frequencies which a number of users can use simultaneously, provided they reside far enough apart from each other. 
- The minimum distance at which no interference occurs between the users is known as the channel reuse constraint. 
- A cellular telephone system divides the service area into a number of regions commonly known as cells. 
- Each of the cells has its own base station for handling the calls concerned with that cell. 
- The bandwidth of the communication channel is permanently partitioned into many channels. 
- These channels are then allocated to the cells in such a way that the channel reuse constraint is not violated by the calls. 
- There are a number of ways of allocating the channels. 
- Some of them are better than others when it comes to reliably making channels available to all the cells. 

Few examples of channel allocation methods are:
  1. Fixed assignment method
  2. Dynamic allocation method
  3. Reinforcement learning method
About Dynamic Channel Allocation
- One type of dynamic channel allocation is BDCL, or borrowing with directional channel locking. 
- Of all the channel allocation methods mentioned above, dynamic allocation is considered the best one according to some studies. 
- It is somewhat heuristic in nature. 
- In dynamic allocation, the channels are allocated in the same way as in the fixed assignment method, but a cell is permitted to borrow channels from other cells whenever required. 
- The channels are arranged in a specific order in each of the cells, and this ordering is used to determine which channels to borrow and how to reassign calls dynamically within the cells.
- There are static allocation techniques as well, but they don't seem to work as well as the dynamic allocation techniques. A simplified borrowing sketch follows below. 
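
A much simplified Python sketch of allocation with borrowing is shown below; the data structures and the reuse_ok check are illustrative assumptions and do not capture the directional locking of the full BDCL scheme.

def allocate_channel(cell, cells, reuse_ok):
    # first try the cell's own ordered channel list
    for ch in cells[cell]["channels"]:
        if ch not in cells[cell]["in_use"]:
            cells[cell]["in_use"].add(ch)
            return ch
    # otherwise try to borrow from a neighbouring cell without violating reuse
    for neighbour in cells[cell]["neighbours"]:
        for ch in cells[neighbour]["channels"]:
            if ch not in cells[neighbour]["in_use"] and reuse_ok(cell, ch):
                cells[neighbour]["in_use"].add(ch)  # locked in the lending cell
                cells[cell]["in_use"].add(ch)       # borrowed by this cell
                return ch
    return None  # no channel available: the call is blocked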

In dynamic channel allocation, 5 assumptions are always made, which we discuss below:

Station model: 
- There are N independent stations in the model, and each station generates one frame at a time. 
- A station is blocked until the successful transmission of its previous frame. 
- This means a station cannot queue multiple frames for transmission. 
- For example, a transmission gap equivalent to, say, 100 bit times may be required between consecutive frames.

Single channel assumption:  
- The same medium is shared by all the stations. 
- Through it all the stations can receive and transmit.

Collision assumption: 
- A collision occurs whenever two frames are transmitted at the same time. 
The two frames that collide have to be re-transmitted.

Transmission model: 
- There are 2 types, namely the continuous time model and the slotted time model. 
- In the former, transmission can be started at any given time. 
- In the latter, transmission always starts at the beginning of a time slot.

Carrier sense: 
- This assumption can be classified into 2 categories, namely carrier sense and no carrier sense. 
- Stations can know if a channel is occupied prior to using it. This is called carrier sense.
- In no carrier sense, the stations cannot know whether the channel is occupied or not before transmission.

- Also, it gets difficult for the dynamic allocation method to maintain favorable usage patterns as calls start saturating the system. 


Friday, June 28, 2013

Give advantages of frame relay over a leased phone line?

Frame relay and leased phone lines are two of the connection options available for setting up links between sites. 

Advantages of Frame Relay over Leased Phone Line
- Frame relay is a standardized WAN (wide area network) technology that specifies the logical link and physical layers of digital telecommunication channels. 
- This is done by means of a packet switching methodology.
- The frame relay technology was originally designed for transport across ISDN (integrated services digital network) infrastructure. 
- Today, it is used in the context of many other network interfaces. 
- Frame relay is commonly implemented for VoFR (voice over frame relay). 
- It is used as an encapsulation technique for data. 
- Frame relay is used between LANs across the WAN.
- The user is provided with a private line or leased line that connects to a frame relay node. 
- The frequently changing path is transparent to the WAN protocols used extensively by the end users. 
- Data is transmitted via these paths, and the frame relay network handles all of this.
- One advantage of frame relay over leased lines is that it is less expensive, and this is what makes frame relay so popular in the telecommunications industry.
- Another advantage of frame relay over leased lines, which also makes it popular, is that the user equipment in a frame relay network can be configured with extreme simplicity. 
- The usage of Ethernet over fiber optic communication is high. 
- This, along with dedicated broadband services like DSL and cable modem, VPN, MPLS, etc., has been displacing the frame relay protocol and encapsulation. 
- However, there are a number of rural regions in India where cable modem and DSL services are still absent.
- In such areas, the only option for a non-dial-up connection is a 64 kbit/s frame relay line.
- Thus, it might be used, for example, by a retail chain to connect its stores to the corporate WAN. 
- The aim of the designers of frame relay was to offer a telecommunication service for cost-efficient transmission of intermittent data traffic between the various end points in WANs and local area networks. 
- The frame relay process puts the data into units of variable size called frames. 
- Any required error correction is left to the end points. 
- This error correction includes re-transmission of the data. 
- This increases the speed of the overall transmission of data. 
- A PVC, or permanent virtual circuit, is provided by the network so that the customer sees what looks like a dedicated connection without having to pay for a leased line that is engaged full time. 
- The route by which each frame travels to its destined end point is worked out by the service provider, who can then base the charges on usage. 
- A level of service quality can be selected by the enterprise, 
- so that some frames can be prioritized while the importance of other frames is reduced. 
- Frame relay can run on systems such as the following:
  - Fractional T-1
  - Full T-carrier
  - E-1
  - Full E-carrier
- Frame relay provides a mid-range service between ISDN, which offers bandwidth at 128 kbps, and ATM (asynchronous transfer mode). 
- It not only provides such services but also complements them. 
- The base of the frame relay technology is X.25 packet switching, which was designed for data transmission over analog voice lines.


